"Degenerate" in this context means taken to an extreme level so, for example, a straightline is a degenerate version of a triangle. The degenerate baseline hazard seems to me to be one where its obvious given the assumptions and definitions.
$$h_0(t_l \mid t = s) = \frac{1}{\sum_{t_j \ge t_l} \exp(X_j \beta(s))}.$$
Rearranging this gives
$$\sum_{t_j \ge t_l} h_0(t_l \mid t = s) \exp(X_j \beta(s)) = 1.$$
This says that the hazards summed over every case in the risk set at time $t_l$ come to one. By definition there is an event at time $t_l$, and since we know there are no ties there is exactly one. So the summation covers the situation where there is at least one, but no more than one, event at time $t_l$: we don't need to consider the possibility of more than one event at that time, nor the probability of there being no event, which leaves us with a simple addition.
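To convince myself, here's a quick numerical check in R. It's only a sketch: the covariates, coefficients and event times below are made up, standing in for a fitted model.

```r
# Sketch with made-up data: the baseline hazard at a single event time t_l,
# assuming no ties, so exactly one event happens at t_l.
set.seed(1)
X     <- matrix(rnorm(20), ncol = 2)  # covariates for 10 subjects
beta  <- c(0.5, -0.3)                 # stand-in for the fitted beta(s)
times <- sort(runif(10))              # distinct event times t_j
t_l   <- times[4]                     # the event time of interest

at_risk <- times >= t_l               # the risk set at t_l
h0 <- 1 / sum(exp(X[at_risk, ] %*% beta))  # the degenerate baseline hazard

# the rearranged identity: the hazards over the risk set sum to one
sum(h0 * exp(X[at_risk, ] %*% beta))  # = 1
```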
Sunday, September 30, 2012
Friday, September 28, 2012
Landmarking
Sold as an easy but less revealing alternative to the multistate model approach, the landmarking approach picks a grid of time points and, using the risk set at each of those times, fits a Cox regression up to some set time horizon. Some function of the fitted betas, e.g. a linear combination, is then used to link the separate fits in the additive model. I've been going off this paper.
As with the etm, I've just used some sample data from the IMPACT clinical model to try this method out.
So far, I've used R to produce the Cox regressions at each landmark point, but the paper then generates predictions of the survival probabilities from these points by estimating the baseline hazard (and hence the baseline cumulative hazard) to go with the regression parameters.
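For reference, this is roughly the shape of those landmark fits in R with the survival package. It's a sketch rather than the real thing: the data frame `dat`, its columns (`time`, `status`, `x`) and the grid and horizon values are stand-ins, not the IMPACT data or the paper's settings.

```r
# Landmarking sketch: at each landmark s, take the risk set, censor
# administratively at the horizon s + w, and fit a Cox model.
library(survival)

landmarks <- seq(0, 5, by = 1)  # grid of landmark time points s
horizon   <- 3                  # fixed prediction window w

fits <- lapply(landmarks, function(s) {
  lm_dat <- subset(dat, time > s)  # risk set: still event-free at s
  # administrative censoring at s + w
  lm_dat$status <- ifelse(lm_dat$time > s + horizon, 0, lm_dat$status)
  lm_dat$time   <- pmin(lm_dat$time, s + horizon)
  coxph(Surv(time, status) ~ x, data = lm_dat)
})

sapply(fits, coef)  # the betas across landmarks, ready for linking
```

The administrative censoring at $s + w$ is what makes each fit a sliding-window model over $(s, s+w]$ rather than one over all remaining follow-up.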
I thought I'd run the multistate model code with the RR/hazard adjustments for interventions from the IMPACT paper and try to recreate the same figures, i.e. the proportions of incident cases that die from each outcome. Then I could repeat this, with my code, at different landmark time points and see how the hazard ratios change.
Monday, September 17, 2012
CHD multi-state model transition probabilities
I've been using the etm package in R to produce the empirical transition probabilities using the CHD simulation data from IMPACT. These are some of the output plots,

where (because of space):
1="AMI",
2="CA",
3="Early HF",
4="Healthy",
5="MI Recur",
6="MI Surv",
7="SD",
8="Severe HF",
9="UA",
10="CHD Death",
11="Non CHD Death".
Below is the empirical transition matrix for 60 -> 90 year olds, i.e. $\widehat{P}(60,90)$:
| From \ To | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0 | 0 | 0 | 0 | 0 | 0.0317885 | 0 | 0 | 0 | 0.545821 | 0.4223904 |
| 2 | 0 | 0.0980707 | 0 | 0 | 0 | 0.0317717 | 0 | 0 | 0 | 0.3690474 | 0.5011102 |
| 3 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 | 0 | 0.0265604 | 0 | 0 | 0 | 0.0234626 | 0 | 0 | 0 | 0.6691671 | 0.2808099 |
| 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.8835166 | 0.1164834 |
| 6 | 0 | 0 | 0 | 0 | 0 | 0.0317885 | 0 | 0 | 0 | 0.545821 | 0.4223904 |
| 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 9 | 0 | 0 | 0 | 0 | 0 | 0.0327666 | 0 | 0 | 0 | 0.5336655 | 0.4335679 |
| 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
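For anyone wanting to reproduce this, the etm call looks roughly like the sketch below. The data frame `chd` and the filled-in transition matrix rows are my illustrative assumptions about the set-up, not the actual IMPACT data.

```r
library(etm)

states <- as.character(1:11)  # state labels 1..11 as in the legend above

# logical matrix of permitted direct transitions (TRUE where a move is allowed)
tra <- matrix(FALSE, 11, 11, dimnames = list(states, states))
tra["4", c("1", "2", "9", "11")]  <- TRUE  # e.g. Healthy -> AMI, CA, UA, Non CHD Death
tra["1", c("6", "7", "10", "11")] <- TRUE  # e.g. AMI -> MI Surv, SD, deaths
# ... remaining permitted transitions filled in the same way

# chd: one row per observed transition, with columns id, from, to, entry, exit
fit <- etm(chd, state.names = states, tra = tra, cens.name = "cens",
           s = 60, t = 90)

fit$est[, , dim(fit$est)[3]]  # the estimated matrix, i.e. P-hat(60, 90)
```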
Sunday, September 16, 2012
Product Integrals
I've just come across the term product integral whilst reading a survival analysis paper. I'm surprised it's the first time I've seen it, but it seems the idea went out of fashion and has only really been promoted in survival analysis circles because of its usefulness in linking cumulative hazards and Kaplan-Meier estimates.
The idea is simple, especially when you know what regular, run-of-the-mill integration is. Where (sum) integration is the limit of the sum of smaller and smaller intervals beneath a curve, i.e. the continuous analogue of a summation, the product integral is the limit of the product of factors that each get closer and closer to 1, i.e. the continuous analogue of taking products instead of sums.
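Written out, with $\Lambda$ the cumulative hazard (and using an ordinary $\prod$ where the literature uses a script product-integral symbol), the version that matters for survival analysis is

$$\prod_{(0,t]} \big(1 - \mathrm{d}\Lambda(s)\big) = \lim_{\max_i |t_i - t_{i-1}| \to 0} \prod_i \Big(1 - \big(\Lambda(t_i) - \Lambda(t_{i-1})\big)\Big) = S(t),$$

which, for continuous $\Lambda$, reduces to the familiar $S(t) = \exp(-\Lambda(t))$.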
The notation currently in use, which I've seen a lot of, was proposed by Gill and Johansen. I found this article by Gill useful in explaining what's going on.
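As a quick check in R that the Kaplan-Meier curve really is the product integral of the Nelson-Aalen increments (the simulated data here are just placeholders):

```r
# Kaplan-Meier as a product integral: running product of (1 - dLambda),
# where dLambda = events / number at risk at each observed time.
library(survival)

set.seed(1)
time   <- rexp(50)            # placeholder survival times
status <- rbinom(50, 1, 0.7)  # placeholder event indicators

fit <- survfit(Surv(time, status) ~ 1)

dLambda    <- fit$n.event / fit$n.risk  # Nelson-Aalen increments
km_by_hand <- cumprod(1 - dLambda)      # the product integral

all.equal(km_by_hand, fit$surv)  # TRUE
```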