
    Matched Ascertainment of Informative Families for Complex Genetic Modelling

    Family data are used extensively in quantitative genetic studies to disentangle the genetic and environmental contributions to various diseases. Many family studies base their analysis on population-based registers containing a large number of individuals organised into small family units. For binary trait analyses, exact marginal likelihood is a common approach, but, because of the computational demands of these enormous data sets, it allows only a limited number of effects in the model. This makes it particularly difficult to perform joint estimation of the variance components for a binary trait and the potential confounders. We have developed a data-reduction method for ascertaining informative families from population-based family registers. We propose a scheme in which the ascertained families match the full cohort with respect to some relevant statistics, such as the risk to relatives of an affected individual. The ascertainment-adjusted analysis, which we implement using a pseudo-likelihood approach, is shown to be efficient relative to the analysis of the whole cohort and robust to mis-specification of the random-effect distribution.
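The matching idea can be sketched numerically. The following Python sketch is hypothetical (it is not the authors' implementation): it simulates a register of small families, computes the risk to relatives of an affected individual on the full cohort, and then ascertains a subset of informative families for which that statistic is approximately preserved.

```python
import random

random.seed(1)

# Hypothetical cohort: each family is a list of binary affection statuses.
cohort = [[random.random() < 0.1 for _ in range(random.randint(2, 4))]
          for _ in range(5000)]

def sib_recurrence_risk(families):
    """Risk of disease among relatives of an affected individual:
    P(affected | some other family member is affected)."""
    at_risk = affected = 0
    for fam in families:
        n_aff = sum(fam)
        for status in fam:
            # is at least one *other* member of this family affected?
            if n_aff - status >= 1:
                at_risk += 1
                affected += status
    return affected / at_risk if at_risk else 0.0

target = sib_recurrence_risk(cohort)

# Ascertain informative families (at least one affected member), then keep
# a random subset; its recurrence risk stays close to the cohort value, so
# the reduced data set remains matched on this statistic.
informative = [f for f in cohort if any(f)]
sample = random.sample(informative, len(informative) // 2)
```

In a real analysis the matching statistic would be chosen to carry the information needed for the variance-component model, and the ascertainment would be corrected for in the pseudo-likelihood; this sketch only illustrates the data-reduction step.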

    Analysis and design of randomised clinical trials involving competing risks endpoints

    BACKGROUND: In randomised clinical trials involving time-to-event outcomes, the failures concerned may be events of an entirely different nature and as such define a classical competing risks framework. In designing and analysing clinical trials involving such endpoints, it is important to account for the competing events and to evaluate how each contributes to the overall failure. An appropriate choice of statistical model is important for adequate determination of sample size. METHODS: We describe how competing events may be summarised in such trials using cumulative incidence functions and Gray's test. The statistical modelling of competing events using proportional cause-specific and subdistribution hazard functions, and the corresponding procedures for sample size estimation, are outlined. These are illustrated using data from a randomised clinical trial (SQNP01) of patients with advanced (non-metastatic) nasopharyngeal cancer. RESULTS: In this trial, treatment had no effect on the competing event of loco-regional recurrence. Accordingly, the effects of treatment on the hazard of distant metastasis were similar under both the cause-specific (unadjusted csHR = 0.43, 95% CI 0.25–0.72) and subdistribution (unadjusted subHR = 0.43, 95% CI 0.25–0.76) hazard analyses, in favour of concurrent chemo-radiotherapy followed by adjuvant chemotherapy. Adjusting for nodal status and tumour size did not alter the results. The logrank test (p = 0.002) comparing the cause-specific hazards and Gray's test (p = 0.003) comparing the cumulative incidences led to the same conclusion. However, the subdistribution hazard analysis requires many more subjects than the cause-specific hazard analysis to detect the same magnitude of effect. CONCLUSIONS: The cause-specific hazard analysis is appropriate for analysing competing risks outcomes when treatment has no effect on the cause-specific hazard of the competing event. It requires fewer subjects than the subdistribution hazard analysis for a similar effect size. However, if the main and competing events are influenced in opposing directions by an intervention, a subdistribution hazard analysis may be warranted.
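To show how cumulative incidence functions summarise competing events, here is a minimal pure-Python sketch of the nonparametric (Aalen–Johansen) estimator. The data, event codes, and function name are illustrative only; they are not taken from the SQNP01 trial.

```python
# Illustrative records: (time, event code), where
# 0 = censored, 1 = distant metastasis, 2 = loco-regional recurrence.
data = [(2.0, 1), (3.5, 2), (4.0, 1), (5.0, 0), (6.0, 2),
        (7.5, 1), (8.0, 0), (9.0, 1), (10.0, 2), (12.0, 0)]

def cumulative_incidence(data, cause):
    """Aalen-Johansen estimate of the cumulative incidence of one cause
    in the presence of competing events."""
    data = sorted(data)
    n = len(data)
    surv = 1.0     # all-cause survival just before the current time
    cif = 0.0
    at_risk = n
    out = []
    i = 0
    while i < n:
        t = data[i][0]
        d_cause = d_all = censored = 0
        while i < n and data[i][0] == t:   # group tied times
            if data[i][1] == cause:
                d_cause += 1
            if data[i][1] != 0:
                d_all += 1
            else:
                censored += 1
            i += 1
        cif += surv * d_cause / at_risk    # cause-specific hazard mass
        surv *= 1 - d_all / at_risk        # update all-cause survival
        at_risk -= d_all + censored
        out.append((t, cif))
    return out

cif1 = cumulative_incidence(data, 1)   # distant metastasis
cif2 = cumulative_incidence(data, 2)   # loco-regional recurrence
```

By construction, the two cumulative incidence curves plus the remaining all-cause survival sum to one, which is exactly why each competing event's contribution to the overall failure can be read off separately.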

    Importance of competing risks in the analysis of anti-epileptic drug failure

    BACKGROUND: Retention time (time to treatment failure) is a commonly used outcome in anti-epileptic drug (AED) studies. METHODS: Two datasets are used to demonstrate the issues in a competing risks analysis of AEDs. First, data collection and follow-up considerations are discussed with reference to information from 15 monotherapy trials. Recommendations for improved data collection and cumulative incidence analysis are then illustrated using the SANAD trial dataset. The results are compared with the more common approach using standard survival analysis methods. RESULTS: A non-significant difference in overall treatment failure time between gabapentin and topiramate (logrank test statistic = 0.01, 1 degree of freedom, p-value = 0.91) masked highly significant differences in opposite directions, with gabapentin resulting in fewer withdrawals due to side effects (Gray's test statistic = 11.60, 1 degree of freedom, p = 0.0007) but more due to poor seizure control (Gray's test statistic = 14.47, 1 degree of freedom, p-value = 0.0001). The significant difference in overall treatment failure time between lamotrigine and carbamazepine (logrank test statistic = 5.6, 1 degree of freedom, p-value = 0.018) was due entirely to a significant benefit of lamotrigine in terms of side effects (Gray's test statistic = 10.27, 1 degree of freedom, p = 0.001). CONCLUSION: Treatment failure time can be measured reliably, but care is needed to collect sufficient information on reasons for drug withdrawal to allow a competing risks analysis. Important differences between the profiles of AEDs may be missed unless appropriate statistical methods are used to fully investigate treatment failure time. Cumulative incidence analysis allows comparison of the probability of failure between two AEDs and is likely to be a more powerful approach than logrank analysis for most comparisons of standard and new anti-epileptic drugs.
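Why the choice of method matters can be seen in a small sketch: if withdrawals for the competing reason are treated as ordinary censoring, the naive one-minus-Kaplan–Meier estimate overstates the probability of failing for a given reason relative to the cumulative incidence estimate. The data and function names below are illustrative only, not from SANAD, and the code assumes distinct event times.

```python
# Illustrative records: (time, reason), where 1 = withdrawal for side
# effects, 2 = withdrawal for poor seizure control, 0 = still on drug.
data = [(1, 1), (2, 2), (3, 1), (4, 2), (5, 1), (6, 0)]

def one_minus_km(data, cause):
    """Naive estimate: competing withdrawals treated as censoring."""
    s = 1.0
    at_risk = len(data)
    for _t, reason in sorted(data):
        if reason == cause:
            s *= 1 - 1 / at_risk
        at_risk -= 1          # every record leaves the risk set at its time
    return 1 - s

def cif(data, cause):
    """Cumulative incidence: competing events reduce the risk set *and*
    the chance of ever failing from this cause."""
    surv, total, at_risk = 1.0, 0.0, len(data)
    for _t, reason in sorted(data):
        if reason == cause:
            total += surv / at_risk
        if reason != 0:
            surv *= 1 - 1 / at_risk   # all-cause survival update
        at_risk -= 1
    return total

naive = one_minus_km(data, 1)
proper = cif(data, 1)
# naive exceeds proper whenever competing withdrawals occur
```

This gap is one reason the abstract recommends cumulative incidence analysis: the naive curve answers a hypothetical question (failure if the competing reason could never occur), while the cumulative incidence answers the observable one.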