
    On the sample mean after a group sequential trial

    A popular setting in medical statistics is a group sequential trial with independent and identically distributed normal outcomes, in which interim analyses of the sum of the outcomes are performed. Based on a prescribed stopping rule, one decides after each interim analysis whether the trial is stopped or continued. Consequently, the actual length of the study is a random variable. It is reported in the literature that the interim analyses may cause bias if one uses the ordinary sample mean to estimate the location parameter. For a generic stopping rule, which contains many classical stopping rules as special cases, explicit formulas for the expected length of the trial, the bias, and the mean squared error (MSE) are provided. It is deduced that, for a fixed number of interim analyses, the bias and the MSE converge to zero provided the first interim analysis is not performed too early. In addition, optimal rates for this convergence are provided. Furthermore, under a regularity condition, asymptotic normality in total variation distance is established for the sample mean. A conclusion for naive confidence intervals based on the sample mean is derived. It is also shown how the developed theory fits naturally into the broader framework of likelihood theory in a group sequential trial setting. A simulation study underpins the theoretical findings. Comment: 52 pages (supplementary data file included).
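
    A minimal simulation sketch (not from the paper; the stopping boundary, analysis schedule, and effect size below are illustrative assumptions) of the phenomenon the abstract describes: the ordinary sample mean evaluated at the random stopping time is biased, and the bias shrinks as the first look moves later.

    import numpy as np

    rng = np.random.default_rng(0)

    def run_trial(mu, looks=(20, 40, 60, 80, 100), c=2.5):
        # Stop at the first interim analysis where the z-statistic of the
        # running sum crosses the (illustrative) boundary c.
        x = rng.normal(mu, 1.0, size=looks[-1])
        for n in looks:
            if abs(x[:n].sum() / np.sqrt(n)) > c or n == looks[-1]:
                return x[:n].mean()   # sample mean at the random stopping time

    mu = 0.3
    means = np.array([run_trial(mu) for _ in range(20000)])
    print("bias:", means.mean() - mu)          # nonzero: stopping induces bias
    print("MSE :", ((means - mu) ** 2).mean())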

    Parallel Simulations for Analysing Portfolios of Catastrophic Event Risk

    At the heart of the analytical pipeline of a modern quantitative insurance/reinsurance company is a stochastic simulation technique for portfolio risk analysis and pricing referred to as Aggregate Analysis. Aggregate Analysis supports the computation of risk measures, including Probable Maximum Loss (PML) and Tail Value at Risk (TVaR), for a variety of complex property catastrophe insurance contracts, including Cat eXcess of Loss (XL), Per-Occurrence XL, and Aggregate XL, as well as contracts that combine these terms. In this paper, we explore parallel methods for aggregate risk analysis. A parallel aggregate risk analysis algorithm and an engine based on the algorithm are proposed. The engine is implemented in C and OpenMP for multi-core CPUs and in C and CUDA for many-core GPUs. Performance analysis indicates that GPUs offer a cost-effective alternative HPC solution for aggregate risk analysis. The optimised algorithm on the GPU performs a 1 million trial aggregate simulation with 1000 catastrophic events per trial, on a typical exposure set and contract structure, in just over 20 seconds, approximately 15 times faster than the sequential counterpart. This is sufficient to support the real-time pricing scenario in which an underwriter analyses different contractual terms and pricing while discussing a deal with a client over the phone. Comment: Proceedings of the Workshop at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 2012, 8 pages.
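
    A minimal sketch (not the authors' engine, and far below their 1 million trial scale) of the core of an aggregate simulation: Monte Carlo trial years of event losses, a per-occurrence XL layer, and the PML and TVaR read off the simulated annual-loss distribution. The lognormal severity and the layer terms are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_events = 10_000, 1000   # the paper's engine runs 1,000,000 trials

    # Ground-up loss for every event in every simulated trial year
    losses = rng.lognormal(mean=12.0, sigma=1.5, size=(n_trials, n_events))

    # Per-occurrence XL layer: each event's loss between attachment and limit
    attachment, limit = 1e6, 5e6
    ceded = np.clip(losses - attachment, 0.0, limit)

    annual = ceded.sum(axis=1)   # aggregate ceded loss per trial year

    pml_99 = np.quantile(annual, 0.99)          # 1-in-100-year Probable Maximum Loss
    tvar_99 = annual[annual >= pml_99].mean()   # mean loss in the tail beyond the PML
    print(f"PML(99%) = {pml_99:.3e}  TVaR(99%) = {tvar_99:.3e}")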

    Stroke treatment academic industry roundtable recommendations for individual data pooling analyses in stroke

    Pooled analysis of individual patient data from stroke trials can deliver more precise estimates of treatment effect, enhance power to examine prespecified subgroups, and facilitate exploration of treatment-modifying influences. Analysis plans should be declared, and preferably published, before trial results are known. For pooling trials that used diverse analytic approaches, an ordinal analysis is favored, with justification for considering deaths and severe disability jointly. Because trial pooling is an incremental process, analyses should follow a sequential approach, with statistical adjustment for iterations. Updated analyses should be published when revised conclusions have a clinical implication. However, caution is recommended in declaring pooled findings that may prejudice ongoing trials, unless the clinical implications are compelling. All contributing trial teams should share in the leadership, data verification, and authorship of pooled analyses. Development work is needed to enable reliable inferences to be drawn about the effects of the individual drugs or devices that contribute to a pooled analysis, versus a class effect, when the treatment strategy combines ≥2 such drugs or devices. Despite the practical challenges, pooled analyses are powerful and essential tools for interpreting clinical trial findings and advancing clinical care.
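
    A minimal sketch (an illustration, not the roundtable's protocol) of the ordinal analysis it favors: simulated modified Rankin Scale outcomes from three hypothetical trials are pooled in a proportional-odds model with trial indicators as covariates, so deaths and severe disability are handled jointly on the ordinal scale.

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(2)
    rows = []
    for trial in range(3):               # three hypothetical trials to pool
        for treat in (0, 1):
            # Simulated 90-day mRS (0 = no symptoms, ..., 6 = death);
            # the treated arm is shifted toward better outcomes
            p = np.array([4.0, 6, 6, 5, 4, 3, 2])
            if treat:
                p[:3] += 2.0
            mrs = rng.choice(7, size=200, p=p / p.sum())
            rows += [{"trial": trial, "treat": treat, "mrs": m} for m in mrs]
    df = pd.DataFrame(rows)

    # Proportional-odds model adjusting for trial membership
    exog = pd.get_dummies(df[["treat", "trial"]], columns=["trial"],
                          drop_first=True, dtype=float)
    res = OrderedModel(np.asarray(df["mrs"]), exog, distr="logit").fit(
        method="bfgs", disp=False)
    print(res.params)   # the 'treat' coefficient is the log common odds ratio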

    Exploring the Benefits of Adaptive Sequential Designs in Time-to-Event Endpoint Settings

    Sequential analysis is frequently employed to address ethical and financial issues in clinical trials. Sequential analysis may be performed using standard group sequential designs or, more recently, adaptive designs that use estimates of treatment effect to modify the maximal statistical information to be collected. In the general setting in which statistical information and clinical trial costs are functions of the number of subjects used, it has yet to be established whether there is any major efficiency advantage to adaptive designs over traditional group sequential designs. In survival analysis, however, statistical information (and hence efficiency) is most closely related to the observed number of events, while trial costs still depend on the number of patients accrued. As the number of subjects may dominate the cost of a trial, an adaptive design that specifies a reduced maximal possible sample size when an extreme treatment effect has been observed may allow early termination of accrual and therefore a more cost-efficient trial. We investigate and compare the tradeoffs between efficiency (as measured by the average number of observed events required), power, and cost (a function of the number of subjects accrued and the length of observation) for standard group sequential methods and an adaptive design that allows for early termination of accrual. We find that when certain trial design parameters are constrained, an adaptive approach to terminating subject accrual may improve upon the cost efficiency of a group sequential clinical trial investigating time-to-event endpoints. However, when the spectrum of group sequential designs considered is broadened, the advantage of the adaptive designs is less clear.
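
    A deliberately crude toy simulation of the accrual-termination idea (illustrative assumptions throughout; the z-statistic below is only a rough proxy for a logrank test, and power and follow-up costs are not accounted for): an adaptive rule closes accrual at an event-driven interim look when the observed effect is extreme, reducing the expected number of subjects accrued relative to fixed accrual.

    import numpy as np

    rng = np.random.default_rng(3)

    def one_trial(hr=0.7, n_max=1000, d_interim=150, z_stop=2.5, adaptive=True):
        # One subject accrued per time unit; exponential survival with a
        # treatment hazard ratio hr (all values illustrative)
        arm = rng.integers(0, 2, n_max)
        entry = np.arange(n_max, dtype=float)
        scale = np.where(arm == 1, 100.0 / hr, 100.0)
        event = entry + rng.exponential(scale)
        t_interim = np.sort(event)[d_interim - 1]   # calendar time of interim look
        obs = event <= t_interim                    # events observed at the interim
        # Crude proxy for a logrank statistic: normalised difference in event counts
        z = (obs[arm == 0].sum() - obs[arm == 1].sum()) / np.sqrt(obs.sum())
        if adaptive and abs(z) > z_stop:
            return (entry <= t_interim).sum()       # accrual closed at the interim
        return n_max                                # otherwise accrue the full cohort

    for adaptive in (True, False):
        n_bar = np.mean([one_trial(adaptive=adaptive) for _ in range(2000)])
        print("adaptive" if adaptive else "fixed   ",
              "mean subjects accrued:", round(n_bar))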

    Multi-center clinical trials: Randomization and ancillary statistics

    The purpose of this paper is to investigate and develop methods for the analysis of multi-center randomized clinical trials that rely only on the randomization process as the basis of inference. Our motivation is prompted by the fact that most current statistical procedures used in the analysis of randomized multi-center studies are model based, and the randomization feature of the trials is usually ignored. An important attraction of model-based analysis is that it is straightforward to model covariates. Nevertheless, in nearly all model-based analyses, the effects due to different centers and, more generally, the design of the clinical trial are ignored. An alternative to a model-based analysis is to have the analysis guided by the design of the trial. Our development of design-based methods allows the incorporation of centers as well as other features of the trial design. The methods make use of conditioning on the ancillary statistics in the sample space generated by the randomization process. We have investigated the power of the methods and have found that, in the presence of center variation, there is a significant increase in power. The methods have been extended to group sequential trials with similar increases in power. Comment: Published at http://dx.doi.org/10.1214/07-AOAS151 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
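
    A minimal sketch (not the paper's exact procedure) of design-based inference that conditions on centers: the observed treatment effect is referred to a re-randomization distribution in which treatment labels are permuted only within each center, so center membership is held fixed.

    import numpy as np

    rng = np.random.default_rng(4)

    def stratified_permutation_test(y, treat, center, n_perm=5000):
        obs = y[treat == 1].mean() - y[treat == 0].mean()
        hits = 0
        for _ in range(n_perm):
            t = treat.copy()
            for c in np.unique(center):          # re-randomize within each center only
                idx = np.where(center == c)[0]
                t[idx] = rng.permutation(t[idx])
            if abs(y[t == 1].mean() - y[t == 0].mean()) >= abs(obs):
                hits += 1
        return obs, hits / n_perm                # estimate and design-based p-value

    # Simulated five-center trial with center-level variation in outcome means
    center = np.repeat(np.arange(5), 40)
    treat = np.tile([0, 1], 100)
    y = rng.normal(0.0, 1.0, 200) + 0.5 * center + 0.4 * treat
    effect, p = stratified_permutation_test(y, treat, center)
    print(f"effect = {effect:.2f}, permutation p = {p:.4f}")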

    A Meta- and Trial Sequential Analysis

    Objectives Periodontal treatment might reduce adverse pregnancy outcomes. The efficacy of periodontal treatment to prevent preterm birth, low birth weight, and perinatal mortality was evaluated using meta-analysis and trial sequential analysis. Methods An existing systematic review was updated and meta-analyses performed. Risk of bias, heterogeneity, and publication bias were evaluated, and meta-regression performed. Subgroup analysis was used to compare studies with low and high risk of bias and different populations, i.e., risk groups. Trial sequential analysis was used to assess the risk of random errors. Results Thirteen randomized clinical trials evaluating 6283 pregnant women were meta-analyzed. Four and nine trials had low and high risk of bias, respectively. Overall, periodontal treatment had no significant effect on preterm birth (odds ratio [95% confidence interval] 0.79 [0.57-1.10]) or low birth weight (0.69 [0.43-1.13]). Trial sequential analysis demonstrated that futility was not reached for any of the outcomes. For populations with moderate occurrence (<20%) of preterm birth or low birth weight, periodontal treatment was not efficacious for any of the outcomes, and trial sequential analyses indicated that further trials might be futile. For populations with high occurrence (≥20%) of preterm birth and low birth weight, periodontal treatment seemed to reduce the risk of preterm birth (0.42 [0.24-0.73]) and low birth weight (0.32 [0.15-0.67]), but trial sequential analyses showed that firm evidence was not reached. Periodontal treatment did not significantly affect perinatal mortality, and firm evidence was not reached. Risk of bias, but not publication bias or patients' age, modified the effect estimates. Conclusions Providing periodontal treatment to pregnant women could potentially reduce the risk of adverse pregnancy outcomes, especially in high-risk mothers. Conclusive evidence could not be reached due to risks of bias, risks of random errors, and unclear effects of confounding. Further randomized clinical trials are required.
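
    A minimal sketch (illustrative data and a simplified boundary, not the authors' TSA software) of trial sequential analysis: trials are pooled cumulatively in publication order and the cumulative z-statistic is compared with an O'Brien-Fleming-type monitoring boundary, z_(alpha/2) / sqrt(information fraction).

    import numpy as np
    from scipy.stats import norm

    # Hypothetical per-trial log odds ratios and standard errors, in publication order
    log_or = np.array([-0.40, -0.10, -0.55, -0.20, -0.05, -0.35])
    se = np.array([0.45, 0.40, 0.50, 0.30, 0.25, 0.35])

    w = 1.0 / se**2                                      # inverse-variance weights
    z = np.cumsum(w * log_or) / np.sqrt(np.cumsum(w))    # cumulative meta-analysis z

    # Information fraction; here the full accrued information stands in for the
    # required information size (TSA normally pre-specifies it from a target effect)
    frac = np.cumsum(w) / np.sum(w)
    boundary = norm.ppf(0.975) / np.sqrt(frac)           # O'Brien-Fleming-type boundary

    for k, (zk, bk) in enumerate(zip(z, boundary), start=1):
        print(f"look {k}: z = {zk:+.2f}, boundary = {bk:.2f}",
              "<- crossed" if abs(zk) > bk else "")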

    Exercise for lower limb osteoarthritis : systematic review incorporating trial sequential analysis and network meta-analysis

    Objective: To determine whether there is sufficient evidence to conclude that exercise interventions are more effective than no exercise control, and to compare the effectiveness of different exercise interventions in relieving pain and improving function in patients with lower limb osteoarthritis. Data sources: Nine electronic databases searched from inception to March 2012. Study selection: Randomised controlled trials comparing exercise interventions with each other or with no exercise control for adults with knee or hip osteoarthritis. Data extraction: Two reviewers evaluated eligibility and methodological quality. Main outcomes extracted were pain intensity and limitation of function. Trial sequential analysis was used to investigate the reliability and conclusiveness of the available evidence for exercise interventions. Bayesian network meta-analysis was used to combine both direct (within trial) and indirect (between trial) evidence on treatment effectiveness. Results: 60 trials (44 knee, 2 hip, 14 mixed) covering 12 exercise interventions and 8218 patients met the inclusion criteria. Sequential analysis showed that as of 2002 sufficient evidence had been accrued to show significant benefit of exercise interventions over no exercise control. For pain relief, strengthening, flexibility plus strengthening, flexibility plus strengthening plus aerobic, aquatic strengthening, and aquatic strengthening plus flexibility exercises were all significantly more effective than no exercise control. A combined intervention of strengthening, flexibility, and aerobic exercise was also significantly more effective than no exercise control for improving limitation in function (standardised mean difference −0.63, 95% credible interval −1.16 to −0.10). Conclusions: As of 2002 sufficient evidence had accumulated to show significant benefit of exercise over no exercise in patients with osteoarthritis, and further trials are unlikely to overturn this result. An approach combining exercises to increase strength, flexibility, and aerobic capacity is likely to be most effective in the management of lower limb osteoarthritis. The evidence is largely from trials in patients with knee osteoarthritis.
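
    A minimal sketch (hypothetical effect sizes, not the paper's Bayesian model) of the simplest building block of network meta-analysis, the Bucher adjusted indirect comparison: two interventions are compared indirectly through their direct comparisons with a common control.

    import numpy as np
    from scipy.stats import norm

    # Direct evidence (hypothetical standardised mean differences vs control C)
    d_sc, se_sc = -0.50, 0.12    # strengthening (S) vs control
    d_ac, se_ac = -0.35, 0.15    # aerobic (A) vs control

    # Indirect S vs A comparison through the common comparator C
    d_sa = d_sc - d_ac
    se_sa = np.sqrt(se_sc**2 + se_ac**2)
    lo, hi = d_sa + np.array([-1, 1]) * norm.ppf(0.975) * se_sa
    print(f"indirect S vs A: {d_sa:.2f} (95% CI {lo:.2f} to {hi:.2f})")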

    Stochastic models of evidence accumulation in changing environments

    Organisms and ecological groups accumulate evidence to make decisions. Classic experiments and theoretical studies have explored this process when the correct choice is fixed during each trial. However, we live in a constantly changing world. What effect does such impermanence have on classical results about decision making? To address this question we use sequential analysis to derive a tractable model of evidence accumulation when the correct option changes in time. Our analysis shows that ideal observers discount prior evidence at a rate determined by the volatility of the environment, and that the dynamics of evidence accumulation are governed by the information gained over an average environmental epoch. A plausible neural implementation of an optimal observer in a changing environment shows that, in contrast to previous models, neural populations representing alternate choices are coupled through excitation. Our work builds a bridge between statistical decision making in volatile environments and stochastic nonlinear dynamics. Comment: 26 pages, 7 figures.
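
    A minimal sketch of the kind of ideal-observer computation the abstract describes (the hazard rate and evidence strength below are illustrative assumptions): in a two-state environment that switches with hazard rate h, the observer's log-likelihood ratio is discounted through h before each new observation is added.

    import numpy as np

    rng = np.random.default_rng(5)

    def fraction_correct(h=0.05, n_steps=5000, mu=0.5):
        # Two-state environment switching with hazard rate h; the observer sees
        # x ~ N(state * mu, 1) and tracks the log-likelihood ratio L
        state, L, correct = 1, 0.0, 0
        for _ in range(n_steps):
            if rng.random() < h:
                state = -state                  # the correct option switches
            x = rng.normal(state * mu, 1.0)
            llr = 2.0 * mu * x                  # log-likelihood ratio of the sample
            # Discount the prior belief through the hazard rate, then add evidence
            L = llr + np.log(((1 - h) * np.exp(L) + h) / ((1 - h) + h * np.exp(L)))
            correct += int(np.sign(L) == state)
        return correct / n_steps

    print("fraction correct:", fraction_correct())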