
    On the sample mean after a group sequential trial

    A popular setting in medical statistics is a group sequential trial with independent and identically distributed normal outcomes, in which interim analyses of the sum of the outcomes are performed. Based on a prescribed stopping rule, one decides after each interim analysis whether the trial is stopped or continued. Consequently, the actual length of the study is a random variable. It is reported in the literature that the interim analyses may cause bias if one uses the ordinary sample mean to estimate the location parameter. For a generic stopping rule, which contains many classical stopping rules as special cases, explicit formulas for the expected length of the trial, the bias, and the mean squared error (MSE) are provided. It is deduced that, for a fixed number of interim analyses, the bias and the MSE converge to zero if the first interim analysis is not performed too early. In addition, optimal rates for this convergence are provided. Furthermore, under a regularity condition, asymptotic normality in total variation distance is established for the sample mean. A conclusion for naive confidence intervals based on the sample mean is derived. It is also shown how the developed theory fits naturally into the broader framework of likelihood theory in a group sequential trial setting. A simulation study underpins the theoretical findings. Comment: 52 pages (supplementary data file included).
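    The bias mechanism described in this abstract is easy to check by Monte Carlo simulation. The sketch below is a minimal illustration, assuming a simplified fixed-boundary (Pocock-style) stopping rule on the standardized running sum; this is only one instance of the generic rule the paper analyzes, and all numeric settings are illustrative.

```python
import numpy as np

def simulate_trial(mu, k_groups=5, group_size=20, boundary=2.0, rng=None):
    """Run one group sequential trial with i.i.d. N(mu, 1) outcomes.

    At each interim analysis the standardized sum of all outcomes so far
    is compared against a fixed boundary (an assumed, simplified rule);
    the trial stops when the boundary is crossed.  Returns the ordinary
    sample mean at the (random) stopping time.
    """
    rng = np.random.default_rng() if rng is None else rng
    total, n = 0.0, 0
    for _ in range(k_groups):
        total += rng.normal(mu, 1.0, group_size).sum()
        n += group_size
        if abs(total) / np.sqrt(n) > boundary:   # stopping rule triggered
            break
    return total / n

def estimate_bias(mu, reps=20000, seed=0, **kw):
    """Monte Carlo estimate of E[sample mean] - mu over many trials."""
    rng = np.random.default_rng(seed)
    means = np.array([simulate_trial(mu, rng=rng, **kw) for _ in range(reps)])
    return means.mean() - mu
```

    For a positive location parameter the estimated bias comes out positive (trials stopped early tend to have overshot the boundary), while at mu = 0 it vanishes by symmetry, consistent with the convergence results quoted above.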

    Linear Dependency for the Difference in Exponential Regression

    In the field of reliability, much has been written on the analysis of related phenomena. Estimation of the difference of two population means has mostly been formulated under the no-correlation assumption. However, in many situations a correlation is involved. This paper addresses this issue. A sequential estimation method for linearly related lifetime distributions is presented. Estimators for the scale parameters of the exponential distribution are given under squared error loss using a sequential prediction method. Optimal stopping rules are discussed using mean criteria, and numerical results are presented.
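    As a toy illustration of sequential estimation of an exponential scale parameter under squared error loss, the sketch below stops once a plug-in estimate of the risk falls below a bound. The stopping rule xbar^2 / n <= risk_bound (based on MSE(xbar) = theta^2 / n) is an assumed stand-in, not the paper's prediction-based procedure.

```python
import numpy as np

def sequential_exponential_mean(stream, risk_bound=0.01, min_n=5):
    """Sequentially estimate the scale (mean) of an exponential distribution.

    The MSE of the sample mean is theta^2 / n, so we stop once the plug-in
    risk estimate xbar^2 / n falls below `risk_bound` (illustrative rule).
    Returns (estimate, sample_size).
    """
    total, n = 0.0, 0
    for x in stream:
        total += x
        n += 1
        xbar = total / n
        if n >= min_n and xbar * xbar / n <= risk_bound:
            break
    return total / n, n
```

    With a true scale of 2.0 the rule stops after roughly theta^2 / risk_bound = 400 observations, with the estimate close to the truth.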

    Sequential control of time series by functionals of kernel-weighted empirical processes under local alternatives

    Motivated in part by applications in model selection in statistical genetics and sequential monitoring of financial data, we study an empirical process framework for a class of stopping rules which rely on kernel-weighted averages of past data. We are interested in the asymptotic distribution for time series data and an analysis of the joint influence of the smoothing policy and the alternative defining the deviation from the null model (in-control state). We employ a certain type of local alternative which provides meaningful insights. Our results hold true for short memory processes which satisfy a weak mixing condition. By relying on an empirical process framework we obtain both asymptotic laws for the classical fixed sample design and the sequential monitoring design. As a by-product we establish the asymptotic distribution of the Nadaraya-Watson kernel smoother when the regressors do not get dense as the sample size increases.
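    The Nadaraya-Watson smoother mentioned as a by-product is the kernel-weighted average at the heart of such stopping rules. A minimal sketch, where the Gaussian kernel and the bandwidth h are illustrative choices rather than the ones analyzed in the paper:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel.

    Computes a kernel-weighted average of the observations y, with weights
    decaying in the distance of each x to the evaluation point x0.
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # kernel weights
    return np.sum(w * y) / np.sum(w)         # weighted average of the data
```

    For noisy observations of a smooth regression function, e.g. y = x**2 + noise on [0, 1], `nadaraya_watson(0.5, x, y, h=0.1)` recovers the regression value 0.25 up to smoothing bias and noise.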

    On the Use of Stochastic Curtailment in Group Sequential Clinical Trials

    Many different criteria have been proposed for the selection of a stopping rule for group sequential trials. These include both scientific (e.g., estimates of treatment effect) and statistical (e.g., frequentist type I error, Bayesian posterior probabilities, stochastic curtailment) measures of the evidence for or against beneficial treatment effects. Because a stopping rule based on one of those criteria induces a stopping rule on all other criteria, the utility of any particular scale relates to the ease with which it allows a clinical trialist to search for sequential sampling plans having desirable operating characteristics. In this paper we examine the use of such measures as conditional power and predictive power in the definition of stopping rules, especially as they apply to decisions to terminate a study early for “futility”. We illustrate that stopping criteria based on stochastic curtailment are relatively difficult to interpret on the scientifically relevant scale of estimated treatment effects, as well as with respect to commonly used statistical measures such as unconditional power. We further argue that neither conditional power nor predictive power adheres to the standard optimality criteria within either the frequentist or Bayesian data analysis paradigm. Thus when choosing a stopping rule for “futility”, we recommend the definition of stopping rules based on other criteria and careful evaluation of the frequentist and Bayesian operating characteristics that are of greatest scientific and statistical relevance.
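    For the mechanics behind the discussion above, conditional power in the standard Brownian-motion formulation takes a few lines. This is the textbook formula, not code from the paper; the parameterization (information fraction t, interim z-statistic, drift theta) is the usual one.

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z_interim, t, theta, alpha=0.025):
    """Conditional power of crossing z_{1-alpha} at the final analysis.

    Brownian-motion formulation: t is the information fraction in (0, 1),
    z_interim the interim z-statistic, theta the assumed drift (E[Z] at
    t = 1).  Plugging in theta = z_interim / sqrt(t) gives the 'current
    trend' variant often used for futility monitoring.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha)
    b_t = z_interim * sqrt(t)        # Brownian-motion value at time t
    mean_rem = theta * (1 - t)       # expected drift over remaining time
    return 1 - nd.cdf((z_crit - b_t - mean_rem) / sqrt(1 - t))
```

    A flat interim result with no assumed drift gives conditional power near zero (a candidate for futility stopping), while a strong interim trend projected forward gives conditional power near one; as the paper notes, neither number maps cleanly onto the estimated-treatment-effect scale.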

    Optimal sequential sampling rules for the economic evaluation of health technologies

    Referring to the literature on optimal stopping under sequential sampling developed by Chernoff and collaborators, we solve a dynamic model of the economic evaluation of a new health technology, deriving optimal rules for technology adoption, research abandonment and continuation as functions of sample size. The model extends the existing literature to the case where an adoption decision can be deferred and involves a degree of irreversibility. We explore the model's applicability in a case study of the economic evaluation of Drug Eluting Stents (DES), deriving dynamic adoption and abandonment thresholds which are a function of the model's economic parameters. A key result is that referring to a single cost-effectiveness threshold may be sub-optimal. Keywords: cost-effectiveness analysis, sequential sampling, dynamic programming.
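    The backward-induction structure of such a dynamic model can be sketched as follows. Everything here is an illustrative assumption, not the calibrated DES case study: a normal posterior for incremental net benefit with known variances, a linear adoption payoff m * pop, a zero abandonment payoff, and a fixed per-observation cost.

```python
import numpy as np

def stopping_thresholds(stages=10, pop=1000.0, cost=5.0,
                        prior_var=1.0, obs_var=4.0, grid=None):
    """Backward induction for a stylized adopt / abandon / continue problem.

    State: posterior mean m of incremental net benefit (normal-normal
    updating, variances known).  Adopting pays m * pop, abandoning pays 0,
    continuing costs `cost` and updates the posterior.  Returns a list of
    (abandon, adopt) thresholds on m, one pair per analysis stage.
    """
    grid = np.linspace(-5, 5, 801) if grid is None else grid
    x, w = np.polynomial.hermite.hermgauss(15)       # Gauss-Hermite nodes
    post_var = lambda n: 1.0 / (1.0 / prior_var + n / obs_var)
    V = np.maximum(grid * pop, 0.0)                  # terminal: must stop
    thresholds = []
    for n in range(stages - 1, -1, -1):
        s = np.sqrt(max(post_var(n) - post_var(n + 1), 0.0))  # preposterior sd
        # E[V(m')] with m' ~ N(m, s^2), via Gauss-Hermite quadrature
        cont = -cost + sum(
            wi * np.interp(grid + s * np.sqrt(2.0) * xi, grid, V)
            for xi, wi in zip(x, w)
        ) / np.sqrt(np.pi)
        adopt, abandon = grid * pop, np.zeros_like(grid)
        V = np.maximum.reduce([adopt, abandon, cont])
        hi = grid[np.argmax(adopt >= cont)]                     # adopt above
        lo = grid[len(grid) - 1 - np.argmax((abandon >= cont)[::-1])]  # abandon below
        thresholds.append((lo, hi))
    return thresholds[::-1]
```

    Each stage yields an abandonment threshold below zero and an adoption threshold above zero, with continuation (further research) optimal in between; a single fixed cost-effectiveness cutoff cannot reproduce this band, which is the intuition behind the paper's key result.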

    Optimal sequential procedures with Bayes decision rules

    In this article, a general problem of sequential statistical inference for discrete-time stochastic processes is considered. The problem is to minimize the average sample number given that the Bayesian risk due to an incorrect decision does not exceed a given bound. We characterize the form of optimal sequential stopping rules for this problem. In particular, we characterize the form of optimal sequential decision procedures when the Bayesian risk includes both the loss due to an incorrect decision and the cost of observations. Comment: shortened version for print publication, 17 pages.
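    A minimal instance of the structure characterized above, stopping as soon as the posterior risk of the terminal decision falls below a prescribed bound, for a Bernoulli process with two simple hypotheses. The hypotheses, prior, and 0-1 loss are illustrative assumptions, not taken from the article.

```python
def bayes_sequential_test(sample, p0=0.3, p1=0.7, prior=0.5, risk_bound=0.05):
    """Sequential Bayes test of H0: p = p0 vs H1: p = p1 for Bernoulli data.

    Observes one data point at a time and stops as soon as the posterior
    probability of the less likely hypothesis drops below `risk_bound`, so
    the Bayes risk of the terminal decision (0-1 loss assumed) stays
    within the bound.  Returns (decision, n_observations, P(H1 | data)).
    """
    post1 = prior                                # current P(H1 | data)
    for n, xi in enumerate(sample, start=1):
        l0 = p0 if xi else 1 - p0                # likelihood under H0
        l1 = p1 if xi else 1 - p1                # likelihood under H1
        post1 = post1 * l1 / (post1 * l1 + (1 - post1) * l0)
        if post1 >= 1 - risk_bound:
            return "accept H1", n, post1
        if post1 <= risk_bound:
            return "accept H0", n, post1
    # sample exhausted: decide by posterior mode
    return ("accept H1" if post1 >= 0.5 else "accept H0"), len(sample), post1
```

    The sample number is random: a run of consistent observations terminates the procedure after only a few data points, which is the average-sample-number saving that the optimal rules in the article formalize.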