
    The Measurement of Labor Cost


    Handling Attrition in Longitudinal Studies: The Case for Refreshment Samples

    Panel studies typically suffer from attrition, which reduces sample size and can result in biased inferences. It is impossible to know whether or not the attrition causes bias from the observed panel data alone. Refreshment samples - new, randomly sampled respondents given the questionnaire at the same time as a subsequent wave of the panel - offer information that can be used to diagnose and adjust for bias due to attrition. We review and bolster the case for the use of refreshment samples in panel studies. We include examples of both a fully Bayesian approach for analyzing the concatenated panel and refreshment data, and a multiple imputation approach for analyzing only the original panel. For the latter, we document a positive bias in the usual multiple imputation variance estimator. We present models appropriate for three waves and two refreshment samples, including nonterminal attrition. We illustrate the three-wave analysis using the 2007-2008 Associated Press-Yahoo! News Election Poll. Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/13-STS414.
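    The "usual multiple imputation variance estimator" that the abstract reports as positively biased is Rubin's combining rule, which pools point estimates and variances across the imputed datasets. A minimal sketch (the function name and toy inputs are illustrative, not from the paper):

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine results from m multiply imputed datasets via Rubin's rules.

    estimates: per-imputation point estimates of the quantity of interest
    variances: per-imputation variance estimates
    Returns the pooled estimate and the total variance
    T = within-imputation variance + (1 + 1/m) * between-imputation variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()            # pooled point estimate
    u_bar = variances.mean()            # average within-imputation variance
    b = estimates.var(ddof=1)           # between-imputation variance
    t = u_bar + (1.0 + 1.0 / m) * b     # total variance
    return q_bar, t
```

    The paper's point is that in the refreshment-sample setting this total variance T can overstate the true sampling variance, so intervals built from it are conservative.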

    Identifiability of Subgroup Causal Effects in Randomized Experiments with Nonignorable Missing Covariates

    Although randomized experiments are widely regarded as the gold standard for estimating causal effects, missing data in the pretreatment covariates make it challenging to estimate subgroup causal effects. When the missing data mechanism of the covariates is nonignorable, the parameters of interest are generally not point identifiable, and we can obtain only bounds for them, which may be too wide for practical use. In some real cases, we have prior knowledge that certain restrictions are plausible. We show the identifiability of the causal effects and joint distributions under four interpretable missing data mechanisms, and evaluate the performance of the statistical inference via simulation studies. One application of our methods to a real data set from a randomized clinical trial shows that one of the nonignorable missing data mechanisms fits better than the ignorable missing data mechanism, and the results conform to the study's original expert opinions. We also illustrate the potential applications of our methods to observational studies using a data set from a job-training program. Comment: Statistics in Medicine (2014).

    Verifiable identification condition for nonignorable nonresponse data with categorical instrumental variables

    We consider a model identification problem in which an outcome variable contains nonignorable missing values. Statistical inference requires a guarantee of model identifiability to obtain estimators enjoying theoretically reasonable properties such as consistency and asymptotic normality. Recently, instrumental or shadow variables, combined with the completeness condition on the outcome model, have been highlighted as a route to identifiability. However, the completeness condition may fail even for simple models when the instrument is categorical. We propose a sufficient condition for model identifiability that is applicable to cases where establishing the completeness condition is difficult. We demonstrate that the proposed conditions are easy to check for many practical models using observed data, and illustrate their usefulness through numerical experiments and real data analysis.

    Meta-Analysis of Studies with Missing Data

    Consider a meta-analysis of studies with varying proportions of patient-level missing data, and assume that each primary study has made certain missing data adjustments so that the reported estimates of treatment effect size and variance are valid. These estimates of treatment effects can be combined across studies by standard meta-analytic methods, employing a random-effects model to account for heterogeneity across studies. However, we note that a meta-analysis based on the standard random-effects model will lead to biased estimates when the attrition rates of primary studies depend on the size of the underlying study-level treatment effect. Even if ignorable within each study, these types of missing data are in fact not ignorable in a meta-analysis. We propose three methods to correct the bias resulting from such missing data in a meta-analysis: reweighting the DerSimonian–Laird estimate by the completion rate; incorporating the completion rate into a Bayesian random-effects model; and inference based on a Bayesian shared-parameter model that includes the completion rate. We illustrate these methods through a meta-analysis of 16 published randomized trials that examined combined pharmacotherapy and psychological treatment for depression. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/66327/1/j.1541-0420.2008.01068.x.pd
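    The first proposed correction builds on the standard DerSimonian–Laird random-effects estimator. A minimal sketch of that estimator, with an optional multiplicative reweighting hook in the spirit of the completion-rate adjustment (the function name, the exact form of the reweighting, and the inputs are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def dersimonian_laird(effects, variances, extra_weights=None):
    """DerSimonian-Laird random-effects meta-analysis.

    effects: per-study treatment effect estimates
    variances: per-study within-study variances
    extra_weights: optional multiplicative weights (e.g. per-study
        completion rates, to down-weight studies with heavy attrition;
        this reweighting form is a hypothetical sketch)
    Returns (pooled effect, standard error, tau^2 estimate).
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    k = len(effects)
    w_fixed = 1.0 / variances
    # Fixed-effect pooled estimate and Cochran's Q statistic
    theta_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    q = np.sum(w_fixed * (effects - theta_fixed) ** 2)
    # Method-of-moments estimate of between-study variance tau^2
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w = 1.0 / (variances + tau2)
    if extra_weights is not None:
        w = w * np.asarray(extra_weights, dtype=float)
    theta = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))  # naive SE; a sketch, not exact under reweighting
    return theta, se, tau2
```

    With identical study effects, Q collapses to zero and tau^2 is truncated at zero, so the random-effects estimate reduces to the fixed-effect one.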