
    Selection models with monotone weight functions in meta analysis

    Publication bias, the fact that studies identified for inclusion in a meta analysis do not represent all studies on the topic of interest, is commonly recognized as a threat to the validity of the results of a meta analysis. One way to explicitly model publication bias is via selection models or weighted probability distributions. We adopt the nonparametric approach initially introduced by Dear (1992) but impose that the weight function w is monotonely non-increasing as a function of the p-value. Since in meta analysis one typically only has few studies or "observations", regularization of the estimation problem seems sensible. In addition, virtually all parametric weight functions proposed so far in the literature are in fact decreasing. We discuss how to estimate a decreasing weight function in the above model and illustrate the new methodology on two well-known examples. The new approach potentially offers more insight in the selection process than other methods and is more flexible than parametric approaches. Some basic properties of the log-likelihood function and computation of a p-value quantifying the evidence against the null hypothesis of a constant weight function are indicated. In addition, we provide an approximate selection bias adjusted profile likelihood confidence interval for the treatment effect. The corresponding software and the datasets used to illustrate it are provided as the R package selectMeta. This enables full reproducibility of the results in this paper.
    Comment: 15 pages, 2 figures. Some minor changes according to reviewer comments.
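    The actual estimator lives in the R package selectMeta; as an illustrative sketch only (not the package's likelihood-based method), the monotonicity constraint on the weight function w(p) can be enforced with the pool-adjacent-violators algorithm, here fitting a non-increasing step function to hypothetical raw weight estimates ordered by p-value:

    ```python
    import numpy as np

    def pava_decreasing(y, w=None):
        """Least-squares non-increasing fit to y via pool-adjacent-violators.
        w are optional observation weights (default: all ones)."""
        y = np.asarray(y, dtype=float)
        n = len(y)
        w = np.ones(n) if w is None else np.asarray(w, dtype=float)
        z = -y  # a non-increasing fit to y is a non-decreasing fit to -y
        vals, wts, cnts = [], [], []  # block value, block weight, block size
        for zi, wi in zip(z, w):
            vals.append(zi); wts.append(wi); cnts.append(1)
            # merge adjacent blocks while the non-decreasing order is violated
            while len(vals) > 1 and vals[-2] > vals[-1]:
                v2, w2, c2 = vals.pop(), wts.pop(), cnts.pop()
                v1, w1, c1 = vals.pop(), wts.pop(), cnts.pop()
                vals.append((v1 * w1 + v2 * w2) / (w1 + w2))
                wts.append(w1 + w2); cnts.append(c1 + c2)
        # expand block means back to one fitted value per observation
        return -np.repeat(vals, cnts)

    # hypothetical raw weights at the ordered p-values of the studies
    raw = [1.0, 0.9, 1.1, 0.3]
    fitted = pava_decreasing(raw)   # [1.0, 1.0, 1.0, 0.3]
    ```

    The middle violation (0.9 followed by 1.1) is pooled into a flat block, yielding a valid non-increasing weight function.
    
    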

    Nonparametric Bounds and Sensitivity Analysis of Treatment Effects

    This paper considers conducting inference about the effect of a treatment (or exposure) on an outcome of interest. In the ideal setting where treatment is assigned randomly, under certain assumptions the treatment effect is identifiable from the observable data and inference is straightforward. However, in other settings such as observational studies or randomized trials with noncompliance, the treatment effect is no longer identifiable without relying on untestable assumptions. Nonetheless, the observable data often do provide some information about the effect of treatment, that is, the parameter of interest is partially identifiable. Two approaches are often employed in this setting: (i) bounds are derived for the treatment effect under minimal assumptions, or (ii) additional untestable assumptions are invoked that render the treatment effect identifiable and then sensitivity analysis is conducted to assess how inference about the treatment effect changes as the untestable assumptions are varied. Approaches (i) and (ii) are considered in various settings, including assessing principal strata effects, direct and indirect effects and effects of time-varying exposures. Methods for drawing formal inference about partially identified parameters are also discussed.
    Comment: Published in Statistical Science (http://www.imstat.org/sts/) at http://dx.doi.org/10.1214/14-STS499 by the Institute of Mathematical Statistics (http://www.imstat.org).
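    The simplest instance of approach (i) is the classical worst-case bound for a bounded outcome: the counterfactual mean mixes an observed conditional mean with an unobserved one, and replacing the unobserved term by its logical extremes gives an interval. A minimal sketch with hypothetical data (the function name and inputs are illustrative, not from the paper):

    ```python
    import numpy as np

    def manski_ate_bounds(y, d, lo=0.0, hi=1.0):
        """Worst-case (no-assumption) bounds on the average treatment effect
        E[Y(1)] - E[Y(0)] for an outcome known to lie in [lo, hi].
        y: observed outcomes; d: 0/1 treatment indicator."""
        y, d = np.asarray(y, float), np.asarray(d, int)
        p1 = d.mean()              # P(D = 1)
        p0 = 1.0 - p1              # P(D = 0)
        m1 = y[d == 1].mean()      # E[Y | D = 1], observed
        m0 = y[d == 0].mean()      # E[Y | D = 0], observed
        # E[Y(1)] = E[Y|D=1] P(D=1) + E[Y(1)|D=0] P(D=0); the second term
        # is never observed, so bound it by lo and hi (same for E[Y(0)]).
        ey1_lo, ey1_hi = m1 * p1 + lo * p0, m1 * p1 + hi * p0
        ey0_lo, ey0_hi = m0 * p0 + lo * p1, m0 * p0 + hi * p1
        return ey1_lo - ey0_hi, ey1_hi - ey0_lo

    # hypothetical binary data: bounds always have width (1-p1) + (1-p0) = 1
    lo, hi = manski_ate_bounds([1, 1, 0, 0], [1, 1, 0, 0])  # (0.0, 1.0)
    ```

    The interval necessarily has width 1 for a [0, 1] outcome, which is exactly why approach (ii), adding assumptions plus sensitivity analysis, is attractive when the no-assumption bounds are uninformative.
    
    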

    Formal and Informal Model Selection with Incomplete Data

    Model selection and assessment with incomplete data pose challenges in addition to the ones encountered with complete data. There are two main reasons for this. First, many models describe characteristics of the complete data, in spite of the fact that only an incomplete subset is observed. Direct comparison between model and data is then less than straightforward. Second, many commonly used models are more sensitive to assumptions than in the complete-data situation and some of their properties vanish when they are fitted to incomplete, unbalanced data. These and other issues are brought forward using two key examples, one of a continuous and one of a categorical nature. We argue that model assessment ought to consist of two parts: (i) assessment of a model's fit to the observed data and (ii) assessment of the sensitivity of inferences to unverifiable assumptions, that is, to how a model describes the unobserved data given the observed ones.
    Comment: Published in Statistical Science (http://www.imstat.org/sts/) at http://dx.doi.org/10.1214/07-STS253 by the Institute of Mathematical Statistics (http://www.imstat.org).
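    Part (ii) of the proposed assessment, sensitivity to unverifiable assumptions about the unobserved data, is often operationalized as a delta adjustment: impute the missing values under an MAR-style model, shift the imputations by a sensitivity parameter delta representing the MNAR departure, and trace how the inference moves. A toy sketch with invented data (not the paper's examples):

    ```python
    import numpy as np

    # Hypothetical outcome vector with missing values (NaN). Under the
    # benchmark model, missing values are imputed at the observed mean;
    # delta shifts the imputations to express an unverifiable departure.
    y = np.array([2.1, 1.8, np.nan, 2.4, np.nan, 1.9])
    obs = y[~np.isnan(y)]
    n_mis = int(np.isnan(y).sum())

    def mean_under_delta(delta):
        """Completed-data mean when each missing value is imputed as
        (observed mean + delta); delta = 0 is the benchmark assumption."""
        imputed = obs.mean() + delta
        return (obs.sum() + n_mis * imputed) / len(y)

    # sweep the sensitivity parameter to see how the estimate moves
    sensitivity = {d: round(mean_under_delta(d), 3) for d in (-1.0, 0.0, 1.0)}
    ```

    If the substantive conclusion survives a plausible range of delta, it does not hinge on the unverifiable part of the model.
    
    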

    Accounting for Monotone Attrition in a Postpartum Depression Clinical Trial

    Longitudinal studies in public health, medicine and the social sciences are often complicated by monotone attrition, where a participant drops out before the end of the study and all his/her subsequent measurements are missing. To obtain accurate, unbiased results, it is of public health importance to utilize appropriate missing data analytic methods to address the issue of monotone attrition. The defining feature of longitudinal studies is that several measurements are taken for each participant over time. The commonly used methods to analyze incomplete longitudinal data, complete case analysis and last observation carried forward, are not recommended because they produce biased estimators. Simple imputation and multiple imputation procedures provide alternative approaches for addressing monotone attrition. However, simple imputation is difficult in a multivariate setting and produces biased estimators. Multiple imputation addresses those shortcomings and allows a straightforward assessment of the sensitivity of inferences to various models for non-response. This thesis reviews the literature on missing data mechanisms and missing data analysis methods for monotone attrition. Data from a postpartum depression clinical trial comparing the effects of two drugs (Nortriptyline and Sertraline) on remission status at 8 weeks were re-analyzed using these methods. The original analysis, which only used available data, was replicated first. Then patterns and predictors of attrition were identified. Last observation carried forward, mean imputation and multiple imputation were used to account for both monotone attrition and a small number of intermittent missing measurements. In multiple imputation, every missing measurement was imputed 6 times by predictive matching. Each of the 6 completed data sets was analyzed separately and the results of all the analyses were combined to obtain the overall estimates and standard errors.
    In each analysis, continuous remission levels were imputed but the probability of remission was analyzed. The original conclusion of no significant difference in probability of remission at week 8 between the two drug groups was sustained under last observation carried forward, mean imputation and multiple imputation. Most dropouts occurred during the first three weeks, and participants taking Sertraline who lived alone were more likely to drop out.
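    The combining step described above, analyzing each of the 6 completed data sets and pooling the results, is Rubin's rules: the pooled estimate is the mean of the per-imputation estimates, and the total variance adds the between-imputation spread (inflated by 1 + 1/m) to the average within-imputation variance. A sketch with made-up numbers (the thesis used real trial data):

    ```python
    import numpy as np

    def rubin_pool(estimates, std_errors):
        """Pool point estimates and standard errors from m imputed-data
        analyses via Rubin's rules."""
        q = np.asarray(estimates, float)
        se = np.asarray(std_errors, float)
        m = len(q)
        qbar = q.mean()                  # pooled point estimate
        W = (se ** 2).mean()             # within-imputation variance
        B = q.var(ddof=1)                # between-imputation variance
        T = W + (1 + 1 / m) * B          # total variance
        return qbar, float(np.sqrt(T))

    # hypothetical treatment-effect estimates from m = 6 imputations
    est, se = rubin_pool([0.2, 0.3, 0.25, 0.2, 0.3, 0.25],
                         [0.1] * 6)      # est = 0.25
    ```

    The (1 + 1/m) factor is what distinguishes multiple imputation from single imputation: it charges the analysis for the extra uncertainty due to the missing data, so the pooled standard error exceeds the naive within-imputation one.
    
    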

    Analysing randomised controlled trials with missing data: choice of approach affects conclusions

    Copyright © 2012 Elsevier Inc. All rights reserved. PMID: 22265924 [PubMed - indexed for MEDLINE]. Peer-reviewed postprint.