
    Structural Nested Models and G-estimation: The Partially Realized Promise

    Structural nested models (SNMs) and the associated method of G-estimation were first proposed by James Robins over two decades ago as approaches to modeling and estimating the joint effects of a sequence of treatments or exposures. The models and estimation methods have since been extended to address a broader range of problems, and they have considerable advantages over other methods developed for estimating such joint effects. Despite these advantages, the application of these methods in applied research has been relatively infrequent; we view this as unfortunate. To remedy this, we provide an overview of the models and estimation methods as developed, primarily by Robins, over the years. We provide insight into their advantages over other methods, and consider some possible reasons why the methods have not been more broadly adopted, as well as possible remedies. Finally, we consider several extensions of the standard models and estimation methods. Comment: Published at http://dx.doi.org/10.1214/14-STS493 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
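
    For readers unfamiliar with the mechanics, the following minimal sketch illustrates g-estimation for a single-time-point structural nested mean model with a one-dimensional causal parameter psi. The function name, the grid search, and the logistic treatment model are illustrative choices, not the paper's own implementation; the idea is simply to find the value of psi for which the "blipped-down" outcome H(psi) = Y - psi*A is no longer associated with treatment A given the measured confounders L.

        import numpy as np
        import statsmodels.api as sm

        def g_estimate(Y, A, L, psi_grid):
            # Illustrative single-time-point g-estimation for an additive
            # structural nested mean model E[Y(a) - Y(0) | L] = psi * a.
            best_psi, best_stat = None, np.inf
            for psi in psi_grid:
                H = Y - psi * A                      # candidate "blipped-down" outcome
                X = sm.add_constant(np.column_stack([L, H]))
                fit = sm.Logit(A, X).fit(disp=0)     # treatment model given L and H(psi)
                z = abs(fit.tvalues[-1])             # association between H(psi) and A
                if z < best_stat:                    # g-estimate: psi with weakest association
                    best_stat, best_psi = z, psi
            return best_psi

    In this sketch the grid search stands in for solving the estimating equation exactly; the point of the no-association criterion is that, under no unmeasured confounding, the correctly blipped-down outcome behaves like a counterfactual under no treatment and therefore does not predict treatment assignment.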

    Likelihood Inference for Models with Unobservables: Another View

    There have been controversies among statisticians on (i) what to model and (ii) how to make inferences from models with unobservables. One such controversy concerns the difference between estimation methods for marginal means, which need not have a probabilistic basis, and statistical models having unobservables with a probabilistic basis. Another concerns likelihood-based inference for statistical models with unobservables. This requires an extended-likelihood framework, and we show how one such extension, hierarchical likelihood, allows it to be done. Modeling of unobservables leads to rich classes of new probabilistic models from which likelihood-type inferences can be made naturally with hierarchical likelihood. Comment: This paper is discussed in [arXiv:1010.0804], [arXiv:1010.0807], [arXiv:1010.0810]; rejoinder at [arXiv:1010.0814]. Published at http://dx.doi.org/10.1214/09-STS277 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
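
    As a point of reference (a standard definition rather than anything specific to this article's argument), the hierarchical likelihood for data y, unobservables v, and parameters theta and lambda is usually written as

        h(\theta, \lambda; y, v) = \log f(y \mid v; \theta) + \log f(v; \lambda),

    so that inference on the fixed parameters and the unobservables proceeds from the single function h, with suitably adjusted profile likelihoods used for dispersion components.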

    The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference

    Background: Wiener-Granger causality (“G-causality”) is a statistical notion of causality applicable to time series data, whereby cause precedes, and helps predict, effect. It is defined in both the time and frequency domains, and allows for the conditioning out of common causal influences. Originally developed in the context of econometric theory, it has since achieved broad application in the neurosciences and beyond. Prediction in the G-causality formalism is based on VAR (Vector AutoRegressive) modelling.
    New Method: The MVGC MATLAB Toolbox approach to G-causal inference is based on multiple equivalent representations of a VAR model by (i) regression parameters, (ii) the autocovariance sequence and (iii) the cross-power spectral density of the underlying process. It features a variety of algorithms for moving between these representations, enabling selection of the most suitable algorithms with regard to computational efficiency and numerical accuracy.
    Results: In this paper we explain the theoretical basis, computational strategy and application to empirical G-causal inference of the MVGC Toolbox. We also show via numerical simulations the advantages of our Toolbox over previous methods in terms of computational accuracy and statistical inference.
    Comparison with Existing Method(s): The standard method of computing G-causality involves estimation of parameters for both a full and a nested (reduced) VAR model. The MVGC approach, by contrast, avoids explicit estimation of the reduced model, thus eliminating a source of estimation error and improving statistical power, and in addition facilitates fast and accurate estimation of the computationally awkward case of conditional G-causality in the frequency domain.
    Conclusions: The MVGC Toolbox implements a flexible, powerful and efficient approach to G-causal inference.
    Keywords: Granger causality, vector autoregressive modelling, time series analysis
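
    To make the full-versus-reduced comparison concrete, here is a minimal pairwise, time-domain sketch of the standard approach that the MVGC Toolbox improves upon. The function name and the plain least-squares fitting are illustrative; the toolbox itself works from the VAR parameters, the autocovariance sequence, or the cross-spectral density rather than refitting a reduced model.

        import numpy as np

        def gc_time_domain(x, y, p):
            # Standard (non-MVGC) pairwise Granger causality from y to x at VAR order p:
            # F = ln( var(reduced residuals) / var(full residuals) ).
            n = len(x)
            X_lags = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
            Y_lags = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
            target = x[p:]
            reduced = np.column_stack([np.ones(n - p), X_lags])        # past of x only
            full = np.column_stack([np.ones(n - p), X_lags, Y_lags])   # past of x and of y
            res_r = target - reduced @ np.linalg.lstsq(reduced, target, rcond=None)[0]
            res_f = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
            return np.log(res_r.var() / res_f.var())

    The estimation error that the abstract refers to enters through the separate fit of the reduced model; working from a single full-model fit and its equivalent representations avoids that step.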

    Methods for Clustered Competing Risks Data and Causal Inference using Instrumental Variables for Censored Time-to-event Data

    In this dissertation, we propose new methods for analysis of clustered competing risks data (Chapters 1 and 2) and for instrumental variable (IV) analysis of univariate censored time-to-event data and competing risks data (Chapters 3 and 4). In Chapter 1, we propose estimating center effects through cause-specific proportional hazards frailty models that allow correlation among a center’s cause-specific effects. To evaluate center performance, we propose a directly standardized excess cumulative incidence (ECI) measure. We apply our methods to evaluate Organ Procurement Organizations with respect to (i) receipt of a kidney transplant and (ii) death on the wait-list. In Chapter 2, we propose to model the effects of cluster and individual-level covariates directly on the cumulative incidence functions of each risk through a semiparametric mixture component model with cluster-specific random effects. Our model permits joint inference on all competing events and provides estimates of the effects of clustering. We apply our method to multicenter competing risks data. In Chapter 3, we turn our focus to causal inference in the censored time-to-event setting in the presence of unmeasured confounders. We develop weighted IV estimators of the complier average causal effect on the restricted mean survival time. Our method accommodates instrument-outcome confounding and covariate dependent censoring. We establish the asymptotic properties, derive easily implementable variance estimators, and apply our method to compare modalities for end stage renal disease (ESRD) patients using national registry data. In Chapter 4, we develop IV analysis methods for competing risks data. Our method permits simultaneous inference of exposure effects on the absolute risk of all competing events and accommodates exposure dependent censoring. We apply the methods to compare dialytic modalities for ESRD patients with respect to risk of death from (i) cardiovascular diseases and (ii) other causes. PhD dissertation, Biostatistics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144110/1/shdharma_1.pd
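
    As a brief aid to Chapter 3's estimand (standard notation, not the dissertation's own), the restricted mean survival time up to a horizon tau is \mu(\tau) = E[\min(T, \tau)] = \int_0^{\tau} S(t)\,dt, and the complier average causal effect targeted by the weighted IV estimators can be written as

        \Delta(\tau) = E\left[\min\{T(1), \tau\} - \min\{T(0), \tau\} \mid \text{complier}\right],

    i.e., the difference in expected survival time lived before tau under treatment versus control, among those whose treatment status is moved by the instrument.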

    Standardization and Control for Confounding in Observational Studies: A Historical Perspective

    Control for confounders in observational studies was generally handled through stratification and standardization until the 1960s. Standardization typically reweights the stratum-specific rates so that exposure categories become comparable. With the development first of log-linear models, and soon also of nonlinear regression techniques (logistic regression, failure-time regression) that the emerging computers could handle, regression modelling became the preferred approach, as was already the case with multiple regression analysis for continuous outcomes. Since the mid-1990s it has become increasingly obvious that weighting methods are still often useful, and sometimes even necessary. Against this background, we describe the emergence of the modelling approach and the refinement of the weighting approach for confounder control. Comment: Published at http://dx.doi.org/10.1214/13-STS453 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
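
    As a one-line reminder of what the reweighting does (standard textbook notation, not taken from the paper), a directly standardized rate for exposure group e is

        r_e^{\mathrm{std}} = \sum_{s} w_s \, r_{es}, \qquad \sum_{s} w_s = 1,

    where r_{es} is the rate in confounder stratum s within exposure group e and the weights w_s come from a common standard population, so that comparisons across exposure groups are no longer driven by differing stratum compositions.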

    Combining multiple observational data sources to estimate causal effects

    The era of big data has witnessed an increasing availability of multiple data sources for statistical analyses. We consider estimation of causal effects by combining big main data with unmeasured confounders and smaller validation data with supplementary information on these confounders. Under the unconfoundedness assumption with completely observed confounders, the smaller validation data allow for constructing consistent estimators of causal effects, but the big main data can in general only give error-prone estimators. However, by leveraging the information in the big main data in a principled way, we can improve the estimation efficiency while preserving the consistency of the initial estimators based solely on the validation data. Our framework applies to asymptotically normal estimators, including the commonly used regression imputation, weighting, and matching estimators, and does not require a correct specification of the model relating the unmeasured confounders to the observed variables. We also propose appropriate bootstrap procedures, which make our method straightforward to implement using software routines for existing estimators.
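
    One common way to make such a combination precise (a sketch of the general idea, not necessarily the authors' exact construction) is a control-variate-style adjustment: let \hat\theta_{\mathrm{val}} be a consistent estimator from the validation data, and let \hat\theta_{\mathrm{ep,val}} and \hat\theta_{\mathrm{ep,main}} be the same error-prone estimator computed on the validation and main data, respectively. One then forms

        \hat\theta = \hat\theta_{\mathrm{val}} - \hat\Gamma \, (\hat\theta_{\mathrm{ep,val}} - \hat\theta_{\mathrm{ep,main}}),

    which remains consistent because \hat\theta_{\mathrm{ep,val}} - \hat\theta_{\mathrm{ep,main}} converges to zero even when the error-prone estimators are biased, and choosing \hat\Gamma to minimize the estimated asymptotic variance can only reduce, never increase, the variance relative to using \hat\theta_{\mathrm{val}} alone.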