81 research outputs found

    Frequentist versus Bayesian approaches to multiple testing

    Multiple tests arise frequently in epidemiologic research. However, the issue of multiplicity adjustment is surrounded by confusion and controversy, and there is no uniform agreement on whether or when adjustment is warranted. In this paper we compare frequentist and Bayesian frameworks for multiple testing. We argue that the frequentist framework leads to logical difficulties and is unable to distinguish between relevant and irrelevant multiplicity adjustments. We further argue that these logical difficulties resolve within the Bayesian framework, and that the Bayesian framework makes a clear and coherent distinction between relevant and irrelevant adjustments. We use directed acyclic graphs to illustrate the differences between the two frameworks and to motivate our arguments.
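
    The abstract contrasts the two frameworks only in words. The sketch below is not taken from the paper; it uses invented numbers and a simple normal-normal model to illustrate the kind of contrast at stake: a Bonferroni correction whose result depends on how many other tests were run, versus a Bayesian hierarchical analysis in which each effect is shrunk by a shared prior and its posterior does not change merely because other hypotheses were tested.

        # Hypothetical illustration (not the paper's analysis): m tests of
        # normally distributed effect estimates, handled two ways.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        m, se = 20, 0.25                          # number of tests, standard error
        true_effects = rng.normal(0, 0.3, m)      # assumed hierarchical truth
        estimates = rng.normal(true_effects, se)

        # Frequentist: per-test p-values, then a Bonferroni adjustment
        # that scales with the total number of tests m.
        p_raw = 2 * stats.norm.sf(np.abs(estimates) / se)
        p_bonferroni = np.minimum(p_raw * m, 1.0)

        # Bayesian: effects drawn from a common N(0, tau^2) prior; each posterior
        # mean is a shrunken estimate, and the posterior probability that an
        # effect is positive does not depend on how many other tests were run.
        tau2 = 0.3 ** 2
        shrink = tau2 / (tau2 + se ** 2)
        post_mean = shrink * estimates
        post_sd = np.sqrt(shrink * se ** 2)
        prob_positive = stats.norm.sf(0, loc=post_mean, scale=post_sd)

        print(np.round(p_bonferroni, 3))
        print(np.round(prob_positive, 3))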

    Sensitivity Analysis of G-estimators to Invalid Instrumental Variables

    Instrumental variables regression is commonly used in the analysis of observational data. The instrumental variables are used to make causal inferences about the effect of an exposure in the presence of unmeasured confounders. A valid instrumental variable is associated with the exposure, affects the outcome only through the exposure (exclusion restriction), and is not confounded with the outcome (exogeneity). These assumptions are generally untestable and rely on subject-matter knowledge. Therefore, a sensitivity analysis is desirable to assess the impact of assumption violations on the estimated parameters. In this paper, we propose and demonstrate a new method of sensitivity analysis for G-estimators in causal linear and non-linear models. We introduce two novel aspects of sensitivity analysis in instrumental variables studies. The first is a single sensitivity parameter that captures violations of the exclusion and exogeneity assumptions. The second is an application of the method to non-linear models. The introduced framework is theoretically justified and is illustrated via a simulation study. Finally, we illustrate the method by application to real-world data and provide practitioners with guidelines on conducting sensitivity analysis.
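
    As a rough, hypothetical illustration of the kind of analysis described (the paper's actual sensitivity parameter and estimators are not reproduced here), the sketch below fits a linear structural mean model by G-estimation with an instrument Z and then re-solves the estimating equation over a grid of assumed direct effects of Z on the outcome, i.e. assumed exclusion violations. All variable names and numbers are made up.

        # Hypothetical sketch: G-estimation of a linear causal effect with an
        # instrument, plus a sensitivity grid over an assumed exclusion violation.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        U = rng.normal(size=n)                       # unmeasured confounder
        Z = rng.binomial(1, 0.5, size=n)             # instrument
        A = 0.8 * Z + 0.6 * U + rng.normal(size=n)   # exposure
        psi_true, delta_true = 1.5, 0.3              # causal effect; true direct Z->Y effect
        Y = psi_true * A + delta_true * Z + 0.7 * U + rng.normal(size=n)

        def g_estimate(Y, A, Z, delta=0.0):
            """Solve sum (Z - mean(Z)) * (Y - psi*A - delta*Z) = 0 for psi."""
            Zc = Z - Z.mean()
            return (Zc @ Y - delta * (Zc @ Z)) / (Zc @ A)

        # delta = 0 assumes a valid instrument; larger delta encodes a stronger
        # assumed violation of the exclusion restriction.
        for delta in (0.0, 0.15, 0.3, 0.45):
            print(f"delta={delta:.2f} -> psi_hat={g_estimate(Y, A, Z, delta):.3f}")

    Reading the estimate across the grid shows how the causal conclusion would change under increasingly severe assumed violations of instrument validity.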

    Estimation of the Number Needed to Treat, the Number Needed to Expose, and the Exposure Impact Number with Instrumental Variables

    The number needed to treat (NNT) is an efficacy index defined as the average number of patients who need to be treated to attain one additional treatment benefit. In observational studies, specifically in epidemiology, the adequacy of the population-wise NNT is questionable, since the characteristics of the exposed group may differ substantially from those of the unexposed. To address this issue, group-wise efficacy indices were defined: the exposure impact number (EIN) for the exposed group and the number needed to expose (NNE) for the unexposed group. Each index answers a distinct research question because it targets a distinct sub-population. In observational studies, group allocation is typically affected by confounders that may be unmeasured. Available estimation methods, which rely either on randomization or on the sufficiency of the measured covariates for confounding control, therefore yield inconsistent estimators of the true NNT (EIN, NNE) in such settings. Using Rubin's potential outcomes framework, we explicitly define the NNT and its derived indices as causal contrasts. We then introduce a novel method that uses instrumental variables to estimate these three indices in observational studies. We present two analytical examples and a corresponding simulation study. The simulation study illustrates that the novel estimators are consistent, unlike previously available methods, and that their confidence intervals attain the nominal coverage rates. Finally, a real-world data example of the effect of vitamin D deficiency on the mortality rate is presented.
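
    The causal contrasts named above can be written compactly in potential-outcomes notation. The formalization in the sketch below, for a binary beneficial outcome with potential outcomes Y(1), Y(0) and exposure indicator A, is a common one and is assumed here rather than quoted from the paper; the Wald-type instrumental-variable estimate is likewise only an illustration, not the paper's estimator.

        # Hypothetical illustration on simulated data: the causal NNT, and a
        # Wald-type IV estimate of the risk difference that it inverts.
        #   NNT = 1 / ( Pr[Y(1)=1] - Pr[Y(0)=1] )
        #   EIN conditions the same contrast on the exposed (A=1),
        #   NNE on the unexposed (A=0).
        import numpy as np

        rng = np.random.default_rng(2)
        n = 200_000
        U = rng.binomial(1, 0.5, n)                    # unmeasured confounder
        Z = rng.binomial(1, 0.5, n)                    # instrument
        A = rng.binomial(1, 0.2 + 0.4 * Z + 0.3 * U)   # exposure
        Y1 = rng.binomial(1, 0.7 + 0.3 * U)            # potential outcome if exposed
        Y0 = rng.binomial(1, 0.5 + 0.2 * U)            # potential outcome if unexposed
        Y = np.where(A == 1, Y1, Y0)

        nnt_true = 1 / (Y1.mean() - Y0.mean())
        ein_true = 1 / (Y1[A == 1].mean() - Y0[A == 1].mean())
        nne_true = 1 / (Y1[A == 0].mean() - Y0[A == 0].mean())

        # Naive (confounded) estimate versus a Wald-type IV estimate of the
        # population risk difference, each inverted to an NNT-style index.
        rd_naive = Y[A == 1].mean() - Y[A == 0].mean()
        rd_iv = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (A[Z == 1].mean() - A[Z == 0].mean())
        print(f"true NNT={nnt_true:.2f}, EIN={ein_true:.2f}, NNE={nne_true:.2f}")
        print(f"naive 1/RD={1 / rd_naive:.2f}, IV-based 1/RD={1 / rd_iv:.2f}")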

    Hauerwas and the Law: Framing a Productive Conversation

    Background: Meal-Q and its shorter version, MiniMeal-Q, are 2 new Web-based food frequency questionnaires. Their meal-based and interactive format was designed to promote ease of use and to minimize answering time, desirable improvements in large epidemiological studies. Objective: We evaluated the validity of energy and macronutrient intake assessed with Meal-Q and MiniMeal-Q, as well as the reproducibility of Meal-Q. Methods: Healthy volunteers aged 20-63 years recruited from Stockholm County filled out the 174-item Meal-Q. The questionnaire was compared to 7-day weighed food records (WFR; n=163) for energy and macronutrient intake, and to doubly labeled water (DLW; n=39) for total energy expenditure. In addition, the 126-item MiniMeal-Q was evaluated in a simulated validation using truncated Meal-Q data. We also assessed the answering time and ease of use of both questionnaires. Results: Bland-Altman plots showed a bias that varied across the intake range for all validity comparisons. Cross-classification of quartiles placed 70%-86% of participants in the same or adjacent quartile with the WFR and 77% with DLW. Deattenuated and energy-adjusted Pearson correlation coefficients with the WFR ranged from r=0.33 to r=0.74 for macronutrients; the correlation for energy was r=0.18. Correlations with DLW were r=0.42 for Meal-Q and r=0.38 for MiniMeal-Q. Intraclass correlations for Meal-Q ranged from r=0.57 to r=0.90. Median answering time was 17 minutes for Meal-Q and 7 minutes for MiniMeal-Q, and participants rated both questionnaires as easy to use. Conclusions: Meal-Q and MiniMeal-Q are easy to use and have short answering times. The ranking agreement with the reference methods is good for most nutrients for both questionnaires, and Meal-Q shows fair reproducibility.
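
    For readers unfamiliar with the agreement measures quoted in the Results, the sketch below computes two of them on made-up data: quartile cross-classification against a reference method and a Rosner-Willett style deattenuated correlation that corrects for day-to-day variation in the food records. The simulated intakes, sample sizes, and variances are assumptions for illustration only, not the study's data.

        # Hypothetical data: a questionnaire (FFQ) and a 7-day weighed food record
        # (WFR) measuring energy intake for the same participants.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n, k = 160, 7                                   # participants, record days
        truth = rng.normal(2000, 400, n)                # "true" intake (kcal/day)
        ffq = truth + rng.normal(0, 350, n)             # questionnaire measurement
        wfr_days = truth[:, None] + rng.normal(0, 500, (n, k))
        wfr = wfr_days.mean(axis=1)                     # mean of the 7 record days

        # Quartile cross-classification: share of participants placed in the
        # same or an adjacent quartile by the two methods.
        q_ffq = np.searchsorted(np.quantile(ffq, [0.25, 0.5, 0.75]), ffq)
        q_wfr = np.searchsorted(np.quantile(wfr, [0.25, 0.5, 0.75]), wfr)
        same_or_adjacent = np.mean(np.abs(q_ffq - q_wfr) <= 1)

        # Deattenuation: correct the observed correlation for within-person
        # (day-to-day) variation in the reference,
        # r_true ~ r_obs * sqrt(1 + (s_w^2 / s_b^2) / k).
        r_obs, _ = stats.pearsonr(ffq, wfr)
        s_w2 = wfr_days.var(axis=1, ddof=1).mean()      # within-person variance
        s_b2 = wfr.var(ddof=1) - s_w2 / k               # between-person variance
        r_deatt = r_obs * np.sqrt(1 + (s_w2 / s_b2) / k)

        print(f"same/adjacent quartile: {same_or_adjacent:.0%}, "
              f"r_obs={r_obs:.2f}, deattenuated r={r_deatt:.2f}")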