
    Modal-based estimation via heterogeneity-penalized weighting: model averaging for consistent and efficient estimation in Mendelian randomization when a plurality of candidate instruments are valid.

    BACKGROUND: A robust method for Mendelian randomization does not require all genetic variants to be valid instruments to give consistent estimates of a causal parameter. Several such methods have been developed, including a mode-based estimation method that gives consistent estimates if a plurality of genetic variants are valid instruments; that is, no subset of invalid instruments estimating the same causal parameter is larger than the subset of valid instruments. METHODS: We here develop a model-averaging method that gives consistent estimates under the same 'plurality of valid instruments' assumption. The method considers a mixture distribution of estimates derived from each subset of genetic variants. The estimates are weighted such that subsets with more genetic variants receive more weight, unless variants in the subset have heterogeneous causal estimates, in which case that subset is severely down-weighted. The mode of this mixture distribution is the causal estimate. This heterogeneity-penalized model-averaging method has several technical advantages over the previously proposed mode-based estimation method. RESULTS: In an extensive simulation analysis, the heterogeneity-penalized model-averaging method outperformed the mode-based estimation method in terms of efficiency and outperformed other robust methods in terms of Type 1 error rate. In an applied analysis, the proposed method suggested two distinct mechanisms by which inflammation affects coronary heart disease risk, with subsets of variants suggesting both positive and negative causal effects. CONCLUSIONS: The heterogeneity-penalized model-averaging method is an additional robust method for Mendelian randomization with excellent theoretical and practical properties, and can reveal features in the data such as the presence of multiple causal mechanisms.
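
    The weighting scheme described in this abstract lends itself to a short illustration. The sketch below is not the authors' implementation: the choice of subset weight (total precision multiplied by exp(-Q/2)) and the grid search for the mode are assumptions made to match the abstract's description, working from summary-level associations beta_exp, beta_out and se_out.

```python
import numpy as np
from itertools import combinations

def heterogeneity_penalized_mode(beta_exp, beta_out, se_out):
    """Illustrative sketch of heterogeneity-penalized model averaging.

    beta_exp, beta_out, se_out: summary-level SNP-exposure associations,
    SNP-outcome associations and their standard errors (NumPy arrays).
    The exact weighting in the published method may differ; here each
    subset's weight grows with its total precision and is penalized by
    exp(-Q/2), where Q is Cochran's heterogeneity statistic.
    """
    ratio = beta_out / beta_exp               # per-variant Wald ratio estimates
    ratio_se = np.abs(se_out / beta_exp)      # first-order standard errors
    inv_var = 1.0 / ratio_se**2

    estimates, ses, weights = [], [], []
    # Enumerating all subsets is exponential, so this is only feasible
    # for a modest number of candidate instruments.
    for k in range(2, len(ratio) + 1):
        for subset in combinations(range(len(ratio)), k):
            s = np.array(subset)
            w = inv_var[s]
            est = np.sum(w * ratio[s]) / np.sum(w)        # subset IVW estimate
            q = np.sum(w * (ratio[s] - est) ** 2)         # heterogeneity (Cochran's Q)
            estimates.append(est)
            ses.append(np.sqrt(1.0 / np.sum(w)))
            weights.append(np.sum(w) * np.exp(-q / 2.0))  # size reward, heterogeneity penalty
    estimates, ses, weights = map(np.asarray, (estimates, ses, weights))
    weights = weights / weights.sum()

    # Mixture of normals centred at the subset estimates; the causal
    # estimate is the mode of this mixture, found by a grid search.
    grid = np.linspace(estimates.min() - 3 * ses.max(),
                       estimates.max() + 3 * ses.max(), 2001)
    density = np.sum(
        weights[:, None]
        * np.exp(-0.5 * ((grid[None, :] - estimates[:, None]) / ses[:, None]) ** 2)
        / ses[:, None],
        axis=0,
    )
    return grid[np.argmax(density)]
```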

    Using Instruments for Selection to Adjust for Selection Bias in Mendelian Randomization

    Selection bias is a common concern in epidemiologic studies. In the literature, selection bias is often viewed as a missing-data problem. Popular approaches to adjust for bias due to missing data, such as inverse probability weighting, rely on the assumption that data are missing at random and can yield biased results if this assumption is violated. In observational studies with outcome data missing not at random, Heckman's sample selection model can be used to adjust for bias due to missing data. In this paper, we review Heckman's method and a similar approach proposed by Tchetgen Tchetgen and Wirth (2017). We then discuss how to apply these methods to Mendelian randomization analyses using individual-level data, with missing data for the exposure, the outcome, or both. We explore whether genetic variants associated with participation can be used as instruments for selection. We then describe how to obtain missingness-adjusted Wald ratio, two-stage least squares and inverse-variance weighted estimates. The two methods are evaluated and compared in simulations, with results suggesting that they can both mitigate selection bias but may yield parameter estimates with large standard errors in some settings. In an illustrative real-data application, we investigate the effects of body mass index on smoking using data from the Avon Longitudinal Study of Parents and Children. Comment: Main part: 27 pages, 3 figures, 4 tables. Supplement: 20 pages, 5 figures, 10 tables. Paper currently under review.
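
    As an illustration of the general idea, the sketch below applies a Heckman-style two-step correction to the second stage of a two-stage least squares MR analysis with individual-level data, using genetic variants associated with participation as instruments for selection. It is not the estimator derived in the paper: the function name, argument names and the simple plug-in of the inverse Mills ratio are assumptions, and valid standard errors would need to account for both estimation steps.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_adjusted_2sls(G_exp, G_sel, x, y, selected):
    """Heckman-style two-step correction for a 2SLS Mendelian randomization
    analysis in which the outcome y is missing not at random.

    G_exp    : genetic instruments for the exposure (n x k array)
    G_sel    : genetic variants used as instruments for selection (n x m array)
    x, y     : exposure and outcome (y observed only where selected == 1)
    selected : indicator that the outcome is observed
    Sketch of the general approach only, not the paper's estimator.
    """
    # Step 1: probit model for selection, including the selection instruments
    Z = sm.add_constant(np.column_stack([G_exp, G_sel]))
    probit = sm.Probit(selected, Z).fit(disp=0)
    lin_pred = Z @ probit.params
    imr = norm.pdf(lin_pred) / norm.cdf(lin_pred)      # inverse Mills ratio

    # Step 2: two-stage least squares among the selected individuals,
    # adding the inverse Mills ratio to the outcome model
    sel = selected.astype(bool)
    first_stage = sm.OLS(x[sel], sm.add_constant(G_exp[sel])).fit()
    x_hat = first_stage.fittedvalues
    X2 = sm.add_constant(np.column_stack([x_hat, imr[sel]]))
    second_stage = sm.OLS(y[sel], X2).fit()
    return second_stage.params[1]                      # causal effect estimate
```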

    Two sample Mendelian Randomisation using an outcome from a multilevel model of disease progression

    Identifying factors that are causes of disease progression, especially in neurodegenerative diseases, is of considerable interest. Disease progression can be described as a trajectory of outcome over time, for example a linear trajectory having both an intercept (severity at time zero) and a slope (rate of change). Two sample Mendelian Randomisation (2SMR) is a technique for identifying causal relationships between an exposure and an outcome in observational data whilst avoiding bias due to confounding. We consider a multivariate approach to 2SMR using a multilevel model for disease progression to estimate the causal effect an exposure has on the intercept and slope. We carry out a simulation study comparing a naïve univariate 2SMR approach to a multivariate 2SMR approach with one exposure that affects both the intercept and slope of an outcome that changes linearly with time since diagnosis. Across six different scenarios, the simulation study results for both approaches were similar, with no evidence of non-zero bias and appropriate coverage of the 95% confidence intervals (93.4–96.2% for the intercept and 94.5–96.0% for the slope). The multivariate approach gives better joint coverage of the intercept and slope effects. We also apply our method to two Parkinson's cohorts to examine the effect body mass index (BMI) has on disease progression. There was no strong evidence that BMI affects disease progression; however, the confidence intervals for both intercept and slope were wide.
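
    A joint fit of the intercept and slope effects can be sketched as a generalised least squares problem over SNP-level summary associations. The code below is illustrative only and is not the authors' estimator: it assumes that per-SNP associations with the trajectory intercept and slope (beta_int, beta_slope), together with their 2 x 2 covariance matrices, are available from the multilevel model of progression.

```python
import numpy as np

def multivariate_2smr(beta_x, beta_int, beta_slope, cov_outcome):
    """Sketch of a joint two-sample MR fit when the outcome is a disease
    progression trajectory summarised by an intercept and a slope.

    beta_x      : SNP-exposure associations (k,)
    beta_int    : SNP associations with the trajectory intercept (k,)
    beta_slope  : SNP associations with the trajectory slope (k,)
    cov_outcome : per-SNP 2 x 2 covariance matrices of (beta_int, beta_slope)
    Returns the causal effect of the exposure on (intercept, slope).
    """
    k = len(beta_x)
    # Interleave the intercept and slope associations SNP by SNP
    y = np.column_stack([beta_int, beta_slope]).reshape(2 * k)
    # Each SNP contributes one row per outcome dimension
    X = np.zeros((2 * k, 2))
    X[0::2, 0] = beta_x        # intercept equations
    X[1::2, 1] = beta_x        # slope equations
    # Block-diagonal weight matrix built from the per-SNP covariance matrices
    W = np.zeros((2 * k, 2 * k))
    for j in range(k):
        W[2 * j:2 * j + 2, 2 * j:2 * j + 2] = np.linalg.inv(cov_outcome[j])
    # Generalised least squares solution
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return theta               # (effect on intercept, effect on slope)
```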

    Genetic Determinants of Lipids and Cardiovascular Disease Outcomes: A Wide-Angled Mendelian Randomization Investigation.

    BACKGROUND: Evidence from randomized trials has shown that therapies that lower LDL (low-density lipoprotein)-cholesterol and triglycerides reduce coronary artery disease (CAD) risk. However, there is still uncertainty about their effects on other cardiovascular outcomes. We therefore performed a systematic investigation of causal relationships between circulating lipids and cardiovascular outcomes using a Mendelian randomization approach. METHODS: In the primary analysis, we performed 2-sample multivariable Mendelian randomization using data from participants of European ancestry. We also conducted univariable analyses using inverse-variance weighted and robust methods, and gene-specific analyses using variants that can be considered as proxies for specific lipid-lowering medications. We obtained associations with lipid fractions from the Global Lipids Genetics Consortium, a meta-analysis of 188 577 participants, and genetic associations with cardiovascular outcomes from 367 703 participants in UK Biobank. RESULTS: For LDL-cholesterol, in addition to the expected positive associations with CAD risk (odds ratio [OR] per 1 SD increase, 1.45 [95% CI, 1.35-1.57]) and other atheromatous outcomes (ischemic cerebrovascular disease and peripheral vascular disease), we found independent associations of genetically predicted LDL-cholesterol with abdominal aortic aneurysm (OR, 1.75 [95% CI, 1.40-2.17]) and aortic valve stenosis (OR, 1.46 [95% CI, 1.25-1.70]). Genetically predicted triglyceride levels were positively associated with CAD (OR, 1.25 [95% CI, 1.12-1.40]), aortic valve stenosis (OR, 1.29 [95% CI, 1.04-1.61]), and hypertension (OR, 1.17 [95% CI, 1.07-1.27]), but inversely associated with venous thromboembolism (OR, 0.79 [95% CI, 0.67-0.93]) and hemorrhagic stroke (OR, 0.78 [95% CI, 0.62-0.98]). We also found positive associations of genetically predicted LDL-cholesterol and triglycerides with heart failure that appeared to be mediated by CAD. CONCLUSIONS: Lowering LDL-cholesterol is likely to prevent abdominal aortic aneurysm and aortic stenosis, in addition to CAD and other atheromatous cardiovascular outcomes. Lowering triglycerides is likely to prevent CAD and aortic valve stenosis but may increase thromboembolic risk.
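
    For reference, the multivariable analysis described here rests on the standard multivariable inverse-variance weighted estimator, which can be sketched as a weighted regression of SNP-outcome associations on the matrix of SNP-lipid associations with no intercept. The function below is a generic outline under that assumption, not the paper's analysis code.

```python
import numpy as np

def multivariable_ivw(beta_exposures, beta_outcome, se_outcome):
    """Minimal multivariable inverse-variance weighted MR estimator.

    beta_exposures : k x p matrix of SNP associations with p lipid fractions
                     (e.g. LDL-cholesterol, HDL-cholesterol, triglycerides)
    beta_outcome   : SNP associations with the cardiovascular outcome (k,)
    se_outcome     : their standard errors (k,)
    Returns one mutually adjusted causal effect estimate per exposure.
    """
    w = np.sqrt(1.0 / se_outcome**2)
    # Weighted least squares of the outcome associations on the exposure
    # associations, with no intercept term
    theta, *_ = np.linalg.lstsq(beta_exposures * w[:, None],
                                beta_outcome * w, rcond=None)
    return theta
```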

    Mendelian randomization for studying the effects of perturbing drug targets [version 1; peer review: awaiting peer review]

    Drugs whose targets have genetic evidence to support efficacy and safety are more likely to be approved after clinical development. In this paper, we provide an overview of how natural sequence variation in the genes that encode drug targets can be used in Mendelian randomization analyses to offer insight into mechanism-based efficacy and adverse effects. Large databases of summary-level genetic association data are increasingly available and can be leveraged to identify and validate variants that serve as proxies for drug target perturbation. As with all empirical research, Mendelian randomization has limitations, including genetic confounding, its consideration of lifelong effects, and issues related to heterogeneity across different tissues and populations. When appropriately applied, Mendelian randomization provides a useful empirical framework for using population-level data to improve the success rates of the drug development pipeline.
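
    As a minimal illustration of using a variant in a drug-target gene as a proxy for target perturbation, the sketch below computes a Wald ratio estimate from summary statistics. The function and variable names are hypothetical, and real analyses typically combine several correlated cis variants while accounting for linkage disequilibrium.

```python
def cis_wald_ratio(beta_biomarker, beta_outcome, se_outcome):
    """Wald ratio estimate of a drug-target effect from one cis variant.

    The variant's association with a biomarker of target perturbation
    (for example, circulating levels of the protein encoded by the
    drug-target gene) scales its association with the clinical outcome.
    """
    ratio = beta_outcome / beta_biomarker
    # First-order (delta-method) standard error, ignoring uncertainty
    # in the variant-biomarker association
    ratio_se = abs(se_outcome / beta_biomarker)
    return ratio, ratio_se
```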