753 research outputs found

    Factorial Mendelian randomization: using genetic variants to assess interactions.

    BACKGROUND: Factorial Mendelian randomization is the use of genetic variants to answer questions about interactions. Although the approach has been used in applied investigations, little methodological advice is available on how to design or perform a factorial Mendelian randomization analysis. Previous analyses have employed a 2 × 2 approach, using dichotomized genetic scores to divide the population into four subgroups as in a factorial randomized trial. METHODS: We describe two distinct contexts for factorial Mendelian randomization: investigating interactions between risk factors, and investigating interactions between pharmacological interventions on risk factors. We propose two-stage least squares methods using all available genetic variants and their interactions as instrumental variables, and using continuous genetic scores as instrumental variables rather than dichotomized scores. We illustrate our methods using data from UK Biobank to investigate the interaction between body mass index and alcohol consumption on systolic blood pressure. RESULTS: Simulated and real data show that efficiency is maximized using the full set of interactions between genetic variants as instruments. In the applied example, between 4- and 10-fold improvement in efficiency is demonstrated over the 2 × 2 approach. Analyses using continuous genetic scores are more efficient than those using dichotomized scores. Efficiency is improved by finding genetic variants that divide the population at a natural break in the distribution of the risk factor, or else divide the population into more equal-sized groups. CONCLUSIONS: Previous factorial Mendelian randomization analyses may have been underpowered. Efficiency can be improved by using all genetic variants and their interactions as instrumental variables, rather than the 2 × 2 approach.
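
    As a concrete illustration of the proposed estimation strategy, the sketch below performs factorial Mendelian randomization by two-stage least squares, instrumenting two risk factors and their product with two continuous genetic scores and their product rather than dichotomized 2 × 2 groups. The simulated data, variable names and effect sizes are assumptions for illustration only, not the authors' code or the UK Biobank analysis.

    # Sketch of factorial Mendelian randomization via two-stage least squares
    # (2SLS) with continuous genetic scores and their product as instruments.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    g1 = rng.normal(size=n)                      # genetic score for risk factor 1 (e.g. BMI)
    g2 = rng.normal(size=n)                      # genetic score for risk factor 2 (e.g. alcohol)
    u = rng.normal(size=n)                       # unmeasured confounder

    x1 = 0.5 * g1 + u + rng.normal(size=n)       # risk factor 1
    x2 = 0.5 * g2 + u + rng.normal(size=n)       # risk factor 2
    y = 0.3 * x1 + 0.2 * x2 + 0.1 * x1 * x2 + u + rng.normal(size=n)  # outcome

    # Instruments: scores and their product; exposures: risk factors and their product
    Z = np.column_stack([np.ones(n), g1, g2, g1 * g2])
    X = np.column_stack([np.ones(n), x1, x2, x1 * x2])

    # First stage: project the exposures onto the instrument space
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Second stage: regress the outcome on the fitted exposures
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
    print(dict(zip(["intercept", "x1", "x2", "x1:x2"], beta.round(3))))

    The manual second stage gives consistent point estimates but naive standard errors; in practice a dedicated two-stage least squares routine that reports corrected standard errors would be used.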

    Robust methods in Mendelian randomization via penalization of heterogeneous causal estimates.

    Methods have been developed for Mendelian randomization that can obtain consistent causal estimates under weaker assumptions than the standard instrumental variable assumptions. The median-based estimator and MR-Egger are examples of such methods. However, these methods can be sensitive to genetic variants with heterogeneous causal estimates. Such heterogeneity may arise from over-dispersion in the causal estimates, or specific variants with outlying causal estimates. In this paper, we develop three extensions to robust methods for Mendelian randomization with summarized data: 1) robust regression (MM-estimation); 2) penalized weights; and 3) Lasso penalization. Methods using these approaches are considered in two applied examples: one where there is evidence of over-dispersion in the causal estimates (the causal effect of body mass index on schizophrenia risk), and the other containing outliers (the causal effect of low-density lipoprotein cholesterol on Alzheimer's disease risk). Through an extensive simulation study, we demonstrate that robust regression applied to the inverse-variance weighted method with penalized weights is a worthwhile additional sensitivity analysis for Mendelian randomization to provide robustness to variants with outlying causal estimates. The results from the applied examples and simulation study highlight the importance of using methods that make different assumptions to assess the robustness of findings from Mendelian randomization investigations with multiple genetic variants.
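
    To make the penalized-weights idea concrete for summarized data, the sketch below computes an inverse-variance weighted (IVW) estimate, measures each variant's contribution to Cochran's Q statistic, and downweights heterogeneous variants before re-estimating. The function name and the particular penalty factor, min(1, 20·p_j), are illustrative assumptions rather than the paper's exact specification.

    # Sketch of IVW Mendelian randomization with heterogeneity-penalized weights,
    # using summarized per-variant association estimates.
    import numpy as np
    from scipy import stats

    def ivw_penalized(beta_x, beta_y, se_y):
        """beta_x: variant-exposure associations; beta_y, se_y: variant-outcome
        associations and their standard errors (NumPy arrays of equal length)."""
        ratio = beta_y / beta_x                   # per-variant causal estimates
        w = beta_x**2 / se_y**2                   # first-order IVW weights

        est = np.sum(w * ratio) / np.sum(w)       # unpenalized IVW estimate
        q = w * (ratio - est) ** 2                # contributions to Cochran's Q
        p = stats.chi2.sf(q, df=1)                # one-df p-value per variant

        w_pen = w * np.minimum(1.0, 20 * p)       # downweight heterogeneous variants
        est_pen = np.sum(w_pen * ratio) / np.sum(w_pen)
        se_pen = np.sqrt(1.0 / np.sum(w_pen))
        return est_pen, se_pen

    In practice the penalized estimate would be reported alongside the unpenalized IVW, median-based and MR-Egger estimates, as one of several sensitivity analyses making different assumptions.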

    New Constraints (and Motivations) for Abelian Gauge Bosons in the MeV-TeV Mass Range

    We survey the phenomenological constraints on abelian gauge bosons having masses in the MeV to multi-GeV mass range (using precision electroweak measurements, neutrino-electron and neutrino-nucleon scattering, electron and muon anomalous magnetic moments, upsilon decay, beam dump experiments, atomic parity violation, low-energy neutron scattering and primordial nucleosynthesis). We compute their implications for the three parameters that in general describe the low-energy properties of such bosons: their mass and their two possible types of dimensionless couplings (direct couplings to ordinary fermions and kinetic mixing with Standard Model hypercharge). We argue that gauge bosons with very small couplings to ordinary fermions in this mass range are natural in string compactifications and are likely to be generic in theories for which the gravity scale is systematically smaller than the Planck mass - such as in extra-dimensional models - because of the necessity to suppress proton decay. Furthermore, because its couplings are weak, in the low-energy theory relevant to experiments at and below TeV scales the charge gauged by the new boson can appear to be broken, both by classical effects and by anomalies. In particular, if the new gauge charge appears to be anomalous, anomaly cancellation does not also require the introduction of new light fermions in the low-energy theory. Furthermore, the charge can appear to be conserved in the low-energy theory, despite the corresponding gauge boson having a mass. Our results reduce to those of other authors in the special cases where there is no kinetic mixing or there is no direct coupling to ordinary fermions, such as for recently proposed dark-matter scenarios.
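
    Schematically, the three parameters discussed above enter the low-energy theory of a new abelian gauge boson X_mu through a Lagrangian of the generic form below (normalizations and sign conventions vary between references; this expression only identifies the parameters and is not the paper's own notation):

    \mathcal{L} \supset -\tfrac{1}{4} X_{\mu\nu} X^{\mu\nu}
        - \tfrac{\kappa}{2}\, X_{\mu\nu} B^{\mu\nu}
        + \tfrac{1}{2} m_X^2\, X_\mu X^\mu
        + g_X X_\mu \sum_f q_f\, \bar{f} \gamma^\mu f ,

    where m_X is the boson mass, g_X q_f are the direct couplings to ordinary fermions f, and \kappa is the kinetic mixing with the Standard Model hypercharge field strength B_{\mu\nu}.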

    On Inflation with Non-minimal Coupling

    A simple realization of inflation consists of adding the following operators to the Einstein-Hilbert action: (partial phi)^2, lambda phi^4, and xi phi^2 R, with xi a large non-minimal coupling. Recently there has been much discussion as to whether such theories make sense quantum mechanically and if the inflaton phi can also be the Standard Model Higgs. In this note we answer these questions. Firstly, for a single scalar phi, we show that the quantum field theory is well behaved in the pure gravity and kinetic sectors, since the quantum generated corrections are small. However, the theory likely breaks down at ~ m_pl / xi due to scattering provided by the self-interacting potential lambda phi^4. Secondly, we show that the theory changes for multiple scalars phi with non-minimal coupling xi phi dot phi R, since this introduces qualitatively new interactions which manifestly generate large quantum corrections even in the gravity and kinetic sectors, spoiling the theory for energies > m_pl / xi. Since the Higgs doublet of the Standard Model includes the Higgs boson and 3 Goldstone bosons, it falls into the latter category and therefore its validity is manifestly spoiled. We show that these conclusions hold in both the Jordan and Einstein frames and describe an intuitive analogy in the form of the pion Lagrangian. We also examine the recent claim that curvature-squared inflation models fail quantum mechanically. Our work appears to go beyond the recent discussions.
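
    For reference, the Jordan-frame action assembled from the operators listed above can be written, up to sign and normalization conventions that differ between references, as

    S = \int d^4x \, \sqrt{-g} \left[ \frac{m_{\rm pl}^2}{2} R + \xi \phi^2 R
        - \frac{1}{2} (\partial \phi)^2 - \frac{\lambda}{4} \phi^4 \right],

    with the regime of interest being large \xi, so that the strong-coupling concerns discussed in the abstract arise at energies of order m_{\rm pl} / \xi.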

    Inflation with Non-minimal Gravitational Couplings and Supergravity

    We explore in the supergravity context the possibility that a Higgs scalar may drive inflation via a non-minimal coupling to gravity characterised by a large dimensionless coupling constant. We find that this scenario is not compatible with the MSSM, but that adding a singlet field (NMSSM, or a variant thereof) can very naturally give rise to slow-roll inflation. The inflaton is necessarily contained in the doublet Higgs sector and occurs in the D-flat direction of the two Higgs doublets.

    A global analysis of management capacity and ecological outcomes in terrestrial protected areas

    Protecting important sites is a key strategy for halting the loss of biodiversity. However, our understanding of the relationship between management inputs and biodiversity outcomes in protected areas (PAs) remains weak. Here, we examine biodiversity outcomes using species population trends in PAs derived from the Living Planet Database in relation to management data derived from the Management Effectiveness Tracking Tool (METT) database for 217 population time-series from 73 PAs. We found a positive relationship between our METT-based scores for Capacity and Resources and changes in vertebrate abundance, consistent with the hypothesis that PAs require adequate resourcing to halt biodiversity loss. Additionally, PA age was negatively correlated with trends for the mammal subsets, and PA size was negatively correlated with population trends in the global subset. Our study highlights the paucity of appropriate data for rigorous testing of the role of management in maintaining species populations across multiple sites, and describes ways to improve our understanding of PA performance.
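
    As a loose illustration of the kind of analysis described above (population-level abundance trends related to PA-level management scores, age and size, with populations nested in protected areas), a sketch is given below. The model form, column names and simulated values are assumptions for illustration only and do not reproduce the authors' analysis.

    # Illustrative sketch only: relate per-population abundance trends to a
    # METT-derived management score, with a random intercept per protected area.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_pa, n_pop = 73, 3                                    # PAs and populations per PA
    pa_id = np.repeat(np.arange(n_pa), n_pop)
    capacity = np.repeat(rng.uniform(0, 1, n_pa), n_pop)   # Capacity and Resources score
    pa_age = np.repeat(rng.uniform(5, 80, n_pa), n_pop)    # years since designation
    pa_size = np.repeat(rng.lognormal(5, 1, n_pa), n_pop)  # area in km^2
    trend = 0.05 * capacity - 0.0005 * pa_age + rng.normal(0, 0.05, n_pa * n_pop)

    df = pd.DataFrame(dict(trend=trend, capacity=capacity, pa_age=pa_age,
                           pa_size=pa_size, pa_id=pa_id))

    # In practice each trend would be estimated from a Living Planet Database
    # time-series (e.g. the slope of log abundance on year) rather than simulated.
    model = smf.mixedlm("trend ~ capacity + pa_age + np.log(pa_size)",
                        data=df, groups=df["pa_id"]).fit()
    print(model.summary())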

    Manual engagement and automation in amateur photography

    Automation has been central to the development of modern photography and, in the age of digital and smartphone photography, now largely defines everyday experience of the photographic process. In this article, we question the acceptance of automation as the default position for photography, arguing that discussions of automation need to move beyond binary concerns of whether to automate or not and, instead, to consider what is being automated and the degree of automation couched within the particularities of people’s practices. We base this upon findings from ethnographic fieldwork with people engaging manually with film-based photography. While automation liberates people from having to interact with various processes of photography, participants in our study reported a greater sense of control, richer experiences and opportunities for experimentation when they were able to engage manually with photographic processes.

    Holographic Anyons in the ABJM Theory

    We consider the holographic anyons in the ABJM theory from three different aspects of AdS/CFT correspondence. First, we identify the holographic anyons by using the field equations of supergravity, including the Chern-Simons terms of the probe branes. We find that the composite of Dp-branes wrapped over CP3 with the worldvolume magnetic fields can be the anyons. Next, we discuss the possible candidates of the dual anyonic operators on the CFT side, and find the agreement of their anyonic phases with the supergravity analysis. Finally, we try to construct the brane profile for the holographic anyons by solving the equations of motion and Killing spinor equations for the embedding profile of the wrapped branes. As a by-product, we find a BPS spiky brane for the dual baryons in the ABJM theory.
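
    For orientation, the mechanism behind anyonic phases in a Chern-Simons theory such as ABJM can be stated in its simplest abelianized form; the expression below is the textbook flux-attachment estimate and is not taken from the paper itself:

    In an abelian Chern-Simons theory at level k, a particle of charge q is bound to
    magnetic flux 2\pi q / k, so exchanging two identical such particles produces the
    statistical phase

    \theta = \frac{\pi q^2}{k},

    which interpolates between bosonic and fermionic statistics for generic q and k.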

    The mu problem and sneutrino inflation

    We consider sneutrino inflation and post-inflation cosmology in the singlet extension of the MSSM with approximate Peccei-Quinn (PQ) symmetry, assuming that supersymmetry breaking is mediated by gauge interactions. The PQ symmetry is broken by the intermediate-scale VEVs of two flaton fields, which are determined by the interplay between radiative flaton soft masses and higher-order terms. Then, from the flaton VEVs, we obtain the correct mu term and the right-handed (RH) neutrino masses for the see-saw mechanism. We show that the RH sneutrino with non-minimal gravity coupling drives inflation, thanks to the same flaton coupling that gives rise to the RH neutrino mass. After inflation, extra vector-like states, which are responsible for the radiative breaking of the PQ symmetry, result in thermal inflation with the flaton field, solving the gravitino problem caused by a high reheating temperature. Our model predicts the spectral index to be n_s ≃ 0.96 due to the additional e-foldings from thermal inflation. We show that the right dark matter abundance comes from a gravitino of 100 keV mass and that successful baryogenesis is possible via Affleck-Dine leptogenesis.
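
    For context, the see-saw relation invoked above is the standard one: with a Dirac neutrino mass m_D generated at the electroweak scale and a heavy right-handed mass M_R set here by the intermediate-scale flaton VEVs, the light neutrino mass scale is approximately

    m_\nu \simeq \frac{m_D^2}{M_R},

    so an intermediate-scale M_R naturally yields small neutrino masses; the precise relation between M_R, the mu term and the flaton VEVs is model-specific and is not spelled out in the abstract.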