
    Comment on measuring the t-tbar forward-backward asymmetry at ATLAS and CMS

    We suggest a new possibility for ATLAS and CMS to explore the t-tbar forward-backward asymmetry measured at the Tevatron, by attempting to reconstruct t-tbar events with one of the tops decaying semileptonically in the central region (|\eta| < 2.5) and the other decaying hadronically in the forward region (|\eta| > 2.5). For several models which give comparable Tevatron signals, we study the charge asymmetry at the LHC as a function of cuts on |\eta| and on the t-tbar invariant mass, m_{t-tbar}. We show that there is an interesting complementarity between cuts on |\eta| and m_{t-tbar} in suppressing the dominant, symmetric gg -> t-tbar rate, and that different combinations of cuts enhance the distinguishing power between models. This complementarity is likely to hold in other new physics scenarios that affect the t-tbar cross section, which motivates extending t-tbar reconstruction to higher |\eta|.
    Comment: 6 pages, 3 figures, 3 tables, v2: to match version appearing in PRD, resolution in figures improved
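    As a point of reference, one commonly used LHC charge-asymmetry observable (not necessarily the exact cut-dependent definition adopted in the paper) is built from the rapidity difference of the reconstructed top and antitop, evaluated after |\eta| and m_{t-tbar} cuts of the kind described above:

        A_C = [ N(\Delta|y| > 0) - N(\Delta|y| < 0) ] / [ N(\Delta|y| > 0) + N(\Delta|y| < 0) ],   \Delta|y| \equiv |y_t| - |y_{\bar{t}}|

    Since the pp initial state is symmetric, only the q-qbar (and qg) subprocesses generate a nonzero A_C, with valence quarks carrying more momentum than sea antiquarks; forward (large |\eta|) and high-m_{t-tbar} selections enhance this asymmetric fraction relative to the symmetric gg rate, consistent with the complementarity noted above.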

    (Machine) Learning to Do More with Less

    Determining the best method for training a machine learning algorithm is critical to maximizing its ability to classify data. In this paper, we compare the standard "fully supervised" approach (that relies on knowledge of event-by-event truth-level labels) with a recent proposal that instead utilizes class ratios as the only discriminating information provided during training. This so-called "weakly supervised" technique has access to less information than the fully supervised method and yet is still able to yield impressive discriminating power. In addition, weak supervision seems particularly well suited to particle physics since quantum mechanics is incompatible with the notion of mapping an individual event onto any single Feynman diagram. We examine the technique in detail -- both analytically and numerically -- with a focus on the robustness to issues of mischaracterizing the training samples. Weakly supervised networks turn out to be remarkably insensitive to systematic mismodeling. Furthermore, we demonstrate that the event-level outputs for weakly versus fully supervised networks are probing different kinematics, even though the numerical quality metrics are essentially identical. This implies that it should be possible to improve the overall classification ability by combining the output from the two types of networks. For concreteness, we apply this technology to a signature of beyond the Standard Model physics to demonstrate that all these impressive features continue to hold in a scenario of relevance to the LHC.
    Comment: 32 pages, 12 figures. Example code is provided at https://github.com/bostdiek/PublicWeaklySupervised . v3: Version published in JHEP, discussion added
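    To make the weak-supervision setup concrete, the sketch below shows its core training step: the network is fit so that its batch-averaged output matches the known class proportion of each mixed sample, with no event-by-event labels. This is an illustrative PyTorch toy, not the authors' code (their examples live at https://github.com/bostdiek/PublicWeaklySupervised ), and the toy features and loss are stand-ins.

        import torch
        import torch.nn as nn

        # Small event-level classifier over three toy kinematic features.
        model = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        def weak_loss(outputs, signal_fraction):
            # Weak supervision: compare the mean output over a mixed batch to the
            # known signal fraction, rather than per-event outputs to truth labels.
            return (outputs.mean() - signal_fraction) ** 2

        def mixed_batch(n, f):
            # Toy mixture: "signal" events shifted by +1, "background" centered at 0.
            n_sig = int(f * n)
            return torch.cat([torch.randn(n_sig, 3) + 1.0, torch.randn(n - n_sig, 3)])

        for epoch in range(200):
            for f in (0.7, 0.3):              # two samples with different known fractions
                x = mixed_batch(500, f)
                opt.zero_grad()
                loss = weak_loss(model(x).squeeze(), f)
                loss.backward()
                opt.step()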

    Gamma-rays from Dark Showers with Twin Higgs Models

    We consider a twin WIMP scenario whose twin sector contains a full dark copy of the SM hadrons, where the lightest twin particles are twin pions. By analogy to the standard WIMP paradigm, the dark matter (DM) freezes out through twin electroweak interactions and annihilates into a dark shower of light twin hadrons. These are either stable or decay predominantly to standard model (SM) photons. We show that this 'hadrosymmetric' scenario can be consistent with all applicable astrophysical, cosmological, and collider constraints. In order for the twin hadrons to decay before the big-bang nucleosynthesis epoch, an additional portal between the SM and the twin sector is required. In most cases we find this additional mediator is within reach of either the LHC or future intensity frontier experiments. Furthermore, we conduct simulations of the dark shower and consequent photon spectra. We find that fits of these spectra to the claimed galactic center gamma-ray excess seen by Fermi-LAT non-trivially coincide with regions of parameter space that both successfully generate the observed DM abundance and exhibit minimal fine-tuning.
    Comment: 45 pages, 11 figures, v2: journal version, extended discussions in Secs. III-V, references added
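    For orientation, a standard thermal-freeze-out benchmark (not a result specific to this paper) relates "generating the observed DM abundance" to the annihilation rate at freeze-out,

        \Omega_{\rm DM} h^2 \simeq 0.12  \quad \Longleftrightarrow \quad  \langle \sigma v \rangle \approx \text{few} \times 10^{-26}\ \mathrm{cm^3\, s^{-1}},

    here realized through twin electroweak annihilation into showers of twin hadrons rather than directly into SM final states.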

    Chasing Accreted Structures within Gaia DR2 using Deep Learning

    In previous work, we developed a deep neural network classifier that relies only on phase-space information to obtain a catalog of accreted stars based on the second data release of Gaia (DR2). In this paper, we apply two clustering algorithms to identify velocity substructure within this catalog. We focus on the subset of stars with line-of-sight velocity measurements that fall in the range of Galactocentric radii r ∈ [6.5, 9.5] kpc and vertical distances |z| < 3 kpc. Known structures such as Gaia Enceladus and the Helmi stream are identified. The largest previously unknown structure, Nyx, is a vast stream consisting of at least 200 stars in the region of interest. This study demonstrates the power of the machine-learning approach by not only identifying known features but also discovering new kinematic structures that may shed light on the merger history of the Milky Way.
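    Schematically, the clustering step amounts to running a density-based algorithm over the velocity components of the selected stars. The sketch below is an illustration using scikit-learn's DBSCAN, with hypothetical file and column names; the two algorithms, features, and parameters used in the paper may differ.

        import pandas as pd
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import DBSCAN

        # Catalog of accreted-star candidates (hypothetical file/column names),
        # with Galactocentric positions in kpc and velocities in km/s.
        cat = pd.read_csv("accreted_star_catalog.csv")
        sel = cat[cat["r_gal"].between(6.5, 9.5) & (cat["z_gal"].abs() < 3.0)]

        # Cluster in (v_r, v_phi, v_z) velocity space; label -1 marks unclustered stars.
        X = StandardScaler().fit_transform(sel[["v_r", "v_phi", "v_z"]].to_numpy())
        sel = sel.assign(cluster=DBSCAN(eps=0.3, min_samples=20).fit_predict(X))
        print(sel.groupby("cluster").size())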

    Constraining the Axion Portal with B -> K l+ l-

    We investigate the bounds on axionlike states from flavor-changing neutral current b -> s decays, assuming the axion couples to the standard model through mixing with the Higgs sector. Such GeV-scale axions have received renewed attention in connection with observed cosmic ray excesses. We find that existing B -> K l+ l- data impose stringent bounds on the axion decay constant in the multi-TeV range, relevant for constraining the "axion portal" model of dark matter. Such bounds also constrain light Higgs scenarios in the next-to-minimal supersymmetric standard model. These bounds can be improved by dedicated searches in B-factory data and at LHCb.
    Comment: 7 pages, 4 figures; v2: to match version to appear in PR

    Simulating collider physics on quantum computers using effective field theories

    Simulating the full dynamics of a quantum field theory over a wide range of energies requires exceptionally large quantum computing resources. Yet for many observables in particle physics, perturbative techniques are sufficient to accurately model all but a constrained range of energies within the validity of the theory. We demonstrate that effective field theories (EFTs) provide an efficient mechanism to separate the high energy dynamics that is easily calculated by traditional perturbation theory from the dynamics at low energy, and show how quantum algorithms can be used to simulate the dynamics of the low energy EFT from first principles. As an explicit example we calculate the expectation values of vacuum-to-vacuum and vacuum-to-one-particle transitions in the presence of a time-ordered product of two Wilson lines in scalar field theory, an object closely related to those arising in EFTs of the Standard Model of particle physics. Calculations are performed using simulations of a quantum computer as well as measurements on the IBMQ Manhattan machine.
    Comment: 5 pages, plus 11 pages of Supplemental Material
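    As a generic illustration of this kind of digital simulation (not the paper's EFT construction, Wilson-line operators, or circuits), the sketch below Trotterizes the time evolution of a toy two-qubit Hamiltonian with Qiskit and reads off a vacuum-to-vacuum-style overlap on a statevector simulator; runs on hardware such as the IBMQ machines would instead estimate such quantities from measured counts.

        from qiskit import QuantumCircuit
        from qiskit.quantum_info import Statevector

        def trotter_step(qc, dt, J=1.0, h=0.5):
            # One first-order Trotter step of H = J Z0 Z1 + h (X0 + X1).
            qc.cx(0, 1); qc.rz(2 * J * dt, 1); qc.cx(0, 1)   # exp(-i J Z0 Z1 dt)
            for q in (0, 1):
                qc.rx(2 * h * dt, q)                          # exp(-i h X_q dt)

        t, n_steps = 1.0, 20
        qc = QuantumCircuit(2)
        for _ in range(n_steps):
            trotter_step(qc, t / n_steps)

        # Overlap of the evolved state with the initial |00> state, analogous to a
        # vacuum-to-vacuum transition amplitude in the digitized theory.
        amp = Statevector.from_label("00").evolve(qc).data[0]
        print("survival probability:", abs(amp) ** 2)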