
    Evolution of online algorithms in ATLAS and CMS in Run 2

    The Large Hadron Collider has entered a new era in Run 2, with a centre-of-mass energy of 13 TeV and an instantaneous luminosity reaching ℒ_inst = 1.4×10³⁴ cm⁻² s⁻¹ for pp collisions. In order to cope with these harsher conditions, the ATLAS and CMS collaborations have improved their online selection infrastructure to keep a high efficiency for important physics processes - like W, Z and Higgs bosons in their leptonic and diphoton modes - while keeping the size of the data stream compatible with the available bandwidth and disk resources. In this note, we describe some of the trigger improvements implemented for Run 2, including algorithms for the selection of electrons, photons, muons and hadronic final states. Comment: 6 pages. Presented at the Fifth Annual Conference on Large Hadron Collider Physics (LHCP 2017), Shanghai, China, May 15-20, 2017.
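The running conditions quoted above are what make online selection unavoidable: at ℒ_inst = 1.4×10³⁴ cm⁻² s⁻¹ the raw interaction rate is far beyond what can be recorded. A back-of-envelope sketch makes this concrete; the inelastic pp cross section used here (~80 mb at 13 TeV) is an assumed round number, not taken from the abstract.

```python
# Rough estimate of the pp interaction rate at the quoted Run-2 luminosity.
L_inst = 1.4e34      # instantaneous luminosity, cm^-2 s^-1 (from the abstract)
sigma_inel = 80e-27  # assumed inelastic pp cross section, cm^2 (~80 mb, illustrative)

# Event rate = luminosity x cross section.
interaction_rate = L_inst * sigma_inel
print(f"pp interaction rate ~ {interaction_rate:.2e} Hz")  # ~1.1e9 Hz
```

Roughly a billion interactions per second, against a recording budget of order 1 kHz, is the ratio the trigger algorithms described in the note have to absorb.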

    The CMS Trigger Upgrade for the HL-LHC

    The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger, implemented on custom-designed electronics, and the High Level Trigger, a streamlined version of the CMS offline reconstruction software running on a computer farm. During its second phase the LHC will reach a luminosity of 7.5×10³⁴ cm⁻² s⁻¹ with a pileup of 200 collisions, producing an integrated luminosity greater than 3000 fb⁻¹ over the full experimental run. To fully exploit the higher luminosity, the CMS experiment will introduce a more advanced Level-1 Trigger and increase the full readout rate from 100 kHz to 750 kHz. CMS is designing an efficient data-processing hardware trigger (Level-1) that will include tracking information and high-granularity calorimeter information. The current conceptual system design is expected to take full advantage of advances in FPGA and link technologies over the coming years, providing a high-performance, low-latency system for large throughput and sophisticated data correlation across diverse sources. The higher luminosity, event complexity and input rate present an unprecedented challenge to the High Level Trigger, which aims to achieve a similar efficiency and rejection factor as today despite the higher pileup and purer preselection. In this presentation we discuss the ongoing studies and prospects for the online reconstruction and selection algorithms for the high-luminosity era. Comment: 6 pages, 4 figures. Presented at CHEP 2019 - 24th International Conference on Computing in High Energy and Nuclear Physics, Adelaide, Australia, November 04-08, 2019. Replaced with published version.
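The HL-LHC numbers in the abstract can be sanity-checked with simple arithmetic. In the sketch below, the Phase-2 event size (2 MB) is an illustrative assumption that does not appear in the text; the luminosity and readout-rate figures are the ones quoted.

```python
# Consistency check of the quoted HL-LHC figures.
L_inst = 7.5e34          # instantaneous luminosity, cm^-2 s^-1 (from the abstract)
fb_inv = 1e39            # 1 fb^-1 expressed in cm^-2
target = 3000 * fb_inv   # integrated-luminosity goal, cm^-2 (from the abstract)

# Beam time needed at peak luminosity to accumulate the target.
seconds_needed = target / L_inst
print(f"beam time needed: {seconds_needed:.1e} s "
      f"(~{seconds_needed / 3.15e7:.1f} years of continuous peak running)")

# Level-1 output bandwidth at the upgraded 750 kHz readout rate,
# assuming an illustrative 2 MB event size (not from the abstract).
l1_rate = 750e3          # Hz
event_size = 2e6         # bytes, assumed
bandwidth = l1_rate * event_size
print(f"Level-1 output bandwidth ~ {bandwidth / 1e12:.1f} TB/s")
```

The ~4×10⁷ s of equivalent peak running explains why the 3000 fb⁻¹ goal spans the full multi-year experimental run, and the terabyte-per-second readout figure is the scale the High Level Trigger farm must ingest.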

    Evaluating generative models in high energy physics

    There has been a recent explosion in research into machine-learning-based generative modeling to tackle computational challenges for simulations in high energy physics (HEP). In order to use such alternative simulators in practice, we need well-defined metrics to compare different generative models and evaluate their discrepancy from the true distributions. We present the first systematic review and investigation into evaluation metrics and their sensitivity to failure modes of generative models, using the framework of two-sample goodness-of-fit testing, and their relevance and viability for HEP. Inspired by previous work in both physics and computer vision, we propose two new metrics, the Fréchet and kernel physics distances (FPD and KPD, respectively), and perform a variety of experiments measuring their performance on simple Gaussian-distributed datasets and on simulated high-energy jet datasets. We find FPD, in particular, to be the most sensitive metric to all alternative jet distributions tested and recommend its adoption, along with the KPD and Wasserstein distances between individual feature distributions, for evaluating generative models in HEP. We finally demonstrate the efficacy of these proposed metrics in evaluating and comparing a novel attention-based generative adversarial particle transformer to the state-of-the-art message-passing generative adversarial network jet simulation model. The code for our proposed metrics is provided in the open source JetNet Python library. Comment: 11 pages, 5 figures, 3 tables, and a 5 page appendix.
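Fréchet-style distances of the kind proposed above compare Gaussians fitted to two samples. A minimal one-dimensional toy, assuming nothing beyond the closed form FD² = (μ₁−μ₂)² + (σ₁−σ₂)², illustrates the idea; the FPD of the paper operates on multivariate physics features, which this sketch does not attempt.

```python
import math
import random

def gaussian_frechet_distance(xs, ys):
    """Squared Frechet distance between 1-D Gaussians fitted to two samples."""
    def fit(sample):
        mu = sum(sample) / len(sample)
        var = sum((v - mu) ** 2 for v in sample) / len(sample)
        return mu, math.sqrt(var)
    mu_x, sd_x = fit(xs)
    mu_y, sd_y = fit(ys)
    # Closed form for univariate Gaussians.
    return (mu_x - mu_y) ** 2 + (sd_x - sd_y) ** 2

random.seed(0)
ref = [random.gauss(0.0, 1.0) for _ in range(10_000)]   # "true" distribution
alt = [random.gauss(0.5, 1.0) for _ in range(10_000)]   # generator with shifted mean

print(gaussian_frechet_distance(ref, ref))  # 0: identical samples
print(gaussian_frechet_distance(ref, alt))  # close to 0.25 = 0.5^2
```

The sensitivity to a shifted mean, even with identical widths, is the kind of failure mode the paper's systematic comparison of metrics probes.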

    Particle-based Fast Jet Simulation at the LHC with Variational Autoencoders

    We study how to use Deep Variational Autoencoders for a fast simulation of jets of particles at the LHC. We represent jets as a list of constituents, characterized by their momenta. Starting from a simulation of the jet before detector effects, we train a Deep Variational Autoencoder to return the corresponding list of constituents after detection. Doing so, we bypass both the time-consuming detector simulation and the collision reconstruction steps of a traditional processing chain, significantly speeding up the event generation workflow. Through model optimization and hyperparameter tuning, we achieve state-of-the-art precision on the jet four-momentum, while providing an accurate description of the constituents' momenta, and an inference time comparable to that of a rule-based fast simulation. Comment: 11 pages, 8 figures.
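The sampling step at the heart of any variational autoencoder is the reparameterisation trick: the latent vector fed to the decoder is drawn as z = μ + σ·ε with ε ~ N(0, 1), keeping the draw differentiable in (μ, σ). The sketch below shows that step in isolation, with placeholder latent parameters rather than outputs of the paper's trained encoder; the encoder and decoder networks are omitted entirely.

```python
import math
import random

def reparameterise(mu, log_var, rng):
    """Draw z = mu + sigma * eps with eps ~ N(0, 1); sigma = exp(log_var / 2)."""
    sigma = math.exp(0.5 * log_var)
    eps = rng.gauss(0.0, 1.0)
    return mu + sigma * eps

rng = random.Random(42)
# Placeholder latent parameters for one latent dimension (illustrative values,
# not produced by any trained model):
samples = [reparameterise(mu=0.3, log_var=-1.0, rng=rng) for _ in range(50_000)]
mean = sum(samples) / len(samples)
print(f"sample mean ~ {mean:.3f} (target mu = 0.3)")
```

Because the randomness lives entirely in ε, gradients of a reconstruction loss can flow back through μ and log σ² to the encoder during training.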

    Measurement of differential cross sections for top quark pair production using the lepton plus jets final state in proton-proton collisions at 13 TeV

    National Science Foundation (U.S.)

    Particle-flow reconstruction and global event description with the CMS detector

    The CMS apparatus was identified, a few years before the start of the LHC operation at CERN, to feature properties well suited to particle-flow (PF) reconstruction: a highly-segmented tracker, a fine-grained electromagnetic calorimeter, a hermetic hadron calorimeter, a strong magnetic field, and an excellent muon spectrometer. A fully-fledged PF reconstruction algorithm tuned to the CMS detector was therefore developed and has been consistently used in physics analyses for the first time at a hadron collider. For each collision, the comprehensive list of final-state particles identified and reconstructed by the algorithm provides a global event description that leads to unprecedented CMS performance for jet and hadronic tau decay reconstruction, missing transverse momentum determination, and electron and muon identification. This approach also allows particles from pileup interactions to be identified and enables efficient pileup mitigation methods. The data collected by CMS at a centre-of-mass energy of 8 TeV show excellent agreement with the simulation and confirm the superior PF performance at least up to an average of 20 pileup interactions.
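One core PF operation behind the "comprehensive list of final-state particles" is splitting a calorimeter cluster linked to a charged-particle track into charged and neutral components: if the cluster energy significantly exceeds the track momentum, the excess is attributed to an overlapping neutral particle. The caricature below assumes a simple one-resolution threshold; the numbers are illustrative, not CMS calibration values.

```python
def split_cluster(track_momentum_gev, cluster_energy_gev, resolution_gev):
    """Return (charged_energy, neutral_energy) for a track linked to a cluster.

    Toy version of the PF charged/neutral splitting: an energy excess beyond
    the calorimeter resolution is interpreted as a neutral-particle deposit.
    """
    excess = cluster_energy_gev - track_momentum_gev
    if excess > resolution_gev:
        # Significant excess: charged hadron takes the track momentum,
        # the remainder becomes a neutral-hadron candidate.
        return track_momentum_gev, excess
    # Compatible within resolution: all energy assigned to the charged particle.
    return cluster_energy_gev, 0.0

print(split_cluster(10.0, 10.5, 1.0))  # (10.5, 0.0): compatible, no neutral
print(split_cluster(10.0, 18.0, 1.0))  # (10.0, 8.0): neutral hadron candidate
```

Exploiting the tracker's superior momentum resolution in this way, rather than relying on the calorimeter alone, is what drives the jet and missing-transverse-momentum gains described above.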

    Identification of heavy-flavour jets with the CMS detector in pp collisions at 13 TeV

    Many measurements and searches for physics beyond the standard model at the LHC rely on the efficient identification of heavy-flavour jets, i.e. jets originating from bottom or charm quarks. In this paper, the discriminating variables and the algorithms used for heavy-flavour jet identification during the first years of operation of the CMS experiment in proton-proton collisions at a centre-of-mass energy of 13 TeV are presented. Heavy-flavour jet identification algorithms have been improved compared to those used previously at centre-of-mass energies of 7 and 8 TeV. For jets with transverse momenta in the range expected in simulated tt̄ events, these new developments result in an efficiency of 68% for the correct identification of a b jet for a probability of 1% of misidentifying a light-flavour jet. The improvement in relative efficiency at this misidentification probability is about 15%, compared to previous CMS algorithms. In addition, algorithms have been developed for the first time to identify jets containing two b hadrons in Lorentz-boosted event topologies, as well as to tag c jets. The large data sample recorded in 2016 at a centre-of-mass energy of 13 TeV has also allowed the development of new methods to measure the efficiency and misidentification probability of heavy-flavour jet identification algorithms. The heavy-flavour jet identification efficiency is measured with a precision of a few per cent at moderate jet transverse momenta (between 30 and 300 GeV) and about 5% at the highest jet transverse momenta (between 500 and 1000 GeV).
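A working point like the one quoted above (68% b-jet efficiency at 1% light-jet misidentification) is defined by scanning the tagger discriminant threshold until the light-flavour pass rate hits the target, then reading off the b-jet efficiency. The sketch below runs that procedure on invented Gaussian score distributions, which stand in for, and do not resemble in detail, the CMS tagger outputs.

```python
import random

random.seed(1)
# Invented discriminant scores in [0, 1]: b jets peak high, light jets peak low.
b_scores = [min(1.0, max(0.0, random.gauss(0.7, 0.15))) for _ in range(20_000)]
light_scores = [min(1.0, max(0.0, random.gauss(0.2, 0.15))) for _ in range(20_000)]

def working_point(signal, background, target_mistag=0.01):
    """Threshold at which `target_mistag` of background passes, and the
    resulting signal efficiency."""
    thr = sorted(background)[int(len(background) * (1.0 - target_mistag))]
    eff = sum(s > thr for s in signal) / len(signal)
    return thr, eff

thr, eff = working_point(b_scores, light_scores)
print(f"threshold {thr:.3f} -> b efficiency {eff:.1%} at 1% mistag")
```

Measuring the same efficiency in data rather than simulation, per transverse-momentum bin, is what the new calibration methods described in the abstract provide.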