Evolution of online algorithms in ATLAS and CMS in Run 2
The Large Hadron Collider has entered a new era in Run 2, with centre-of-mass
energy of 13 TeV and instantaneous luminosity reaching
$10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ for pp
collisions. In order to cope with those harsher conditions, the ATLAS and CMS
collaborations have improved their online selection infrastructure to keep a
high efficiency for important physics processes - like W, Z and Higgs bosons in
their leptonic and diphoton modes - whilst keeping the size of the data stream
compatible with the available bandwidth and disk resources. In this note, we
describe some of the trigger improvements implemented for Run 2, including
algorithms for selection of electrons, photons, muons and hadronic final
states.
Comment: 6 pages. Presented at The Fifth Annual Conference on Large Hadron
Collider Physics (LHCP 2017), Shanghai, China, May 15-20, 2017.
The CMS Trigger Upgrade for the HL-LHC
The CMS experiment has been designed with a two-level trigger system: the
Level-1 Trigger, implemented on custom-designed electronics, and the High Level
Trigger, a streamlined version of the CMS offline reconstruction software
running on a computer farm. During its second phase the LHC will reach a
luminosity of $7.5 \times 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ with a
pileup of 200 collisions, producing an integrated luminosity greater than 3000
fb$^{-1}$ over the full experimental run. To fully exploit the higher
luminosity, the CMS experiment will introduce a more advanced Level-1 Trigger
and increase the full readout rate from 100 kHz to 750 kHz. CMS is designing an
efficient data-processing hardware trigger (Level-1) that will include tracking
information and high-granularity calorimeter information. The current
conceptual system design is expected to take full advantage of advances in FPGA
and link technologies over the coming years, providing a high-performance,
low-latency system for large throughput and sophisticated data correlation
across diverse sources. The higher luminosity, event complexity and input rate
present an unprecedented challenge to the High Level Trigger, which aims to
achieve an efficiency and rejection factor similar to today's despite the
higher pileup and the purer preselection. In this presentation we will discuss the
ongoing studies and prospects for the online reconstruction and selection
algorithms for the high-luminosity era.Comment: 6 pages, 4 figures. Presented at CHEP 2019 - 24th International
Conference on Computing in High Energy and Nuclear Physics, Adelaide,
Australia, November 04-08, 2019. Replaced with published versio
Evaluating generative models in high energy physics
There has been a recent explosion in research into machine-learning-based
generative modeling to tackle computational challenges for simulations in high
energy physics (HEP). In order to use such alternative simulators in practice,
we need well-defined metrics to compare different generative models and
evaluate their discrepancy from the true distributions. We present the first
systematic review and investigation into evaluation metrics and their
sensitivity to failure modes of generative models, using the framework of
two-sample goodness-of-fit testing, and their relevance and viability for HEP.
Inspired by previous work in both physics and computer vision, we propose two
new metrics, the Fr\'echet and kernel physics distances (FPD and KPD,
respectively), and perform a variety of experiments measuring their performance
on simple Gaussian-distributed and simulated high-energy jet datasets. We find
FPD, in particular, to be the most sensitive metric to all alternative jet
distributions tested and recommend its adoption, along with the KPD and
Wasserstein distances between individual feature distributions, for evaluating
generative models in HEP. We finally demonstrate the efficacy of these proposed
metrics in evaluating and comparing a novel attention-based generative
adversarial particle transformer to the state-of-the-art message-passing
generative adversarial network jet simulation model. The code for our proposed
metrics is provided in the open source JetNet Python library.
Comment: 11 pages, 5 figures, 3 tables, and a 5-page appendix.
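
As a concrete illustration of the Fréchet-style comparison underlying FPD, the
following minimal Python sketch fits Gaussians to two sets of physics features
and computes the Fréchet distance between them. It assumes only NumPy and
SciPy; the function name frechet_distance is illustrative, and the paper's
actual FPD and KPD implementations (provided in the JetNet library) include
refinements, such as feature standardization and finite-sample treatment, that
are not shown here.

import numpy as np
from scipy import linalg

def frechet_distance(real, gen):
    # Frechet distance between Gaussian fits to two (n_samples, n_features)
    # arrays of physics features, e.g. jet observables.
    mu1, mu2 = real.mean(axis=0), gen.mean(axis=0)
    sigma1 = np.cov(real, rowvar=False)
    sigma2 = np.cov(gen, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)  # matrix square root
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical-noise imaginary parts
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Toy usage: nearly identical distributions give a distance near zero.
rng = np.random.default_rng(0)
real = rng.normal(size=(10000, 4))
gen = rng.normal(loc=0.05, size=(10000, 4))
print(frechet_distance(real, gen))

KPD can be sketched analogously as a maximum mean discrepancy with a
polynomial kernel evaluated on the same feature arrays.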
Particle-based Fast Jet Simulation at the LHC with Variational Autoencoders
We study how to use Deep Variational Autoencoders for a fast simulation of
jets of particles at the LHC. We represent jets as a list of constituents,
characterized by their momenta. Starting from a simulation of the jet before
detector effects, we train a Deep Variational Autoencoder to return the
corresponding list of constituents after detection. Doing so, we bypass both
the time-consuming detector simulation and the collision reconstruction steps
of a traditional processing chain, significantly speeding up the event
generation workflow. Through model optimization and hyperparameter tuning, we
achieve state-of-the-art precision on the jet four-momentum, while providing an
accurate description of the constituents' momenta, and an inference time
comparable to that of a rule-based fast simulation.
Comment: 11 pages, 8 figures.
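
To make the setup concrete, here is a minimal PyTorch sketch of a variational
autoencoder that maps a fixed-length list of generator-level constituents to
the corresponding detector-level constituents. The layer sizes, number of
constituents, and names such as JetVAE are assumptions for illustration only
and do not reproduce the paper's architecture or loss weighting.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_CONST, N_FEAT, LATENT = 30, 3, 16  # assumed: 30 constituents x (pt, eta, phi)

class JetVAE(nn.Module):
    def __init__(self):
        super().__init__()
        d_in = N_CONST * N_FEAT
        self.encoder = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * LATENT))
        self.decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                     nn.Linear(256, d_in))

    def forward(self, x):
        h = self.encoder(x.flatten(1))
        mu, logvar = h.chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z).view(-1, N_CONST, N_FEAT), mu, logvar

def vae_loss(recon, target, mu, logvar, beta=1.0):
    # Reconstruction error on constituent momenta plus the KL regularizer.
    rec = F.mse_loss(recon, target)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld

# Toy training step: the input is the jet before detector effects, the target
# is the corresponding list of constituents after detection.
model = JetVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
gen_jets = torch.randn(64, N_CONST, N_FEAT)   # stand-in for generator-level jets
reco_jets = torch.randn(64, N_CONST, N_FEAT)  # stand-in for detector-level jets
recon, mu, logvar = model(gen_jets)
loss = vae_loss(recon, reco_jets, mu, logvar)
loss.backward()
opt.step()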
Background Monte Carlo Samples for a Future Hadron Collider
A description of Standard Model background Monte Carlo samples produced for
studies related to future hadron colliders.