Research outputs found
Precision-Machine Learning for the Matrix Element Method
The matrix element method is the LHC inference method of choice for limited
statistics. We present a dedicated machine learning framework built on
efficient phase-space integration and learned acceptance and transfer
functions. It combines a choice of INN and diffusion networks with a
transformer that solves the jet combinatorics. We showcase this setup for the
CP-phase of the top Yukawa coupling in associated Higgs and single-top
production.
Comment: 24 pages, 11 figures, v2: update reference
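At the core of the matrix element method sits a high-dimensional phase-space integral, which the paper accelerates with learned samplers. A minimal sketch of the underlying importance-sampling idea, with a hand-picked Gaussian proposal standing in for the trained INN/flow and a sharply peaked toy function standing in for the squared matrix element (all names and numbers here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Toy importance-sampling integral: the core idea behind learned
# phase-space integration. In the real method a trained network plays
# the role of the proposal density q; here q is a hand-picked Gaussian.
rng = np.random.default_rng(0)

def integrand(x):
    # Sharply peaked stand-in for a squared matrix element.
    return np.exp(-0.5 * ((x - 2.0) / 0.1) ** 2)

# Proposal q(x): Gaussian roughly matched to the peak.
mu, sigma = 2.0, 0.15
x = rng.normal(mu, sigma, size=100_000)
q = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance-sampling estimate: mean of integrand/proposal weights.
estimate = np.mean(integrand(x) / q)
exact = 0.1 * np.sqrt(2 * np.pi)  # analytic integral over the real line
```

The closer the proposal matches the integrand's shape, the smaller the weight variance; learning the proposal from data is what makes the integration efficient.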
Jet Diffusion versus JetGPT -- Modern Networks for the LHC
We introduce two diffusion models and an autoregressive transformer for LHC
physics simulations. Bayesian versions allow us to control the networks and
capture training uncertainties. After illustrating their different density
estimation methods for simple toy models, we discuss their advantages for Z
plus jets event generation. While diffusion networks excel through their
precision, the transformer scales best with the phase space dimensionality.
Given the different training and evaluation speed, we expect LHC physics to
benefit from dedicated use cases for normalizing flows, diffusion models, and
autoregressive transformers.
Comment: 37 pages, 17 figures
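The autoregressive approach mentioned above builds a density dimension by dimension via the chain rule, p(x1, x2) = p(x1) p(x2 | x1). A deliberately tiny linear-Gaussian sketch of that factorization (a toy illustration only, not the paper's transformer architecture; the data and fit are invented for this example):

```python
import numpy as np

# Minimal autoregressive density estimate: fit p(x1), then p(x2 | x1),
# mirroring the chain-rule factorization that autoregressive models learn.
rng = np.random.default_rng(1)

# Correlated 2D toy data.
x1 = rng.normal(0.0, 1.0, 5000)
x2 = 0.8 * x1 + rng.normal(0.0, 0.5, 5000)

# p(x1): Gaussian fit to the first dimension.
mu1, s1 = x1.mean(), x1.std()

# p(x2 | x1): linear-Gaussian fit via least squares.
a, b = np.polyfit(x1, x2, 1)
resid = x2 - (a * x1 + b)
s2 = resid.std()

def log_density(u1, u2):
    # log p(u1, u2) = log p(u1) + log p(u2 | u1)
    lp1 = -0.5 * ((u1 - mu1) / s1) ** 2 - np.log(s1 * np.sqrt(2 * np.pi))
    lp2 = -0.5 * ((u2 - (a * u1 + b)) / s2) ** 2 - np.log(s2 * np.sqrt(2 * np.pi))
    return lp1 + lp2
```

A transformer replaces the linear-Gaussian conditional with a learned one, which is why the approach scales well with phase-space dimensionality: each new dimension adds one more conditional factor.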
The Landscape of Unfolding with Machine Learning
Recent innovations from machine learning allow for data unfolding without binning and including correlations across many dimensions. We describe a set of known, upgraded, and new methods for ML-based unfolding. The performance of these approaches is evaluated on the same two datasets. We find that all techniques are capable of accurately reproducing the particle-level spectra across complex observables. Given that these approaches are conceptually diverse, they offer an exciting toolkit for a new class of measurements that can probe the Standard Model with an unprecedented level of detail and may enable sensitivity to new phenomena.
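A common building block of ML-based unfolding is reweighting simulation to match data at detector level and pulling those weights back to particle level. A binned 1D caricature of that step (the unbinned methods in the literature replace the histogram ratio below with a classifier; all distributions and numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Simulation: particle-level truth and smeared detector-level observable.
sim_truth = rng.exponential(1.0, n)
sim_reco = sim_truth + rng.normal(0.0, 0.3, n)

# "Data": a different underlying spectrum, same detector smearing.
data_truth = rng.exponential(1.2, n)
data_reco = data_truth + rng.normal(0.0, 0.3, n)

# Reweight simulation to data at detector level via a histogram
# density ratio (unbinned ML unfolding learns this ratio instead).
bins = np.linspace(-1.5, 8.0, 60)
h_data, _ = np.histogram(data_reco, bins=bins)
h_sim, _ = np.histogram(sim_reco, bins=bins)
ratio = np.where(h_sim > 0, h_data / np.maximum(h_sim, 1), 1.0)
idx = np.clip(np.digitize(sim_reco, bins) - 1, 0, len(ratio) - 1)
w = ratio[idx]

# Pulling the detector-level weights back to particle level yields
# an (approximately) unfolded spectrum; compare its mean to truth.
unfolded_mean = np.average(sim_truth, weights=w)
```

In practice this step is iterated and done without binning across many correlated observables, which is exactly where the ML methods surveyed here differ.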