19 research outputs found

    Precision-Machine Learning for the Matrix Element Method

    Full text link
    The matrix element method is the LHC inference method of choice for limited statistics. We present a dedicated machine learning framework based on efficient phase-space integration and a learned acceptance and transfer function. It combines INN and diffusion networks with a transformer that resolves the jet combinatorics. We showcase this setup for the CP-phase of the top Yukawa coupling in associated Higgs and single-top production. (24 pages, 11 figures; v2: updated reference)
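    The core of the matrix element method is a per-event likelihood obtained by integrating a squared matrix element against a transfer function over the unobserved parton-level phase space. A minimal numerical sketch of that integral, with a toy |M|^2 and a fixed Gaussian transfer function and proposal standing in for the learned networks (all functions below are illustrative assumptions, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(0)

def matrix_element_sq(z, alpha):
    # toy squared matrix element with a CP-phase-like parameter alpha
    return 1.0 + alpha * np.cos(z)

def transfer_function(x, z, sigma=0.3):
    # Gaussian smearing from parton-level z to reconstruction-level x
    return np.exp(-0.5 * ((x - z) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mem_likelihood(x, alpha, n_samples=100_000):
    # importance-sample z where the transfer function has support;
    # a learned phase-space flow would replace this fixed Gaussian proposal
    z = rng.normal(loc=x, scale=0.4, size=n_samples)
    proposal = np.exp(-0.5 * ((z - x) / 0.4) ** 2) / (0.4 * np.sqrt(2 * np.pi))
    integrand = matrix_element_sq(z, alpha) * transfer_function(x, z)
    return np.mean(integrand / proposal)  # Monte Carlo estimate of the z-integral

like = mem_likelihood(x=0.5, alpha=0.2)
```

    Scanning `alpha` and maximizing this likelihood over all observed events is what turns the integral into an inference tool.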

    MadNIS -- Neural Multi-Channel Importance Sampling

    Full text link
    Theory predictions for the LHC require precise numerical phase-space integration and generation of unweighted events. We combine machine-learned multi-channel weights with a normalizing flow for importance sampling, to improve classical methods for numerical integration. We develop an efficient bi-directional setup based on an invertible network, combining online and buffered training for potentially expensive integrands. We illustrate our method for the Drell-Yan process with an additional narrow resonance.
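    The multi-channel idea is to sample each peak of the integrand from its own channel density and divide by the weighted mixture. A toy numerical sketch with two fixed Gaussian channels and fixed channel weights (in the actual framework both the channel mappings and the weights alpha_i are learned; everything here is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def integrand(x):
    # toy "cross section" with two narrow peaks, mimicking resonances
    return (np.exp(-0.5 * ((x - 1.0) / 0.1) ** 2)
            + np.exp(-0.5 * ((x + 1.0) / 0.1) ** 2))

def channel_density(x, mu, width=0.1):
    # one channel mapping per peak; a learned flow would replace this
    return np.exp(-0.5 * ((x - mu) / width) ** 2) / (width * np.sqrt(2 * np.pi))

def multichannel_integral(n=200_000, alpha=(0.5, 0.5)):
    # draw alpha_i * n points from each channel
    x = np.concatenate([
        rng.normal(1.0, 0.1, int(alpha[0] * n)),
        rng.normal(-1.0, 0.1, int(alpha[1] * n)),
    ])
    mixture = (alpha[0] * channel_density(x, 1.0)
               + alpha[1] * channel_density(x, -1.0))
    return np.mean(integrand(x) / mixture)

estimate = multichannel_integral()  # true value: 2 * 0.1 * sqrt(2*pi)
```

    Because each channel matches one peak, the ratio integrand/mixture is nearly flat and the estimator variance collapses; that variance reduction is exactly what the learned weights and flows optimize.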

    The Flow of LHC Events - Generative models for LHC simulations and inference

    No full text
    Generative neural networks have various applications in LHC physics, for both fast simulations and precise inference. We first show that normalizing flows can be used to generate reconstruction-level events with percent-level precision. To estimate their generation uncertainties, we apply Bayesian neural networks. Further, we study the weight distribution from a classifier network, which can be used for reweighting, as a performance metric and as a diagnostic tool. Next, we introduce the MadNIS framework for neural importance sampling. It improves classical methods for phase-space integration and sampling using adaptive multi-channel weights and normalizing flows as learnable channel mappings. We show that it leads to significant performance gains for several realistic LHC processes implemented in the MadGraph event generator. Generative networks can also improve analyses by maximizing the amount of extracted information. The matrix element method uses the full kinematic information, making it the tool of choice for small event numbers. It relies on a transfer function to model the shower, detector and acceptance effects. We show how three networks can encode these effects and enable efficient phase-space integration. We use normalizing flows for fast sampling, diffusion models for precise density estimation, and solve jet combinatorics with a transformer.
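    The density estimation underlying the normalizing flows mentioned above rests on the change-of-variables formula: an invertible map x = g(z) applied to a simple latent gives p_x(x) = p_z(g_inv(x)) |dg_inv/dx|. A minimal sketch with a single affine map standing in for the stacked learned transformations (the map and its parameters are illustrative assumptions):

```python
import numpy as np

A, B = 2.0, 0.5  # affine "flow": x = A * z + B, invertible for A != 0

def g_inv(x):
    return (x - B) / A

def log_prob(x):
    z = g_inv(x)
    log_pz = -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)  # standard normal latent
    log_det = -np.log(abs(A))                          # |d g_inv / dx| = 1/|A|
    return log_pz + log_det

# x = 2 z + 0.5 with z ~ N(0, 1) is N(0.5, 2^2); log_prob reproduces that density
lp = log_prob(1.0)
```

    A real flow chains many such invertible blocks with learned parameters, but the tractable Jacobian term is what makes both fast sampling and exact likelihood evaluation possible.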

    How to Understand Limitations of Generative Networks - Generator datasets

    No full text
    These are the datasets used in "How to Understand Limitations of Generative Networks". The preprint is available on arXiv at: https://arxiv.org/abs/2305.16774
    Four files are used in Sec. 5 "Event generation":
    - ev_truth.h5 is the true reconstruction-level sample used during training;
    - ev_masspeak.h5 is the sample generated from the neural network of Sec. 5.1;
    - ev_inn.h5 is the state-of-the-art sample used in Sec. 5.2;
    - ev_binn.h5 collects the Bayesian samples of Sec. 5.3.
    The remaining three files are used for the "Calorimeter simulation" section. These are named according to the particle originating the shower: calo_eplus.hdf5 for positrons, calo_gamma.hdf5 for photons, and calo_piplus.hdf5 for pions. Each sample contains 100k showers.

    QCD or what?

    No full text
    Autoencoder networks, trained only on QCD jets, can be used to search for anomalies in jet substructure. We show how, based either on images or on 4-vectors, they identify jets from decays of arbitrary heavy resonances. To control the backgrounds and the underlying systematics, we can de-correlate the jet mass using an adversarial network. Such an adversarial autoencoder allows for a general and at the same time easily controllable search for new physics. Ideally, it can be trained and applied to data in the same phase-space region, allowing us to efficiently search for new physics using unsupervised learning.
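    The anomaly-detection logic is that a bottlenecked network trained only on background reconstructs background well and anomalies poorly, so the reconstruction error serves as an anomaly score. A minimal sketch using a linear autoencoder (equivalent to PCA) on toy data, standing in for the paper's image and 4-vector autoencoders (data and dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# toy "QCD background" features with some correlation structure
background = rng.normal(0.0, 1.0, size=(5000, 10))
background[:, 1] = 0.8 * background[:, 0] + 0.2 * background[:, 1]

mean = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
encoder = vt[:4]  # 4-dimensional bottleneck of a linear autoencoder

def reconstruction_error(x):
    latent = (x - mean) @ encoder.T          # encode
    reco = latent @ encoder + mean           # decode
    return np.sum((x - reco) ** 2, axis=-1)  # anomaly score per event

bg_score = reconstruction_error(background).mean()
signal = rng.normal(3.0, 1.0, size=(1000, 10))  # toy "anomalous" jets
sig_score = reconstruction_error(signal).mean()
```

    Because the bottleneck is fit to the background manifold only, the shifted signal events reconstruct badly and score higher; the adversarial mass de-correlation is an additional training constraint not sketched here.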

    Two invertible networks for the matrix element method

    No full text
    The matrix element method is widely considered the ultimate LHC inference tool for small event numbers. We show how a combination of two conditional generative neural networks encodes the QCD radiation and detector effects without any simplifying assumptions, while keeping the computation of likelihoods for individual events numerically efficient. We illustrate our approach for the CP-violating phase of the top Yukawa coupling in associated Higgs and single-top production. Currently, the limiting factor for the precision of our approach is jet combinatorics.

    How to understand limitations of generative networks

    No full text
    Well-trained classifiers and their complete weight distributions provide us with a well-motivated and practicable method to test generative networks in particle physics. We illustrate their benefits for distribution-shifted jets, calorimeter showers, and reconstruction-level events. In all cases, the classifier weights make for a powerful test of the generative network, identify potential problems in the density estimation, relate them to the underlying physics, and tie in with a comprehensive precision and uncertainty treatment for generative networks.
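    The test rests on the fact that an optimally trained truth-vs-generated classifier C yields per-event weights w = p_truth/p_gen = C/(1 - C), whose distribution diagnoses where the generative density is off. A toy sketch where known densities stand in for the trained classifier (the densities and the mismatch are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def p_truth(x):
    # toy truth density: standard normal
    return np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)

def p_gen(x, s=1.2):
    # toy generated density, slightly too wide
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))

def classifier(x):
    # output of the (here analytically optimal) truth-vs-generated classifier
    return p_truth(x) / (p_truth(x) + p_gen(x))

x_gen = rng.normal(0.0, 1.2, size=100_000)
c = classifier(x_gen)
w = c / (1.0 - c)  # per-event weight p_truth / p_gen

# weights average to one; their spread quantifies the density mismatch
w_mean, w_std = w.mean(), w.std()
```

    A perfectly trained generator would give a weight distribution sharply peaked at one; broad tails point to phase-space regions where the learned density fails, which is what makes the weights usable both for reweighting and as a diagnostic.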