Lamarr: LHCb ultra-fast simulation based on machine learning models deployed within Gauss
About 90% of the computing resources available to the LHCb experiment have been spent producing simulated data samples for Run 2 of the Large Hadron Collider at CERN. The upgraded LHCb detector will collect larger data samples, requiring many more simulated events to analyze the data taken in Run 3. Simulation is a key necessity for analysis: interpreting signal, rejecting background, and measuring efficiencies. The needed simulation will far exceed the pledged resources, requiring an evolution in technologies and techniques to produce these simulated data samples. In this contribution, we
discuss Lamarr, a Gaudi-based framework to speed up simulation production by parameterizing both the detector response and the reconstruction algorithms of the LHCb experiment. Deep Generative Models powered by several algorithms and strategies are employed to effectively parameterize the high-level response of the individual components of the LHCb detector, encoding within neural networks the experimental errors and uncertainties introduced in the detection and reconstruction phases. Where possible, models are trained directly on real
reconstruction phases. Where possible, models are trained directly on real
data, statistically subtracting any background components by applying
appropriate reweighing procedures. Embedding Lamarr in the general LHCb Gauss
Simulation framework allows to combine its execution with any of the available
generators in a seamless way. The resulting software package enables a
simulation process independent of the detailed simulation used to date.Comment: Under review in Journal of Physics: Conference Series (ACAT 2022
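The statistical background subtraction mentioned in the abstract can be illustrated with a minimal sideband-subtraction sketch. Everything here (function name, mass windows, numbers) is hypothetical, and real analyses typically use more sophisticated sPlot-style weights; the point is only the mechanism of assigning negative weights so background cancels on average.

```python
# Illustrative sketch (not LHCb code): statistical background subtraction
# via sideband weights. Events in the signal window get weight +1; events
# in the sidebands get a negative weight scaled so that, in any weighted
# distribution, a background that is uniform in mass cancels on average.

def sideband_weights(masses, signal_window, sidebands):
    """Return one weight per event based on its invariant mass (MeV)."""
    lo, hi = signal_window
    width_sig = hi - lo
    width_side = sum(b - a for a, b in sidebands)
    w_side = -width_sig / width_side  # scale sidebands to the expected background
    weights = []
    for m in masses:
        if lo <= m < hi:
            weights.append(1.0)
        elif any(a <= m < b for a, b in sidebands):
            weights.append(w_side)
        else:
            weights.append(0.0)  # outside all windows: ignored
    return weights

# Hypothetical example: signal window [5200, 5300) MeV, two sidebands
# whose total width equals the signal window width.
w = sideband_weights(
    [5250, 5120, 5360, 5500],
    (5200, 5300),
    [(5100, 5150), (5350, 5400)],
)
# → [1.0, -1.0, -1.0, 0.0]
```

Summing any quantity with these weights yields, on average, the signal-only distribution, which is what allows training models directly on real data.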
Hyperparameter Optimization as a Service on INFN Cloud
The simplest and often most effective way of parallelizing the training of complex machine learning models is to execute several training instances on multiple machines, scanning the hyperparameter space to optimize the underlying statistical model and the learning procedure. Often, such a meta-learning procedure is limited by the ability to securely access a common database organizing the knowledge of previous and ongoing trials. Exploiting opportunistic GPUs provided in different environments represents a further challenge when designing such optimization campaigns. In this contribution, we discuss how a set of REST APIs can be used to access a dedicated service based on INFN Cloud to monitor and coordinate multiple training instances, with gradient-less optimization techniques, via simple HTTP requests. The service, called Hopaas (Hyperparameter OPtimization As A Service), is made of a web interface and sets of APIs implemented with a FastAPI backend running through Uvicorn and NGINX in a virtual instance of INFN Cloud. The optimization algorithms are currently based on Bayesian techniques as provided by Optuna. A Python frontend is also made available for quick prototyping. We present applications to hyperparameter optimization campaigns performed by combining private, INFN Cloud, and CINECA resources. Such multi-node multi-site optimization studies have given a significant boost to the development of a set of parameterizations for the ultra-fast simulation of the LHCb experiment.
Comment: To be published in Journal of Physics: Conference Series (ACAT 2022)
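The coordination pattern described here is essentially an ask/tell loop: workers request a trial from a central service, evaluate it, and report the result so all workers share one optimization state. The following self-contained sketch mimics that pattern with an in-process object standing in for the remote server; the class name, the endpoint names in comments, and the random-search strategy are illustrative assumptions, not the real Hopaas API (which uses Bayesian optimization via Optuna).

```python
# Hypothetical sketch of an ask/tell hyperparameter service. The "server"
# is an in-process object so the example runs standalone; in the real
# service these calls would be HTTP requests to a shared backend.
import random

class ToyOptimizationServer:
    """Hands out trial suggestions and records results, so that many
    workers could share a single optimization history."""
    def __init__(self, space, seed=0):
        self.space = space              # e.g. {"lr": (1e-4, 1e-1)}
        self.rng = random.Random(seed)
        self.trials = []                # shared knowledge of past trials

    def ask(self):                      # stands in for e.g. POST /api/ask
        params = {k: self.rng.uniform(lo, hi)
                  for k, (lo, hi) in self.space.items()}
        return len(self.trials), params

    def tell(self, trial_id, params, loss):   # stands in for POST /api/tell
        self.trials.append((trial_id, params, loss))

server = ToyOptimizationServer({"lr": (1e-4, 1e-1), "dropout": (0.0, 0.5)})
for _ in range(20):                     # each iteration mimics one worker
    tid, p = server.ask()
    # Toy objective in place of a real training run:
    loss = (p["lr"] - 0.01) ** 2 + (p["dropout"] - 0.2) ** 2
    server.tell(tid, p, loss)

best = min(server.trials, key=lambda t: t[2])
```

Because the state lives in one place, opportunistic workers on heterogeneous resources can join or leave the campaign at any time, which is the property the abstract highlights.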
Towards Reliable Neural Generative Modeling of Detectors
The increasing luminosities of future data taking at Large Hadron Collider
and next generation collider experiments require an unprecedented amount of
simulated events to be produced. Such large scale productions demand a
significant amount of valuable computing resources. This brings a demand to use
new approaches to event generation and simulation of detector responses. In
this paper, we discuss the application of generative adversarial networks
(GANs) to the simulation of LHCb experiment events. We emphasize the main pitfalls in the application of GANs and study the systematic effects in detail. The presented results are based on the Geant4 simulation of the LHCb Cherenkov detector.
Comment: 6 pages, 4 figures
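The adversarial setup behind a GAN can be made concrete with a deliberately tiny example: a one-parameter "generator" learns the mean of a Gaussian while a logistic "discriminator" tries to separate real from generated samples. This is a toy sketch of the training dynamics only; it is not the networks or the detector model used in the paper, and all architecture and learning-rate choices are illustrative assumptions.

```python
# Toy adversarial training loop: generator parameter theta should drift
# toward the real mean as the discriminator stops being able to tell
# real samples from generated ones. Stdlib only, manual gradients.
import math
import random

rng = random.Random(1)
REAL_MEAN, REAL_STD = 2.0, 0.5

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

theta = 0.0          # generator parameter: mean of generated samples
w, b = 0.1, 0.0      # discriminator parameters (logistic on a scalar)
lr = 0.05

for step in range(2000):
    x_real = rng.gauss(REAL_MEAN, REAL_STD)
    z = rng.gauss(0.0, 1.0)
    x_fake = theta + REAL_STD * z            # generator output

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator: gradient descent on the non-saturating loss -log D(fake);
    # d/dtheta [-log D(theta + c*z)] = -(1 - D(fake)) * w
    d_fake = sigmoid(w * x_fake + b)
    theta += lr * (1 - d_fake) * w
```

Even in this one-dimensional toy, the oscillatory pull between the two updates hints at the instabilities ("pitfalls") that the paper studies systematically in the full detector-simulation setting.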
Developing Artificial Intelligence in the Cloud: the AI INFN Platform
The INFN CSN5-funded project AI INFN (“Artificial Intelligence at INFN”) aims to promote ML and AI adoption within INFN by providing comprehensive support, including state-of-the-art hardware and cloud-native solutions within INFN Cloud. This facilitates efficient sharing of hardware accelerators without hindering the institute’s diverse research activities. AI INFN advances from a Virtual-Machine-based model to a flexible Kubernetes-based platform, offering features such as JWT-based authentication, a JupyterHub multi-tenant interface, a distributed file system, customizable conda environments, and specialized monitoring and accounting systems. It also enables virtual nodes in the cluster, offloading computing payloads to remote resources through the Virtual Kubelet technology, with InterLink as provider. This setup can manage workflows across various providers and hardware types, which is crucial for scientific use cases that require dedicated infrastructures for different parts of the workload. Results of initial tests to validate its production applicability, emerging case studies, and integration scenarios are presented.
The LHCb ultra-fast simulation option, Lamarr: design and validation
Detailed detector simulation is the major consumer of CPU resources at LHCb,
having used more than 90% of the total computing budget during Run 2 of the
Large Hadron Collider at CERN. As data is collected by the upgraded LHCb
detector during Run 3 of the LHC, larger requests for simulated data samples
are necessary, and will far exceed the pledged resources of the experiment,
even with existing fast simulation options. An evolution of technologies and
techniques to produce simulated samples is mandatory to meet the upcoming needs of analysis: interpreting signal versus background and measuring efficiencies. In
this context, we propose Lamarr, a Gaudi-based framework designed to offer the
fastest solution for the simulation of the LHCb detector. Lamarr consists of a
pipeline of modules parameterizing both the detector response and the
reconstruction algorithms of the LHCb experiment. Most of the parameterizations
are made of Deep Generative Models and Gradient Boosted Decision Trees trained
on simulated samples or, where possible, on real data. Embedding Lamarr in the general LHCb Gauss simulation framework allows combining its execution with any of the available generators in a seamless way. Lamarr has
been validated by comparing key reconstructed quantities with Detailed Simulation. Good agreement of the simulated distributions is obtained with a two-order-of-magnitude speed-up of the simulation phase.
Comment: Under review in EPJ Web of Conferences (CHEP 2023)
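The "pipeline of modules" design that both Lamarr abstracts describe can be sketched abstractly: each module consumes generator-level particles and attaches a parameterized detector or reconstruction effect instead of simulating it in detail. The module names, the Gaussian smearing, and the flat efficiency below are placeholder stand-ins for the trained Deep Generative Models and GBDTs; this is not the actual Lamarr code.

```python
# Illustrative sketch of a parameterization pipeline: generator-level
# particles flow through modules that emulate detector response and
# reconstruction effects with cheap parameterizations.
import random

rng = random.Random(42)

def tracking_efficiency(particle):
    # Stand-in for a trained efficiency parameterization (e.g. a GBDT):
    # here, a flat 95% chance of being reconstructed.
    particle["reconstructed"] = rng.random() < 0.95
    return particle

def tracking_resolution(particle):
    # Stand-in for a trained resolution model (e.g. a generative network):
    # here, a 0.5% Gaussian smearing of the true momentum.
    particle["p_reco"] = particle["p_true"] * rng.gauss(1.0, 0.005)
    return particle

PIPELINE = [tracking_efficiency, tracking_resolution]

def run_pipeline(particles):
    """Apply every parameterization module to every particle, in order.
    (A real pipeline would also drop unreconstructed particles.)"""
    out = []
    for part in particles:
        for module in PIPELINE:
            part = module(part)
        out.append(part)
    return out

events = run_pipeline([{"p_true": 10.0 * (i + 1)} for i in range(100)])
```

Because each module is a cheap function evaluation rather than a particle-transport step, replacing detailed simulation with such a chain is what yields the quoted two-order-of-magnitude speed-up.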
Multidifferential study of identified charged hadron distributions in Z-tagged jets in proton-proton collisions at 13 TeV
Jet fragmentation functions are measured for the first time in proton-proton
collisions for charged pions, kaons, and protons within jets recoiling against
a Z boson. The charged-hadron distributions are studied longitudinally and
transversely to the jet direction for jets with transverse momentum greater than 20 GeV and in the pseudorapidity range 2.5 < η < 4. The data sample was collected with the LHCb experiment at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 1.64 fb⁻¹. Triple
differential distributions as a function of the hadron longitudinal momentum
fraction, hadron transverse momentum, and jet transverse momentum are also
measured for the first time. This helps constrain transverse-momentum-dependent
fragmentation functions. Differences in the shapes and magnitudes of the
measured distributions for the different hadron species provide insights into
the hadronization process for jets predominantly initiated by light quarks.
Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-013.html (LHCb public pages)
Study of the decay
The decay is studied
in proton-proton collisions at a center-of-mass energy of TeV
using data corresponding to an integrated luminosity of 5 fb⁻¹
collected by the LHCb experiment. In the system, the
state observed at the BaBar and Belle experiments is
resolved into two narrower states, and ,
whose masses and widths are measured to be where the first uncertainties are statistical and the second
systematic. The results are consistent with a previous LHCb measurement using a
prompt sample. Evidence of a new
state is found with a local significance of , whose mass and width
are measured to be and , respectively. In addition, evidence of a new decay mode
is found with a significance of
. The relative branching fraction of with respect to the
decay is measured to be , where the first
uncertainty is statistical, the second systematic and the third originates from
the branching fractions of charm hadron decays.
Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-028.html (LHCb public pages)
Measurement of the ratios of branching fractions and
The ratios of branching fractions
and are measured, assuming isospin symmetry, using a
sample of proton-proton collision data corresponding to 3.0 fb⁻¹ of
integrated luminosity recorded by the LHCb experiment during 2011 and 2012. The
tau lepton is identified in the decay mode
. The measured values are
and
, where the first uncertainty is
statistical and the second is systematic. The correlation between these
measurements is . Results are consistent with the current average
of these quantities and are at a combined 1.9 standard deviations from the
predictions based on lepton flavor universality in the Standard Model.
Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-039.html (LHCb public pages)
First observation of the Λb⁰ → Λc⁺Ds⁻K⁺K⁻ decay and search for pentaquarks in the Λc⁺Ds⁻ system
The Λb⁰ → Λc⁺Ds⁻K⁺K⁻ decay is observed for the first time using the data sample from proton-proton collisions recorded at a center-of-mass energy of 13 TeV with the LHCb detector, corresponding to an integrated luminosity of 6 fb⁻¹. The ratio of its branching fraction to that of Λb⁰ → Λc⁺Ds⁻ decays is measured as 0.0141 ± 0.0019 ± 0.0012, where the first uncertainty is statistical and the second systematic. A search for hidden-charm pentaquarks with strangeness is performed in the Λc⁺Ds⁻ system. No evidence is found, and upper limits on the production ratio of Pcc̄s(4338)⁰ and Pcc̄s(4459)⁰ pentaquarks relative to the Λc⁺Ds⁻ final state are set at the 95% confidence level as 0.12 and 0.20, respectively.