Density Estimation Trees as fast non-parametric modelling tools
Density Estimation Trees (DETs) are decision trees trained on a multivariate
dataset to estimate its probability density function. While not competitive
with kernel techniques in terms of accuracy, they are incredibly fast,
embarrassingly parallel and relatively small when stored to disk. These
properties make DETs appealing in the resource-constrained context of LHC
data analysis. Possible applications include selection optimization, fast
simulation and fast detector calibration. In this contribution I describe the
algorithm, made available to the HEP community in a RooFit implementation. A
set of applications under discussion within the LHCb Collaboration is also
briefly illustrated.
Comment: Presented at the Workshop on Advanced Computing and Analysis Techniques (ACAT2016)
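As an illustration of the idea, the sketch below is a minimal density estimation tree, not the RooFit implementation described above: it assumes fixed-depth median splits in place of the algorithm's cost-driven split search and pruning, and estimates the density in each leaf as the fraction of points it contains divided by the leaf volume.

```python
# Minimal Density Estimation Tree sketch: recursive median splits to a fixed
# depth; each leaf estimates the density as n_leaf / (n_total * leaf_volume).
# The fixed depth and median split rule are simplifying assumptions.
import numpy as np

def build_det(points, lo, hi, depth, n_total):
    """Return a nested-dict tree over the axis-aligned box [lo, hi]."""
    if depth == 0 or len(points) < 2:
        volume = float(np.prod(hi - lo))
        return {"leaf": True, "density": len(points) / (n_total * volume)}
    dim = int(np.argmax(hi - lo))            # split the widest dimension
    cut = float(np.median(points[:, dim]))   # median split (a simplification)
    mask = points[:, dim] <= cut
    hi_l, lo_r = hi.copy(), lo.copy()
    hi_l[dim] = cut
    lo_r[dim] = cut
    return {"leaf": False, "dim": dim, "cut": cut,
            "l": build_det(points[mask], lo, hi_l, depth - 1, n_total),
            "r": build_det(points[~mask], lo_r, hi, depth - 1, n_total)}

def det_density(tree, x):
    """Evaluate the piecewise-constant density estimate at point x."""
    while not tree["leaf"]:
        tree = tree["l"] if x[tree["dim"]] <= tree["cut"] else tree["r"]
    return tree["density"]

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 2))
lo, hi = data.min(axis=0), data.max(axis=0)
tree = build_det(data, lo, hi, depth=8, n_total=len(data))
print(det_density(tree, np.zeros(2)))  # roughly 1/(2*pi) ~ 0.159 for a 2D standard normal
```

Because each subtree is built independently of its siblings, the construction parallelizes trivially, and the final model is just a set of split values and leaf densities, which is consistent with the small on-disk footprint claimed above.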
Measurement of the B_c^+ meson lifetime using B_c^+ → J/ψμ⁺νX decays
Using 2 fb⁻¹ of data collected in 2012 at √s = 8 TeV, the LHCb Collaboration measured the lifetime of the B_c^+ meson by studying the semileptonic decays B_c^+ → J/ψμ⁺νX. The result, τ(B_c^+) = 509 ± 8 ± 12 fs, is the world's best measurement of the B_c^+ lifetime.
Fast Data-Driven Simulation of Cherenkov Detectors Using Generative Adversarial Networks
The increasing luminosities of future Large Hadron Collider runs and next
generation of collider experiments will require an unprecedented number of
simulated events to be produced. Such large scale productions are extremely
demanding in terms of computing resources. Thus new approaches to event
generation and simulation of detector responses are needed. In LHCb, the
accurate simulation of Cherenkov detectors takes a sizeable fraction of CPU
time. An alternative approach is described here, in which high-level
reconstructed observables are generated with a generative neural network,
bypassing the low-level details. This network is trained to reproduce the particle species likelihood
function values based on the track kinematic parameters and detector occupancy.
The fast simulation is trained using real data samples collected by LHCb during
Run 2. We demonstrate that this approach provides high-fidelity results.
Comment: Proceedings for the 19th International Workshop on Advanced Computing and Analysis Techniques in Physics Research
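As a rough sketch of this approach (not the actual LHCb network; the dimensions, architecture, and stand-in training data below are illustrative assumptions), a conditional GAN pairs a generator that maps noise plus track-level conditions to PID observables with a discriminator that judges (condition, observable) pairs.

```python
# Minimal conditional-GAN sketch: G(noise, conditions) -> PID observables,
# D(conditions, observables) -> real/fake logit. All sizes are assumptions.
import torch
import torch.nn as nn

COND_DIM, NOISE_DIM, OUT_DIM = 3, 16, 5  # e.g. (p, eta, occupancy) -> 5 PID variables

G = nn.Sequential(nn.Linear(COND_DIM + NOISE_DIM, 64), nn.ReLU(),
                  nn.Linear(64, OUT_DIM))
D = nn.Sequential(nn.Linear(COND_DIM + OUT_DIM, 64), nn.ReLU(),
                  nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), 1e-3)
opt_d = torch.optim.Adam(D.parameters(), 1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(cond, real_pid):
    batch = cond.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake_pid = G(torch.cat([cond, noise], dim=1))
    # Discriminator: real (condition, observable) pairs -> 1, generated -> 0.
    d_loss = (bce(D(torch.cat([cond, real_pid], 1)), torch.ones(batch, 1)) +
              bce(D(torch.cat([cond, fake_pid.detach()], 1)), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to make generated pairs look real to the discriminator.
    g_loss = bce(D(torch.cat([cond, fake_pid], 1)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random stand-in data:
cond = torch.randn(256, COND_DIM)
real = torch.randn(256, OUT_DIM)
print(train_step(cond, real))
```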
Towards Reliable Neural Generative Modeling of Detectors
The increasing luminosities of future data taking at the Large Hadron Collider
and at next-generation collider experiments require an unprecedented number of
simulated events to be produced. Such large-scale productions demand a
significant amount of valuable computing resources, motivating
new approaches to event generation and simulation of detector responses. In
this paper, we discuss the application of generative adversarial networks
(GANs) to the simulation of events in the LHCb experiment. We emphasize the main
pitfalls in the application of GANs and study the systematic effects in detail.
The presented results are based on the Geant4 simulation of the LHCb Cherenkov
detector.
Comment: 6 pages, 4 figures
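One basic check of the kind this paper advocates, sketched here under illustrative assumptions (stand-in distributions and an assumed momentum binning), is to compare generated and reference observables in bins of a conditioning variable, so that a systematic bias confined to part of the phase space is not averaged away.

```python
# Compare a generated observable against the reference (e.g. Geant4) sample
# in bins of a conditioning variable, using a two-sample KS test per bin.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
momentum = rng.uniform(2, 100, 50_000)                       # conditioning variable (GeV)
reference = rng.normal(0.0, 1.0, 50_000)                     # stand-in for detailed simulation
generated = rng.normal(0.2 * momentum / 100, 1.0, 50_000)    # momentum-dependent bias

for lo, hi in [(2, 20), (20, 50), (50, 100)]:
    sel = (momentum >= lo) & (momentum < hi)
    stat, pval = ks_2samp(reference[sel], generated[sel])
    print(f"p in [{lo:3d},{hi:3d}) GeV: KS = {stat:.4f}, p-value = {pval:.3g}")
```

In this toy, the inclusive comparison would look acceptable while the high-momentum bin reveals the mismodelling, which is exactly the kind of hidden systematic effect the paper warns about.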
Model independent measurements of Standard Model cross sections with Domain Adaptation
With the ever growing amount of data collected by the ATLAS and CMS
experiments at the CERN LHC, fiducial and differential measurements of the
Higgs boson production cross section have become important tools to test the
Standard Model predictions with an unprecedented level of precision, as well as
to search for deviations that could indicate the presence of physics beyond the
Standard Model. These measurements are in general designed to be easily
comparable to any present or future theoretical prediction, and to achieve this
goal it is important to keep the model dependence to a minimum. Nevertheless,
reducing the model dependence usually comes at the expense of measurement
precision, preventing full exploitation of the signal
extraction procedure. In this paper a novel methodology based on the machine
learning concept of domain adaptation is proposed, which allows using a complex
deep neural network in the signal extraction procedure while ensuring a minimal
dependence of the measurements on the theoretical modelling of the signal.
Comment: 16 pages, 10 figures
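A common way to realize this idea is a gradient-reversal (DANN-style) adversary; whether this matches the paper's exact construction is an assumption, and the network sizes and labels below are illustrative. The classifier separates signal from background while an adversary tries to identify which signal model an event came from; reversing the adversary's gradient makes the shared features model-independent.

```python
# Domain adaptation via a gradient-reversal layer (DANN-style sketch).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)          # identity in the forward pass
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None  # reversed, scaled gradient

features = nn.Sequential(nn.Linear(10, 64), nn.ReLU())  # shared representation
label_head = nn.Linear(64, 1)                           # signal vs background
domain_head = nn.Linear(64, 1)                          # which signal model (domain)
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(list(features.parameters()) +
                       list(label_head.parameters()) +
                       list(domain_head.parameters()), 1e-3)

x = torch.randn(128, 10)                        # toy inputs
y = torch.randint(0, 2, (128, 1)).float()       # signal/background labels
d = torch.randint(0, 2, (128, 1)).float()       # signal-model (domain) labels

f = features(x)
loss = bce(label_head(f), y) + bce(domain_head(GradReverse.apply(f, 1.0)), d)
opt.zero_grad(); loss.backward(); opt.step()
```

The reversed gradient pushes the shared features to carry no information about the signal model used in training, while the label head keeps them useful for signal extraction, which is the trade-off the paper exploits.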
Muon identification for LHCb Run 3
Muon identification is of paramount importance for the physics programme of
LHCb. In the upgrade phase, starting from Run 3 of the LHC, the trigger of the
experiment will be solely based on software. The luminosity increase to
2 × 10³³ cm⁻²s⁻¹ will require an improvement of the muon
identification criteria, aiming at performance equal to or better than that of
Run 2, but in a much more challenging environment. In this paper, two new muon
identification algorithms developed in view of the LHCb upgrade are presented,
and their performance in terms of signal efficiency versus background reduction
is shown.
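The efficiency-versus-background-rejection figure of merit mentioned above can be computed from any classifier score by scanning the selection threshold; the scores below are random stand-ins, not the output of the LHCb algorithms.

```python
# Signal (muon) efficiency vs background rejection from classifier scores.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(1.0, 1.0, 10_000),    # true muons
                         rng.normal(-1.0, 1.0, 10_000)])  # background tracks
labels = np.concatenate([np.ones(10_000), np.zeros(10_000)])

fpr, tpr, thr = roc_curve(labels, scores)
for eff in (0.90, 0.95, 0.99):
    i = np.searchsorted(tpr, eff)  # first threshold reaching this efficiency
    print(f"muon efficiency = {eff:.0%}: background rejection = {1 - fpr[i]:.3f}")
```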
The LHCb ultra-fast simulation option, Lamarr: design and validation
Detailed detector simulation is the major consumer of CPU resources at LHCb,
having used more than 90% of the total computing budget during Run 2 of the
Large Hadron Collider at CERN. As data is collected by the upgraded LHCb
detector during Run 3 of the LHC, larger simulated data samples are necessary,
and their production will far exceed the pledged resources of the experiment,
even with existing fast simulation options. An evolution of technologies and
techniques to produce simulated samples is mandatory to meet the upcoming needs
of analyses to interpret signal versus background and to measure efficiencies. In
this context, we propose Lamarr, a Gaudi-based framework designed to offer the
fastest solution for the simulation of the LHCb detector. Lamarr consists of a
pipeline of modules parameterizing both the detector response and the
reconstruction algorithms of the LHCb experiment. Most of the parameterizations
are built from Deep Generative Models and Gradient Boosted Decision Trees trained
on simulated samples or, where possible, on real data. Embedding
Lamarr in the general LHCb Gauss Simulation framework allows combining its
execution with any of the available generators in a seamless way. Lamarr has
been validated by comparing key reconstructed quantities with Detailed
Simulation. Good agreement of the simulated distributions is obtained with a
two-order-of-magnitude speed-up of the simulation phase.
Comment: Under review in EPJ Web of Conferences (CHEP 2023)
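The pipeline structure can be sketched abstractly as follows; this is an illustration of the concept, not Lamarr's actual Gaudi interface, and both stage implementations are stand-ins for the trained parameterizations.

```python
# Conceptual sketch of a parameterization pipeline: generated particles flow
# through a chain of stages, each replacing one step of detailed simulation
# plus reconstruction. Both stages below are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(3)

def tracking_resolution(particles):
    # Stand-in for a trained deep generative model: smear the true momentum.
    n = len(particles["p_true"])
    particles["p_reco"] = particles["p_true"] * rng.normal(1.0, 0.005, n)
    return particles

def tracking_efficiency(particles):
    # Stand-in for a trained (e.g. gradient-boosted) efficiency model.
    keep = rng.random(len(particles["p_true"])) < 0.96
    return {k: v[keep] for k, v in particles.items()}

pipeline = [tracking_resolution, tracking_efficiency]

particles = {"p_true": rng.uniform(2.0, 100.0, 1000)}  # toy generator output (GeV)
for stage in pipeline:
    particles = stage(particles)
print(len(particles["p_true"]), "tracks survive; mean p_reco/p_true =",
      float(np.mean(particles["p_reco"] / particles["p_true"])))
```

Keeping each parameterization as an independent stage is what lets such a framework swap in models trained on either simulation or real data, and combine with any event generator upstream.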
Intrinsic time resolution of 3D-trench silicon pixels for charged particle detection
In recent years, high-resolution time tagging has emerged as the tool to
tackle the problem of high track density in the detectors of the next
generation of experiments at particle colliders. Time resolutions below 50 ps
and average event repetition rates of tens of MHz on sensor pixels having a
pitch of 50 μm are typical minimum requirements. This poses an important
scientific and technological challenge for the development of particle sensors
and processing electronics. The TIMESPOT initiative (which stands for TIME and
SPace real-time Operating Tracker) aims at the development of a full prototype
detection system suitable for the particle trackers of upcoming
particle physics experiments. This paper describes the results obtained on the
first batch of TIMESPOT silicon sensors, based on a novel 3D MEMS (micro
electro-mechanical systems) design. Following this approach, the performance of
other ongoing silicon sensor developments has been matched and surpassed, while
using a technology which is known to be robust against radiation degradation. A
time resolution of the order of 20 ps has been measured at room temperature,
suggesting possible further improvements after optimisation of the
front-end electronics processing stage.
Comment: This version was accepted for publication in JINST on 21/07/2020
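A resolution of this kind is typically extracted from the spread of time differences against a reference counter, with the reference resolution subtracted in quadrature; the numbers below are illustrative stand-ins, not TIMESPOT data.

```python
# Extract an intrinsic time resolution from sensor-vs-reference differences.
import numpy as np

rng = np.random.default_rng(4)
sigma_sensor, sigma_ref = 20e-12, 10e-12           # assumed true resolutions (s)
dt = rng.normal(0.0, np.hypot(sigma_sensor, sigma_ref), 100_000)

sigma_meas = dt.std()                               # width of the difference distribution
sigma_intrinsic = np.sqrt(sigma_meas**2 - sigma_ref**2)  # quadrature subtraction
print(f"measured width: {sigma_meas*1e12:.1f} ps, "
      f"intrinsic: {sigma_intrinsic*1e12:.1f} ps")
```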