Studies on $R_K$ with Large Dilepton Invariant-Mass, Scalable Pythonic Fitting, and Online Event Interpretation with GNNs at LHCb
The Standard Model of particle physics is the established theory describing the phenomena of nature's most fundamental particles. However, the model has inherent shortcomings, and recent measurements indicate tensions with its predictions, suggesting the existence of a more fundamental theory. Experimental particle physics aims to test the Standard Model predictions with increasing precision in order to constrain or confirm physics beyond the Standard Model.
A large part of this thesis is dedicated to the first measurement of the ratio of branching fractions of the decays $B^+\to K^+\mu^+\mu^-$ and $B^+\to K^+e^+e^-$, referred to as $R_K$, in the high dilepton invariant-mass region. The presented analysis uses the full dataset of proton-proton collisions collected by the LHCb experiment in the years 2011-2018, corresponding to an integrated luminosity of 9 fb$^{-1}$. The final result for $R_K$ is still blinded.
The sensitivity of the developed analysis is estimated. Applying all analysis steps to a control channel, where the value of the corresponding ratio is known, successfully recovers the correct value.
In addition to the precision measurement of $R_K$ at high dilepton invariant mass, this thesis contains two more technical topics. The first is \textsc{DFEI}, an algorithm that selects particles in an event in the LHCb detector by performing a full event interpretation. This tool is based on multiple Graph Neural Networks and aims to cope with the increase in luminosity in current and future upgrades of the LHCb detector. Comparisons with the current approach show comparable, and in some cases better, performance for decay reconstruction and selection using charged particles. The efficiency is largely independent of the luminosity, which is crucial for future upgrades.
The second is \textsc{zfit}, a \textsc{Python} package for likelihood model fitting. The increasing popularity of the \textsc{Python} programming language in High Energy Physics creates a need for a flexible, modular, and performant fitting library. The \textsc{zfit} package is well integrated into the \textsc{Python} ecosystem, and it is highly customizable and fast thanks to its computational backend, \textsc{TensorFlow}.
Comment: PhD thesis, 269 pages, 130 figures; contains parts of arXiv:1910.13429 and arXiv:2304.08610; future publication on $R_K$ coming
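For context, $R_K$ in a given dilepton invariant-mass-squared ($q^2$) region follows the standard definition used for lepton-flavor-universality ratios (the definition is not spelled out in the abstract above):

\begin{equation}
R_K = \frac{\int \mathrm{d}\mathcal{B}(B^+\to K^+\mu^+\mu^-)/\mathrm{d}q^2 \; \mathrm{d}q^2}{\int \mathrm{d}\mathcal{B}(B^+\to K^+e^+e^-)/\mathrm{d}q^2 \; \mathrm{d}q^2}
\end{equation}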
zfit: scalable pythonic fitting
Statistical modeling is a key element in many scientific fields and especially in High-Energy Physics (HEP) analysis. The standard framework to perform this task in HEP is the C++ ROOT/RooFit toolkit, whose Python bindings are only loosely integrated into the scientific Python ecosystem. In this paper, zfit, a new alternative to RooFit written in pure Python, is presented. Most importantly, zfit provides a well-defined high-level API and workflow for advanced model building and fitting, together with an implementation on top of TensorFlow, allowing transparent use of CPUs and GPUs. It is designed to be easily extendable, so that cutting-edge developments from the scientific Python ecosystem can be used transparently. The main features of zfit are introduced, and its extension to data analysis, especially in the context of HEP experiments, is discussed.
Comment: 12 pages, 2 figures
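As an illustration of the high-level model-building and fitting workflow described above, the following minimal sketch follows the publicly documented zfit quickstart; the observable range, parameter values, and toy data are illustrative choices, not taken from the paper.

import numpy as np
import zfit

# Observable space and toy data (illustrative values).
obs = zfit.Space("x", limits=(-10, 10))
data = zfit.Data.from_numpy(obs=obs, array=np.random.normal(1.2, 1.1, 10000))

# Model building: free parameters with limits, and a Gaussian PDF.
mu = zfit.Parameter("mu", 1.0, -5.0, 5.0)
sigma = zfit.Parameter("sigma", 1.0, 0.1, 10.0)
gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)

# Loss definition and minimization; TensorFlow executes the computation,
# transparently on CPU or GPU.
nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)
minimizer = zfit.minimize.Minuit()
result = minimizer.minimize(nll)

# Post-fit uncertainty estimation and inspection.
result.hesse()
print(result.params)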
GNN for Deep Full Event Interpretation and hierarchical reconstruction of heavy-hadron decays in proton-proton collisions
The LHCb experiment at the Large Hadron Collider (LHC) is designed to perform high-precision measurements of heavy-hadron decays, which requires the collection of large data samples and a good understanding and suppression of multiple background sources. Both factors are challenged by a five-fold increase in the average number of proton-proton collisions per bunch crossing, corresponding to a change in the detector operation conditions for the LHCb Upgrade I phase, which recently started. A further ten-fold increase is expected in the Upgrade II phase, planned for the next decade. The limited storage capacity of the trigger will impose an inverse relation between the number of particles selected for storage per event and the number of events that can be recorded, and background levels will rise due to the larger combinatorics. To tackle both challenges, we propose a novel approach, never attempted before at a hadron collider: a Deep-learning based Full Event Interpretation (DFEI), which performs the simultaneous identification, isolation, and hierarchical reconstruction of all heavy-hadron decay chains in each event. This approach contrasts radically with the standard selection procedure used in LHCb to identify heavy-hadron decays, which looks individually at subsets of particles compatible with being products of specific decay types, disregarding the contextual information from the rest of the event. We present the first prototype of the DFEI algorithm, which leverages the power of Graph Neural Networks (GNN). This paper describes the design and development of the algorithm and its performance in simulated Upgrade I conditions.
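To make the GNN-based approach more concrete, here is a schematic, self-contained sketch of a single message-passing step over a particle graph; it illustrates the generic technique only and makes no assumption about the actual DFEI architecture, features, or training.

import numpy as np

# Nodes are charged particles with feature vectors (e.g. kinematics);
# edges connect candidate particle pairs. Weights here are random,
# standing in for trained message/update networks.
rng = np.random.default_rng(0)
n_particles, n_feat = 6, 4
x = rng.normal(size=(n_particles, n_feat))        # node features
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]  # candidate pairs

W_msg = rng.normal(size=(2 * n_feat, n_feat))     # message network
W_upd = rng.normal(size=(2 * n_feat, n_feat))     # node-update network

def relu(a):
    return np.maximum(a, 0.0)

# 1) One message per edge from the concatenated endpoint features.
messages = {e: relu(np.concatenate([x[e[0]], x[e[1]]]) @ W_msg) for e in edges}

# 2) Sum incoming messages per node (edges treated as undirected) and
#    update each node from its own features plus the aggregate.
agg = np.zeros_like(x)
for (i, j), m in messages.items():
    agg[i] += m
    agg[j] += m
x_new = relu(np.concatenate([x, agg], axis=1) @ W_upd)
print(x_new.shape)  # (6, 4)

Stacking such layers yields node and edge embeddings from which background particles can be pruned and parent-child (decay-chain) relations predicted, which is the kind of task the DFEI prototype addresses.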
Multidifferential study of identified charged hadron distributions in $Z$-tagged jets in proton-proton collisions at 13 TeV
Jet fragmentation functions are measured for the first time in proton-proton collisions for charged pions, kaons, and protons within jets recoiling against a $Z$ boson. The charged-hadron distributions are studied longitudinally and transversely to the jet direction for jets with transverse momentum greater than 20 GeV in the pseudorapidity range $2.5 < \eta < 4$. The data sample was collected with the LHCb experiment at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 1.64 fb$^{-1}$. Triple differential distributions as a function of the hadron longitudinal momentum fraction, hadron transverse momentum, and jet transverse momentum are also measured for the first time. This helps constrain transverse-momentum-dependent fragmentation functions. Differences in the shapes and magnitudes of the measured distributions for the different hadron species provide insights into the hadronization process for jets predominantly initiated by light quarks.
Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-013.html (LHCb public pages)
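For reference, the longitudinal momentum fraction $z$ and the momentum transverse to the jet axis $j_T$ used in such fragmentation studies follow the conventional definitions (standard notation, not quoted from the abstract):

\begin{equation}
z = \frac{\vec{p}_h \cdot \vec{p}_{\mathrm{jet}}}{|\vec{p}_{\mathrm{jet}}|^2}, \qquad
j_T = \frac{|\vec{p}_h \times \vec{p}_{\mathrm{jet}}|}{|\vec{p}_{\mathrm{jet}}|}
\end{equation}

where $\vec{p}_h$ is the hadron momentum and $\vec{p}_{\mathrm{jet}}$ the jet momentum.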
Study of the $B^-\to \Lambda_c^+\bar{\Lambda}_c^- K^-$ decay
The decay $B^-\to \Lambda_c^+\bar{\Lambda}_c^- K^-$ is studied in proton-proton collisions at a center-of-mass energy of $\sqrt{s}=13$ TeV using data corresponding to an integrated luminosity of 5 fb$^{-1}$ collected by the LHCb experiment. In the $\Lambda_c^+ K^-$ system, the $\Xi_c(2930)^0$ state observed at the BaBar and Belle experiments is resolved into two narrower states, $\Xi_c(2923)^0$ and $\Xi_c(2939)^0$, whose masses and widths are measured with statistical and systematic uncertainties. The results are consistent with a previous LHCb measurement using a prompt $\Lambda_c^+ K^-$ sample. Evidence of a new state is found, and its local significance, mass, and width are reported. In addition, evidence of a new decay mode is found, and its significance is reported. Its branching fraction relative to the normalization decay is measured, where the first uncertainty is statistical, the second systematic, and the third originates from the branching fractions of charm hadron decays.
Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-028.html (LHCb public pages)
Measurement of the ratios of branching fractions $\mathcal{R}(D^{*})$ and $\mathcal{R}(D^{0})$
The ratios of branching fractions $\mathcal{R}(D^{*})$ and $\mathcal{R}(D^{0})$ are measured, assuming isospin symmetry, using a sample of proton-proton collision data corresponding to 3.0 fb$^{-1}$ of integrated luminosity recorded by the LHCb experiment during 2011 and 2012. The tau lepton is identified in the muonic decay mode $\tau^-\to\mu^-\bar{\nu}_\mu\nu_\tau$. The measured values of $\mathcal{R}(D^{*})$ and $\mathcal{R}(D^{0})$ are reported with statistical and systematic uncertainties, together with the correlation between the two measurements. Results are consistent with the current average of these quantities and are at a combined 1.9 standard deviations from the predictions based on lepton flavor universality in the Standard Model.
Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-039.html (LHCb public pages)
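For context, such lepton-flavor-universality ratios follow the standard definition (the abstract above does not spell it out):

\begin{equation}
\mathcal{R}(D^{(*)}) = \frac{\mathcal{B}(B \to D^{(*)} \tau \nu_\tau)}{\mathcal{B}(B \to D^{(*)} \mu \nu_\mu)}
\end{equation}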
raredecay: MVA and reweighting with Machine Learning
A Machine-Learning-based analysis framework for physics on top of RE