961 research outputs found
Comparing and improving hybrid deep learning algorithms for identifying and locating primary vertices
Using deep neural networks to identify and locate proton-proton collision
points, or primary vertices, in LHCb has been studied for several years.
Preliminary results demonstrated the ability of a hybrid deep learning
algorithm to achieve similar or better physics performance compared to
standard heuristic approaches. The previously studied architectures relied
directly on hand-calculated Kernel Density Estimators (KDEs) as input features.
Calculating these KDEs was slow, which made using DNN inference engines in the
experiment's real-time analysis (trigger) system problematic. Here we present
recent results from a high-performance hybrid deep learning algorithm that uses
track parameters as input features rather than KDEs, opening the path to
deployment in the real-time trigger system.
Comment: Proceedings for the ACAT 2022 conference
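The hand-calculated KDE referred to above can be illustrated with a minimal sketch: each reconstructed track contributes a Gaussian kernel centred at its z position of closest approach to the beamline, and the summed density peaks where tracks cluster, i.e. at candidate primary vertices. This is a toy illustration, not LHCb code; all names and numbers are assumptions.

```python
import numpy as np

def beamline_kde(track_z, track_sigma_z, z_grid):
    """Toy kernel density estimator along the beamline (z) axis.

    Each track contributes a normalised Gaussian kernel centred at its
    z position of closest approach, with width set by its z uncertainty.
    Argument names are illustrative, not taken from the experiment's code.
    """
    z = np.asarray(z_grid)[:, None]            # shape (n_bins, 1)
    mu = np.asarray(track_z)[None, :]          # shape (1, n_tracks)
    sig = np.asarray(track_sigma_z)[None, :]
    kernels = np.exp(-0.5 * ((z - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))
    return kernels.sum(axis=1)                 # density of track origins vs z

# Two clusters of tracks -> two peaks, mimicking two primary vertices
rng = np.random.default_rng(0)
tz = np.concatenate([rng.normal(-5.0, 0.05, 20), rng.normal(3.0, 0.05, 15)])
zg = np.linspace(-10.0, 10.0, 201)             # mm, illustrative range
kde = beamline_kde(tz, np.full_like(tz, 0.05), zg)
```

The per-grid-point sum over all tracks is what makes this step slow at trigger rates, which is the motivation for replacing it with a learned tracks-to-KDE (or tracks-to-hist) model.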
Advances in developing deep neural networks for finding primary vertices in proton-proton collisions at the LHC
We are studying the use of deep neural networks (DNNs) to identify and locate
primary vertices (PVs) in proton-proton collisions at the LHC. Earlier work
focused on finding primary vertices in simulated LHCb data using a hybrid
approach that started with kernel density estimators (KDEs) derived
heuristically from the ensemble of charged track parameters and predicted
"target histogram" proxies, from which the actual PV positions are extracted.
We have recently demonstrated that using a UNet architecture performs
indistinguishably from a "flat" convolutional neural network model. We have
developed an "end-to-end" tracks-to-hist DNN that predicts target histograms
directly from track parameters using simulated LHCb data; it provides better
performance (a lower false positive rate for the same high efficiency) than the
best KDE-to-hists model studied. This DNN also provides better efficiency than
the default heuristic algorithm for the same low false positive rate.
"Quantization" of this model, using FP16 rather than FP32 arithmetic, degrades
its performance minimally. Reducing the number of UNet channels degrades
performance more substantially. We have demonstrated that the KDE-to-hists
algorithm developed for LHCb data can be adapted to ATLAS and ACTS data using
two variations of the UNet architecture. Within ATLAS/ACTS, these algorithms
have been validated against the standard vertex finder algorithm. Both
variations produce PV-finding efficiencies similar to that of the standard
algorithm and vertex-vertex separation resolutions that are significantly
better.
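The FP16-versus-FP32 comparison described above can be mimicked with a toy two-layer network: cast the inputs and weights to half precision, rerun the forward pass, and measure the discrepancy. This is a generic sketch of the quantization check, not the actual tracks-to-hist DNN; layer sizes and values are illustrative.

```python
import numpy as np

# Toy stand-in for the tracks-to-hist network: two dense layers with ReLU.
def toy_net(x, w1, w2):
    h = np.maximum(x @ w1, 0.0)
    return h @ w2

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16)).astype(np.float32)
w1 = rng.normal(scale=0.1, size=(16, 32)).astype(np.float32)
w2 = rng.normal(scale=0.1, size=(32, 4)).astype(np.float32)

y32 = toy_net(x, w1, w2)
# "Quantize": run the same network with inputs and weights in half precision
y16 = toy_net(x.astype(np.float16), w1.astype(np.float16),
              w2.astype(np.float16)).astype(np.float32)

# Worst-case discrepancy relative to the overall output scale
rel_err = float(np.abs(y32 - y16).max() / np.abs(y32).max())
```

For a well-conditioned network the half-precision discrepancy stays at the sub-percent level, which is consistent with the minimal performance degradation the abstract reports.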
Progress in developing a hybrid deep learning algorithm for identifying and locating primary vertices
The locations of proton-proton collision points in LHC experiments are called
primary vertices (PVs). Preliminary results of a hybrid deep learning algorithm
for identifying and locating these, targeting the Run 3 incarnation of LHCb,
have been described at conferences in 2019 and 2020. In the past year we have
made significant progress in a variety of related areas. Using two newer Kernel
Density Estimators (KDEs) as input feature sets improves the fidelity of the
models, as does using full LHCb simulation rather than the "toy Monte Carlo"
originally (and still) used to develop models. We have also built a deep
learning model to calculate the KDEs from track information. Connecting a
tracks-to-KDE model to a KDE-to-hists model used to find PVs provides a
proof-of-concept that a single deep learning model can use track information to
find PVs with high efficiency and high fidelity. We have studied a variety of
models systematically to understand how variations in their architectures
affect performance. While the studies reported here are specific to the LHCb
geometry and operating conditions, the results suggest that the same approach
could be used by the ATLAS and CMS experiments.
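The KDE-to-hists step above ends with PV positions being extracted from a predicted target histogram. A minimal, hypothetical version of that extraction (simple thresholding plus intensity-weighted centroids, not the published procedure) might look like:

```python
import numpy as np

def extract_pvs(hist, bin_centers, threshold=0.01):
    """Extract candidate PV positions from a predicted target histogram.

    Contiguous runs of bins above `threshold` are grouped into clusters and
    each cluster's intensity-weighted centroid is reported as a PV position.
    A toy stand-in for the extraction step; the threshold is illustrative.
    """
    mask = np.concatenate(([0], (hist > threshold).astype(int), [0]))
    edges = np.flatnonzero(np.diff(mask))      # cluster start/stop boundaries
    starts, stops = edges[::2], edges[1::2]
    return [float(np.average(bin_centers[a:b], weights=hist[a:b]))
            for a, b in zip(starts, stops)]

# Two Gaussian bumps standing in for a predicted target histogram
bins = np.linspace(-100.0, 100.0, 401)         # 0.5 mm bins, illustrative
hist = (np.exp(-0.5 * ((bins + 20.0) / 1.0) ** 2)
        + 0.5 * np.exp(-0.5 * ((bins - 35.0) / 1.0) ** 2))
pvs = extract_pvs(hist, bins)
```

The efficiency/false-positive trade-off studied in these papers corresponds, in this toy picture, to the choice of threshold: lowering it finds more true peaks but promotes more noise fluctuations to PV candidates.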
First Observation of CP Violation in B0->D(*)CP h0 Decays by a Combined Time-Dependent Analysis of BaBar and Belle Data
We report a measurement of the time-dependent CP asymmetry of B0->D(*)CP h0
decays, where the light neutral hadron h0 is a pi0, eta or omega meson, and the
neutral D meson is reconstructed in the CP eigenstates K+ K-, K0S pi0 or K0S
omega. The measurement is performed combining the final data samples collected
at the Y(4S) resonance by the BaBar and Belle experiments at the
asymmetric-energy B factories PEP-II at SLAC and KEKB at KEK, respectively. The
data samples contain ( 471 +/- 3 ) x 10^6 BB pairs recorded by the BaBar
detector and ( 772 +/- 11 ) x 10^6 BB pairs recorded by the Belle detector. We
measure the CP asymmetry parameters -eta_f S = +0.66 +/- 0.10 (stat.) +/- 0.06
(syst.) and C = -0.02 +/- 0.07 (stat.) +/- 0.03 (syst.). These results
correspond to the first observation of CP violation in B0->D(*)CP h0 decays.
The hypothesis of no mixing-induced CP violation is excluded in these decays at
the level of 5.4 standard deviations.
Comment: 9 pages, 2 figures, submitted to Physical Review Letters
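For context, the parameters S and C quoted above enter the time-dependent decay-rate asymmetry in the standard B-factory convention (background knowledge, not spelled out in the abstract itself):

```latex
% Delta t: proper-time difference between the two B decays;
% Delta m_d: B0-B0bar mixing frequency; f: the CP eigenstate final state.
\begin{equation}
\mathcal{A}(\Delta t)
  = \frac{\Gamma_{\bar{B}^0 \to f}(\Delta t) - \Gamma_{B^0 \to f}(\Delta t)}
         {\Gamma_{\bar{B}^0 \to f}(\Delta t) + \Gamma_{B^0 \to f}(\Delta t)}
  = S \sin(\Delta m_d \, \Delta t) - C \cos(\Delta m_d \, \Delta t)
\end{equation}
```

For tree-dominated B0->D(*)CP h0 decays, -eta_f S is expected to approximate sin2beta and C to be close to zero, consistent with the quoted values.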
Analysis of the common genetic component of large-vessel vasculitides through a meta-Immunochip strategy
Giant cell arteritis (GCA) and Takayasu's arteritis (TAK) are major forms of large-vessel vasculitis (LVV) that share clinical features. To evaluate their genetic similarities, we analysed Immunochip genotyping data from 1,434 LVV patients and 3,814 unaffected controls. Genetic pleiotropy was also estimated. The HLA region harboured the main disease-specific associations. GCA was mostly associated with class II genes (HLA-DRB1/HLA-DQA1) whereas TAK was mostly associated with class I genes (HLA-B/MICA). Both the statistical significance and effect size of the HLA signals were considerably reduced in the cross-disease meta-analysis in comparison with the analyses of GCA and TAK separately. Consequently, no significant genetic correlation between these two diseases was observed when HLA variants were tested. Outside the HLA region, only one polymorphism, located near the IL12B gene, surpassed the study-wide significance threshold in the meta-analysis of the discovery datasets (rs755374, P = 7.54E-07; OR(GCA) = 1.19, OR(TAK) = 1.50). This marker was confirmed as a novel GCA risk factor using four additional cohorts (P(GCA) = 5.52E-04, OR(GCA) = 1.16). Taken together, our results provide evidence of strong genetic differences between GCA and TAK in the HLA region. Outside this region, common susceptibility factors were suggested, especially within the IL12B locus.
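The odds ratios and P values quoted above come from standard case-control association statistics. A minimal sketch of an allelic odds ratio with a Wald confidence interval, using made-up counts (not the study's genotype data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald confidence interval from a 2x2 allele table.

    a: case alleles carrying the risk variant,    b: case alleles without,
    c: control alleles carrying the risk variant, d: control alleles without.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative counts only -- not taken from the study
or_, lo, hi = odds_ratio_ci(620, 814, 1420, 2394)
```

An OR above 1 with a confidence interval excluding 1 indicates the allele is enriched in cases, which is the pattern reported for rs755374 in GCA.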
Disciplinary law in the civil services: "unification", "harmonisation", or "distanciation". On the law of 26 April 2016 relating to ethics and the rights and obligations of civil servants
The production of tt̄, W+bb̄ and W+cc̄ is studied in the forward region of proton–proton collisions collected at a centre-of-mass energy of 8 TeV by the LHCb experiment, corresponding to an integrated luminosity of 1.98 ± 0.02 fb⁻¹. The W bosons are reconstructed in the decays W → ℓν, where ℓ denotes a muon or electron, while the b and c quarks are reconstructed as jets. All measured cross-sections are in agreement with next-to-leading-order Standard Model predictions.
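Cross-section measurements of this kind divide the efficiency-corrected signal yield by the integrated luminosity. A schematic sketch with illustrative numbers (not the paper's yields or efficiencies):

```python
def cross_section_fb(n_signal, efficiency, lumi_fb):
    """Toy cross-section estimate: sigma = N / (efficiency * L_int).

    lumi_fb is the integrated luminosity in fb^-1, so the result is in fb.
    All numbers used below are illustrative, not taken from the measurement.
    """
    return n_signal / (efficiency * lumi_fb)

# e.g. 5000 selected events, 25% total efficiency,
# 1.98 fb^-1 (the size of the LHCb 8 TeV sample quoted above)
sigma_fb = cross_section_fb(5000, 0.25, 1.98)
```

In the real analysis the yield is a fitted signal component and the efficiency folds in acceptance, trigger, reconstruction, and selection; the luminosity uncertainty (here ±0.02 fb⁻¹, about 1%) propagates directly into the cross-section.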
Measurement of cos2beta in B0->D(*)h0 with D->K0S pi+ pi- decays by a combined time-dependent Dalitz plot analysis of BaBar and Belle data
We report measurements of sin2beta and cos2beta from a time-dependent Dalitz
plot analysis of B0->D(*)h0 with D->K0S pi+ pi- decays, where the light
unflavored and neutral hadron h0 is a pi0, eta or omega meson. The analysis is
performed with a combination of the final data sets of the BaBar and Belle
experiments, containing ( 471 +/- 3 ) x 10^6 and ( 772 +/- 11 ) x 10^6 BB
pairs collected at the Y(4S) resonance at the asymmetric-energy B factories
PEP-II at SLAC and KEKB at KEK, respectively. A direct measurement of the
angle beta is also obtained. The quoted uncertainties include a contribution
due to the composition of the D->K0S pi+ pi- decay amplitude model, which is
newly established by a Dalitz plot amplitude analysis of a high-statistics
data sample as part of this analysis. We find the first evidence for
cos2beta > 0. The measurement excludes the trigonometric multifold solution
pi/2 - beta and therefore resolves an ambiguity in the determination of the
apex of the CKM Unitarity Triangle. The hypothesis of beta = 0 is ruled out,
and thus CP violation is observed in B0->D(*)h0 decays.
Comment: To be submitted to Physical Review
Physics case for an LHCb Upgrade II - Opportunities in flavour physics, and beyond, in the HL-LHC era
The LHCb Upgrade II will fully exploit the flavour-physics opportunities of the HL-LHC, and study additional physics topics that take advantage of the forward acceptance of the LHCb spectrometer. The LHCb Upgrade I will begin operation in 2020. Consolidation will occur, and modest enhancements of the Upgrade I detector will be installed, in Long Shutdown 3 of the LHC (2025), and these are discussed here. The main Upgrade II detector will be installed in Long Shutdown 4 of the LHC (2030) and will build on the strengths of the current LHCb experiment and the Upgrade I. It will operate at a luminosity up to 2×10³⁴ cm⁻²s⁻¹, ten times that of the Upgrade I detector. New detector components will improve the intrinsic performance of the experiment in certain key areas. An Expression of Interest proposing Upgrade II was submitted in February 2017. The physics case for the Upgrade II is presented here in more depth. CP-violating phases will be measured with precisions unattainable at any other envisaged facility. The experiment will probe b → sl⁺l⁻ and b → dl⁺l⁻ transitions in both muon and electron decays in modes not accessible at Upgrade I. Minimal flavour violation will be tested with a precision measurement of the ratio B(B0 → μ⁺μ⁻)/B(Bs → μ⁺μ⁻). Probing charm CP violation at the 10⁻⁵ level may result in its long-sought discovery. Major advances in hadron spectroscopy will be possible, which will be powerful probes of low-energy QCD. Upgrade II potentially will have the highest sensitivity of all the LHC experiments to the Higgs-to-charm-quark couplings. Generically, the new-physics mass scale probed, for fixed couplings, will almost double compared with the pre-HL-LHC era; this extended reach for flavour physics is similar to that which would be achieved by the HE-LHC proposal for the energy frontier.
LHCb upgrade software and computing: technical design report
This document reports on the research and development activities carried out in the software and computing domains in view of the upgrade of the LHCb experiment. The implementation of a full software trigger implies major changes in the core software framework, in the event data model, and in the reconstruction algorithms. The increase of the data volumes for both real and simulated datasets requires a corresponding scaling of the distributed computing infrastructure. An implementation plan for both domains is presented, together with a risk assessment analysis.