Time dependent measurements of the CKM angle Gamma at LHCb
The startup of the LHC opens many new frontiers in precision flavour physics,
in particular expanding the field of precision time-dependent CP violation
measurements to the B_s system. This contribution reviews the status of
time-dependent measurements of the CKM angle gamma at the LHC's dedicated
flavour physics experiment, LHCb. Particular attention is given to the
measurement of gamma from the decay mode B_s→D_s∓K±, a theoretically clean and
precise method which is unique to LHCb. The performance of the LHCb detector
for this and related modes is reviewed in light of early data taking and found
to be close to the nominal simulation performance, and the outlook for these
measurements in 2011 is briefly touched on.
Comment: Proceedings of CKM2010, the 6th International Workshop on the CKM
Unitarity Triangle, University of Warwick, UK, 6-10 September 2010
Efficient, reliable and fast high-level triggering using a bonsai boosted decision tree
High-level triggering is a vital component in many modern particle physics
experiments. This paper describes a modification to the standard boosted
decision tree (BDT) classifier, the so-called "bonsai" BDT, that has the
following important properties: it is more efficient than traditional cut-based
approaches; it is robust against detector instabilities; and it is very fast.
Thus, it is fit for purpose for the online running conditions faced by any
large-scale data acquisition system.
Comment: 10 pages, 2 figures
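The core idea behind the bonsai BDT can be illustrated with a toy sketch: discretise each input feature into a small number of bins and precompute the classifier response for every bin combination offline, so that the online trigger decision reduces to a table lookup. The binning, feature names, and scoring function below are assumptions for illustration, not the LHCb implementation; a simple sigmoid stands in for the trained BDT.

```python
import itertools
import math

# Hypothetical binning of two trigger features: pT in GeV and impact
# parameter (IP) in mm. Coarse bins also make the response robust against
# small detector instabilities, since nearby inputs map to the same bin.
PT_EDGES = [0.0, 1.0, 2.0, 4.0, 8.0]   # 4 bins
IP_EDGES = [0.0, 0.1, 0.5, 2.0]        # 3 bins

def bin_index(value, edges):
    """Return the bin index for value, clamping overflow to the last bin."""
    for i in range(len(edges) - 1):
        if value < edges[i + 1]:
            return i
    return len(edges) - 2

def slow_bdt_response(pt, ip):
    """Stand-in for an expensive trained BDT: any fixed score works here."""
    return 1.0 / (1.0 + math.exp(-(0.5 * pt + 3.0 * ip - 2.0)))

# Offline: evaluate the (slow) classifier once per bin, at the bin centre.
lookup = {}
for i, j in itertools.product(range(len(PT_EDGES) - 1), range(len(IP_EDGES) - 1)):
    pt_centre = 0.5 * (PT_EDGES[i] + PT_EDGES[i + 1])
    ip_centre = 0.5 * (IP_EDGES[j] + IP_EDGES[j + 1])
    lookup[(i, j)] = slow_bdt_response(pt_centre, ip_centre)

def fast_response(pt, ip):
    """Online evaluation: two bin searches and one dictionary lookup."""
    return lookup[(bin_index(pt, PT_EDGES), bin_index(ip, IP_EDGES))]

print(fast_response(3.0, 0.3))
```

The precomputed table trades a small loss in resolution for constant-time evaluation, which is what makes this style of classifier viable inside a high-rate trigger.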
Track reconstruction at LHC as a collaborative data challenge use case with RAMP
Charged particle track reconstruction is a major component of data processing in high-energy physics experiments such as those at the Large Hadron Collider (LHC), and is foreseen to become increasingly challenging with higher collision rates. A simplified two-dimensional version of the track reconstruction problem is set up on a collaborative platform, RAMP, in order for the developers to prototype and test new ideas. A small-scale competition was held during the Connecting The Dots / Intelligent Trackers 2017 (CTDWIT 2017) workshop. Despite the short time scale, a number of different approaches were developed and compared along a single score metric, which was kept generic enough to accommodate a summarized performance in terms of both efficiency and fake rates.
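A single score of the kind described, folding efficiency and fake rate into one number, can be sketched as follows. The abstract does not publish the competition's actual formula, so the combination below is a generic illustration; the function name and the multiplicative form are assumptions.

```python
# Illustrative combined tracking score (NOT the RAMP competition's actual
# metric): reward finding true tracks, penalise fake candidates.

def tracking_score(n_true_tracks, n_matched, n_fake):
    """Combine efficiency and fake rate into one number; higher is better."""
    efficiency = n_matched / n_true_tracks            # fraction of true tracks found
    fake_rate = n_fake / max(n_matched + n_fake, 1)   # fraction of candidates that are fake
    return efficiency * (1.0 - fake_rate)

# Example: an algorithm finds 90 of 100 true tracks and produces 10 fakes.
print(tracking_score(100, 90, 10))
```

Any monotone combination would do; the point of a single generic metric is that very different algorithms can be ranked on one leaderboard axis.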
Flavour anomalies and status of indirect probes of the Standard Model
With the discovery of the Higgs boson and consequent completion of the Standard Model, there is no fundamental principle which demands the existence of new particles below the Planck scale. Indirect precision measurements of the properties of existing particles are therefore more essential than ever to probe beyond the reach of direct discovery and guide the development of collider-based experiments. In this context, quark flavour physics is a uniquely rich laboratory for indirect precision tests of the Standard Model. I give a brief overview of some recent highlights from the field and look ahead to what the next generation of experiments and facilities might bring.
Conceptualization, implementation, and commissioning of real-time analysis in the High Level Trigger of the LHCb experiment
LHCb is a general-purpose forward detector located at the Large Hadron Collider (LHC) at CERN. Although initially optimized for the study of hadrons containing beauty quarks, the better than expected performance of the detector hardware and trigger system allowed LHCb to perform precise measurements of particle properties across a wide range of light hadron species produced at the LHC. The abundance of these light hadron species, and the large branching ratios of many theoretically interesting decay modes, have made it mandatory for LHCb to perform a large part of its data analysis within the experiment's trigger system, that is to say in real time. This thesis describes the conceptualization, development, and commissioning of real-time analysis in LHCb, culminating in the proof-of-concept measurements produced with the first data collected in Run II of the LHC. It also describes mistakes made in these first real-time analyses, and their implications for the future of real-time analysis at LHCb and elsewhere.
Graph Neural Network-Based Track Finding in the LHCb Vertex Detector
The next decade will see an order of magnitude increase in data collected by high-energy physics experiments, driven by the High-Luminosity LHC (HL-LHC). The reconstruction of charged particle trajectories (tracks) has always been a critical part of offline data processing pipelines. The complexity of HL-LHC data will, however, increasingly mandate track finding in all stages of an experiment's real-time processing. This paper presents a GNN-based track-finding pipeline tailored for the Run 3 LHCb experiment's vertex detector and benchmarks its physics performance and computational cost against existing classical algorithms on GPU architectures. A novelty of our work compared to existing GNN tracking pipelines is batched execution, in which the GPU evaluates the pipeline on hundreds of events in parallel. We evaluate the impact of neural-network quantisation on physics and computational performance, and comment on the outlook for GNN tracking algorithms for other parts of the LHCb track-finding pipeline.
Graph Neural Network-Based Pipeline for Track Finding in the Velo at LHCb
https://indico.cern.ch/event/1252748/contributions/5521484/
Over the next decade, increases in instantaneous luminosity and detector granularity will amplify the amount of data that has to be analysed by high-energy physics experiments, whether in real time or offline, by an order of magnitude. The reconstruction of charged particle tracks, which has always been a crucial element of offline data processing pipelines, must increasingly be deployed from the very first stages of the real-time processing to enable experiments to achieve their physics goals. Graph Neural Networks (GNNs) have received a great deal of attention in the community because their computational complexity scales nearly linearly with the number of hits in the detector, unlike conventional algorithms, which often scale quadratically or worse. This paper presents ETX4VELO, a GNN-based track-finding pipeline tailored for the Run 3 LHCb experiment's Vertex Locator, in the context of LHCb's fully GPU-based first-level trigger system, Allen. Currently implemented in Python, ETX4VELO offers the ability to reconstruct tracks with shared hits using a novel triplet-based method. When benchmarked against the traditional track-finding algorithm in Allen, this GNN-based approach not only matches but occasionally surpasses its physics performance. In particular, the fraction of fake tracks is reduced from over 2% to below 1% and the efficiency to reconstruct electrons is improved. While achieving comparable physics performance is a milestone, the immediate priority remains implementing ETX4VELO in Allen in order to determine and optimise its throughput, to meet the demands of this high-rate environment.
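The edge-classification style of track finding used by GNN pipelines such as ETX4VELO can be illustrated with a toy sketch: connect hits on adjacent detector layers, score each candidate edge, keep the high-scoring edges, and chain them into tracks. Everything below is an assumption for illustration (a hypothetical straight-line detector and a hand-written slope-consistency score standing in for the trained network); the real pipeline scores edges with a neural network and resolves shared hits via triplets.

```python
import math

# Toy detector: hits are (module index, transverse position).
# Three parallel straight-line tracks with intercepts 1, 2, 3 and slope 0.1.
hits = [(m, k + 0.1 * m) for k in (1, 2, 3) for m in range(4)]

def edge_score(h1, h2):
    """Stand-in for the GNN edge classifier: favour small slopes between
    hits on adjacent modules (score in (0, 1], higher is more track-like)."""
    slope = abs(h2[1] - h1[1]) / (h2[0] - h1[0])
    return math.exp(-slope)

# Graph construction + edge classification: candidate edges join hits on
# adjacent modules; only high-scoring edges survive.
edges = [
    (h1, h2)
    for h1 in hits
    for h2 in hits
    if h2[0] == h1[0] + 1 and edge_score(h1, h2) > 0.5
]

def build_tracks(edges):
    """Chain surviving edges into tracks by walking module to module."""
    nxt = {h1: h2 for h1, h2 in edges}
    starts = set(nxt) - set(nxt.values())   # hits with no incoming edge
    tracks = []
    for h in sorted(starts):
        track = [h]
        while track[-1] in nxt:
            track.append(nxt[track[-1]])
        tracks.append(track)
    return tracks

print(build_tracks(edges))
```

The attraction of this formulation is that edge scoring is independent per edge, so it parallelises naturally on a GPU and scales with the number of hits rather than with all hit combinations.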
Effective-field-theory arguments for pursuing lepton-flavor-violating K decays at LHCb
We provide general effective-theory arguments relating present-day discrepancies in semileptonic B-meson decays to signals in kaon physics, in particular lepton-flavor-violating ones of the kind K→(π)e±μ∓. We show that K-decay branching ratios of around 10⁻¹²–10⁻¹³ are possible, for effective-theory cutoffs around 5–15 TeV compatible with discrepancies in B→K(*)μμ decays. We perform a feasibility study of the reach for such decays at LHCb, taking K+→π+μ±e∓ as a benchmark. In spite of the long lifetime of the K+ compared to the detector size, the huge statistics anticipated as well as the overall detector performance translate into encouraging results. These include the possibility to reach the 10⁻¹² ballpark, and thereby significantly improve current limits. Our results advocate LHCb's high-luminosity Upgrade phase, and support analogous sensitivity studies at other facilities. Given the performance uncertainties inherent in the Upgrade phase, our conclusions are based on a range of assumptions we deem realistic on the particle identification performance as well as on the kinematic reconstruction thresholds for the signal candidates.
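The link between the quoted cutoff range and the branching-ratio range follows from standard dimension-six effective-theory power counting: the decay amplitude scales as 1/Λ², so the rate (and branching ratio) scales as 1/Λ⁴. A minimal numeric sketch, where the reference point (BR = 10⁻¹² at Λ = 5 TeV) is an assumption for illustration rather than a value taken from the paper:

```python
# Dimension-six EFT power counting: amplitude ~ 1/Lambda^2, so the
# branching ratio scales as (1/Lambda)^4. Rescale from a reference cutoff.

def br_at_cutoff(br_ref, lambda_ref_tev, lambda_tev):
    """Rescale a branching ratio from cutoff lambda_ref_tev to lambda_tev."""
    return br_ref * (lambda_ref_tev / lambda_tev) ** 4

# Hypothetical reference point: BR = 1e-12 at Lambda = 5 TeV.
for lam in (5.0, 10.0, 15.0):
    print(lam, br_at_cutoff(1e-12, 5.0, lam))
```

Tripling the cutoff thus suppresses the branching ratio by a factor of 81, which is why a 5–15 TeV cutoff window maps onto roughly two orders of magnitude in branching ratio.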