Multi-particle reconstruction with dynamic graph neural networks
The task of finding the incident particles from the sensor deposits they
leave on particle detectors is called event or particle reconstruction. The sensor
deposits can be represented generically as a point cloud, with each point
corresponding to three spatial dimensions of the sensor location, the energy
deposit, and occasionally, also the time of the deposit. As particle detectors
become increasingly complex, ever more sophisticated methods are
needed to perform particle reconstruction. An example is the ongoing High
Luminosity (HL) upgrade of the Large Hadron Collider (HL-LHC). The HL-LHC is the most significant milestone in experimental particle physics and aims to deliver an order of magnitude higher data rate than the current LHC. As part of the upgrade, the endcap calorimeters of the Compact Muon Solenoid (CMS) experiment -- one of the two large general-purpose detectors at the LHC -- will be replaced by the radiation-hard High Granularity Calorimeter (HGCAL).
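The generic point-cloud representation described above can be sketched as a simple array of hits. This is purely illustrative: the feature layout, units, and random distributions below are assumptions, not the HGCAL data format.

```python
import numpy as np

# Illustrative sketch: detector hits as a generic point cloud.
# Each row is one sensor deposit: three spatial coordinates,
# the deposited energy, and (optionally) the time of the deposit.
rng = np.random.default_rng(0)
n_hits = 5
hits = np.column_stack([
    rng.uniform(-1.0, 1.0, size=(n_hits, 3)),  # x, y, z (arbitrary units)
    rng.exponential(0.5, size=n_hits),         # energy deposit
    rng.uniform(0.0, 25.0, size=n_hits),       # time of deposit (ns)
])
print(hits.shape)  # (5, 5): n_hits rows, 5 features per hit
```

In a real event only the sparse subset of sensors with positive energy would appear as rows, which is what makes the point-cloud view more economical than a dense grid.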
The HGCAL will contain ~6 million sensors to achieve the spatial
resolution required for reconstructing individual particles in HL-LHC conditions.
It has an irregular geometry due to its hexagonal sensors, with sizes
varying across the longitudinal and transverse axes. Further, it generates
sparse data as less than 10% of the sensors register positive energy. Reconstruction
in this environment, where highly irregular patterns of hits are left
by the particles, is an unprecedentedly intractable and compute-intensive
pattern recognition problem. This motivates the use of parallelisation-friendly deep learning approaches. More traditional deep learning methods, however, are not feasible for the HGCAL because they assume a regular, grid-like input structure.
In this thesis, a reconstruction algorithm based on a dynamic graph
neural network called GravNet is presented. The network is paired with a
segmentation technique, Object Condensation, to first perform point-cloud
segmentation on the detector hits. The property-prediction capability of
the Object Condensation approach is then used for energy regression of the reconstructed particles. A range of experiments are conducted to show that
this method works well in conditions expected in the HGCAL, i.e., with
200 simultaneous proton-proton collisions. Parallel algorithms based on
Nvidia CUDA are also presented to address the computational challenges
of the graph neural network discussed in this thesis. With the optimisations,
reconstruction can be performed by this method in approximately 2 seconds, which is suitable given the computational constraints at the LHC.
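The Object Condensation segmentation mentioned above works by having each hit predict clustering-space coordinates and a condensation score, with attractive potentials pulling hits toward their object's condensation point and repulsive potentials pushing other objects' hits away. A toy numpy sketch of that idea follows; it loosely mirrors the published loss, but `q_min`, the weighting, and the exact potential shapes here are illustrative assumptions.

```python
import numpy as np

def object_condensation_potential(coords, beta, labels, q_min=0.1):
    """Toy sketch of object-condensation attractive/repulsive potentials.

    coords: (N, D) learned clustering-space coordinates per hit
    beta:   (N,) condensation scores in (0, 1)
    labels: (N,) integer object index per hit (truth assignment)
    """
    q = np.arctanh(beta) ** 2 + q_min              # per-hit "charge"
    loss = 0.0
    for k in np.unique(labels):
        mask = labels == k
        alpha = np.argmax(np.where(mask, q, -np.inf))  # condensation point: highest q
        d = np.linalg.norm(coords - coords[alpha], axis=1)
        # attract same-object hits toward the condensation point ...
        loss += q[alpha] * np.sum(q[mask] * d[mask] ** 2)
        # ... and repel hits of other objects (hinge at unit distance)
        loss += q[alpha] * np.sum(q[~mask] * np.maximum(0.0, 1.0 - d[~mask]))
    return loss / len(beta)
```

Well-separated, correctly grouped hits yield a small potential, while hits scattered far from their object's condensation point are penalised, which is what drives the point-cloud segmentation.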
The presented method is the first-ever example of deep-learning-based end-to-end calorimetric reconstruction in high-occupancy environments. This
sets the stage for the next era of particle reconstruction, which is expected
to be end-to-end. While this thesis is focused on the HGCAL, the method
discussed is general and can be extended not only to other calorimeters but
also to other tasks such as track reconstruction.
25th International Conference on Computing in High Energy & Nuclear Physics
The high-luminosity upgrade of the LHC will come with unprecedented physics and computing challenges. One of these challenges is the accurate reconstruction of particles in events with up to 200 simultaneous proton-proton interactions. The planned CMS High Granularity Calorimeter offers fine spatial resolution for this purpose, with more than 6 million channels, but also poses unique challenges to reconstruction algorithms aiming to reconstruct individual particle showers. In this contribution, we propose an end-to-end machine-learning method that performs clustering, classification, and energy and position regression in one step while staying within memory and computational constraints. We employ GravNet, a graph neural network, and an object condensation loss function to achieve this task. Additionally, we propose a method to relate truth showers to reconstructed showers by maximising the energy-weighted intersection over union using maximal weight matching. Our results show the efficiency of our method and highlight a promising research direction to be investigated further.
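The energy-weighted intersection-over-union matching with maximal weight matching described above can be sketched as an assignment problem. This is a hypothetical reconstruction of the idea, not the authors' implementation; the function name and the use of scipy's `linear_sum_assignment` are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_showers(truth_ids, pred_ids, energies):
    """Match truth showers to reconstructed showers by maximising
    energy-weighted IoU over all pairings (illustrative sketch).

    truth_ids, pred_ids: per-hit shower indices; energies: per-hit energy.
    """
    t_labels = np.unique(truth_ids)
    p_labels = np.unique(pred_ids)
    iou = np.zeros((len(t_labels), len(p_labels)))
    for i, t in enumerate(t_labels):
        for j, p in enumerate(p_labels):
            # intersection/union of hit sets, weighted by hit energy
            inter = energies[(truth_ids == t) & (pred_ids == p)].sum()
            union = energies[(truth_ids == t) | (pred_ids == p)].sum()
            iou[i, j] = inter / union if union > 0 else 0.0
    # maximal weight matching on the bipartite truth/reco graph
    rows, cols = linear_sum_assignment(iou, maximize=True)
    return [(int(t_labels[r]), int(p_labels[c])) for r, c in zip(rows, cols)]
```

Weighting by energy rather than hit count means a match is judged by how much of each shower's energy is shared, so a few stray low-energy hits do not dominate the assignment.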
Learning representations of irregular particle-detector geometry with distance-weighted graph networks
We explore the use of graph networks to deal with irregular-geometry detectors in the context of particle reconstruction. Thanks to their representation-learning capabilities, graph networks can exploit the full detector granularity, while natively managing the event sparsity and arbitrarily complex detector geometries. We introduce two distance-weighted graph network architectures, dubbed GarNet and GravNet layers, and apply them to a typical particle reconstruction task. The performance of the new architectures is evaluated on a data set of simulated particle interactions on a toy model of a highly granular calorimeter, loosely inspired by the endcap calorimeter to be installed in the CMS detector for the High-Luminosity LHC phase. We study the clustering of energy depositions, which is the basis for calorimetric particle reconstruction, and provide a quantitative comparison to alternative approaches. The proposed algorithms provide an interesting alternative to existing methods, offering equally performing or less resource-demanding solutions with less underlying assumptions on the detector geometry and, consequently, the possibility to generalize to other detectors.
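The distance-weighted aggregation at the heart of a GravNet-style layer can be sketched in a few lines of numpy. This is a toy version under stated assumptions: the real layer learns both the coordinates and the exchanged features with dense networks, and the Gaussian scale factor here is made up.

```python
import numpy as np

def gravnet_aggregate(coords, feats, k=3):
    """Toy distance-weighted neighbour aggregation (GravNet-style).

    coords: (N, S) learned coordinates, used only to build the kNN graph
    feats:  (N, F) learned features to be exchanged between neighbours
    """
    n = len(coords)
    # pairwise squared distances in the learned coordinate space
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    out = np.zeros_like(feats)
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]       # k nearest neighbours, excluding self
        w = np.exp(-10.0 * d2[i, nbrs])          # Gaussian "gravitational" weighting
        out[i] = (w[:, None] * feats[nbrs]).mean(0)  # mean of distance-weighted features
    return out
```

Because the weighting decays with distance in the learned space, far-away points contribute almost nothing, which is how such layers cope with sparse, irregular geometries without assuming a grid.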
Multi-particle reconstruction in the High Granularity Calorimeter using object condensation and graph neural networks
The high-luminosity upgrade of the LHC will come with unprecedented physics and computing challenges. One of these challenges is the accurate reconstruction of particles in events with up to 200 simultaneous proton-proton interactions. The planned CMS High Granularity Calorimeter offers fine spatial resolution for this purpose, with more than 6 million channels, but also poses unique challenges to reconstruction algorithms aiming to reconstruct individual particle showers. In this contribution, we propose an end-to-end machine-learning method that performs clustering, classification, and energy and position regression in one step while staying within memory and computational constraints. We employ GravNet, a graph neural network, and an object condensation loss function to achieve this task. Additionally, we propose a method to relate truth showers to reconstructed showers by maximising the energy-weighted intersection over union using maximal weight matching. Our results show the efficiency of our method and highlight a promising research direction to be investigated further.
GNN-based end-to-end reconstruction in the CMS Phase 2 High-Granularity Calorimeter
We present the current stage of research progress towards a one-pass, completely Machine Learning (ML) based imaging calorimeter reconstruction. The model used is based on Graph Neural Networks (GNNs) and directly analyzes the hits in each HGCAL endcap. The ML algorithm is trained to predict clusters of hits originating from the same incident particle by labeling the hits with the same cluster index. We impose simple criteria to assess whether the hits associated as a cluster by the prediction are matched to those hits resulting from any particular individual incident particles. The algorithm is studied by simulating two tau leptons in each of the two HGCAL endcaps, where each tau may decay according to its measured standard model branching probabilities. The simulation includes the material interaction of the tau decay products which may create additional particles incident upon the calorimeter. Using this varied multi-particle environment we can investigate the application of this reconstruction technique and begin to characterize energy containment and performance.
Opening Remarks
Distance-Weighted Graph Neural Networks on FPGAs for Real-Time Particle Reconstruction in High Energy Physics.
Graph neural networks have been shown to achieve excellent performance for several crucial tasks in particle physics, such as charged particle tracking, jet tagging, and clustering. An important domain for the application of these networks is the FPGA-based first layer of real-time data filtering at the CERN Large Hadron Collider, which has strict latency and resource constraints. We discuss how to design distance-weighted graph networks that can be executed with a latency of less than one µs on an FPGA. To do so, we consider a representative task associated with particle reconstruction and identification in a next-generation calorimeter operating at a particle collider. We use a graph network architecture developed for such purposes, and apply additional simplifications to match the computing constraints of Level-1 trigger systems, including weight quantization. Using the hls4ml library, we convert the compressed models into firmware to be implemented on an FPGA. Performance of the synthesized models is presented both in terms of inference accuracy and resource usage.
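The weight quantization mentioned above amounts to constraining weights to a fixed-point representation such as the `ap_fixed<W, I>` types used in hls4ml firmware. A minimal sketch of post-training fixed-point rounding and saturation follows; it illustrates the arithmetic only and is not hls4ml's actual code path.

```python
def quantize(w, total_bits=8, int_bits=1):
    """Round a weight to a signed fixed-point grid and saturate it.

    Illustrative model of ap_fixed<total_bits, int_bits>, where
    int_bits includes the sign bit (an assumption for this sketch).
    """
    frac_bits = total_bits - int_bits
    scale = 2 ** frac_bits                      # grid spacing is 1/scale
    lo = -(2 ** (total_bits - 1)) / scale       # most negative representable value
    hi = (2 ** (total_bits - 1) - 1) / scale    # most positive representable value
    return min(max(round(w * scale) / scale, lo), hi)
```

For example, with 8 total bits and 1 integer bit the representable range is [-1, 127/128] in steps of 1/128, so quantization error per weight stays below half a step while out-of-range weights saturate, which is what makes the resource usage of the synthesized model predictable.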
Search for time-dependent violation in decays
A measurement of time-dependent violation in decays using a collision data sample collected by the LHCb experiment in 2012 and from 2015 to 2018, corresponding to an integrated luminosity of 7.7, is presented. The initial flavour of each candidate is determined from the charge of the pion produced in the decay. The decay is used as a control channel to validate the measurement procedure. The gradient of the time-dependent asymmetry, , in decays is measured to be \begin{equation*} \Delta Y = (-1.3 \pm 6.3 \pm 2.4) \times 10^{-4}, \end{equation*} where the first uncertainty is statistical and the second is systematic, which is compatible with conservation.
Tracking of charged particles with nanosecond lifetimes at LHCb
A method is presented to reconstruct charged particles with lifetimes between 10 ps and 10 ns, which considers a combination of their decay products and the partial tracks created by the initial charged particle. Using the baryon as a benchmark, the method is demonstrated with simulated events and proton-proton collision data at TeV, corresponding to an integrated luminosity of 2.0 fb collected with the LHCb detector in 2018. Significant improvements in the angular resolution and the signal purity are obtained. The method is implemented as part of the LHCb Run 3 event trigger in a set of requirements to select detached hyperons. This is the first demonstration of the applicability of this approach at the LHC, and the first to show its scaling with instantaneous luminosity.