20 research outputs found

    Firmware implementation of a recurrent neural network for the computation of the energy deposited in the liquid argon calorimeter of the ATLAS experiment

    Full text link
    The ATLAS experiment measures the properties of particles produced in proton-proton collisions at the LHC. The ATLAS detector will undergo a major upgrade before the high-luminosity phase of the LHC. The ATLAS liquid argon calorimeter measures the energy of particles interacting electromagnetically in the detector, and its readout electronics will be replaced during this upgrade. The new electronic boards will be based on state-of-the-art field-programmable gate arrays (FPGAs) from Intel, allowing neural networks to be embedded in firmware. Neural networks have been shown to outperform the optimal filtering algorithms currently used to compute the energy deposited in the calorimeter. This article presents the implementation on Stratix 10 FPGAs of a recurrent neural network (RNN) that reconstructs the energy deposited in the calorimeter. An implementation in a high-level synthesis (HLS) language allowed fast prototyping but fell short of the stringent requirements on resource usage and latency. Further optimisations in VHDL (Very High-Speed Integrated Circuit Hardware Description Language) fulfilled the requirements of processing 384 channels per FPGA with a latency below 125 ns. (13 pages, 8 figures)
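    The core dataflow of such an RNN-based energy reconstruction can be sketched in a few lines: one recurrent step per 25 ns ADC sample, followed by a linear readout producing a single energy value. This is a toy illustration only; the hidden width and the random placeholder weights are assumptions, not the trained network deployed on the Stratix 10.

```python
import numpy as np

# Toy sketch of a single-cell vanilla RNN processing a digitized calorimeter
# pulse, one ADC sample per bunch crossing. Weights are random placeholders;
# in the real system they come from training on simulated pulses.
rng = np.random.default_rng(0)

HIDDEN = 8                                      # hidden-state width (assumed)
W_x = rng.normal(size=(HIDDEN, 1)) * 0.1        # input weights
W_h = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1   # recurrent weights
b = np.zeros(HIDDEN)
W_out = rng.normal(size=HIDDEN) * 0.1           # dense readout to one energy

def rnn_energy(samples):
    """Run the RNN over a sequence of ADC samples; return an energy estimate."""
    h = np.zeros(HIDDEN)
    for s in samples:
        h = np.tanh(W_x[:, 0] * s + W_h @ h + b)  # one recurrent step per sample
    return float(W_out @ h)                        # linear readout

pulse = np.array([0.0, 0.1, 0.9, 0.6, 0.3, 0.1])  # fake normalized pulse shape
e = rnn_energy(pulse)
```

    In firmware, the fixed per-sample structure of this loop is what makes a low, deterministic latency achievable.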

    Development of artificial intelligence algorithms adapted to big data processing in embedded (FPGA) trigger and data acquisition systems at the LHC

    No full text
    The Standard Model of particle physics was completed with the discovery of the Higgs boson at the Large Hadron Collider (LHC) in 2012. Discovering new physics beyond the Standard Model and probing the newly discovered Higgs sector are two of the most important goals of current and future particle physics experiments. In 2026-2029, the LHC will undergo an upgrade to increase its instantaneous luminosity by a factor of 5-7 with respect to its design luminosity. This upgrade will mark the beginning of the High Luminosity LHC (HL-LHC) era. Concurrently, the ATLAS and the CMS detectors will be upgraded to cope with the increased LHC luminosity. The ATLAS liquid argon (LAr) calorimeter measures the energies of particles produced in proton-proton collisions at the LHC. The LAr calorimeter readout electronics will be replaced to prepare it for the HL-LHC era. This will allow it to run at a higher trigger rate and have increased granularity at the trigger level. The energy deposited in the LAr calorimeter is reconstructed out of the electronic pulse signal using the optimal filtering algorithm. The energy is computed in real-time using custom electronic boards based on Field Programmable Gate Arrays (FPGAs). FPGAs are chosen due to their ability to process large amounts of data with low latency, which is a requirement of the ATLAS trigger system. The increased LHC luminosity will lead to a high rate of simultaneous multiple proton-proton collisions (pileup), which results in a significant degradation of the energy resolution computed by the optimal filtering algorithm. Computing the energy with high precision is of utmost importance to achieve the physics goals of the ATLAS experiment at the HL-LHC. Recent advances in deep learning coupled with the increased computing capacity of FPGAs make deep learning algorithms promising tools to replace the existing optimal filtering algorithms.
In this dissertation, recurrent neural networks (RNNs) are developed to compute the energy deposited in the LAr calorimeter. Long Short-Term Memory (LSTM) networks and simple RNNs are investigated. The parameters of these neural networks are studied in detail to optimize performance. The developed networks are shown to outperform the optimal filtering algorithms. The models are further optimized for deployment on FPGAs by quantization and compression methods, which are shown to reduce resource consumption with minimal effect on performance. The LAr calorimeter is composed of 182000 individual channels for which the deposited energies must be computed. Training 182000 different neural networks is not practically feasible. A new method based on unsupervised learning is developed to form clusters of channels with similar electronic pulse signals, which allows the same neural network to be used for all channels in one cluster. This method reduces the number of needed neural networks to about 100, making it possible to cover the full detector with these advanced algorithms.
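    The channel-clustering idea above can be sketched concretely: group channels whose pulse shapes are similar, so that one trained network serves a whole cluster. The dissertation only says "unsupervised learning"; plain k-means on synthetic pulse shapes is used here as an illustrative stand-in, not the actual method or data.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_pulse(peak, tail):
    """Crude stand-in for a LAr electronic pulse shape (assumed form)."""
    t = np.arange(10.0)
    return peak * np.exp(-t / tail)

# Synthetic "channels": two underlying shape families plus small noise.
channels = np.array([make_pulse(1.0, 2.0) for _ in range(50)]
                    + [make_pulse(1.0, 5.0) for _ in range(50)])
channels += rng.normal(scale=0.01, size=channels.shape)

def kmeans(x, k, iters=20):
    """Minimal k-means: assign to nearest center, recompute centers."""
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

labels = kmeans(channels, k=2)
# Channels sharing a label would share one trained neural network.
```

    With roughly 100 clusters instead of 182000 channels, per-cluster training becomes tractable while each channel is still served by a network matched to its pulse shape.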

    Machine Learning for Real-Time Processing of ATLAS Liquid Argon Calorimeter Signals with FPGAs

    No full text
    Within the Phase-II upgrade of the LHC, the readout electronics of the ATLAS LAr Calorimeters are prepared for high-luminosity operation, expecting a pile-up of up to 200 simultaneous proton-proton interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions are overlapping, which increases the difficulty of energy reconstruction. Real-time processing of digitized pulses sampled at 40 MHz is performed using FPGAs. To cope with the signal pile-up, new machine learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in assignment of the reconstructed energy to the correct bunch crossing and in energy resolution. Very good agreement between neural network implementations in FPGA and software-based calculations is observed. The FPGA resource usage, the latency and the operation frequency are analyzed. Latest performance results and experience with prototype implementations are analyzed and found to fit the requirements for the Phase-II upgrade.

    Machine Learning for Real-Time Processing of ATLAS Liquid Argon Calorimeter Signals with FPGAs

    No full text
    Within the Phase-II upgrade of the LHC, the readout electronics of the ATLAS LAr Calorimeters are prepared for high-luminosity operation, expecting a pile-up of up to 200 simultaneous pp interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions are overlapping, which increases the difficulty of energy reconstruction. Real-time processing of digitized pulses sampled at 40 MHz is thus performed using FPGAs. To cope with the signal pile-up, new machine learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in assignment of the reconstructed energy to the correct bunch crossing and in energy resolution. Very good agreement between neural network implementations in FPGA and software-based calculations is observed. The FPGA resource usage, the latency and the operation frequency are analysed. Latest performance results and experience with prototype implementations will be reported.
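    The "optimal signal filter" that the neural networks are compared against is, at its core, a linear combination of pulse samples with precomputed coefficients chosen to suppress noise and pile-up. The coefficients and samples below are made-up placeholders; real coefficients are derived from each channel's pulse shape and noise autocorrelation.

```python
import numpy as np

# Optimal filtering reduced to its essence: a dot product between
# pedestal-subtracted ADC samples and fixed filter coefficients.
a = np.array([-0.1, 0.3, 0.6, 0.3, -0.1])      # placeholder coefficients
samples = np.array([1.0, 4.0, 9.0, 6.0, 2.0])  # placeholder ADC samples

energy = float(np.dot(a, samples))             # amplitude/energy estimate
```

    This linearity is exactly why the method degrades under heavy pile-up: a fixed linear filter assumes one nominal pulse shape, which overlapping out-of-time pulses violate.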

    Development of artificial intelligence algorithms adapted to big data processing in embedded (FPGAs) trigger and data acquisition systems at the LHC

    No full text
    The Standard Model of particle physics was completed with the discovery of the Higgs boson at the Large Hadron Collider (LHC) in 2012. Discovering new physics beyond the Standard Model and probing the newly discovered Higgs sector are two of the most important goals of current and future particle physics experiments. In 2026-2029, the LHC will undergo an upgrade to increase its instantaneous luminosity by a factor of 5-7 with respect to its design luminosity. This upgrade will mark the beginning of the High Luminosity LHC (HL-LHC) era. Concurrently, the ATLAS and the CMS detectors will be upgraded to cope with the increased LHC luminosity. The ATLAS liquid argon (LAr) calorimeter measures the energies of particles produced in proton-proton collisions at the LHC. The LAr calorimeter readout electronics will be replaced to prepare it for the HL-LHC era. This will allow it to run at a higher trigger rate and have increased granularity at the trigger level. The energy deposited in the LAr calorimeter is reconstructed out of the electronic pulse signal using the optimal filtering algorithm. The energy is computed in real-time using custom electronic boards based on Field Programmable Gate Arrays (FPGAs). FPGAs are chosen due to their ability to process large amounts of data with low latency, which is a requirement of the ATLAS trigger system. The increased LHC luminosity will lead to a high rate of simultaneous multiple proton-proton collisions (pileup), which results in a significant degradation of the energy resolution computed by the optimal filtering algorithm. Computing the energy with high precision is of utmost importance to achieve the physics goals of the ATLAS experiment at the HL-LHC. Recent advances in deep learning coupled with the increased computing capacity of FPGAs make deep learning algorithms promising tools to replace the existing optimal filtering algorithms.
In this dissertation, recurrent neural networks (RNNs) are developed to compute the energy deposited in the LAr calorimeter. Long Short-Term Memory (LSTM) networks and simple RNNs are investigated. The parameters of these neural networks are studied in detail to optimize performance. The developed networks are shown to outperform the optimal filtering algorithms. The models are further optimized for deployment on FPGAs by quantization and compression methods, which are shown to reduce resource consumption with minimal effect on performance. The LAr calorimeter is composed of 182000 individual channels for which the deposited energies must be computed. Training 182000 different neural networks is not practically feasible. A new method based on unsupervised learning is developed to form clusters of channels with similar electronic pulse signals, which allows the same neural network to be used for all channels in one cluster. This method reduces the number of needed neural networks to about 100, making it possible to cover the full detector with these advanced algorithms.

    Firmware implementation of a recurrent neural network for the computation of the energy deposited in the liquid argon calorimeter of the ATLAS experiment

    No full text
    The ATLAS experiment measures the properties of particles produced in proton-proton collisions at the LHC. The ATLAS detector will undergo a major upgrade before the high-luminosity phase of the LHC. The ATLAS liquid argon calorimeter measures the energy of particles interacting electromagnetically in the detector, and its readout electronics will be replaced during this upgrade. The new electronic boards will be based on state-of-the-art field-programmable gate arrays (FPGAs) from Intel, allowing neural networks to be embedded in firmware. Neural networks have been shown to outperform the optimal filtering algorithms currently used to compute the energy deposited in the calorimeter. This article presents the implementation on Stratix 10 FPGAs of a recurrent neural network (RNN) that reconstructs the energy deposited in the calorimeter. An implementation in a high-level synthesis (HLS) language allowed fast prototyping but fell short of the stringent requirements on resource usage and latency. Further optimisations in VHDL (Very High-Speed Integrated Circuit Hardware Description Language) fulfilled the requirements of processing 384 channels per FPGA with a latency below 125 ns.

    Artificial Neural Networks on FPGAs for Real-Time Energy Reconstruction of the ATLAS LAr Calorimeters

    No full text
    The ATLAS experiment at the Large Hadron Collider (LHC) is operated at CERN and measures proton–proton collisions at multi-TeV energies with a repetition frequency of 40 MHz. Within the phase-II upgrade of the LHC, the readout electronics of the liquid-argon (LAr) calorimeters of ATLAS are being prepared for high luminosity operation expecting a pileup of up to 200 simultaneous proton–proton interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions are overlapping, which increases the difficulty of energy reconstruction by the calorimeter detector. Real-time processing of digitized pulses sampled at 40 MHz is performed using field-programmable gate arrays (FPGAs). To cope with the signal pileup, new machine learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in assignment of the reconstructed energy to the correct proton bunch crossing and in energy resolution. The improvements concern, in particular, energies derived from overlapping pulses. Since the implementation of the neural networks targets an FPGA, the number of parameters and the mathematical operations need to be well controlled. The trained neural network structures are converted into FPGA firmware using automated implementations in hardware description language and high-level synthesis tools. Very good agreement between neural network implementations in FPGA and software-based calculations is observed. The prototype implementations on an Intel Stratix-10 FPGA reach maximum operation frequencies of 344–640 MHz. Applying time-division multiplexing allows the processing of 390–576 calorimeter channels by one FPGA for the most resource-efficient networks. Moreover, the latency achieved is about 200 ns. 
These performance parameters show that a neural-network based energy reconstruction can be considered for the processing of the ATLAS LAr calorimeter signals during the high-luminosity phase of the LHC.
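    The time-division multiplexing figures quoted above can be sanity-checked with simple arithmetic: with new samples arriving at the 40 MHz bunch-crossing rate, a network instance clocked at f_core can be reused roughly f_core / 40 MHz times per crossing, and multiple parallel instances then multiply the channel count. This is a back-of-the-envelope sketch, not the actual firmware scheduling.

```python
# Multiplexing factor: how many channels one network instance can serve
# when clocked faster than the 40 MHz bunch-crossing (data arrival) rate.
BC_RATE_MHZ = 40

def mux_factor(f_core_mhz):
    """Channel reuses per instance per bunch crossing (integer floor)."""
    return f_core_mhz // BC_RATE_MHZ

f_low = mux_factor(344)   # slowest quoted clock
f_high = mux_factor(640)  # fastest quoted clock
```

    At 344–640 MHz this gives factors of 8–16 per instance, so a few dozen parallel instances plausibly reach the quoted 390–576 channels per FPGA.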

    Machine Learning for Real-Time Processing of ATLAS Liquid Argon Calorimeter Signals with FPGAs

    No full text
    The Phase-II upgrade of the LHC will increase its instantaneous luminosity by a factor of 7, leading to the High Luminosity LHC (HL-LHC). At the HL-LHC, the number of proton-proton collisions in one bunch crossing (called pileup) increases significantly, putting more stringent requirements on the electronics and real-time data processing capabilities of the LHC detectors. The ATLAS Liquid Argon (LAr) calorimeter measures the energy of particles produced in LHC collisions. This calorimeter also has trigger capabilities to identify interesting events. In order to enhance the physics discovery potential of the ATLAS detector in the blurred environment created by the pileup, excellent resolution of the deposited energy and accurate detection of the deposit time are crucial. The computation of the deposited energy is performed in real-time using dedicated data acquisition electronic boards based on FPGAs. FPGAs are chosen for their capacity to process large amounts of data with very low latency. The deposited energy is currently computed using optimal filtering algorithms that assume a nominal pulse shape of the electronic signal. These filter algorithms are adapted to the ideal situation with very limited pileup and no timing overlap of the electronic pulses in the detector. However, with the increased luminosity and pileup, the performance of the filter algorithms decreases significantly, and no extension or tuning of these algorithms can recover the lost performance. The back-end electronic boards for the Phase-II upgrade of the LAr calorimeter will use the next high-end generation of Intel FPGAs with increased processing power and memory. This is a unique opportunity to develop the necessary tools enabling the use of more complex algorithms on these boards. We developed several neural networks (NNs) with significant performance improvements with respect to the optimal filtering algorithms. 
The main challenge is to implement these NNs efficiently in the dedicated data acquisition electronics. Special effort was dedicated to minimising the needed computational power while optimising the NN architectures. Five NN algorithms based on CNN, RNN, and LSTM architectures will be presented. The improvement of the energy resolution and of the accuracy on the deposit time compared to the legacy filter algorithms, especially for overlapping pulses, will be discussed. The implementation of these networks in firmware will be shown. Two implementation categories, in VHDL and Quartus HLS code, are considered. The implementation results on Intel Stratix 10 FPGAs, including the resource usage, the latency, and the operation frequency, will be reported. Approximations for the firmware implementations, including the use of fixed-point precision arithmetic and lookup tables for activation functions, will be discussed. Implementations including time multiplexing to reduce resource usage will be presented. We will show that two of these NN implementations are viable solutions that fit the stringent data processing requirements on latency (O(100 ns)) and bandwidth (O(1 Tb/s) per FPGA) needed for ATLAS detector operation.
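    Two of the firmware approximations mentioned above, fixed-point arithmetic and lookup-table activations, can be sketched briefly. The bit width and table size here are illustrative assumptions, not the values used on the Stratix 10 boards.

```python
import numpy as np

# Fixed-point quantization: floats become integers in a Qm.n format.
FRAC_BITS = 10                 # assumed number of fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize a float to an integer with FRAC_BITS fractional bits."""
    return np.round(np.asarray(x) * SCALE).astype(np.int64)

def from_fixed(q):
    return np.asarray(q, dtype=np.float64) / SCALE

# Lookup-table activation: precompute tanh over [-4, 4], where it is
# non-trivial; outside this range tanh is effectively saturated.
LUT_SIZE = 256                 # assumed table depth
tanh_lut = np.tanh(np.linspace(-4.0, 4.0, LUT_SIZE))

def tanh_via_lut(x):
    """Approximate tanh by indexing the precomputed table."""
    idx = np.clip(((x + 4.0) / 8.0 * (LUT_SIZE - 1)).astype(int),
                  0, LUT_SIZE - 1)
    return tanh_lut[idx]

w = to_fixed(0.73)                          # quantized example weight
quant_err = abs(float(from_fixed(w)) - 0.73)
lut_err = abs(tanh_via_lut(np.array([0.5]))[0] - np.tanh(0.5))
```

    Both approximations trade a small, bounded numerical error for large savings in DSP blocks and memory, which is what makes fitting hundreds of channels per FPGA feasible.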