
    The DAQ and clock distribution system of CMS MIP Timing Detector

    The Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC) is undergoing an extensive Phase-II upgrade program to cope with the challenging conditions of the High-Luminosity LHC (HL-LHC). A new timing detector is designed to measure minimum ionizing particles (MIPs) with a time resolution of 30–60 ps over the entire HL-LHC phase. A common data acquisition (DAQ) system will collect data from the readout chips, reconstruct timing information, and send the data to the event builder. The MIP timing detector (MTD) DAQ system is built around the state-of-the-art ATCA-form-factor Serenity board with two high-speed FPGAs. The precision clock, synchronized to the 40 MHz LHC collision rate, is received by the subsystem and transmitted to the detector via high-speed data links. The detector system with a full readout chain has been tested using prototypes of the DAQ, on-detector electronics, and sensors, showing that the system can achieve a timing resolution below 30 ps.
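
    For context, the sketch below shows one common way such a resolution figure can be extracted from test data: two sensors measure the same MIP, and the spread of their time difference is divided by √2 under the assumption that both channels contribute equally. The function name and toy numbers are illustrative, not taken from the paper.

```python
import numpy as np

def per_sensor_resolution(t_front, t_back):
    """Estimate the single-sensor time resolution from paired hits.

    Assumes two independent sensors measure the same MIP with the same
    resolution sigma, so the spread of (t_front - t_back) is sigma * sqrt(2).
    Inputs are hit times in picoseconds.
    """
    dt = np.asarray(t_front) - np.asarray(t_back)
    return np.std(dt, ddof=1) / np.sqrt(2.0)

# Toy example: two channels with a true resolution of 28 ps each.
rng = np.random.default_rng(0)
t_true = rng.uniform(0.0, 25_000.0, size=10_000)   # hits spread over a 25 ns window
t_front = t_true + rng.normal(0.0, 28.0, size=t_true.size)
t_back = t_true + rng.normal(0.0, 28.0, size=t_true.size)
print(f"estimated per-sensor resolution: {per_sensor_resolution(t_front, t_back):.1f} ps")
```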

    Deep learning techniques for energy clustering in the CMS electromagnetic calorimeter

    The reconstruction of electrons and photons in CMS depends on the topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL). The currently used algorithm cannot efficiently account for the energy deposits coming from pileup (secondary collisions). The performance of this algorithm is expected to degrade during LHC Run 3 because of the higher average pileup level and the increasing noise due to the aging of the ECAL detector. In this paper, we explore new techniques for energy reconstruction in ECAL using state-of-the-art machine learning algorithms such as graph neural networks and self-attention modules.
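
    As an orientation aid only, the sketch below casts a handful of ECAL crystal deposits as a graph and performs one hand-written message-passing step of the kind a graph neural network would learn; the neighbour definition, threshold, and features are illustrative assumptions, not the algorithm studied in the paper.

```python
import numpy as np

# Toy ECAL rechits above a readout threshold: (ieta, iphi, energy in GeV).
hits = np.array([
    [10.0, 42.0, 5.2],
    [10.0, 43.0, 1.1],
    [11.0, 42.0, 0.8],
    [25.0, 17.0, 0.4],   # isolated deposit, e.g. from pileup
])
coords, energy = hits[:, :2], hits[:, 2]

# Edges connect crystals that are neighbours in the (ieta, iphi) grid.
edges = [(i, j) for i in range(len(hits)) for j in range(len(hits))
         if i != j and np.max(np.abs(coords[i] - coords[j])) <= 1]

# One message-passing step: each node sums its neighbours' energies.
# In a GNN this fixed aggregation is replaced by learned functions.
messages = np.zeros_like(energy)
for i, j in edges:
    messages[i] += energy[j]

node_features = np.stack([energy, messages], axis=1)
print(node_features)
```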

    Reconstruction and analysis of events of the CMS experiment with artificial intelligence

    Recent developments in computer hardware and deep-learning algorithms, combined with large datasets, have led to impressive progress in artificial intelligence (AI) over the past few years. Although only marginally studied in high-energy particle collisions, deep-learning algorithms have already demonstrated the ability to perform particle and event classification, estimation of kinematic variables, and anomaly detection. These abilities are extremely useful for analyzing the unprecedented amount of proton-proton collisions expected in the next running phases of the Large Hadron Collider (LHC) at CERN. The CMS detector will undergo major upgrades to deal with the increasing number of additional collisions per LHC bunch crossing (pileup), benefiting from more finely segmented detectors and precise timing information; one of the central subjects of this thesis is the development of versatile data acquisition software for the new MIP Timing Detector. In addition to hardware improvements, the success of these upgrades will heavily depend on fast, robust, and adaptive event processing and analysis techniques. Consequently, the majority of the work performed for this thesis is dedicated to developing and testing new AI-based reconstruction methods for the electromagnetic calorimeter of the CMS experiment. It covers two steps of the full chain of electromagnetic object reconstruction. The first is the evaluation of kinematic variables from the energy signatures left by standalone particles in the calorimeter. The second combines these standalone particles into a unified object known as a SuperCluster, which is crucial for accurate particle energy reconstruction. The two tasks are developed separately, and for each of them a dedicated AI model is created and its performance is assessed and compared with the current traditional approach.
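
    As a purely illustrative companion to the second task, the following sketch groups standalone clusters into a supercluster by collecting deposits inside a rectangular Δη x Δφ window around the highest-energy seed; the window sizes and selection are assumptions for illustration, not the CMS algorithm or the model developed in the thesis.

```python
import numpy as np

def build_supercluster(clusters, d_eta=0.1, d_phi=0.3):
    """Group clusters around the highest-energy seed.

    `clusters` is a list of (energy, eta, phi) tuples; the window sizes
    are illustrative and not the values used in CMS.
    """
    clusters = sorted(clusters, key=lambda c: c[0], reverse=True)
    seed = clusters[0]
    members = [c for c in clusters
               if abs(c[1] - seed[1]) < d_eta
               and abs((c[2] - seed[2] + np.pi) % (2 * np.pi) - np.pi) < d_phi]
    return sum(c[0] for c in members), members

energy, members = build_supercluster(
    [(45.0, 1.20, 0.50), (3.1, 1.22, 0.62), (0.9, 1.18, 0.35), (2.4, 0.40, 2.10)])
print(f"supercluster energy: {energy:.1f} GeV from {len(members)} clusters")
```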

    Machine Learning Techniques for Calorimetry

    The Compact Muon Solenoid (CMS) is one of the general-purpose detectors at the CERN Large Hadron Collider (LHC), where the products of proton-proton collisions at center-of-mass energies up to 13.6 TeV are reconstructed. The electromagnetic calorimeter (ECAL) is one of the crucial components of CMS, since it reconstructs the energies and positions of electrons and photons. Although several machine learning (ML) algorithms have already been used for calorimetry, with the constant advancement of the field increasingly sophisticated techniques have become available that can be beneficial for object reconstruction with calorimeters. In this paper, we present two novel ML algorithms for object reconstruction with the ECAL that are based on graph neural networks (GNNs). The new approaches show significant improvements compared to the current algorithms used in CMS.

    Reconstruction of electromagnetic showers in calorimeters using Deep Learning

    The precise reconstruction of the properties of photons and electrons in modern high-energy physics detectors, such as the CMS or ATLAS experiments, plays a crucial role in numerous physics results. Conventional geometrical algorithms are used to reconstruct the energy and position of these particles from the showers they induce in the electromagnetic calorimeter. Despite their accuracy and efficiency, these methods still suffer from several limitations, such as low-energy background and a limited capacity to reconstruct close-by particles. This paper introduces an innovative machine-learning technique to measure the energy and position of photons and electrons based on convolutional and graph neural networks, taking the geometry of the CMS electromagnetic calorimeter as an example. The developed network demonstrates a significant improvement in resolution for both photon energy and position predictions compared to the algorithm used in CMS. Notably, one of the main advantages of this new approach is its ability to better distinguish between multiple close-by electromagnetic showers.
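
    To make the regression idea concrete, here is a minimal sketch, assuming a fixed 5x5 window of crystal energies centred on a seed, of a convolutional network that outputs the particle energy and a local position offset; the architecture and names are illustrative, not the network described in the paper.

```python
import torch
import torch.nn as nn

class ShowerRegressor(nn.Module):
    """Toy CNN mapping a 5x5 crystal-energy window to (energy, d_eta, d_phi)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 5 * 5, 64), nn.ReLU(),
            nn.Linear(64, 3),   # predicted energy and local position offsets
        )

    def forward(self, window):
        return self.net(window)

# A batch of eight random 5x5 windows, shaped (batch, channel, ieta, iphi).
windows = torch.rand(8, 1, 5, 5)
print(ShowerRegressor()(windows).shape)   # torch.Size([8, 3])
```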

    Measurement of the double-differential inclusive jet cross section in proton-proton collisions at √s = 5.02 TeV

    The inclusive jet cross section is measured as a function of jet transverse momentum pT and rapidity y. The measurement is performed using proton-proton collision data at √s = 5.02 TeV, recorded by the CMS experiment at the LHC, corresponding to an integrated luminosity of 27.4 pb⁻¹. The jets are reconstructed with the anti-kT algorithm using a distance parameter of R = 0.4, within the rapidity interval |y| < 2, and across the kinematic range 0.06 < pT < 1 TeV. The jet cross section is unfolded from detector to particle level using the determined jet response and resolution. The results are compared to predictions of perturbative quantum chromodynamics, calculated at both next-to-leading order and next-to-next-to-leading order. The predictions are corrected for nonperturbative effects, and presented for a variety of parton distribution functions and choices of the renormalization/factorization scales and the strong coupling αS.
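
    For reference, the anti-kT algorithm referred to above combines particles using the distance d_ij = min(pT_i^-2, pT_j^-2) ΔR_ij^2 / R^2 and the beam distance d_iB = pT_i^-2. The minimal, unoptimized sketch below clusters a few toy particles with R = 0.4; real analyses use the FastJet implementation, and the input values here are invented.

```python
import numpy as np

R = 0.4  # distance parameter used in the measurement

def with_kinematics(p):
    """Fill pt, rapidity, and phi from the four-momentum components."""
    p["pt"] = np.hypot(p["px"], p["py"])
    p["y"] = 0.5 * np.log((p["E"] + p["pz"]) / (p["E"] - p["pz"]))
    p["phi"] = np.arctan2(p["py"], p["px"])
    return p

def particle(pt, y, phi):
    """Massless particle built from (pt, y, phi)."""
    return with_kinematics({"px": pt * np.cos(phi), "py": pt * np.sin(phi),
                            "pz": pt * np.sinh(y), "E": pt * np.cosh(y)})

def delta_r2(a, b):
    """Squared distance in the (y, phi) plane."""
    dphi = (a["phi"] - b["phi"] + np.pi) % (2 * np.pi) - np.pi
    return (a["y"] - b["y"]) ** 2 + dphi ** 2

def combine(a, b):
    """E-scheme recombination: add the four-momenta."""
    return with_kinematics({k: a[k] + b[k] for k in ("px", "py", "pz", "E")})

def anti_kt(particles, R=R):
    """O(n^3) anti-kT clustering: d_ij = min(pT_i^-2, pT_j^-2) * dR^2 / R^2."""
    objs, jets = list(particles), []
    while objs:
        d_beam = [o["pt"] ** -2 for o in objs]
        best = ("beam", int(np.argmin(d_beam)), min(d_beam))
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                dij = min(d_beam[i], d_beam[j]) * delta_r2(objs[i], objs[j]) / R ** 2
                if dij < best[2]:
                    best = ("pair", (i, j), dij)
        if best[0] == "beam":                     # promote object i to a final jet
            jets.append(objs.pop(best[1]))
        else:                                     # merge the closest pair
            i, j = best[1]
            merged = combine(objs[i], objs[j])
            objs = [o for k, o in enumerate(objs) if k not in (i, j)] + [merged]
    return jets

jets = anti_kt([particle(60.0, 0.10, 1.00), particle(20.0, 0.15, 1.20),
                particle(35.0, -1.00, -2.50)])
for jet in sorted(jets, key=lambda j: j["pt"], reverse=True):
    print(f"jet pt = {jet['pt']:.1f} GeV, y = {jet['y']:.2f}")
```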
