885 research outputs found

    Observation of γγ → ττ in proton-proton collisions and limits on the anomalous electromagnetic moments of the τ lepton

    The production of a pair of τ leptons via photon–photon fusion, γγ → ττ, is observed for the first time in proton–proton collisions, with a significance of 5.3 standard deviations. This observation is based on a data set recorded with the CMS detector at the LHC at a center-of-mass energy of 13 TeV and corresponding to an integrated luminosity of 138 fb−1. Events with a pair of τ leptons produced via photon–photon fusion are selected by requiring them to be back-to-back in the azimuthal direction and to have a minimum number of charged hadrons associated with their production vertex. The τ leptons are reconstructed in their leptonic and hadronic decay modes. The measured fiducial cross section of γγ → ττ is σ_obs^fid = 12.4 +3.8/−3.1 fb. Constraints are set on the contributions to the anomalous magnetic moment (aτ) and electric dipole moment (dτ) of the τ lepton originating from potential effects of new physics on the γττ vertex: aτ = 0.0009 +0.0032/−0.0031 and |dτ| < 2.9 × 10^−17 e cm (95% confidence level), consistent with the standard model.

    Autoencoders for per-lumi-section data quality monitoring of the CMS detector

    The monitoring of data quality is crucial both online, during the data taking, to promptly spot issues and act on them, and offline, to provide analysts with datasets that are cleaned against the occasional failures that may have crept in. Typically, data quality monitoring (DQM) is performed by shifters who look at a set of integrated quantities, compare them with reference histograms, and, based on their experience and training, assign quality flags. Recently, CMS has developed the possibility of producing DQM plots per lumisection, where a lumisection is a time unit corresponding to about 23 s of data taking. A manual approach to analyzing per-lumisection data would be prohibitive due to the high number of lumisections; an automated approach is therefore preferable. In this work, the first use in CMS of autoencoders to perform anomaly detection on per-lumisection data, specifically for quantities associated with jets and missing transverse energy, is presented. The technique developed allows the detection of anomalies at the level of individual lumisections, which might be overlooked when examining integrated quantities, and serves as a proof of concept regarding the efficacy of this and similar approaches.
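The per-lumisection approach described above can be sketched in a few lines: train an autoencoder on histograms from known-good lumisections, then flag any lumisection whose reconstruction error exceeds a threshold derived from the training sample. The tiny linear network, the synthetic 16-bin histograms, and the 99th-percentile threshold below are illustrative assumptions only, not the CMS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-lumisection jet/MET histograms: each
# "lumisection" is a normalized 16-bin histogram drawn around a common
# nominal shape; anomalous ones get a fake hot region in the upper bins.
def make_lumisections(n, anomalous=False):
    base = np.exp(-0.3 * np.arange(16))
    hists = base + 0.02 * rng.normal(size=(n, 16))
    if anomalous:
        hists[:, 8:] += 0.5
    hists = np.clip(hists, 0.0, None)
    return hists / hists.sum(axis=1, keepdims=True)

train = make_lumisections(500)  # assumed-good reference lumisections

# Minimal linear autoencoder (16 -> 4 -> 16), trained with plain
# gradient descent on the reconstruction mean squared error.
W1 = 0.1 * rng.normal(size=(16, 4))
W2 = 0.1 * rng.normal(size=(4, 16))
lr = 0.5
for _ in range(2000):
    z = train @ W1                       # encode
    err = z @ W2 - train                 # reconstruction residual
    gW2 = z.T @ err / len(train)
    gW1 = train.T @ (err @ W2.T) / len(train)
    W1 -= lr * gW1
    W2 -= lr * gW2

def recon_error(x):
    return np.mean((x @ W1 @ W2 - x) ** 2, axis=1)

# Threshold from the good training sample (99th percentile, an
# illustrative choice): anything above it is flagged as anomalous.
threshold = np.percentile(recon_error(train), 99)
good = make_lumisections(50)
bad = make_lumisections(50, anomalous=True)
print("flag rate, good:", (recon_error(good) > threshold).mean())
print("flag rate, bad: ", (recon_error(bad) > threshold).mean())
```

In the real system the inputs are DQM histograms of jet and missing-transverse-energy quantities and the model is trained on certified good data; the reconstruction-error thresholding, however, is the essence of the method.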

    Autoencoders for per-lumisection data quality monitoring at CMS

    The monitoring of data quality is crucial both online, during the data taking, to promptly spot issues and act on them, and offline, to provide analysts with datasets that are cleaned against the occasional failures that may have crept in. Typically, data quality monitoring (DQM) is performed by shifters who look at a set of integrated quantities, compare them with reference histograms, and, based on their experience and training, assign quality flags. Recently, CMS has developed the possibility of producing DQM plots per lumisection, where a lumisection is a time unit corresponding to about 23 s of data taking. A manual approach to analyzing per-lumisection data would be prohibitive due to the high number of lumisections; an automated approach is therefore preferable. In this work, the first use in CMS of autoencoders to perform anomaly detection on per-lumisection data, specifically for quantities associated with jets and missing transverse energy, is presented. The technique developed allows the detection of anomalies at the level of individual lumisections, which might be overlooked when examining integrated quantities, and serves as a proof of concept regarding the efficacy of this and similar approaches.

    Search for scalar leptoquarks produced in lepton-quark collisions and coupled to τ leptons

    The first search for scalar leptoquarks produced in lepton-quark collisions and coupled to τ leptons is presented. It is based on a set of proton-proton collision data recorded with the CMS detector at the LHC at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb−1. The reconstructed final state consists of a jet, significant missing transverse momentum, and a τ lepton reconstructed through its hadronic or leptonic decays. Limits are set on the product of the leptoquark production cross section and branching fraction and interpreted as exclusions in the plane of the leptoquark mass and the leptoquark-τ-quark coupling strength.

    Portable acceleration of CMS computing workflows with coprocessors as a service

    Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that supplement CPUs, often improving the execution of certain functions due to architectural design choices. We explore the approach of Services for Optimized Network Inference on Coprocessors (SONIC) and study the deployment of this as-a-service approach in large-scale data processing. In the studies, we take a data processing workflow of the CMS experiment and run the main workflow on CPUs, while offloading several machine learning (ML) inference tasks onto either remote or local coprocessors, specifically graphics processing units (GPUs). With experiments performed at Google Cloud, the Purdue Tier-2 computing center, and combinations of the two, we demonstrate the acceleration of these ML algorithms individually on coprocessors and the corresponding throughput improvement for the entire workflow. This approach can be easily generalized to different types of coprocessors and deployed on local CPUs without decreasing the throughput performance. We emphasize that the SONIC approach enables high coprocessor usage and the portability to run workflows on different types of coprocessors.
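The as-a-service pattern described above can be illustrated, in a much-simplified form, with stdlib primitives: the event loop submits an inference request, continues its CPU-side work, and blocks only when the result is actually needed. The thread pool below merely stands in for a remote GPU-backed inference server (which SONIC reaches over the network); the event structure and the toy `infer` model are invented for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Stand-in for a remote coprocessor service: in SONIC the request would
# travel over the network to a GPU-backed inference server. Here a
# thread pool plays that role so the sketch is self-contained.
def infer(values):
    time.sleep(0.01)                    # pretend network + inference latency
    return [2.0 * v for v in values]    # placeholder "ML model"

service = ThreadPoolExecutor(max_workers=4)

def process_event(event):
    # Offload the ML inference asynchronously ...
    future = service.submit(infer, event["hits"])
    # ... keep doing CPU-side reconstruction in the meantime ...
    cpu_sum = sum(event["hits"])
    # ... and block only when the inference result is needed.
    return {"cpu": cpu_sum, "ml": future.result()}

events = [{"hits": [1.0, 2.0, 3.0]}, {"hits": [4.0, 5.0]}]
results = [process_event(e) for e in events]
print(results)
```

The design point is that the CPU workflow never idles while inference is in flight, which is what lets a few shared coprocessors serve many CPU clients at high utilization.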

    Performance of the CMS electromagnetic calorimeter in pp collisions at √s = 13 TeV

    The operation and performance of the Compact Muon Solenoid (CMS) electromagnetic calorimeter (ECAL) are presented, based on data collected in pp collisions at √s = 13 TeV at the CERN LHC in the years from 2015 to 2018 (LHC Run 2), corresponding to an integrated luminosity of 151 fb−1. The CMS ECAL is a scintillating lead-tungstate crystal calorimeter, with a silicon strip preshower detector in the forward region, that provides precise measurements of the energy and the time of arrival of electrons and photons. The successful operation of the ECAL is crucial for a broad range of physics goals, ranging from observing the Higgs boson and measuring its properties to other standard model measurements and searches for new phenomena. Precise calibration, alignment, and monitoring of the ECAL response are important ingredients in achieving these goals. To face the challenges posed by the higher luminosity that characterized the operation of the LHC in Run 2, the procedures established during the 2011-2012 run of the LHC have been revisited and new methods have been developed for the energy measurement and for the ECAL calibration. The energy resolution of the calorimeter, for electrons from Z boson decays reaching the ECAL without significant loss of energy by bremsstrahlung, was better than 1.8%, 3.0%, and 4.5% in the |η| intervals [0.0, 0.8], [0.8, 1.5], and [1.5, 2.5], respectively. This performance is similar to that achieved during Run 1 in 2011-2012, in spite of the more severe running conditions.

    Search for new physics with emerging jets in proton-proton collisions at √s = 13 TeV

    A search for "emerging jets" produced in proton-proton collisions at a center-of-mass energy of 13 TeV is performed using data collected by the CMS experiment corresponding to an integrated luminosity of 138 fb−1. This search examines a hypothetical dark quantum chromodynamics (QCD) sector that couples to the standard model (SM) through a scalar mediator. The scalar mediator decays into an SM quark and a dark sector quark. As the dark sector quark showers and hadronizes, it produces long-lived dark mesons that subsequently decay into SM particles, resulting in a jet, known as an emerging jet, with multiple displaced vertices. This search looks for pair production of the scalar mediator at the LHC, which yields events with two SM jets and two emerging jets at leading order. The results are interpreted using two dark sector models with different flavor structures, and exclude mediator masses up to 1950 (1850) GeV for an unflavored (flavor-aligned) dark QCD model. The unflavored results surpass a previous search for emerging jets by setting the most stringent mediator mass exclusion limits to date, while the flavor-aligned results provide the first direct mediator mass exclusion limits for this model.

    Elliptic anisotropy measurement of the f0(980) hadron in proton-lead collisions and evidence for its quark-antiquark composition

    Despite the f0(980) hadron having been discovered half a century ago, the question of its quark content has not been settled: it might be an ordinary quark-antiquark (qq̄) meson, a tetraquark (qq̄qq̄) exotic state, a kaon-antikaon (KK̄) molecule, or a quark-antiquark-gluon (qq̄g) hybrid. This paper reports strong evidence that the f0(980) state is an ordinary qq̄ meson, inferred from the scaling of elliptic anisotropies (v2) with the number of constituent quarks (nq), as empirically established using conventional hadrons in relativistic heavy ion collisions. The f0(980) state is reconstructed via its dominant decay channel f0(980) → π+π−, in proton-lead collisions recorded by the CMS experiment at the LHC, and its v2 is measured as a function of transverse momentum (pT). It is found that the nq = 2 (qq̄ state) hypothesis is favored over nq = 4 (qq̄qq̄ or KK̄ states) by 7.7, 6.3, or 3.1 standard deviations in the pT < 10, 8, or 6 GeV/c ranges, respectively, and over nq = 3 (qq̄g hybrid state) by 3.5 standard deviations in the pT < 8 GeV/c range. This result represents the first determination of the quark content of the f0(980) state, made possible by a novel approach, and paves the way for similar studies of other exotic hadron candidates.
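The hypothesis test behind this result can be mimicked with a toy computation: assume a universal per-quark anisotropy curve f, model a hadron's elliptic anisotropy under constituent-quark scaling as v2(pT) = nq f(pT/nq), and ask which nq best describes the measured points via a chi-square. The curve f, the pT points, and the uncertainties below are synthetic placeholders, not the paper's measurement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical universal per-quark anisotropy curve (illustrative only;
# in practice it is derived from identified-hadron v2 measurements).
def f(x):
    return 0.1 * x / (1.0 + 0.5 * x)

def v2_model(pt, nq):
    # Constituent-quark scaling: v2(pT) = nq * f(pT / nq).
    return nq * f(pt / nq)

# Synthetic "measured" v2 points, generated under the nq = 2 hypothesis.
pt = np.linspace(1.0, 8.0, 8)
sigma = 0.01
v2_meas = v2_model(pt, 2) + sigma * rng.normal(size=pt.size)

def chi2(nq):
    return float(np.sum(((v2_meas - v2_model(pt, nq)) / sigma) ** 2))

scores = {nq: chi2(nq) for nq in (2, 3, 4)}
best = min(scores, key=scores.get)
print(best)  # the nq = 2 (quark-antiquark) hypothesis fits best
```

The actual analysis works with the measured per-quark curve and full uncertainties and quotes significances rather than a raw chi-square, but the scaling comparison follows this logic.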

    Overview of high-density QCD studies with the CMS experiment at the LHC

    The heavy ion (HI) physics program has proven to be an essential part of the overall physics program at the Large Hadron Collider at CERN. Its main purpose has been to provide a detailed characterization of the quark-gluon plasma (QGP), a deconfined state of quarks and gluons created in high-energy nucleus-nucleus collisions. From the start of the LHC HI program with lead-lead collisions, the CMS Collaboration has performed measurements using additional data sets at different center-of-mass energies with xenon-xenon, proton-lead, and proton-proton collisions. A broad collection of observables related to high-density quantum chromodynamics (QCD), precision quantum electrodynamics (QED), and even novel searches for phenomena beyond the standard model (BSM) has been studied. Major advances toward understanding the macroscopic and microscopic QGP properties were achieved at the highest temperatures reached in the laboratory and for vanishingly small values of the baryon chemical potential. This article summarizes key QCD, QED, and BSM physics results of the CMS HI program from LHC Runs 1 (2010-2013) and 2 (2015-2018). It reviews findings on the partonic content of nuclei and the properties of the QGP and describes the surprising QGP-like effects in collision systems smaller than lead-lead or xenon-xenon. In addition, it outlines the scientific case for using ultrarelativistic HI collisions in the coming decades to characterize the QGP with unparalleled precision and to probe novel fundamental physics phenomena.