943 research outputs found

    Development of a DAQ system for the CMS ECAL Phase 2 recommissioning

    In view of the High-Luminosity phase of the Large Hadron Collider (LHC), the entire readout electronics in the barrel region of the CMS electromagnetic calorimeter (ECAL) will be replaced to cope with the more stringent requirements on trigger latency, acquisition rate, and radiation and pileup resilience. The configuration sequence for the new on-detector electronics, involving both the improved very-front-end and front-end cards, is reported. The sequence of commands and parameters loaded into the electronics is controlled by software developed at CERN by the ECAL upgrade group, which will serve as the basis for the new data acquisition (DAQ) system.

    Automating COVID responses: The impact of automated decision-making on the COVID-19 pandemic: Tracing The Tracers 2021 report

    In an unprecedented global social experiment in health surveillance, a plethora of automated decision-making (ADM) systems, including systems based on artificial intelligence (AI), were deployed during the COVID-19 pandemic. They were supposed to tackle fundamental public health issues. Nonetheless, too often they were adopted with almost no transparency, no evidence of their efficacy, no adequate safeguards, and insufficient democratic debate. This report is the result of a yearlong monitoring of the rollout and use of such systems, documented in our Tracing The Tracers project. In this final report, we provide an early overall assessment of the main trends and developments concerning ADM-based responses to COVID-19.

    Search for new physics in high-mass diphoton events from proton-proton collisions at √s = 13 TeV

    Results are presented from a search for new physics in high-mass diphoton events from proton-proton collisions at √s = 13 TeV. The data set was collected in 2016–2018 with the CMS detector at the LHC and corresponds to an integrated luminosity of 138 fb⁻¹. Events with a diphoton invariant mass greater than 500 GeV are considered. Two different techniques are used to predict the standard model backgrounds: parametric fits to the smoothly falling background, and a first-principles calculation of the standard model diphoton spectrum at next-to-next-to-leading order in perturbative quantum chromodynamics. The first technique is sensitive to resonant excesses, while the second can identify broad differences in the invariant mass shape. The data are used to constrain the production of heavy Higgs bosons, Randall-Sundrum (RS) gravitons, the large extra dimensions model of Arkani-Hamed, Dimopoulos, and Dvali (ADD), and the continuum clockwork mechanism. No statistically significant excess is observed. The present results are the strongest limits to date on ADD extra dimensions and on RS gravitons with a coupling parameter greater than 0.1.
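    As a toy illustration of the first background technique, a smoothly falling spectrum can be described by a simple parametric fit. The power-law form, pseudo-data, and parameter values below are hypothetical and are not the analysis's actual fit function or data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical smoothly falling parametric form, loosely in the spirit of
    # diphoton background functions: f(m) = norm * (m / m0)^(-power)
    def falling_spectrum(m, norm, power):
        m0 = 500.0  # GeV, reference scale (assumption)
        return norm * (m / m0) ** (-power)

    rng = np.random.default_rng(42)

    # Generate pseudo-data from a known power law and check the fit recovers it
    m_vals = np.linspace(500.0, 2000.0, 50)   # diphoton mass points in GeV
    true_norm, true_power = 1000.0, 5.0
    y_true = falling_spectrum(m_vals, true_norm, true_power)
    y_obs = y_true * (1.0 + 0.01 * rng.standard_normal(m_vals.size))

    popt, _ = curve_fit(falling_spectrum, m_vals, y_obs, p0=[500.0, 4.0])
    fit_norm, fit_power = popt
    ```

    A resonant excess would then show up as a localized deviation of the data from such a fitted smooth curve.
    
    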

    Measurement of the Higgs boson mass and width using the four-lepton final state in proton-proton collisions at √s = 13 TeV

    A measurement of the Higgs boson mass and width via its decay to two Z bosons is presented. Proton-proton collision data collected by the CMS experiment, corresponding to an integrated luminosity of (Formula presented) at a center-of-mass energy of 13 TeV, are used. The invariant mass distribution of the four leptons in the on-shell Higgs boson decay is used to measure its mass and constrain its width. This yields the most precise single measurement of the Higgs boson mass to date, (Formula presented), and an upper limit on the width of (Formula presented) at 95% confidence level. A combination of the on-shell and off-shell Higgs boson production, with decay to four leptons, is used to determine the Higgs boson width, assuming that no new virtual particles affect the production; this premise is tested by adding new heavy particles in the gluon fusion loop model. This result is combined with a previous CMS analysis of off-shell Higgs boson production with decay to two leptons and two neutrinos, giving a measured Higgs boson width of (Formula presented), in agreement with the standard model prediction of 4.1 MeV. The strength of the off-shell Higgs boson production is also reported. The scenario of no off-shell Higgs boson production is excluded at a confidence level corresponding to 3.8 standard deviations.

    Search for heavy neutral resonances decaying to tau lepton pairs in proton-proton collisions at √s = 13 TeV

    A search for heavy neutral gauge bosons (Z′) decaying into a pair of tau leptons is performed in proton-proton collisions at √s = 13 TeV at the CERN LHC. The data were collected with the CMS detector and correspond to an integrated luminosity of (Formula presented). The observations are found to be in agreement with the expectation from standard model processes. Limits at 95% confidence level are set on the product of the Z′ production cross section and its branching fraction to tau lepton pairs for a range of Z′ boson masses. For a narrow resonance in the sequential standard model scenario, a Z′ boson with a mass below 3.5 TeV is excluded. This is the most stringent limit to date from this type of search.

    Elliptic anisotropy measurement of the f0(980) hadron in proton-lead collisions and evidence for its quark-antiquark composition

    Despite the f0(980) hadron having been discovered half a century ago, the question of its quark content has not been settled: it might be an ordinary quark-antiquark (qq̄) meson, a tetraquark (qqq̄q̄) exotic state, a kaon-antikaon (KK̄) molecule, or a quark-antiquark-gluon (qq̄g) hybrid. This paper reports strong evidence that the f0(980) state is an ordinary qq̄ meson, inferred from the scaling of elliptic anisotropies (v2) with the number of constituent quarks (nq), as empirically established using conventional hadrons in relativistic heavy ion collisions. The f0(980) state is reconstructed via its dominant decay channel f0(980) → π+π−, in proton-lead collisions recorded by the CMS experiment at the LHC, and its v2 is measured as a function of transverse momentum (pT). It is found that the nq = 2 (qq̄ state) hypothesis is favored over nq = 4 (qqq̄q̄ or KK̄ states) by 7.7, 6.3, or 3.1 standard deviations in the pT < 10, 8, or 6 GeV/c ranges, respectively, and over nq = 3 (qq̄g hybrid state) by 3.5 standard deviations in the pT < 8 GeV/c range. This result represents the first determination of the quark content of the f0(980) state, made possible by a novel approach, and paves the way for similar studies of other exotic hadron candidates.
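    The constituent-quark scaling the measurement relies on can be sketched numerically: if hadrons inherit a common quark-level anisotropy, then v2/nq plotted against pT/nq collapses onto a single curve regardless of nq. The quark-level curve below is a purely illustrative shape, not a fit to data.

    ```python
    import numpy as np

    # Illustrative quark-level anisotropy curve (hypothetical shape, not data)
    def v2_quark(pt):
        return 0.1 * pt / (1.0 + pt)  # rises with pT and saturates

    # Constituent-quark scaling: a hadron built from n_q quarks has
    # v2_hadron(pT) ≈ n_q * v2_quark(pT / n_q)
    def v2_hadron(pt, n_q):
        return n_q * v2_quark(pt / n_q)

    # Evaluate on a common scaled axis pT/n_q: the scaled curves v2/n_q
    # coincide for n_q = 2 (meson) and n_q = 3 (baryon)
    x = np.linspace(0.25, 2.0, 8)            # pT/n_q in GeV/c
    meson_scaled = v2_hadron(2 * x, 2) / 2   # n_q = 2, e.g. a qq̄ state
    baryon_scaled = v2_hadron(3 * x, 3) / 3  # n_q = 3
    ```

    Measuring v2(pT) of the f0(980) and testing which nq places it on the common scaled curve is, in essence, how the quark content is inferred.
    
    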

    Search for pair production of heavy particles decaying to a top quark and a gluon in the lepton+jets final state in proton-proton collisions at √s = 13 TeV

    A search is presented for the pair production of new heavy resonances, each decaying into a top quark (t) or antiquark and a gluon (g). The analysis uses data recorded with the CMS detector from proton-proton collisions at a center-of-mass energy of 13 TeV at the LHC, corresponding to an integrated luminosity of 138 fb⁻¹. Events with one muon or electron, multiple jets, and missing transverse momentum are selected. After using a deep neural network to enrich the data sample with signal-like events, distributions in the scalar sum of the transverse momenta of all reconstructed objects are analyzed in the search for a signal. No significant deviations from the standard model prediction are found. Upper limits at 95% confidence level are set on the product of the cross section and the branching fraction squared for the pair production of excited top quarks in the t* → tg decay channel. The upper limits range from 120 to 0.8 fb for a t* with spin 1/2, and from 15 to 1.0 fb for a t* with spin 3/2. These correspond to mass exclusion limits of up to 1050 and 1700 GeV for spin-1/2 and spin-3/2 t* particles, respectively. These are the most stringent limits to date on the existence of t* → tg resonances.

    Reweighting simulated events using machine-learning techniques in the CMS experiment

    Data analyses in particle physics rely on an accurate simulation of particle collisions and a detailed simulation of detector effects to extract physics knowledge from the recorded data. Event generators, together with a Geant-based simulation of the detectors, are used to produce large samples of simulated events for analysis by the LHC experiments. These simulations come at a high computational cost, with the detector simulation and reconstruction algorithms making the largest CPU demands. This article describes how machine-learning (ML) techniques are used to reweight simulated samples obtained with a given set of parameters to samples with different parameters, or to samples obtained from entirely different simulation programs. The ML reweighting method avoids the need to simulate the detector response multiple times by incorporating the relevant information in a single sample through event weights. Results are presented for reweighting to model variations and higher-order calculations in simulated top quark pair production at the LHC. This ML-based reweighting is an important element of the future computing model of the CMS experiment and will facilitate precision measurements at the High-Luminosity LHC.
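    A common classifier-based route to such reweighting trains a discriminator between the source and target samples and converts its output into a per-event density-ratio weight p/(1−p). The one-dimensional Gaussian toy below is a hypothetical sketch of that idea, not the CMS implementation.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical toy: reweight a "source" simulation (mean 0) so that it
    # statistically resembles a "target" simulation (mean 1)
    source = rng.normal(0.0, 1.0, size=(20000, 1))
    target = rng.normal(1.0, 1.0, size=(20000, 1))

    X = np.vstack([source, target])
    y = np.concatenate([np.zeros(len(source)), np.ones(len(target))])

    # For equal-size samples, p(target|x) / (1 - p(target|x)) estimates the
    # density ratio p_target(x) / p_source(x), i.e. the per-event weight
    clf = LogisticRegression().fit(X, y)
    p = clf.predict_proba(source)[:, 1]
    weights = p / np.clip(1.0 - p, 1e-6, None)

    # The weighted source sample should reproduce the target mean of ~1
    weighted_mean = np.average(source[:, 0], weights=weights)
    ```

    The same weights, applied once per event, stand in for a full re-simulation of the detector response under the alternative parameter set.
    
    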