10 research outputs found

    Optimizing momentum resolution with a new fitting method for silicon-strip detectors

    A new fitting method is explored for momentum reconstruction of tracks in a constant magnetic field in a silicon-strip tracker. Substantial increases in momentum resolution with respect to standard fits are obtained. The key point is the use of a realistic probability distribution for each hit (heteroscedasticity). Two different methods are used for the fits: the first introduces an effective variance for each hit, the second implements a maximum-likelihood search. The tracker model is similar to the PAMELA tracker. Each of the two sides of the PAMELA detectors is simulated as a momentum-reconstruction device. One of the two is similar to the silicon micro-strip detectors in wide use in running experiments. Two different position reconstructions are used for the standard fits: the η-algorithm (the best one) and the two-strip center of gravity. The gain in momentum resolution is measured as the virtual magnetic field and the virtual signal-to-noise ratio required by the two standard fits to reach an overlap with the better of the two new methods. For the best side, the virtual magnetic field must be increased 1.5 times with respect to the real field to reach the overlap, and 1.8 times for the other standard fit. For the high-noise side, the increases must be 1.8 and 2.0. The signal-to-noise ratio requires similar increases, but only for the η-algorithm; it has no effect on the fits with the center of gravity. Very important results are obtained if the number N of detecting layers is increased: our methods provide a momentum resolution growing linearly with N, much faster than standard fits, which grow as √N. Comment: this article supersedes arXiv:1606.03051; 22 pages and 10 figures.
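The effective-variance method described above amounts to weighted least squares with a per-hit variance, in contrast to an ordinary fit that treats all hits alike. The following is a minimal sketch of that idea on a toy straight-line track; the noise model and all numbers are invented for illustration and are not the paper's detector simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy track: straight line y = a + b*x sampled at 8 detector layers.
x = np.linspace(0.0, 1.0, 8)
a_true, b_true = 0.1, 2.0

# Heteroscedastic hits: each hit has its own variance (hypothetical values).
sigma = rng.uniform(0.01, 0.1, size=x.size)
y = a_true + b_true * x + rng.normal(0.0, sigma)

# An ordinary least-squares fit ignores the per-hit variances ...
A = np.vstack([np.ones_like(x), x]).T
ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# ... while the effective-variance fit weights each hit by 1/sigma_i^2,
# so precise hits dominate the estimate.
W = np.diag(1.0 / sigma**2)
wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

print("OLS slope:", ols[1], "  weighted slope:", wls[1])
```

The resolution gain in the paper comes from exactly this kind of down-weighting of poorly measured hits, applied to the full helix/momentum fit rather than a toy line.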

    Performance of the H -> ZZ* -> 4 mu analysis with the CMS Phase-2 muon detector upgrade

    A study of the performance of the Standard Model H(125 GeV) -> ZZ* -> 4 mu analysis with the upgraded CMS muon detector for the high-luminosity operation (up to 7.5 x 10^{34} cm^{-2}s^{-1}) at the Large Hadron Collider (HL-LHC) has been carried out. The CMS Phase-2 muon detector upgrade, which will take place from 2024 to 2026, is crucial to efficiently exploit the data that will be collected at the HL-LHC under very challenging experimental conditions, such as a large number of overlapping collisions per event, up to 200 (referred to as pile-up). The H -> ZZ* -> 4 mu analysis essentially relies on the muon system to detect four muons in the final state; it therefore represents an important benchmark for the proposed muon detector upgrade, in particular the extension in pseudorapidity of the muon detector from |eta| < 2.4 to |eta| < 2.8 by introducing the ME0 (Muon Endcap 0) subdetector. The results presented here are obtained using simulated proton-proton collision events at sqrt{s} = 14 TeV with the full simulation of the upgraded CMS detector. The signal and background events are simulated with an average pile-up of 200. The analysis is performed assuming an integrated luminosity of 3000 fb^{-1} (expected in almost ten years of HL-LHC operation) and follows the analysis of the 2016 data (Run-2). It is found that the signal selection efficiency remains immune to the high pile-up conditions and increases by 17% with the acceptance extension of the muon system.
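The benefit of the |eta| < 2.4 -> 2.8 extension compounds for a four-muon final state, since every muon must fall inside the acceptance. A toy Monte Carlo illustrating that effect is sketched below; the flat pseudorapidity spectrum is an assumption made purely for illustration, so the resulting gain does not reproduce the 17% figure, which comes from the full CMS simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy: draw four muon pseudorapidities per event from an assumed
# distribution (flat in |eta| < 4 here, purely illustrative).
n_events = 100_000
eta = rng.uniform(-4.0, 4.0, size=(n_events, 4))

def acceptance(eta_max):
    # An event is kept only if all four muons satisfy |eta| < eta_max.
    return np.mean(np.all(np.abs(eta) < eta_max, axis=1))

acc_24 = acceptance(2.4)
acc_28 = acceptance(2.8)
print(f"|eta|<2.4: {acc_24:.3f}  |eta|<2.8: {acc_28:.3f}  "
      f"relative gain: {acc_28 / acc_24 - 1:.1%}")
```

With a flat spectrum the per-muon acceptance scales as eta_max/4, so the four-muon acceptance scales as its fourth power, which is why even a modest per-muon extension yields a sizeable event-level gain.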

    Observability of a Heavy Higgs Boson with the CMS Detector at the LHC in the Channel qq+H and H->llqq


    Development of Trigger and Control Systems for CMS

    During 2007, the Large Hadron Collider (LHC) and its four main detectors will begin operation with a view to answering the most pressing questions in particle physics. However, before one can analyse the data produced to find the rare phenomena being looked for, both the detector and readout electronics must be thoroughly tested to ensure that the system will operate in a consistent way. The Compact Muon Solenoid (CMS) is one of the two general-purpose detectors at CERN. The tracking component of the design produces more data than any previous detector used in particle physics, with approximately ten million detector channels. The data from the detector is processed by the tracker Front End Driver (FED). The large data volume necessitated the development of a buffering and throttling system to prevent buffer overflow both on and off the detector. A critical component of this system is the APV emulator (APVe), which vetoes trigger decisions based on buffer status in the tracker. The commissioning of these components, along with a large part of the Timing, Trigger and Control (TTC) system, is discussed, including the various modifications that were made to improve the robustness of the full system. Another key piece of the CMS electronics is the calorimeter trigger system, responsible for identifying 'interesting' physical events in a background of well-understood phenomena using calorimetric information. Calorimeter information is processed to identify various trigger objects by the Global Calorimeter Trigger (GCT). The first component of this system is the Source card, which has been developed to transfer data from the Regional Calorimeter Trigger (RCT) to the Leaf card, the processing engine of the GCT. The use of modern programmable logic with high-speed optical links is discussed, emphasising its use for data concentration and the benefit it confers to the processing algorithms.
Looking forward to the Super-LHC, a possible addition to the CMS Level-1 trigger system is discussed, incorporating information from a new pixel detector with an alternative stacked geometry that allows the possibility of on-detector data rate reduction by means of a transverse momentum cut. A toy Monte Carlo was developed to study detector performance. Issues with high-speed reconstruction and the complications of on-detector data rate reduction are also discussed.
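The on-detector transverse-momentum cut mentioned above exploits the fact that a low-pT track bends more in the solenoid field, so its hits in a closely spaced layer doublet are more displaced from one another. A minimal sketch of that relation follows; the layer radius, doublet spacing, and field value are hypothetical numbers chosen for illustration, not the geometry studied in the thesis:

```python
import numpy as np

def doublet_displacement(pt_gev, b_tesla=3.8, r_m=0.25, d_m=0.002):
    """Hit displacement [m] across a stacked layer doublet (toy formula).

    R = pT / (0.3 * B) is the bending radius [m] for pT in GeV/c and B
    in tesla; a track from the origin crosses a layer at radius r at an
    angle alpha = arcsin(r / (2R)) to the radial direction, giving a
    transverse offset of roughly d * tan(alpha) between two layers
    separated by d.
    """
    bend_radius = pt_gev / (0.3 * b_tesla)
    alpha = np.arcsin(r_m / (2.0 * bend_radius))
    return d_m * np.tan(alpha)

# A displacement threshold acts as a transverse-momentum cut:
# low-pT tracks bend more, exceed the threshold, and can be
# rejected on the detector before readout.
for pt in (1.0, 2.0, 5.0, 20.0):
    dx_um = doublet_displacement(pt) * 1e6
    print(f"pT = {pt:5.1f} GeV  ->  doublet offset ~ {dx_um:6.1f} um")
```

Because the offset scales roughly as 1/pT, a single comparison against a fixed window implements the cut in simple on-detector logic, which is what makes the data-rate reduction feasible.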

    Prospects for the Observation of Electroweak Production of Single Top Quarks with the CMS Experiment


    Particle Physics Reference Library

    This second open access volume of the handbook series deals with detectors, large experimental facilities and data handling, both for accelerator and non-accelerator based experiments. It also covers applications in medicine and the life sciences. A joint CERN-Springer initiative, the “Particle Physics Reference Library” provides revised and updated contributions based on previously published material in the well-known Landolt-Boernstein series on particle physics, accelerators and detectors (volumes 21A, B1, B2, C), which took stock of the field approximately one decade ago. Central to this new initiative is publication under full open access.

    A Gaussian-Sum Filter for Vertex Reconstruction

    A vertex reconstruction algorithm was developed based on the Gaussian-sum filter (GSF) and implemented in the framework of the CMS reconstruction program. While linear least-squares estimators are optimal when all observation errors are Gaussian-distributed, the GSF offers a better treatment of non-Gaussian distributions of track-parameter errors when these are modelled by Gaussian mixtures. In addition, this ensures better protection against outliers and offers some robustness.
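The core of the GSF idea can be shown in one dimension: each measurement error is a Gaussian mixture (e.g. a narrow core plus a wide outlier tail), every mixture component gets its own Kalman-style update, and the posterior mixture is collapsed back to a single Gaussian to keep the state bounded. The sketch below is a toy illustration under invented noise parameters, not the CMS implementation:

```python
import numpy as np

def gsf_1d(measurements, mix_weights, mix_sigmas, m0=0.0, v0=100.0):
    """Toy 1-D Gaussian-sum filter for a vertex coordinate.

    For every mixture component of the measurement error the prior is
    updated with a standard Kalman step; the resulting posterior
    mixture is then collapsed to a single Gaussian by moment matching.
    """
    mean, var = m0, v0
    for z in measurements:
        comp_means, comp_vars, comp_w = [], [], []
        for w, s in zip(mix_weights, mix_sigmas):
            pred_var = var + s**2
            gain = var / pred_var
            comp_means.append(mean + gain * (z - mean))
            comp_vars.append((1.0 - gain) * var)
            # Component weight: prior weight times measurement likelihood,
            # so the outlier component takes over for far-away hits.
            comp_w.append(w * np.exp(-0.5 * (z - mean) ** 2 / pred_var)
                          / np.sqrt(2.0 * np.pi * pred_var))
        comp_w = np.array(comp_w) / np.sum(comp_w)
        # Collapse the mixture to one Gaussian (moment matching).
        mean = float(np.dot(comp_w, comp_means))
        var = float(np.dot(comp_w, np.array(comp_vars)
                           + (np.array(comp_means) - mean) ** 2))
    return mean, var

rng = np.random.default_rng(2)
true_v = 1.5
# 90% core measurements (sigma 0.1), 10% outliers (sigma 2.0).
tails = rng.random(50) < 0.1
z = true_v + rng.normal(0.0, np.where(tails, 2.0, 0.1))
mean_est, var_est = gsf_1d(z, [0.9, 0.1], [0.1, 2.0])
print(mean_est, var_est)
```

The outlier robustness mentioned in the abstract emerges naturally: when a measurement lands far from the current estimate, the wide mixture component dominates the weights and its small Kalman gain suppresses the update.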