
    Optimization of the CMS software build and distribution system

    CMS software consists of over two million lines of code actively developed by hundreds of developers from all around the world. Optimal build, release and distribution of such a large-scale system for production and analysis activities at hundreds of sites and on multiple platforms is quite a challenge. Its dependency on more than one hundred external tools makes its build and distribution even more complex. We describe how building the software in parallel and minimizing the size of the distribution dramatically reduced the time gap between a software build and its installation at remote sites, and how producing a few large binary products, instead of thousands of small ones, helped uncover integration and runtime issues in the software.
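    The gain from parallel building comes from compiling independent packages concurrently while dependents wait for their prerequisites. The sketch below illustrates only that general idea; it is not the actual CMS build tooling (which is SCRAM-based), and the package names and the build() step are hypothetical placeholders.

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical package -> dependencies map; the real CMS software has
    # thousands of packages and external tools in this graph.
    DEPS = {
        "DataFormats": [],
        "FWCore": ["DataFormats"],
        "RecoTracker": ["DataFormats", "FWCore"],
        "SimG4Core": ["FWCore"],
    }

    def build(pkg):
        # Placeholder for the real compile/link step of one package.
        print(f"building {pkg}")

    def parallel_build(deps, workers=4):
        done = set()
        pending = dict(deps)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            while pending:
                # Packages whose dependencies are all built can start now.
                ready = [p for p, d in pending.items() if all(x in done for x in d)]
                if not ready:
                    raise RuntimeError("dependency cycle")
                futures = {pool.submit(build, p): p for p in ready}
                for fut, pkg in futures.items():
                    fut.result()      # propagate build failures
                    done.add(pkg)
                    del pending[pkg]

    if __name__ == "__main__":
        parallel_build(DEPS)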

    CMS Monte Carlo production in the WLCG computing Grid

    Monte Carlo production in CMS has received a major boost in performance and scale since the CHEP06 conference. The production system has been re-engineered to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented, together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and better handling of the inherent Grid unreliability have increased the production scale by about an order of magnitude, with on the order of ten thousand jobs running in parallel and more than two million events produced per day.
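    The automated chain described above (submit, monitor, merge, register) can be pictured as a simple pipeline. The sketch below mirrors those stage names but is purely illustrative: the functions, site names and bookkeeping step are assumptions, not the actual CMS production system interfaces.

    from dataclasses import dataclass

    @dataclass
    class Job:
        job_id: int
        site: str = ""
        status: str = "created"
        events: int = 0

    def submit(job, available_sites):
        # Distribute jobs according to available resources (simple round-robin here).
        job.site = available_sites[job.job_id % len(available_sites)]
        job.status = "submitted"

    def monitor(job):
        # In reality this would poll the Grid middleware; here every job succeeds.
        job.status = "done"
        job.events = 1000

    def merge_and_register(jobs):
        # Merge the small per-job outputs and register the dataset in the bookkeeping system.
        return {"merged_events": sum(j.events for j in jobs if j.status == "done")}

    if __name__ == "__main__":
        sites = ["site_A", "site_B"]   # hypothetical Grid sites
        jobs = [Job(i) for i in range(4)]
        for job in jobs:
            submit(job, sites)
            monitor(job)
        print(merge_and_register(jobs))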

    The CMS Monte Carlo Production System: Development and Design

    The CMS production system has undergone a major architectural upgrade from its predecessor, with the goal of reducing the operational manpower needed and preparing for the large-scale production required by the CMS physics plan. The new production system is a tiered architecture that facilitates robust and distributed processing of production requests and takes advantage of the multiple Grid and farm resources available to the CMS experiment.

    Distributed Computing Grid Experiences in CMS

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system.

    Performance of the CMS Cathode Strip Chambers with Cosmic Rays

    The Cathode Strip Chambers (CSCs) constitute the primary muon tracking device in the CMS endcaps. Their performance has been evaluated using data taken during a cosmic ray run in fall 2008. Measured noise levels are low, with the number of noisy channels well below 1%. Coordinate resolution was measured for all types of chambers and falls in the range of 47 to 243 microns. The efficiencies for local charged-track triggers, and for hit and segment reconstruction, were measured and are above 99%. The timing resolution per layer is approximately 5 ns.

    Performance and Operation of the CMS Electromagnetic Calorimeter

    The operation and general performance of the CMS electromagnetic calorimeter using cosmic-ray muons are described. These muons were recorded after the closure of the CMS detector in late 2008. The calorimeter is made of lead tungstate crystals, and the overall status of the 75,848 channels corresponding to the barrel and endcap detectors is reported. The stability of crucial operational parameters, such as high voltage, temperature and electronic noise, is summarised, and the performance of the light monitoring system is presented.

    CMS physics technical design report : Addendum on high density QCD with heavy ions

    Peer reviewed

    Calibration of the CMS Drift Tube Chambers and Measurement of the Drift Velocity with Cosmic Rays

    Peer reviewed

    CMS Data Processing Workflows during an Extended Cosmic Ray Run

    Peer reviewed

    Long- and short-range correlations and their event-scale dependence in high-multiplicity pp collisions at √s = 13 TeV

    Two-particle angular correlations are measured in high-multiplicity proton-proton collisions at √s = 13 TeV by the ALICE Collaboration. The yields of particle pairs at short-range (Δη ≈ 0) and long-range (1.6 < |Δη| < 1.8) in pseudorapidity are extracted on the near-side (Δφ ≈ 0). They are reported as a function of transverse momentum (pT) in the range 1 < pT < 4 GeV/c. Furthermore, the event-scale dependence is studied for the first time by requiring the presence of high-pT leading particles or jets for varying pT thresholds. The results demonstrate that the long-range "ridge" yield, possibly related to the collective behavior of the system, is present in events with high-pT processes as well. The magnitudes of the short- and long-range yields are found to grow with the event scale. The results are compared to EPOS LHC and PYTHIA 8 calculations, with and without string-shoving interactions. It is found that while both models describe the qualitative trends in the data, calculations from EPOS LHC show a better quantitative agreement for the pT dependence, while overestimating the event-scale dependence.
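    For context, two-particle correlation analyses of this kind typically report a per-trigger-normalized associated yield. The abstract does not spell out the exact estimator, so the expression below is only the standard form on which such near-side (Δφ ≈ 0) short- and long-range yields are based:

    Y(\Delta\eta,\Delta\varphi)
      = \frac{1}{N_{\mathrm{trig}}}
        \frac{\mathrm{d}^{2}N_{\mathrm{pair}}}{\mathrm{d}\Delta\eta\,\mathrm{d}\Delta\varphi},
    \qquad
    \Delta\eta = \eta_{\mathrm{assoc}} - \eta_{\mathrm{trig}},
    \quad
    \Delta\varphi = \varphi_{\mathrm{assoc}} - \varphi_{\mathrm{trig}}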