
    The CMS Computing System: Successes and Challenges

    Each LHC experiment will produce datasets with sizes of order one petabyte per year. All of this data must be stored, processed, transferred, simulated and analyzed, which requires a computing system of a larger scale than ever mounted for any particle physics experiment, and possibly for any enterprise in the world. I discuss how CMS has chosen to address these challenges, focusing on recent tests of the system that demonstrate the experiment's readiness for producing physics results with the first LHC data.

    Comment: To be published in the proceedings of DPF-2009, Detroit, MI, July 2009, eConf C09072

    CMS software and computing for LHC Run 2

    The CMS offline software and computing system has successfully met the challenge of LHC Run 2. In this presentation, we will discuss how the entire system was improved in anticipation of an increased trigger output rate, an increased rate of pileup interactions, and the evolution of computing technology. The primary goals behind these changes were to increase the flexibility of computing facilities wherever possible, to increase our operational efficiency, and to decrease the computing resources needed to accomplish the primary offline computing workflows. These changes have resulted in a new approach to distributed computing in CMS for Run 2 and for the future, as the LHC luminosity continues to increase. We will discuss changes and plans for our data federation, which was one of the key changes towards a more flexible computing model for Run 2. Our software framework and algorithms also underwent significant changes. We will summarize our experience with the new multi-threaded framework as deployed on our prompt reconstruction farm in 2015 and across the CMS WLCG Tier-1 facilities. We will also discuss our experience with an analysis data format that is ten times smaller than our primary Run 1 format. This "miniAOD" format has proven to be easier to analyze while remaining extremely flexible for analysts. Finally, we describe improvements to our workflow management system that have resulted in increased automation and reliability for all facets of CMS production and user analysis operations.

    Comment: Contribution to the proceedings of the 38th International Conference on High Energy Physics (ICHEP 2016)
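
    The multi-threaded framework mentioned above rests on one observation: collision events are independent, so they can be reconstructed concurrently against read-only shared conditions data. The following is a minimal, hypothetical sketch of that idea in Python; the real CMSSW framework is C++ built on Intel TBB, and the event fields and calibration constant here are invented for illustration.

```python
# Hypothetical sketch of a multi-threaded event-processing loop.
# Events are independent, so they are farmed out to worker threads;
# the shared Conditions object is immutable, so no locking is needed.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)
class Conditions:
    """Read-only calibration data shared safely across threads."""
    calibration: float

def reconstruct(event, conditions):
    """Per-event work: touches only its own event, so it is thread-safe."""
    return {"id": event["id"],
            "energy": event["raw_energy"] * conditions.calibration}

def process_events(events, conditions, n_threads=4):
    # Map independent events onto a pool of worker threads.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(lambda e: reconstruct(e, conditions), events))

if __name__ == "__main__":
    events = [{"id": i, "raw_energy": 10.0 * i} for i in range(8)]
    print(process_events(events, Conditions(calibration=1.02)))
```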

    W Boson Cross Section and Decay Properties at the Tevatron

    We present the first measurements of $\sigma(p\bar{p} \to W \to \ell\nu)$ and $\sigma(p\bar{p} \to Z \to \ell\ell)$ at $\sqrt{s} = 1.96$ TeV, along with new measurements of W angular-decay distributions in $p\bar{p}$ collisions at $\sqrt{s} = 1.8$ TeV.

    Comment: Submitted to the ICHEP 2002 proceedings

    CMS Computing: Performance and Outlook

    After years of development, the CMS distributed computing system is now in full operation. The LHC continues to set records for instantaneous luminosity, and CMS continues to record data at 300 Hz. Because of the intensity of the beams, there are multiple proton-proton interactions per beam crossing, leading to larger and larger event sizes and processing times. The CMS computing system has responded admirably to these challenges. We present the current status of the system, describe its recent performance, and discuss the challenges ahead and how we intend to meet them.

    Comment: Contribution to the Proceedings of the DPF-2011 Conference, Providence, RI, August 8-12, 2011
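
    The 300 Hz recording rate gives a feel for the data volumes behind these abstracts. A back-of-envelope estimate, using an assumed event size and LHC live time (illustrative values, not official CMS figures), lands at the petabyte-per-year scale quoted earlier:

```python
# Rough yearly raw-data volume at the trigger rate quoted in the abstract.
# Event size and live time are assumed values for illustration only.
trigger_rate_hz = 300         # CMS recording rate quoted above
event_size_mb = 0.5           # assumed raw event size; grows with pileup
live_seconds = 5.0e6          # assumed LHC live time per year

volume_pb = trigger_rate_hz * event_size_mb * live_seconds / 1e9
print(f"~{volume_pb:.2f} PB of raw data per year")  # ~0.75 PB at these inputs
```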

    CMS Use of a Data Federation

    CMS is in the process of deploying an Xrootd-based infrastructure to facilitate a global data federation. The services of the federation are available to export data from half of the physical storage capacity, and the majority of sites are configured to read data over the federation as a backup. CMS began with a relatively modest set of use-cases: recovery of failed local file opens, debugging, and visualization. CMS is finding that the data federation can also be used to support small-scale analysis and load balancing. Looking forward, we see potential in using the federation to provide more flexibility in where workflows are executed, as the difference between local access and wide-area access is diminished by optimization and improved networking. In this presentation we discuss the application development work and the facility deployment work, the use-cases currently in production, and the potential of the technology moving forward.
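
    The fallback use-case described above (recovering from a failed local file open by re-reading the same file over the federation) reduces to a simple resolution rule. A minimal sketch, with a hypothetical local storage prefix and a placeholder redirector host rather than real CMS site configuration:

```python
# Sketch of the federation fallback described above. The local prefix and
# redirector host are placeholders, not real CMS site configuration.
import os

FEDERATION_REDIRECTOR = "root://redirector.example.org/"  # placeholder host

def resolve(lfn, local_prefix="/storage"):
    """Return what a job should open for a logical file name (lfn):
    the local replica if present, otherwise a federation URL."""
    local_path = local_prefix + lfn
    if os.path.exists(local_path):
        return local_path                  # fast local access
    # Local replica missing: fall back to a wide-area read through
    # the Xrootd federation redirector.
    return FEDERATION_REDIRECTOR + lfn

# e.g. resolve("/store/data/Run2012A/events.root")
#      -> "/storage/store/data/..." locally, else a root:// federation URL
```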
