5 research outputs found

    The FEROL40, a microTCA card interfacing custom point-to-point links and standard TCP/IP

    In order to accommodate new back-end electronics of upgraded CMS sub-detectors, a new FEROL40 card in the microTCA standard has been developed. The main function of the FEROL40 is to acquire event data over multiple point-to-point serial optical links, provide buffering, perform protocol conversion, and transmit multiple TCP/IP streams (4x10Gbps) to the Ethernet network of the aggregation layer of the CMS DAQ (data acquisition) event builder. This contribution discusses the design of the FEROL40 and experience from its operation.
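    A minimal software sketch of the data path described above (multiple input links, buffering, protocol conversion, framed TCP streaming) is given below. It is purely illustrative: the real FEROL40 implements this in FPGA logic, and the host name, port numbers, frame header and helper functions are assumptions for the example.

        # Illustrative sketch only: a software analogue of the FEROL40 data path.
        # The aggregation host, ports, and 16-byte frame header are invented here.
        import itertools
        import os
        import queue
        import socket
        import struct
        import threading

        AGGREGATION_HOST = "ru-aggregation.example"  # hypothetical readout-unit host
        N_LINKS = 4                                  # the card drives 4 x 10 Gb/s TCP streams

        def read_fragment(link_id: int) -> bytes:
            """Placeholder for the point-to-point optical-link receiver."""
            return os.urandom(1024)                  # fabricated payload for the sketch

        def link_reader(link_id: int, fifo: "queue.Queue[bytes]") -> None:
            """Fill the per-link buffer, emulating on-card event buffering."""
            while True:
                fifo.put(read_fragment(link_id))

        def tcp_sender(link_id: int, fifo: "queue.Queue[bytes]") -> None:
            """Protocol conversion: prepend a simple header and stream fragments over TCP."""
            with socket.create_connection((AGGREGATION_HOST, 10000 + link_id)) as sock:
                for seq in itertools.count():
                    fragment = fifo.get()
                    header = struct.pack("!IQI", link_id, seq, len(fragment))
                    sock.sendall(header + fragment)

        senders = []
        for link in range(N_LINKS):
            fifo: "queue.Queue[bytes]" = queue.Queue(maxsize=1024)  # bounded per-link buffer
            threading.Thread(target=link_reader, args=(link, fifo), daemon=True).start()
            sender = threading.Thread(target=tcp_sender, args=(link, fifo), daemon=True)
            sender.start()
            senders.append(sender)
        for sender in senders:
            sender.join()  # keep the process alive while the streams run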

    CMS DAQ Current and Future Hardware Upgrades up to Post Long Shutdown 3 (LS3) Times

    Following the first LHC collisions seen and recorded by CMS in 2009, the DAQ hardware went through a major upgrade during LS1 (2013-2014), and new detectors were connected during the 2015-2016 and 2016-2017 winter shutdowns. Now, LS2 (2019-2020) and LS3 (2024-mid 2026) are actively being prepared. This paper shows how the CMS DAQ hardware has evolved from the beginning and will continue to evolve to meet the future challenges posed by the High-Luminosity LHC (HL-LHC) and the evolution of the CMS detector. Particular focus is placed on post-LS3 DAQ architectures.

    Opportunistic usage of the CMS online cluster using a cloud overlay

    After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three-year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage around 20 TBytes of data per day, which are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure that links hundreds of data centres worldwide. 3000 CMS physicists can access and process these data and are always seeking more computing power and data.

    The backbone of the CMS Online cluster is composed of 16000 cores, which provide as much computing power as all CMS WLCG Tier1 sites (a 352K HEP-SPEC-06 score in the CMS cluster versus 300K across the CMS Tier1 sites). This computing power can significantly speed up the processing of data, so an effort has been made to allocate the resources of the CMS Online cluster to the grid when it is not used to its full capacity for data acquisition. This occurs during the maintenance periods when the LHC is non-operational, which corresponded to 117 days in 2015. During 2016, the aim is to increase the availability of the CMS Online cluster for data processing by making the cluster accessible during the time between two periods of physics collisions, while the LHC and the beams are being prepared. This is usually the case for a few hours every day and would vastly increase the computing power available for data processing.

    Work has already been undertaken to provide this functionality: an OpenStack cloud layer has been deployed as a minimal overlay that leaves the primary role of the cluster untouched and abstracts the different hardware and networks that the cluster is composed of. The operation of the cloud (starting and stopping the virtual machines) is another challenge that has been overcome, as the cluster has only a few hours to spare during the aforementioned beam preparation. By improving the virtual-image deployment and integrating the OpenStack services with the core services of the Data Acquisition on the CMS Online cluster, it is now possible to start a thousand virtual machines within 10 minutes and to turn them off within seconds.

    This document will explain the architectural choices that were made to reach a fully redundant and scalable cloud, with minimal impact on the running cluster configuration while keeping maximal segregation between the services. It will also present how to cold-start 1000 virtual machines 25 times faster, using tools commonly utilised in data centres.
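    As an illustration of the bulk start/stop workflow described above, the sketch below uses the generic openstacksdk client. The cloud name, image, flavour and network identifiers are placeholders, and the actual CMS tooling is integrated with the DAQ core services rather than driven by a standalone script like this.

        # Illustrative sketch only: bulk-start and bulk-delete virtual machines on an
        # OpenStack overlay via openstacksdk. All identifiers below are placeholders.
        from concurrent.futures import ThreadPoolExecutor

        import openstack

        CLOUD = "daq-overlay"        # hypothetical entry in clouds.yaml
        IMAGE_ID = "IMAGE-UUID"      # placeholder: pre-staged worker image
        FLAVOR_ID = "FLAVOR-UUID"    # placeholder: flavour matching the DAQ nodes
        NETWORK_ID = "NETWORK-UUID"  # placeholder: overlay network
        N_VMS = 1000

        def start_all() -> None:
            """Launch the worker VMs in parallel, then wait until they are ACTIVE."""
            conn = openstack.connect(cloud=CLOUD)

            def boot(index: int):
                return conn.compute.create_server(
                    name=f"grid-worker-{index:04d}",
                    image_id=IMAGE_ID,
                    flavor_id=FLAVOR_ID,
                    networks=[{"uuid": NETWORK_ID}],
                )

            with ThreadPoolExecutor(max_workers=50) as pool:
                servers = list(pool.map(boot, range(N_VMS)))
            for server in servers:
                conn.compute.wait_for_server(server)

        def stop_all() -> None:
            """Tear the overlay down again before data taking resumes."""
            conn = openstack.connect(cloud=CLOUD)
            for server in conn.compute.servers():
                if server.name.startswith("grid-worker-"):
                    conn.compute.delete_server(server)

        if __name__ == "__main__":
            start_all()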

    Energy Resolution of the Barrel of the CMS Electromagnetic Calorimeter

    The energy resolution of the barrel part of the CMS Electromagnetic Calorimeter has been studied using electrons of 20 to 250 GeV in a test beam. The incident electron's energy was reconstructed by summing the energy measured in arrays of 3x3 or 5x5 channels. No significant amount of correlated noise was observed within these arrays. For electrons incident at the centre of the studied 3x3 arrays of crystals, the mean stochastic term was measured to be 2.8% and the mean constant term to be 0.3%. The amount of the incident electron's energy contained within the array depends on its position of incidence. The variation of the containment with position is corrected for using the distribution of the measured energy within the array. For uniform illumination of a crystal with 120 GeV electrons, a resolution of 0.5% was achieved. The energy resolution meets the design goal for the detector.
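    For reference, the stochastic and constant terms quoted above enter the conventional calorimeter resolution parametrization, with E in GeV (the noise term N is not quoted in this abstract and is left symbolic):

        \left(\frac{\sigma_E}{E}\right)^2 = \left(\frac{S}{\sqrt{E}}\right)^2 + \left(\frac{N}{E}\right)^2 + C^2

    Neglecting the noise contribution, S = 2.8% and C = 0.3% give sigma_E/E of roughly 0.4% at E = 120 GeV for centrally incident electrons; the 0.5% figure quoted above refers to uniform illumination of a crystal.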

    Results of the first performance tests of the CMS electromagnetic calorimeter

    Performance tests of some aspects of the CMS ECAL were carried out on modules of the "barrel" sub-system in 2002 and 2003. A brief test with high-energy electron beams was made in late 2003 to validate prototypes of the new Very Front End electronics. The final versions of the monitoring and cooling systems, and of the high- and low-voltage regulation, were used in these tests. The results are consistent with the performance targets, including those for noise and overall energy resolution, required to fulfil the physics programme of CMS at the LHC.