74 research outputs found

    10 Gbps TCP/IP streams from the FPGA for High Energy Physics

    Get PDF
    The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 the CMS DAQ system will undergo a major upgrade to address the obsolescence of the current hardware and the requirements posed by the upgrade of the LHC accelerator and of various detector components. For lossless data collection from the FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the complexity of the TCP hardware implementation, the DAQ group developed a simplified, unidirectional, but RFC 793 compliant version of the TCP protocol. This allows a PC with the standard Linux TCP/IP stack to be used as the receiver. We present the challenges and the protocol modifications made to TCP in order to simplify its FPGA implementation. We also describe the interaction between the simplified TCP and the Linux TCP/IP stack, including performance measurements.
    United States. Dept. of Energy; National Science Foundation (U.S.); Marie Curie International Fellowship
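    As a rough illustration of the receiving side described above, the sketch below opens a plain TCP socket with the standard Java networking API and drains a unidirectional stream, much as a Linux receiver would with its ordinary TCP/IP stack. The port number, buffer size, and class name are assumptions made for this example; they are not taken from the paper.

        // Minimal sketch of a receiver for a unidirectional TCP stream (illustrative only).
        import java.io.InputStream;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class FedStreamReceiver {
            public static void main(String[] args) throws Exception {
                try (ServerSocket server = new ServerSocket(10000)) {      // port 10000 is an arbitrary choice
                    try (Socket sender = server.accept();                  // the FPGA sender opens the connection
                         InputStream in = sender.getInputStream()) {
                        byte[] buf = new byte[64 * 1024];
                        long total = 0;
                        int n;
                        while ((n = in.read(buf)) != -1) {                 // unidirectional: the receiver only reads
                            total += n;                                    // hand buf[0..n) to the next DAQ stage here
                        }
                        System.out.println("stream closed after " + total + " bytes");
                    }
                }
            }
        }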

    The new CMS DAQ system for LHC operation after 2014 (DAQ2)

    Get PDF
    The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GByte/s. We present the design of the second-generation DAQ system, including studies of the event builder based on advanced networking technologies such as 10 and 40 Gbit/s Ethernet and 56 Gbit/s FDR InfiniBand, and the exploitation of multicore CPU architectures. By the time the LHC restarts after the 2013/14 shutdown, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime. In order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The second-generation DAQ system, foreseen for 2014, will need to accommodate the readout of both existing and new off-detector electronics and provide an increased throughput capacity. Advances in storage technology could make it feasible to write the output of the event builder to (RAM or SSD) disks and implement the HLT processing entirely file-based.
    United States. Dept. of Energy; National Science Foundation (U.S.); Marie Curie International Fellowship
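    To make the event-building step concrete, the toy sketch below collects one fragment per source for each event number and emits the event once all sources have reported, which is the essence of what the event builder does across the network. The class, record, and field names are invented for the illustration and do not reflect the actual CMS software.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Toy, single-threaded event builder: in reality fragments arrive over the network from hundreds of sources.
        public class ToyEventBuilder {
            record Fragment(long eventNumber, int sourceId, byte[] payload) {}

            private final int nSources;
            private final Map<Long, List<Fragment>> pending = new HashMap<>();

            ToyEventBuilder(int nSources) { this.nSources = nSources; }

            // Returns the complete event when the last fragment arrives, otherwise null.
            List<Fragment> addFragment(Fragment f) {
                List<Fragment> frags = pending.computeIfAbsent(f.eventNumber(), k -> new ArrayList<>());
                frags.add(f);
                if (frags.size() == nSources) {
                    pending.remove(f.eventNumber());
                    return frags;                     // hand the assembled event to the HLT
                }
                return null;
            }
        }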

    First measurements with the CMS DAQ and Timing Hub prototype-1

    No full text
    © Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0). The DAQ and Timing Hub is an ATCA hub board designed for the Phase-2 upgrade of the CMS experiment. In addition to providing high-speed Ethernet connectivity to all back-end boards, it forms the bridge between the sub-detector electronics and the central DAQ, timing, and trigger control systems. One important requirement is the distribution of several high-precision, phase-stable, and LHC-synchronous clock signals for use by the timing detectors. The current paper presents first measurements performed on the initial prototype, with a focus on clock quality. It is demonstrated that the current design provides adequate clock quality to satisfy the requirements of the Phase-2 CMS timing detectors.
    United States. Department of Energy; National Science Foundation (U.S.)
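    As a hint at what "clock quality" means quantitatively, the sketch below computes RMS and peak-to-peak jitter from a handful of phase-error samples. The sample values and names are made up for illustration; the actual figures and measurement method are those reported in the paper.

        // Illustrative clock-quality figures from phase-error samples (values are invented).
        public class JitterStats {
            public static void main(String[] args) {
                double[] phaseErrorPs = {1.2, -0.8, 0.3, 2.1, -1.5, 0.9, -0.2, 1.0};

                double mean = 0;
                for (double x : phaseErrorPs) mean += x;
                mean /= phaseErrorPs.length;

                double sumSq = 0, min = phaseErrorPs[0], max = phaseErrorPs[0];
                for (double x : phaseErrorPs) {
                    sumSq += (x - mean) * (x - mean);
                    min = Math.min(min, x);
                    max = Math.max(max, x);
                }
                double rms = Math.sqrt(sumSq / phaseErrorPs.length);
                System.out.printf("RMS jitter %.2f ps, peak-to-peak %.2f ps%n", rms, max - min);
            }
        }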

    Design and development of the DAQ and Timing Hub for CMS Phase-2

    No full text
    © Copyright owned by the author(s) under the terms of the Creative Commons. The CMS detector will undergo a major upgrade for Phase-2 of the LHC program, starting around 2026. The upgraded Level-1 hardware trigger will select events at a rate of 750 kHz. At an expected event size of 7.4 MB this corresponds to a data rate of up to 50 Tbit/s. Optical links will carry the signals from on-detector front-end electronics to back-end electronics in ATCA crates in the service cavern. A DAQ and Timing Hub board aggregates data streams from back-end boards over point-to-point links, provides buffering, and transmits the data to the commercial data-to-surface network for processing and storage. This hub board is also responsible for the distribution of timing, control, and trigger signals to the back-ends. This paper presents the current development towards the DAQ and Timing Hub and the design of the first prototype, to be used for validation and integration with the first back-end prototypes in 2019-2020.
    United States. Department of Energy; National Science Foundation (U.S.)
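    A quick cross-check of the quoted data rate from the two numbers in the abstract (the class name is just for the example):

        public class Phase2RateCheck {
            public static void main(String[] args) {
                double l1RateHz   = 750e3;   // upgraded Level-1 accept rate
                double eventBytes = 7.4e6;   // expected event size
                double tbitPerSec = l1RateHz * eventBytes * 8 / 1e12;
                // Prints about 44.4 Tbit/s, consistent with the quoted "up to 50 Tbit/s" once headroom is included.
                System.out.printf("%.1f Tbit/s%n", tbitPerSec);
            }
        }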

    Extending the remote control capabilities in the CMS detector control system with remote procedure call services

    No full text
    Copyright © 2019 by JACoW, under the Creative Commons Attribution 3.0 license (CC BY 3.0). The CMS Detector Control System (DCS) is implemented as a large, distributed, and redundant system, with applications interacting and sharing data in multiple ways. The CMS XML-RPC is a software toolkit implementing the standard Remote Procedure Call (RPC) protocol, using the Extensible Markup Language (XML), and a custom lightweight variant using the JavaScript Object Notation (JSON), to model, encode, and expose resources through the Hypertext Transfer Protocol (HTTP). The CMS XML-RPC toolkit complies with the standard specification of the XML-RPC protocol, which allows system developers to build collaborative software architectures with self-contained and reusable logic and with encapsulation of well-defined processes. The implementation of this protocol introduces not only a powerful communication method to operate and exchange data with web-based applications, but also a new programming paradigm for designing service-oriented software architectures within the CMS DCS domain. This paper presents details of the CMS XML-RPC implementation in the WinCC Open Architecture (OA) Control Language using an object-oriented approach.
    Swiss National Science Foundation
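    For readers unfamiliar with XML-RPC, the sketch below issues a standard XML-RPC call by posting an XML methodCall payload over HTTP, here with Java's built-in HttpClient. The endpoint URL, method name, and parameter are hypothetical; the CMS toolkit itself is written in the WinCC OA Control Language, not Java.

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class XmlRpcCallExample {
            public static void main(String[] args) throws Exception {
                // Hypothetical endpoint and method name, purely for illustration.
                String endpoint = "http://dcs-gateway.example/RPC2";
                String payload =
                    "<?xml version=\"1.0\"?>"
                    + "<methodCall>"
                    + "<methodName>dcs.getDeviceState</methodName>"
                    + "<params><param><value><string>HV/channel042</string></value></param></params>"
                    + "</methodCall>";

                HttpClient client = HttpClient.newHttpClient();
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(endpoint))
                        .header("Content-Type", "text/xml")
                        .POST(HttpRequest.BodyPublishers.ofString(payload))
                        .build();

                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.body());   // the server answers with an XML methodResponse
            }
        }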

    The CMS Event-Builder System for LHC Run 3 (2021-23)

    No full text
    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events of 2 MB at a rate of 100 kHz. The event builder collects event fragments from about 750 sources and assembles them into complete events, which are then handed to the High-Level Trigger (HLT) processes running on O(1000) computers. The aging event-building hardware will be replaced during the long shutdown 2 of the LHC taking place in 2019/20. The future data networks will be based on 100 Gb/s interconnects using Ethernet and InfiniBand technologies. More powerful computers may make it possible to combine the currently separate functionality of the readout and builder units into a single I/O processor handling 100 Gb/s of input and output traffic simultaneously. It might be beneficial to preprocess data originating from specific detector parts or regions before handing it to generic HLT processors. Therefore, we will investigate how specialized coprocessors, e.g. GPUs, could be integrated into the event builder. We will present the envisioned changes to the event builder compared to today's system. Initial measurements of the performance of the data networks under the event-building traffic pattern will be shown. Implications of a folded network architecture for the event building and corresponding changes to the software implementation will be discussed.
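    A back-of-envelope sizing, using only the numbers in the abstract, shows why combining readout and builder units onto 100 Gb/s I/O processors is attractive; the minimum node count below ignores protocol overhead and headroom.

        public class Run3EvbSizing {
            public static void main(String[] args) {
                double aggregateBytesPerSec = 2e6 * 100e3;   // 2 MB events at 100 kHz = 200 GB/s
                double perNodeBytesPerSec   = 100e9 / 8;     // one 100 Gb/s link = 12.5 GB/s each way
                double minNodes = aggregateBytesPerSec / perNodeBytesPerSec;
                // Prints 16: a lower bound on combined readout/builder nodes, before any headroom.
                System.out.printf("at least %.0f combined readout/builder nodes%n", minNodes);
            }
        }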

    Operational experience with the new CMS DAQ-Expert

    No full text
    The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN reads out the detector at the level-1 trigger accept rate of 100 kHz, assembles events with a bandwidth of 200 GB/s, provides these events to the high-level trigger running on a farm of about 30k cores, and records the accepted events. Comprising custom-built and cutting-edge commercial hardware and several thousand instances of software applications, the DAQ system is complex in itself and failures cannot be completely excluded. Moreover, problems in the readout of the detectors, in the first-level trigger system, or in the high-level trigger may provoke anomalous behaviour of the DAQ system which sometimes cannot easily be differentiated from a problem in the DAQ system itself. In order to achieve high data-taking efficiency with operators from the entire collaboration and without relying too heavily on the on-call experts, an expert system, the DAQ-Expert, has been developed that can pinpoint the source of most failures and give advice to the shift crew on how to recover in the quickest way. The DAQ-Expert constantly analyzes monitoring data from the DAQ system and the high-level trigger by making use of logic modules written in Java that encapsulate the expert knowledge about potential operational problems. The results of the reasoning are presented to the operator in a web-based dashboard, may trigger sound alerts in the control room, and are archived for post-mortem analysis, presented in a web-based timeline browser. We present the design of the DAQ-Expert and report on the operational experience since 2017, when it was first put into production.
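    The abstract notes that the expert knowledge lives in Java logic modules; the sketch below shows one plausible shape such a module could take. The interface, record, and field names are invented for the illustration and are not the actual DAQ-Expert API.

        import java.util.Optional;

        // Hypothetical logic module: inspect a monitoring snapshot and, if its condition
        // matches, return a diagnosis with recovery advice for the shift crew.
        interface LogicModule {
            Optional<Diagnosis> evaluate(DaqSnapshot snapshot);
        }

        record DaqSnapshot(double l1RateHz, boolean fedBackpressure, String fedName) {}
        record Diagnosis(String problem, String adviceForShiftCrew) {}

        class StuckFedModule implements LogicModule {
            @Override
            public Optional<Diagnosis> evaluate(DaqSnapshot s) {
                // Condition: the trigger rate has dropped to zero while a FED exerts back-pressure.
                if (s.l1RateHz() == 0 && s.fedBackpressure()) {
                    return Optional.of(new Diagnosis(
                        "FED " + s.fedName() + " is blocking data taking (back-pressure, rate = 0 Hz)",
                        "Stop the run, resynchronize or mask the FED, then restart the run"));
                }
                return Optional.empty();
            }
        }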