
    10 Gbps TCP/IP streams from the FPGA for High Energy Physics

    The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 the CMS DAQ system will undergo a major upgrade to address the obsolescence of the current hardware and the requirements posed by the upgrade of the LHC accelerator and of various detector components. For loss-less data collection from the FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the complexity of the TCP hardware implementation, the DAQ group developed a simplified, unidirectional, but RFC 793-compliant version of the TCP protocol. This allows a PC with the standard Linux TCP/IP stack to be used as a receiver. We present the challenges and the protocol modifications made to TCP in order to simplify its FPGA implementation. We also describe the interaction between the simplified TCP and the Linux TCP/IP stack, including performance measurements.
    (Funding: United States Department of Energy; National Science Foundation (U.S.); Marie Curie International Fellowship)
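
    The key point is that the FPGA-side simplification leaves the receiver untouched: a PC with the stock Linux TCP/IP stack simply accepts a connection and reads a byte stream. A minimal sketch of such a receiver is shown below; the port number, read size, and framing are illustrative assumptions, not taken from the paper.

        # Hedged sketch of a receiver for the FPGA's simplified, unidirectional
        # TCP stream, relying only on the standard Linux TCP/IP stack.
        import socket

        LISTEN_PORT = 10000  # hypothetical; the paper does not specify a port

        def receive_stream():
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", LISTEN_PORT))
            srv.listen(1)
            conn, _addr = srv.accept()      # the FPGA opens the connection
            total = 0
            while True:
                chunk = conn.recv(1 << 20)  # 1 MiB reads; tune for 10 Gbps
                if not chunk:               # unidirectional: sender closes when done
                    break
                total += len(chunk)
            conn.close()
            return total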

    The new CMS DAQ system for LHC operation after 2014 (DAQ2)

    The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GByte/s. We present the design of the second-generation DAQ system, including studies of the event builder based on advanced networking technologies such as 10 and 40 Gbit/s Ethernet and 56 Gbit/s FDR Infiniband, and the exploitation of multi-core CPU architectures. By the time the LHC restarts after the 2013/14 shutdown, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime. In order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The second-generation DAQ system, foreseen for 2014, will need to accommodate the readout of both existing and new off-detector electronics and provide an increased throughput capacity. Advances in storage technology could make it feasible to write the output of the event builder to (RAM or SSD) disks and implement the HLT processing entirely file-based.
    (Funding: United States Department of Energy; National Science Foundation (U.S.); Marie Curie International Fellowship)
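
    As a quick sanity check of the figures quoted in the abstract (a sketch, not from the paper): 100 GByte/s at a 100 kHz event rate implies an average event size of about 1 MB, and minimum link counts for the candidate network technologies follow directly.

        # Back-of-envelope check of the DAQ2 numbers quoted above.
        event_rate = 100e3      # Level-1 accept rate, events/s
        throughput = 100e9      # aggregate event-builder throughput, bytes/s
        print(throughput / event_rate / 1e6)   # -> 1.0 (MB average event size)

        # Minimum link counts, ignoring protocol overhead:
        for name, gbps in [("10GbE", 10), ("40GbE", 40), ("FDR IB", 56)]:
            links = throughput * 8 / (gbps * 1e9)
            print(name, links)   # -> 80.0, 20.0, ~14.3 links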

    The CMS Orbit Builder for the HL-LHC at CERN

    The Compact Muon Solenoid (CMS) experiment at CERN incorporates one of the highest-throughput data acquisition systems in the world, and its throughput is expected to increase by more than a factor of ten for the High-Luminosity phase of the Large Hadron Collider (HL-LHC). To achieve this goal, most components of the system will be upgraded. Among them, the event builder software, in charge of assembling all the data read out from the different sub-detectors, is planned to evolve from a single-event builder to an orbit builder that assembles multiple events at the same time. The throughput of the event builder will be increased from the current 1.6 Tb/s to 51 Tb/s for the HL-LHC orbit builder. This paper presents preliminary network transfer studies in preparation for the upgrade. The key conceptual characteristics are discussed, concerning the differences between the CMS event builder in Run 3 and the CMS Orbit Builder for the HL-LHC. For the feasibility studies, a pipestream benchmark mimicking event-builder-like traffic has been developed. Preliminary performance tests and results are discussed.
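
    The quoted step from 1.6 Tb/s to 51 Tb/s is roughly a factor of 32. The conceptual shift can be illustrated with a hedged sketch (not CMS code): a classic event builder forms one build unit per event, while an orbit builder groups every event belonging to the same LHC orbit into a single unit, amortizing per-transfer overhead. The function names and the event-id-to-orbit mapping below are illustrative assumptions; 3564 is the number of bunch slots in one LHC orbit.

        from collections import defaultdict

        def build_events(fragments):
            """fragments: iterable of (event_id, subdetector, data) tuples."""
            events = defaultdict(dict)
            for event_id, subdet, data in fragments:
                events[event_id][subdet] = data
            return events                  # one build unit per event

        def build_orbits(fragments, bunches_per_orbit=3564):
            """Assemble all events of the same orbit in one build unit
            (the event-id encoding here is a hypothetical stand-in)."""
            orbits = defaultdict(lambda: defaultdict(dict))
            for event_id, subdet, data in fragments:
                orbit = event_id // bunches_per_orbit
                orbits[orbit][event_id][subdet] = data
            return orbits                  # one build unit per orbit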

    MiniDAQ-3: Providing concurrent independent subdetector data-taking on CMS production DAQ resources

    The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN collects data for events accepted by the Level-1 Trigger from the different detector systems, assembles them in an event builder prior to making them available for further selection in the High Level Trigger, and finally stores the selected events for offline analysis. In addition to the central DAQ providing global acquisition functionality, several separate, so-called "MiniDAQ" setups allow operating independent data acquisition runs using an arbitrary subset of the CMS subdetectors. During Run 2 of the LHC, MiniDAQ setups ran their event builder and High Level Trigger applications on dedicated resources, separate from those used for the central DAQ. This cleanly separated the MiniDAQ setups from the central DAQ system, but also meant limited throughput and a fixed number of possible MiniDAQ setups. In Run 3, MiniDAQ-3 setups share production resources with the new central DAQ system, allowing each setup to operate at the maximum Level-1 rate thanks to the reuse of resources and network bandwidth. Configuration management tools had to be significantly extended to support the synchronization of the DAQ configurations needed for the various setups. We report on the new configuration management features and on the first year of operational experience with the new MiniDAQ-3 system.

    Towards a container-based architecture for CMS data acquisition

    The CMS data acquisition (DAQ) system is implemented as a service-oriented architecture where DAQ applications, as well as general applications such as monitoring and error reporting, run as self-contained services. Deployment and operation of these services is currently achieved using several heterogeneous facilities, custom configuration data, and scripts in several languages. In this work, we restructure the existing system into a homogeneous, scalable cloud architecture adopting a uniform paradigm, where all applications are orchestrated in a uniform environment with standardized facilities. In this new paradigm, DAQ applications are organized as groups of containers and the required software is packaged into container images. Automation of all aspects of coordinating and managing containers is provided by the Kubernetes environment, where a set of physical and virtual machines is unified in a single pool of compute resources. We demonstrate that a container-based cloud architecture provides an across-the-board solution that can be applied to DAQ in CMS. We show the strengths and advantages of running DAQ applications in a container infrastructure as compared to a traditional application model.
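
    The abstract describes Kubernetes-orchestrated DAQ applications packaged as container images. As a hedged illustration of that pattern, the sketch below creates a Deployment for a containerized DAQ application using the official Kubernetes Python client; the application name, image, namespace, and replica count are invented for the example and are not taken from the paper.

        # Hedged sketch: deploying a containerized DAQ application as a
        # Kubernetes Deployment via the official Python client.
        from kubernetes import client, config

        def deploy_daq_app(name="daq-evb", image="cms-daq/evb:latest", replicas=4):
            config.load_kube_config()      # or load_incluster_config() inside a pod
            labels = {"app": name}
            deployment = client.V1Deployment(
                metadata=client.V1ObjectMeta(name=name),
                spec=client.V1DeploymentSpec(
                    replicas=replicas,
                    selector=client.V1LabelSelector(match_labels=labels),
                    template=client.V1PodTemplateSpec(
                        metadata=client.V1ObjectMeta(labels=labels),
                        spec=client.V1PodSpec(
                            containers=[client.V1Container(name=name, image=image)]
                        ),
                    ),
                ),
            )
            client.AppsV1Api().create_namespaced_deployment(
                namespace="daq", body=deployment   # "daq" namespace is an assumption
            )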

    First measurements with the CMS DAQ and Timing Hub prototype-1

    The DAQ and Timing Hub is an ATCA hub board designed for the Phase-2 upgrade of the CMS experiment. In addition to providing high-speed Ethernet connectivity to all back-end boards, it forms the bridge between the sub-detector electronics and the central DAQ, timing, and trigger control systems. One important requirement is the distribution of several high-precision, phase-stable, and LHC-synchronous clock signals for use by the timing detectors. This paper presents first measurements performed on the initial prototype, with a focus on clock quality. It is demonstrated that the current design provides adequate clock quality to satisfy the requirements of the Phase-2 CMS timing detectors.
    (Funding: United States Department of Energy; National Science Foundation (U.S.))

    Design and development of the DAQ and Timing Hub for CMS Phase-2

    The CMS detector will undergo a major upgrade for Phase-2 of the LHC program, starting around 2026. The upgraded Level-1 hardware trigger will select events at a rate of 750 kHz. At an expected event size of 7.4 MB, this corresponds to a data rate of up to 50 Tbit/s. Optical links will carry the signals from on-detector front-end electronics to back-end electronics in ATCA crates in the service cavern. A DAQ and Timing Hub board aggregates data streams from back-end boards over point-to-point links, provides buffering, and transmits the data to the commercial data-to-surface network for processing and storage. This hub board is also responsible for the distribution of timing, control, and trigger signals to the back-ends. This paper presents the current development towards the DAQ and Timing Hub and the design of the first prototype, to be used for validation and integration with the first back-end prototypes in 2019-2020.
    (Funding: United States Department of Energy; National Science Foundation (U.S.))
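
    The quoted numbers are easy to cross-check (a sketch, not from the paper): 750 kHz times 7.4 MB gives about 44 Tbit/s of payload, consistent with the "up to 50 Tbit/s" figure once headroom and overhead are allowed for.

        # Consistency check of the Phase-2 readout numbers quoted above.
        rate = 750e3    # Level-1 accept rate, events/s
        size = 7.4e6    # expected event size, bytes
        print(rate * size * 8 / 1e12)   # -> 44.4 Tbit/s of raw payload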

    Extending the remote control capabilities in the CMS detector control system with remote procedure call services

    The CMS Detector Control System (DCS) is implemented as a large distributed and redundant system, with applications interacting and sharing data in multiple ways. CMS XML-RPC is a software toolkit implementing the standard Remote Procedure Call (RPC) protocol, using the Extensible Markup Language (XML), and a custom lightweight variant using JavaScript Object Notation (JSON), to model, encode, and expose resources through the Hypertext Transfer Protocol (HTTP). The CMS XML-RPC toolkit complies with the standard specification of the XML-RPC protocol, which allows system developers to build collaborative software architectures with self-contained, reusable logic and encapsulation of well-defined processes. The implementation of this protocol introduces not only a powerful communication method to operate and exchange data with web-based applications, but also a new programming paradigm for designing service-oriented software architectures within the CMS DCS domain. This paper presents details of the CMS XML-RPC implementation in WinCC Open Architecture (OA) Control Language using an object-oriented approach.
    (Funding: Swiss National Science Foundation)
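
    Since the paper's implementation is in WinCC OA Control Language, an exact listing is out of scope here; as a hedged illustration of the same RPC pattern, the sketch below exposes a DCS-style resource over XML-RPC with Python's standard library. The method name and returned fields are invented for the example.

        # Hedged sketch of the XML-RPC pattern using Python's stdlib,
        # standing in for the WinCC OA implementation described above.
        import threading
        import xmlrpc.client
        from xmlrpc.server import SimpleXMLRPCServer

        def read_setpoint(channel):
            """Hypothetical DCS-style resource exposed over HTTP/XML-RPC."""
            return {"channel": channel, "voltage": 1500.0}

        server = SimpleXMLRPCServer(("localhost", 8080), logRequests=False)
        server.register_function(read_setpoint)
        threading.Thread(target=server.serve_forever, daemon=True).start()

        # Any XML-RPC client can now call the exposed method:
        proxy = xmlrpc.client.ServerProxy("http://localhost:8080/")
        print(proxy.read_setpoint("HV_channel_42"))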