    Dynamic configuration of the CMS Data Acquisition cluster

    The CMS Data Acquisition cluster, which runs around 10,000 applications, is configured dynamically at run time. XML configuration documents determine which applications are executed on each node and over which networks these applications communicate. Through this mechanism the DAQ system may be adapted to the required performance, partitioned in order to perform (test) runs in parallel, or restructured in case of hardware faults. This paper presents the CMS DAQ Configurator tool, which is used to generate comprehensive configurations of the CMS DAQ system from a high-level description given by the user. Using a database of configuration templates and a database containing a detailed model of hardware modules, data and control links, nodes, and the network topology, the tool automatically determines which applications are needed, on which nodes they should run, and over which networks the event traffic will flow. The tool computes application parameters and generates the XML configuration documents as well as the configuration of the run-control system. The performance of the tool and operational experience during CMS commissioning and the first LHC runs are discussed.
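
    As an illustration of the generation step described above, the following is a minimal sketch, assuming a hypothetical high-level description, of how a tool of this kind can emit an XML configuration document mapping applications to nodes and networks. The node, network, and application names, and the XML schema itself, are invented for the example and are not the CMS DAQ Configurator's actual format.

        import xml.etree.ElementTree as ET

        # Hypothetical high-level description: which applications run on which
        # node and which network carries their event traffic.
        topology = {
            "bu-node-01": {"network": "event-net-a", "apps": ["BuilderUnit", "EventManager"]},
            "fu-node-07": {"network": "event-net-b", "apps": ["FilterUnit"]},
        }

        root = ET.Element("Partition")
        for node, spec in topology.items():
            ctx = ET.SubElement(root, "Context", host=node, network=spec["network"])
            for app in spec["apps"]:
                ET.SubElement(ctx, "Application", {"class": app})

        ET.indent(root)  # pretty-print; available since Python 3.9
        print(ET.tostring(root, encoding="unicode"))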

    An analysis of the control hierarchy modeling of the CMS detector control system

    The supervisory level of the Detector Control System (DCS) of the CMS experiment is implemented using Finite State Machines (FSM), which model the behaviours and control the operations of all the sub-detectors and support services. The FSM tree of the whole CMS experiment consists of more than 30,000 nodes. An analysis of a system of such size is a complex task but is a crucial step towards improving the overall performance of the FSM system. This paper presents an analysis of the CMS FSM system using the micro Common Representation Language 2 (mCRL2) methodology. Individual mCRL2 models are obtained for the FSM systems of the CMS sub-detectors using the ASF+SDF automated translation tool, and different mCRL2 operations are applied to these models. An mCRL2 simulation tool is used to examine the system more closely, and another mCRL2 tool enables visualization of the system based on the exploration of its state space. Requirements such as command and state propagation are expressed in modal mu-calculus and checked using a model-checking algorithm, while local requirements such as freedom from endless loops are checked using the Bounded Model Checking technique. This paper discusses these analysis techniques and presents the results of their application to the CMS FSM system.
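
    The core command-propagation requirement can be pictured without mCRL2: a command issued at the root of the FSM tree must reach every descendant node. The toy model below, with invented node names, sketches that property as a plain Python check; the real analysis expresses it in modal mu-calculus over the translated mCRL2 models.

        # Toy FSM tree as a parent -> children map; node names are invented.
        tree = {"CMS": ["Tracker", "ECAL"], "Tracker": ["TEC+", "TEC-"], "ECAL": []}

        def propagate(node, command, received):
            """Deliver a command to a node and, recursively, to all its children."""
            received.add(node)
            for child in tree.get(node, []):
                propagate(child, command, received)

        received = set()
        propagate("CMS", "TURN_ON", received)

        # Command-propagation requirement: every node in the tree saw the command.
        all_nodes = set(tree) | {c for children in tree.values() for c in children}
        assert received == all_nodes, "command did not reach every FSM node"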

    The Data Acquisition System of the CMS Experiment at LHC

    The data acquisition (DAQ) system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first-level hardware trigger. Assembled events are made available to the high-level trigger, which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. An overview of the architecture and design of the hardware and software of the DAQ system is given, and we discuss the performance and operational experience from the first months of LHC physics data taking.
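
    The quoted figures fix the average event and fragment sizes, as the back-of-the-envelope calculation below shows (the derived numbers are implied by the abstract's figures rather than stated in it).

        # Sizes implied by the design figures: 100 kHz, 100 GB/s, ~500 sources.
        rate_hz = 100e3          # first-level trigger accept rate
        throughput_bps = 100e9   # aggregated throughput in bytes/s
        n_sources = 500          # approximate number of read-out sources

        event_size = throughput_bps / rate_hz     # ~1 MB per assembled event
        fragment_size = event_size / n_sources    # ~2 kB per source fragment
        print(f"event ~{event_size/1e6:.1f} MB, fragment ~{fragment_size/1e3:.1f} kB")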

    The design of a distributed key-value store for petascale hot storage in data acquisition systems

    Data acquisition systems for high energy physics experiments read out terabytes of data per second from a large number of electronic components. They are thus inherently distributed systems and require fast online data selection; otherwise, the requirements for permanent storage would be enormous. Still, incoming data need to be buffered while waiting for this selection to happen: each minute of an experiment can produce hundreds of terabytes that cannot be lost before a selection decision is made. In this context, we present the design of DAQDB (Data Acquisition Database), a distributed key-value store for high-bandwidth, generic data storage in event-driven systems. DAQDB not only offers a high-capacity, low-latency buffer for fast data selection, but also opens a new approach to high-bandwidth data acquisition by decoupling the lifetime of the data analysis processes from the changing event rate due to the duty cycle of the data source. This is made possible by the option to extend its capacity to hundreds of petabytes, enough to store hours of an experiment's data. Our initial performance evaluation shows that DAQDB is a promising alternative to generic database solutions for the high-luminosity upgrades of the LHC at CERN.
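
    The access pattern at the heart of this design is writers storing fragments keyed by event and source, and data selection later pulling all fragments of one event on demand. The sketch below illustrates that pattern with an in-memory stand-in; the class and method names are invented and are not DAQDB's actual API.

        # Minimal sketch of the buffering pattern; not DAQDB's real interface.
        class FragmentStore:
            def __init__(self):
                self._kv = {}  # stand-in for a distributed, persistent store

            def put(self, event_id, source_id, fragment):
                # Keying by (event, source) lets every writer store independently.
                self._kv[(event_id, source_id)] = fragment

            def get_event(self, event_id, source_ids):
                # Data selection gathers one event's fragments on demand,
                # decoupled from the rate at which new fragments arrive.
                return [self._kv[(event_id, s)] for s in source_ids]

        store = FragmentStore()
        store.put(event_id=42, source_id=7, fragment=b"\x00" * 2048)
        print(len(store.get_event(42, [7])[0]))  # -> 2048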

    Let’s get our hands dirty: a comprehensive evaluation of DAQDB, key-value store for petascale hot storage

    Data acquisition systems are a key component for successful data taking in any experiment. The DAQ is a complex distributed computing system that coordinates all operations, from the selection of interesting events to the storage elements. For the High Luminosity upgrade of the Large Hadron Collider, the experiments at CERN need to meet challenging requirements to record data with a much higher occupancy in the detectors. The DAQ system will have to receive and deliver data at a significantly increased trigger rate, one million events per second, and throughput, terabytes of data per second. An effective way to meet these requirements is to decouple real-time data acquisition from event selection: data fragments can be temporarily stored in a large distributed key-value store, and fragments belonging to the same event can then be queried on demand by the data selection processes. Implementing such a model relies on a proper combination of emerging technologies, such as persistent memory, NVMe SSDs, scalable networking, and data structures, as well as high-performance, scalable software. In this paper, we present DAQDB (Data Acquisition Database), an open-source implementation of this design that was presented earlier, together with an extensive evaluation of the approach, from single-node to distributed performance. Furthermore, we complement our study with a description of the challenges faced and the lessons learned while integrating DAQDB with the existing software framework of the ATLAS experiment.
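
    To give a sense of scale for the hot-storage capacity such a decoupled design needs, here is a rough calculation under assumed, purely illustrative figures for the sustained ingest rate and buffering window; neither number is taken from the paper.

        # Rough capacity arithmetic behind petascale hot storage.
        ingest_tb_per_s = 5    # assumed sustained ingest rate (illustrative)
        buffer_hours = 12      # assumed buffering window (illustrative)

        capacity_pb = ingest_tb_per_s * 3600 * buffer_hours / 1000  # TB -> PB
        print(f"required hot-storage capacity: ~{capacity_pb:.0f} PB")  # ~216 PB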
