34 research outputs found

    The ATLAS trigger - high-level trigger commissioning and operation during early data taking

    The ATLAS experiment is one of the two general-purpose experiments due to start operation soon at the Large Hadron Collider (LHC). The LHC will collide protons at a centre-of-mass energy of 14 TeV, with a bunch-crossing rate of 40 MHz. The ATLAS three-level trigger will reduce this input rate to match the foreseen offline storage capability of 100-200 Hz. This paper gives an overview of the ATLAS High Level Trigger, focusing on the system design and its innovative features. We then present the ATLAS trigger strategy for the initial phase of LHC exploitation. Finally, we report on the valuable experience acquired through in-situ commissioning of the system, where simulated events were used to exercise the trigger chain. In particular, we show critical quantities such as event processing times, measured in a large-scale HLT farm using a complex trigger menu.
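    As a rough cross-check of the rates quoted above, here is a minimal Python sketch: the 40 MHz input and the 100-200 Hz storage rate are taken from the abstract, while the intermediate per-level output rates are illustrative assumptions only.

        # Back-of-the-envelope rate reduction for a three-level trigger chain.
        # Input and storage rates are quoted in the abstract; the intermediate
        # Level-1 and Level-2 output rates below are assumed for illustration.
        bunch_crossing_rate_hz = 40e6        # LHC bunch-crossing rate
        storage_rate_hz = 200.0              # upper end of the 100-200 Hz storage budget

        # Assumed (illustrative) per-level output rates.
        level_output_hz = {
            "Level-1 (hardware)": 75e3,
            "Level-2 (software)": 3e3,
            "Event Filter": storage_rate_hz,
        }

        previous = bunch_crossing_rate_hz
        for level, rate in level_output_hz.items():
            print(f"{level}: {previous:.3g} Hz -> {rate:.3g} Hz (rejection x{previous / rate:.0f})")
            previous = rate

        print(f"Overall rejection required: about x{bunch_crossing_rate_hz / storage_rate_hz:.0f}")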

    The ATLAS Trigger/DAQ Authorlist, version 1.0

    This is a reference document giving the ATLAS Trigger/DAQ author list, version 1.0 of 20 Nov 2008

    The ATLAS Trigger/DAQ Authorlist, version 2.0

    This is the ATLAS Trigger/DAQ Authorlist, version 2.0, 31 July 200

    The ATLAS Trigger/DAQ Authorlist, version 3.0

    This is the ATLAS Trigger/DAQ Authorlist, version 3.0, 11 September 200

    The ATLAS Trigger/DAQ Authorlist, version 3.1

    This is the ATLAS Trigger/DAQ Authorlist, version 3.1, 17 September 200

    ATLAS TDAQ Controls and Configuration software: Evolution from Run 2 to Run 3

    The ATLAS experiment at the Large Hadron Collider (LHC) operated very successfully in the years 2008 to 2018, in two periods identified as Run 1 and Run 2. ATLAS achieved an overall data-taking efficiency of 94%, largely constrained by the irreducible dead-time introduced to accommodate the limitations of the detector read-out electronics. Of the 6% dead-time, only about 15% could be attributed to the central trigger and DAQ system, and of that, a negligible fraction was due to the Control and Configuration sub-system. Despite these achievements, and in order to further improve the already excellent efficiency of the whole DAQ system in the coming Run 3, a new campaign of software updates was launched for the second long LHC shutdown (LS2). This paper presents, using a few selected examples, how the work was approached and which new technologies were introduced into the ATLAS DAQ system. Although these are specific to this system, many of the solutions can be considered and adapted for other distributed DAQ systems.
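    The dead-time figures quoted above combine into a short worked example (a sketch using only the percentages given in the abstract):

        # Dead-time bookkeeping from the abstract: 94% efficiency means 6% dead-time,
        # of which only about 15% is attributed to the central trigger and DAQ system.
        total_deadtime = 1.0 - 0.94          # overall ATLAS dead-time fraction
        tdaq_share = 0.15                    # share attributed to central trigger/DAQ

        tdaq_deadtime = total_deadtime * tdaq_share
        print(f"Central trigger/DAQ dead-time: {tdaq_deadtime:.1%} of total data-taking time")
        # -> about 0.9%, of which the Control and Configuration part is negligible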

    25th International Conference on Computing in High Energy & Nuclear Physics

    The ATLAS experiment at the Large Hadron Collider (LHC) operated very successfully in the years 2008 to 2018, in two periods identified as Run 1 and Run 2. ATLAS achieved an overall data-taking efficiency of 94%, largely constrained by the irreducible dead-time introduced to accommodate the limitations of the detector read-out electronics. Of the 6% dead-time, only about 15% could be attributed to the central trigger and DAQ system, and of that, a negligible fraction was due to the Control and Configuration sub-system. Despite these achievements, and in order to further improve the already excellent efficiency of the whole DAQ system in the coming Run 3, a new campaign of software updates was launched for the second long LHC shutdown (LS2). This paper presents, using a few selected examples, how the work was approached and which new technologies were introduced into the ATLAS Control and Configuration software. Although these are specific to this system, many of the solutions can be considered and adapted for other distributed DAQ systems.

    SoC Interest Group Meeting


    The ATLAS Data Flow System for Run 2

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the readout system to the High Level Trigger (HLT) and to the event storage. The DF has been reshaped to profit from technological progress and to maximize the flexibility and efficiency of the data-selection process. The updated DF is radically different from the previous implementation, both in terms of architecture and expected performance. The pre-existing two-level software filtering, known as L2 and the Event Filter, and the Event Building are now merged into a single process performing incremental data collection and analysis. This design has many advantages, among which are the radical simplification of the architecture, the flexible and automatically balanced distribution of the computing resources, and the sharing of code and services on nodes. In addition, logical farm slicing, with each slice managed by a dedicated supervisor, has been dropped in favour of global management by a single farm master operating at 100 kHz. The Data Collection network, which connects the HLT processing nodes to the Readout and the storage systems, has evolved to provide the network connectivity required by the new Data Flow architecture. The old Data Collection and Back-End networks have been merged into a single Ethernet network and the Readout PCs have been directly connected to the network cores. The aggregate throughput and port density have been increased by an order of magnitude, and the introduction of Multi Chassis Trunking has significantly enhanced fault tolerance and redundancy. We will discuss the design choices, the strategies employed to minimize the data-collection latency, the results of scaling tests done during the commissioning phase, and the operational performance after the first months of data taking.
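    The single-farm-master scheme described above can be sketched as follows; this is a hypothetical illustration of load-balanced event assignment, not the ATLAS TDAQ implementation, and all names are invented.

        # Hypothetical sketch: one farm master assigns each accepted event to the
        # HLT node with the fewest events in flight, so load balances automatically.
        class FarmMaster:
            def __init__(self, nodes):
                self.in_flight = {node: 0 for node in nodes}

            def assign(self, event_id):
                # Pick the least-loaded HLT node for this event.
                node = min(self.in_flight, key=self.in_flight.get)
                self.in_flight[node] += 1
                return node

            def complete(self, node):
                # Called when a node finishes (accepts or rejects) an event.
                self.in_flight[node] -= 1

        master = FarmMaster([f"hlt-node-{i:02d}" for i in range(4)])
        print([master.assign(event_id) for event_id in range(8)])
        # events are spread evenly across the four example nodes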

    The ATLAS Data Flow System for LHC Run II

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the readout system to the High Level Trigger (HLT) and to the event storage. The DF has been reshaped to profit from technological progress and to maximize the flexibility and efficiency of the data-selection process. The updated DF is radically different from the previous implementation, both in terms of architecture and expected performance. The pre-existing two-level software filtering, known as L2 and the Event Filter, and the Event Building are now merged into a single process performing incremental data collection and analysis. This design has many advantages, among which are the radical simplification of the architecture, the flexible and automatically balanced distribution of the computing resources, and the sharing of code and services on nodes. In addition, logical farm slicing, with each slice managed by a dedicated supervisor, has been dropped in favour of global management by a single farm master operating at 100 kHz. The Data Collection network, which connects the HLT processing nodes to the Readout and the storage systems, has evolved to provide the network connectivity required by the new Data Flow architecture. The old Data Collection and Back-End networks have been merged into a single Ethernet network and the Readout PCs have been directly connected to the network cores. The aggregate throughput and port density have been increased by an order of magnitude, and the introduction of Multi Chassis Trunking has significantly enhanced fault tolerance and redundancy. We will discuss the design choices, the strategies employed to minimize the data-collection latency, and the architecture and implementation aspects of the DF components.