131 research outputs found

    Hierarchical Control of the ATLAS Experiment

    Get PDF
    Control systems at High Energy Physics (HEP) experiments are becoming increasingly complex, mainly due to the size, complexity and data volume associated with the front-end instrumentation. This is particularly visible in the ATLAS experiment at the LHC accelerator at CERN. ATLAS will be the largest particle detector ever built, the result of an international collaboration of more than 150 institutes. The experiment is composed of 9 different specialized sub-detectors that perform different tasks and have different requirements for operation. The system in charge of the safe and coherent operation of the whole experiment is called the Detector Control System (DCS). This thesis presents the integration of the ATLAS DCS into a global control tree following the natural segmentation of the experiment into sub-detectors and smaller sub-systems. The integration of the many different systems composing the DCS includes issues such as back-end organization, process model identification, fault detection, synchronization with external systems, automation of processes and supervisory control. Distributed control modeling is applied to the widely distributed devices that coexist in ATLAS. Thus, control is achieved by means of many distributed, autonomous and co-operative entities that are hierarchically organized and follow a finite-state machine logic. The key to the integration of these systems lies in the so-called Finite State Machine (FSM) tool, which is based on two main enabling technologies: a SCADA product and the State Manager Interface (SMI++) toolkit. The SMI++ toolkit has already been used successfully in two previous HEP experiments, providing functionality such as an object-oriented language, a finite-state machine logic, an interface to develop expert systems, and a platform-independent communication protocol. This functionality is used at all levels of the experiment operation process, ranging from the overall supervision down to device integration, enabling the overall sequencing and automation of the experiment. Although the experience gained in the past is an important input for the design of the detector's control hierarchy, further requirements arose due to the complexity and size of ATLAS. In total, around 200,000 channels will be supervised by the DCS, and the final control tree will be hundreds of times larger than any of its predecessors. Thus, in order to apply a hierarchical control model to the ATLAS DCS, a common approach has been proposed to ensure homogeneity between the large-scale distributed software ensembles of the sub-detectors. A standard architecture and a human interface have been defined, with emphasis on the early detection, monitoring and diagnosis of faults based on a dynamic fault-data mechanism. This mechanism relies on two parallel communication paths that manage the faults while providing a clear description of the detector conditions. The DCS information is split and handled by different types of SMI++ objects; whilst one path of objects manages the operational mode of the system, the other handles possible faults. The proposed strategy has been validated through many different tests, with positive results in both functionality and performance. This strategy has been successfully implemented and constitutes the ATLAS standard for building the global control tree.
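
    The two-path idea described above can be illustrated with a minimal sketch of a control-tree node that keeps separate summaries for the operational mode and for faults, propagating the worst child condition upward. The class name, state labels and summarizing rules are illustrative placeholders, not the actual SMI++/PVSS implementation.

        # Minimal sketch (assumed names and rules, not the ATLAS DCS code):
        # each node summarizes its children along two parallel paths, one for
        # the operational STATE and one for the fault STATUS.

        STATUS_ORDER = ["OK", "WARNING", "ERROR", "FATAL"]   # increasing severity

        class ControlNode:
            def __init__(self, name, children=None):
                self.name = name
                self.children = children or []
                self.state = "READY"    # operational-mode path
                self.status = "OK"      # fault path

            def summarize(self):
                """Recompute state/status from the children; leaves keep their own values."""
                if not self.children:
                    return self.state, self.status
                summaries = [child.summarize() for child in self.children]
                # The operational path degrades unless every child is READY.
                self.state = "READY" if all(s == "READY" for s, _ in summaries) else "NOT_READY"
                # The fault path reports the most severe child status.
                self.status = max((st for _, st in summaries), key=STATUS_ORDER.index)
                return self.state, self.status

        cooling = ControlNode("COOLING")
        cooling.status = "ERROR"           # a fault reported without changing the mode
        sub_detector = ControlNode("PIXEL", [cooling, ControlNode("HIGH_VOLTAGE")])
        print(sub_detector.summarize())    # -> ('READY', 'ERROR')

    The point of the sketch is only that the two paths are computed independently, so a fault can be signalled and diagnosed without forcing a change of the operational mode.
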
During the operation of the experiment, the DCS, responsible for the detector operation, must be synchronized with the data acquisition system, which is in charge of the physics data-taking process. The interaction between the two systems has so far been limited, but it becomes increasingly important as the detector nears completion. A prototype implementation, ready to be used during the sub-detector integration, has achieved data reconciliation by mapping the different segments of the data acquisition system into the DCS control tree. The adopted solution allows the data acquisition control applications to command different DCS sections independently and prevents incorrect physics data taking caused by a failure in a detector part. Finally, the human-machine interface presents and controls the DCS data in the ATLAS control room. The main challenges faced during the design and development phases were: how to support the operator in controlling this large system, how to maintain integration across many displays, and how to provide effective navigation. These issues have been solved by combining the functionalities provided by both the SCADA product and the FSM tool. The control hierarchy provides an intuitive structure for the organization of the many different displays that are needed for the visualization of the experiment conditions. Each node in the tree represents a workspace that contains the functional information associated with its abstraction level within the hierarchy. By means of effective navigation, any workspace of the control tree is accessible to the operator or detector expert within a common human-interface layout. The interface is modular and flexible enough to be adapted to new operational scenarios, fulfil the needs of the different kinds of users, and facilitate maintenance during the detector's long lifetime of up to 20 years. The interface has been in use for several months, and the sub-detectors' control hierarchies, together with their associated displays, are currently being integrated into the common human-machine interface.

    Ethernet Networks for Real-Time Use in the ATLAS Experiment

    Get PDF
    Ethernet has become today's de facto standard technology for local area networks. Defined by the IEEE 802.3 and 802.1 working groups, the Ethernet standards cover technologies deployed at the first two layers of the OSI protocol stack. The architecture of modern Ethernet networks is based on switches, devices usually built around a store-and-forward concept. At the highest level, they can be seen as a collection of queues and mathematically modelled by means of queuing theory. However, the traffic profiles on modern Ethernet networks are rather different from those assumed in classical queuing theory. The standard recommendations for evaluating the performance of network devices define the values that should be measured but do not specify a way of reconciling these values with the internal architecture of the switches. The introduction of the 10 Gigabit Ethernet standard provided a direct gateway from the LAN to the WAN by means of the WAN PHY. Certain aspects related to the actual use of WAN PHY technology were only vaguely defined by the standard. The ATLAS experiment at CERN is scheduled to start operation in 2007. The communication infrastructure of its Trigger and Data Acquisition System will be built using Ethernet networks. The real-time operational needs impose a requirement for predictable performance on the network part. In view of the diversity of the architectures of Ethernet devices, testing and modelling are required in order to make sure the full system will operate predictably. This thesis focuses on the testing part of the problem and addresses issues in determining the performance of both LAN and WAN connections. The problem of reconciling measurement results with the architectural details of the switches is also tackled. We developed a scalable traffic generator system based on commercial off-the-shelf Gigabit Ethernet network interface cards. The generator was able to transmit traffic at the nominal Gigabit Ethernet line rate for all frame sizes specified in the Ethernet standard. Latency was calculated with an accuracy of +/- 200 ns. We indicate how certain features of switch architectures may be identified through accurate throughput and latency values measured for specific traffic distributions. We also present a detailed analysis of Ethernet broadcast support in modern switches. We use a similar hands-on approach to address the problem of extending Ethernet networks over long distances. Based on the 1 Gbit/s traffic generator used in the LAN, we develop a methodology to characterise point-to-point connections over long-distance networks. At higher speeds, a combination of commercial traffic generators and high-end servers is employed to determine the performance of the connection. We demonstrate that the new 10 Gigabit Ethernet technology can interoperate with the installed base of SONET/SDH equipment through a series of experiments on point-to-point circuits deployed over long-distance network infrastructure in a multi-operator domain. In this process, we provide a holistic view of the end-to-end performance of 10 Gigabit Ethernet WAN PHY connections through a sequence of measurements starting at the physical transmission layer and continuing up to the transport layer of the OSI protocol stack.
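
    The claim that the generator saturates the nominal line rate for every legal frame size can be checked against the theoretical maximum frame rate, which follows from the fixed per-frame overhead on the wire. The short sketch below uses only standard Ethernet constants (8 bytes of preamble/SFD and a 12-byte minimum inter-frame gap); it is an illustrative calculation, not part of the traffic generator described in the thesis.

        # Theoretical maximum frame rate on Gigabit Ethernet for a given frame size.
        LINE_RATE_BPS = 1_000_000_000    # nominal Gigabit Ethernet line rate
        PREAMBLE_SFD = 8                 # bytes transmitted before each frame
        INTERFRAME_GAP = 12              # minimum gap between frames, in byte times

        def max_frame_rate(frame_size_bytes):
            """Frames per second at full line rate (frame sizes 64..1518 bytes)."""
            wire_bytes = frame_size_bytes + PREAMBLE_SFD + INTERFRAME_GAP
            return LINE_RATE_BPS / (wire_bytes * 8)

        for size in (64, 512, 1518):
            rate = max_frame_rate(size)
            print(f"{size:5d}-byte frames: {rate:12.0f} frames/s, "
                  f"{rate * size * 8 / 1e6:7.1f} Mbit/s of frame data")

    For 64-byte frames this gives roughly 1.49 million frames per second, which is the usual reference value when verifying that a generator or a switch port really runs at full Gigabit Ethernet line rate.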

    Report from the Luminosity Task Force

    Get PDF

    Error management in ATLAS TDAQ : an intelligent systems approach

    Get PDF
    This thesis is concerned with the use of intelligent system techniques (IST) within a large distributed software system, specifically the ATLAS TDAQ system, which has been developed and is currently in use at the European Laboratory for Particle Physics (CERN). The overall aim is to investigate and evaluate a range of IST techniques in order to improve the error management system (EMS) currently used within the TDAQ system via error detection and classification. The thesis work will provide a reference for future research and development of such methods in the TDAQ system. The thesis begins by describing the TDAQ system and the existing EMS, with a focus on the underlying expert-system approach, in order to identify areas where improvements can be made using IST techniques. It then discusses measures for evaluating error detection and classification techniques, and the factors specific to the TDAQ system. Error conditions are then simulated in a controlled manner using an experimental setup, and datasets are gathered from two different sources. Analysis and processing of the datasets using statistical and IST techniques shows that clusters exist in the data corresponding to the different simulated errors. Different IST techniques are applied to the gathered datasets in order to realise an error detection model. These techniques include Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Cartesian Genetic Programming (CGP), and a comparison of their respective advantages and disadvantages is made. The principal conclusions from this work are that IST can be successfully used to detect errors in the ATLAS TDAQ system and thus can provide a tool to improve the overall error management system. It is of particular importance that IST can be used without detailed knowledge of the system, as the ATLAS TDAQ system is too complex for a single person to understand completely. The results of this research will benefit researchers developing and evaluating IST techniques in similar large-scale distributed systems.
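
    A minimal sketch of the kind of error-classification study described above is shown below: a support vector machine is trained on feature vectors derived from monitoring data and evaluated on held-out samples. The feature matrix here is randomly generated and the feature semantics (CPU load, queue depths, message rates) are assumed for illustration; they do not correspond to the actual TDAQ datasets.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.metrics import classification_report

        # Each row stands for monitoring metrics sampled during a run;
        # the label is the simulated error class (0 = normal operation).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 8))
        y = rng.integers(0, 3, size=600)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, random_state=0)

        # Scaling and the RBF-kernel SVM are combined in one pipeline so the
        # same preprocessing is applied consistently at training and test time.
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        model.fit(X_train, y_train)
        print(classification_report(y_test, model.predict(X_test)))

    With real monitoring data that actually clusters by error type, the same pipeline (or an ANN or CGP model in its place) would report per-class precision and recall, which are typical evaluation measures for error detection and classification.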

    Technical Proposal for FASER: ForwArd Search ExpeRiment at the LHC

    Full text link
    FASER is a proposed small and inexpensive experiment designed to search for light, weakly interacting particles during Run 3 of the LHC in 2021-23. Such particles may be produced in large numbers along the beam collision axis, travel for hundreds of meters without interacting, and then decay to Standard Model particles. To search for such events, FASER will be located 480 m downstream of the ATLAS interaction point in the unused service tunnel TI12 and will be sensitive to particles that decay in a cylindrical volume with radius R=10 cm and length L=1.5 m. FASER will complement the LHC's existing physics program, extending its discovery potential to a host of new, light particles, with potentially far-reaching implications for particle physics and cosmology. This document describes the technical details of the FASER detector components: the magnets, the tracker, the scintillator system, and the calorimeter, as well as the trigger and readout system. The preparatory work needed to install and operate the detector, including civil engineering, transport, and integration with various services, is also presented. The information presented includes preliminary cost estimates for the detector components and the infrastructure work, as well as a timeline for the design, construction, and installation of the experiment. Comment: 82 pages, 62 figures; submitted to the CERN LHCC on 7 November 201

    Installation, Commissioning and Calibration of the ATLAS Level-1 Calorimeter Trigger in Run 3

    Get PDF
    In preparation for Run 3, when the LHC will operate at higher energies and instantaneous luminosity, the hardware-based ATLAS Level-1 calorimeter trigger underwent a number of improvements. Among them are three new identification systems that use already digitised, higher-granularity data from the new Phase-1 LAr calorimeter system in improved algorithms to identify events with physics objects of interest. These identification systems continue to receive Tile calorimeter data from the old Run 2 legacy Level-1 calorimeter trigger system via the newly introduced TREX module. Among other things, this intervention in the legacy system, which took over early data taking while the new L1Calo system was still being commissioned at the start of Run 3, required the legacy system to be recalibrated and commissioned again. This thesis gives a general overview of the period from the development and installation of the new Level-1 calorimeter trigger hardware, through the recalibration and commissioning of the legacy system with the new TREX module, to the studies that led to switching the single-electron trigger from the legacy system to the new Run 3 system.

    Prospects for the detection of the chargino-neutralino direct production with the ATLAS detector at the LHC

    Get PDF
    The Large Hadron Collider (LHC), currently under installation at CERN, is designed to provide high-energy proton collisions at the TeV energy scale, with a large instantaneous luminosity. This will make it possible to explore an energy region never reached by previous accelerators and to search for new physics beyond the Standard Model (SM), as predicted by a wide range of models. ATLAS (A Toroidal LHC Apparatus) is one of the four experiments that will be installed at the LHC. It is a general-purpose experiment which addresses the investigation of the full discovery potential provided by the LHC. Chapter 1 is dedicated to the description of the accelerator, the ATLAS experiment and its discovery capabilities. ATLAS is a large and complex experiment, comprising roughly $10^8$ electronic channels. Its trigger and data acquisition systems will be able to select and save the few interesting events from among millions. Hence, to bring ATLAS to its maximum performance, a complete and effective monitoring system, able to facilitate reaching the correct running conditions and assessing the data quality, will be needed. The development of such monitoring tools started during past beam tests and continues at present, supporting the detector commissioning and installation phase. In Chapter 2, the development of a lightweight low-level monitoring framework, devoted to hardware-functionality monitoring, is discussed. The SM is not presently considered an ultimate theory, and therefore new models are studied in order to find answers to open questions. Among these theories, supersymmetry provides a framework that can possibly solve some theoretical problems, such as the hierarchy problem. Up to now, no experimental evidence of supersymmetry has been found; however, if it exists, the LHC experiments could find its signatures. An introduction to supersymmetric theories, particularly focused on gaugino physics, is the subject of Chapter 3. Among the several signatures predicted by supersymmetric models, the decay of $\tilde\chi_2^0\tilde\chi_1^\pm$ gaugino pairs into three leptons and missing transverse energy is particularly interesting. Indeed, this channel has low SM backgrounds, especially from QCD, and can provide information on the model parameters. Hence, using fast-simulation data, we developed a search strategy for the trilepton channel in the ATLAS detector for a large number of models. The results of this analysis are reported in Chapter 4. Most of the LHC discovery potential is driven by its large target luminosity of $10^{34}\,\mathrm{cm^{-2}\,s^{-1}}$. However, to reach this target, fine optimization of the beam optics and tuning is necessary. Moreover, the experiments may need to know the bunch-by-bunch luminosity in order to correct physics results for pile-up events. Hence, a luminometer able to perform high-precision bunch-by-bunch relative luminosity measurements will be an effective tool for both the accelerator and the experiments. In Chapter 5, the development of an LHC luminometer, based on a fast radiation-hard argon ionization chamber and carried out at the Lawrence Berkeley National Laboratory, is discussed.
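
    The trilepton selection described above amounts to requiring exactly three identified leptons plus large missing transverse energy. The toy sketch below encodes that logic on a simplified event record; the event structure and the thresholds are assumptions made for illustration, not the optimised cuts of the actual analysis.

        from dataclasses import dataclass

        @dataclass
        class Event:
            lepton_pts: list   # transverse momenta of identified electrons/muons, in GeV
            met: float         # missing transverse energy, in GeV

        def passes_trilepton(evt, lep_pt_min=10.0, met_min=30.0):
            """Keep events with exactly three leptons above threshold and large MET."""
            leptons = [pt for pt in evt.lepton_pts if pt > lep_pt_min]
            return len(leptons) == 3 and evt.met > met_min

        events = [Event([45.0, 22.0, 12.0], met=65.0),   # signal-like candidate
                  Event([60.0, 8.0], met=20.0)]          # fails both requirements
        print(sum(passes_trilepton(e) for e in events))  # -> 1

    In the full analysis the thresholds and lepton requirements would be optimised per model point to suppress the remaining SM backgrounds, as reported in Chapter 4.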

    Machine Learning for Real-Time Processing of ATLAS Liquid Argon Calorimeter Signals with FPGAs

    Full text link
    The ATLAS experiment at CERN measures the energy of proton-proton (p-p) collisions, which occur with a repetition frequency of 40 MHz, at the Large Hadron Collider (LHC). The readout electronics of the liquid-argon (LAr) calorimeters are being prepared for High-Luminosity LHC (HL-LHC) operation as part of the Phase-II upgrade, anticipating a pileup of up to 200 simultaneous p-p interactions. The increased number of p-p interactions implies that calorimeter signals of up to 25 consecutive collisions overlap, making energy reconstruction more challenging. In order to achieve the goals of the HL-LHC, field-programmable gate arrays (FPGAs) are used to process digitized pulses sampled at 40 MHz in real time, and different machine learning approaches are being investigated to deal with signal pileup. The convolutional and recurrent neural networks outperform the optimal signal filter currently in use, both in terms of assigning the reconstructed energy to the correct proton bunch crossing and in terms of energy resolution. The enhancements are most pronounced for energies obtained from overlapping pulses. Because the neural networks are implemented on an FPGA, the number of parameters, resource usage, latency and operating frequency must be carefully analysed. Very good agreement is observed between the FPGA and software implementations of the neural networks. Comment: TWEPP-2021, http://cds.cern.ch/record/278463
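
    As a rough illustration of the convolutional approach mentioned above, the sketch below maps a short window of digitized calorimeter samples to an energy estimate per bunch crossing with a tiny 1D convolutional network. The layer sizes, the window length and the use of PyTorch are assumptions for illustration only; the actual networks are constrained by FPGA resource, latency and frequency budgets and are trained on simulated LAr pulses.

        import torch
        import torch.nn as nn

        class TinyPulseCNN(nn.Module):
            """Toy 1D CNN: ADC sample window in, per-bunch-crossing energy estimate out."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(1, 4, kernel_size=3, padding=1),   # local pulse-shape features
                    nn.ReLU(),
                    nn.Conv1d(4, 1, kernel_size=3, padding=1),   # one energy value per sample
                )

            def forward(self, x):              # x: (batch, 1, samples) digitized pulses
                return self.net(x).squeeze(1)  # (batch, samples) energy estimates

        model = TinyPulseCNN()
        samples = torch.randn(2, 1, 16)        # two toy sequences of 16 ADC samples
        print(model(samples).shape)            # -> torch.Size([2, 16])

    A network of this form keeps the parameter count small, which matters once the multiply-accumulate operations have to fit the FPGA resource and latency budget mentioned in the abstract.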