27 research outputs found

    Implementation and performance of the event filter muon selection for the ATLAS experiment at LHC

    The ATLAS trigger system is composed of three levels: an initial hardware trigger level (LVL1) followed by two software-based stages (the LVL2 trigger and the event filter) that together form the high level trigger (HLT) and are implemented on processor farms. The LVL2 trigger starts from LVL1 pointers to restricted so-called regions of interest (ROIs) and performs event selection by means of optimized algorithms. If LVL2 is passed, the full event is built and sent to the event filter (EF) algorithms for further selection and classification. Accepted events are finally collected and put into mass storage for subsequent physics analysis. Although the two HLT stages differ in their requirements and interfaces, they share a coherent approach to event selection. A common core software framework has therefore been designed, allowing the HLT architecture to adapt to changing conditions (background levels, luminosity, detector description, etc.). Algorithms running in the event filter can operate not only in a general-purpose, exclusive mode but can also process trigger hypotheses produced at earlier stages of the HLT dataflow (the seeding concept). The selection proceeds in separate steps, with a decision on whether to continue taken at each step. An overview of the HLT processing steps is given, and the working principles of the EF offline algorithms for muon reconstruction and identification (MOORE and MuId) are discussed in detail. The reconstruction performance of these algorithms in terms of efficiency, momentum resolution, rejection power and execution time is presented for several samples of simulated single-muon events, also taking into account the high-background environment expected for ATLAS.
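
    The seeding concept lends itself to a simple control structure: each step refines the hypothesis it receives and may reject it, so later, more expensive steps run only on surviving candidates. A minimal sketch of such a stepwise chain follows; the Hypothesis fields and the step signature are invented for illustration and are not the ATLAS HLT interfaces.

```python
# Illustrative sketch only: a stepwise, seeded selection chain in the spirit
# of the seeding concept described above. The Hypothesis fields and the step
# signature are hypothetical, not the ATLAS HLT interfaces.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Hypothesis:
    roi_eta: float       # region-of-interest direction from the previous level
    roi_phi: float
    pt_estimate: float   # momentum estimate, refined at each step

Step = Callable[[Hypothesis], Optional[Hypothesis]]

def run_seeded_chain(seed: Hypothesis, steps: List[Step]) -> Optional[Hypothesis]:
    """Run each step on the current hypothesis; stop as soon as one rejects."""
    hypo = seed
    for step in steps:
        hypo = step(hypo)
        if hypo is None:   # this step decided not to go further: reject the event
            return None
    return hypo            # all steps passed: the hypothesis is validated
```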

    Implementation and performance of the high level trigger electron and photon selection for the ATLAS experiment at the LHC

    The ATLAS experiment at the Large Hadron Collider (LHC) will face the challenge of efficiently selecting interesting candidate events in pp collisions at 14 TeV center-of-mass energy, while rejecting the enormous number of background events stemming from an interaction rate of up to 10^9 Hz. The Level-1 trigger will reduce this rate to around O(100 kHz). Subsequently, the high level trigger (HLT), which comprises the Second Level trigger and the Event Filter, will need to reduce this rate further by a factor of O(10^3). The HLT selection is software based and will be implemented on commercial CPUs using a common framework built on the standard ATLAS object-oriented software architecture. This paper gives an overview of the current implementation of the electron and photon selection in the HLT. The performance of this implementation has been evaluated using Monte Carlo simulations in terms of the efficiency for the signal channels, the expected selection rate, data-preparation times, and algorithm execution times. Beyond the efficiency and rate estimates, some physics examples are discussed, showing that the triggers are well adapted to the physics programme envisaged at the LHC. The electron and photon trigger software is also being exercised at the ATLAS 2004 Combined Test Beam, where components from all ATLAS subdetectors are taking data together along the H8 SPS extraction line; these tests are expected to validate the chosen selection architecture in a real online environment.
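
    As a concrete illustration of this kind of selection, the sketch below applies simple cut-based criteria to a calorimeter cluster, with an extra track-match requirement distinguishing electrons from photons. The variables and thresholds are invented for illustration and are not the ATLAS cuts.

```python
# Illustrative sketch only: a cut-based electron/photon hypothesis test of the
# kind an HLT selection step might apply. Variable names and thresholds are
# invented for illustration, not the cuts used by ATLAS.
from dataclasses import dataclass

@dataclass
class CaloCluster:
    et_gev: float       # transverse energy of the electromagnetic cluster
    r_eta: float        # shower-shape variable: core-to-total energy ratio
    had_leakage: float  # fractional energy leakage into the hadronic calorimeter

def passes_photon_cuts(c: CaloCluster) -> bool:
    return c.et_gev > 20.0 and c.r_eta > 0.9 and c.had_leakage < 0.02

def passes_electron_cuts(c: CaloCluster, has_matched_track: bool) -> bool:
    # An electron candidate additionally requires a matched inner-detector track.
    return passes_photon_cuts(c) and has_matched_track
```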

    Implementation and performance of the third level muon trigger of the ATLAS experiment at LHC

    The trigger system of the ATLAS experiment at the LHC aims at high selectivity, preserving the full physics potential while reducing the initial 40 MHz event rate imposed by the LHC bunch crossing to ~100 Hz, as required by the data acquisition system. Algorithms working in the final stage of the trigger (the Event Filter) can run both in a "wrapped" mode, reconstructing tracks in the entire Muon Spectrometer, and in a "seeded" mode, performing pattern recognition only in regions of the detector where trigger hypotheses have been produced at earlier stages. The working principles of the offline muon reconstruction and identification algorithms (MOORE and MuId) implemented and used in the framework of the Event Filter are discussed in this paper. The reconstruction performance of these algorithms is presented for both modes in terms of efficiency, momentum resolution, rejection power and execution time on several samples of simulated single-muon events, also taking into account the high-background environment expected for ATLAS.
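
    The two modes differ only in how much of the detector feeds the pattern recognition, which the following sketch makes explicit; the Hit and RoI types and the window size are assumptions for illustration, not the MOORE/MuId interfaces.

```python
# Illustrative sketch only: "wrapped" versus "seeded" reconstruction. The Hit
# and RoI types and the window size are assumptions, not MOORE/MuId interfaces.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Hit:
    eta: float
    phi: float

@dataclass
class RoI:
    eta: float
    phi: float
    half_width: float = 0.1  # assumed ROI half-size in eta and phi

def hits_for_reconstruction(hits: List[Hit], roi: Optional[RoI]) -> List[Hit]:
    if roi is None:
        return hits          # wrapped mode: pattern recognition over everything
    # Seeded mode: only hits near the trigger hypothesis (phi wraparound
    # at +/-pi is ignored here for brevity).
    return [h for h in hits
            if abs(h.eta - roi.eta) < roi.half_width
            and abs(h.phi - roi.phi) < roi.half_width]
```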

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also determines the instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
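
    A minimal sketch of the core of such a calibration follows: within the instrument's linear range OD is proportional to particle count, so a serial dilution of microspheres with known counts determines one proportionality factor. The starting count, dilution series and "true" factor below are invented example numbers, not values from the study.

```python
# Illustrative sketch only: calibrating OD to particle count from a serial
# dilution of microspheres with known counts. The starting count, dilution
# series and "true" factor below are invented example numbers.
import numpy as np

def od_to_count_factor(od: np.ndarray, spheres_per_well: np.ndarray) -> float:
    """Least-squares slope through the origin: particles per OD unit."""
    # Within the instrument's linear range, OD is proportional to particle
    # count, so one proportionality constant calibrates the instrument.
    return float(np.sum(spheres_per_well * od) / np.sum(od * od))

# Example: eleven two-fold serial dilutions from 3.0e8 microspheres per well.
spheres = 3.0e8 / 2.0 ** np.arange(11)
od = spheres / 2.1e9                       # fake readings, true factor 2.1e9
factor = od_to_count_factor(od, spheres)   # recovers ~2.1e9 particles per OD
```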

    Online Muon Reconstruction in the ATLAS Level-2 trigger system

    To cope with the 40 MHz event production rate of the LHC, the trigger of the ATLAS experiment selects events in three sequential steps of increasing complexity and accuracy, whose final results are close to the offline reconstruction. Level-1, implemented with custom hardware, identifies physics objects within Regions of Interest and performs a first reduction of the event rate to 75 kHz. The higher trigger levels provide a software-based event selection that further reduces the event rate to about 100 Hz. This paper presents the algorithm (muFast) employed at Level-2 to confirm the muon candidates flagged by Level-1. muFast identifies hits of muon tracks inside the Muon Spectrometer and provides a precise measurement of the muon momentum at the production vertex. The algorithm must process the Level-1 muon output rate (~20 kHz), so particular care has been taken over its optimization. The result is a very fast track reconstruction algorithm with good physics performance that in some cases approaches that of the offline reconstruction: it computes the pT of prompt muons with a resolution of 5.5% at 6 GeV and 4.0% at 20 GeV, with an efficiency of about 95%. The algorithm requires an overall execution time of ~1 ms on a 100 SpecInt95 machine.
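
    Momentum measurement in a magnetic spectrometer rests on the track sagitta, which in a uniform field is inversely proportional to pT; the sketch below shows this textbook relation. The equal-spacing assumption and the constant are invented for illustration, and the real algorithm relies on calibrated geometry rather than a single toy constant.

```python
# Illustrative sketch only: the textbook sagitta-based momentum estimate that
# spectrometer triggers build on. The equal-spacing assumption and the constant
# k_gev_mm are invented values for illustration.
def sagitta(y_inner: float, y_middle: float, y_outer: float) -> float:
    """Deviation of the middle measurement from the straight line joining the
    outer two, for three stations equally spaced along the track."""
    return y_middle - 0.5 * (y_inner + y_outer)

def pt_from_sagitta(s_mm: float, k_gev_mm: float = 50.0) -> float:
    """In a uniform magnetic field the sagitta is inversely proportional to pT."""
    return k_gev_mm / abs(s_mm)
```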

    A Level-2 trigger algorithm for the identification of muons in the ATLAS Muon Spectrometer

    The ATLAS Level-2 trigger provides a software-based event selection after the initial Level-1 hardware trigger. For muon events, the selection is decomposed into a number of broad steps: first, the Muon Spectrometer data are processed to give physics quantities associated with the muon track (standalone feature extraction); then, other detector data are used to refine the extracted features. The "µFast" algorithm performs the standalone feature extraction, providing a first reduction of the muon event rate from Level-1. It confirms muon track candidates with a precise measurement of the muon momentum. The algorithm is designed to be both conceptually simple and fast, so as to be readily implemented in the demanding online environment in which the Level-2 selection code will run. Nevertheless, its physics performance approaches, in some cases, that of the offline reconstruction algorithms. This paper describes the implemented algorithm together with the software techniques employed to improve its timing performance.
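
    The abstract does not spell out those timing techniques; one generic technique common in online reconstruction, shown below purely as an assumption, is to trade memory for time by precomputing an expensive function into a lookup table and interpolating at run time.

```python
# Illustrative sketch only: precomputing an expensive function into a lookup
# table and interpolating at run time, a generic online-timing technique. This
# is an assumption for illustration; the abstract does not state how µFast
# achieves its timing performance.
import math

TABLE_SIZE = 1024
X_MAX = 10.0
_STEP = X_MAX / (TABLE_SIZE - 1)
_TABLE = [math.atan(i * _STEP) for i in range(TABLE_SIZE)]  # filled once at startup

def fast_atan(x: float) -> float:
    """Linearly interpolate the table (valid for 0 <= x <= X_MAX)."""
    pos = x / _STEP
    i = min(int(pos), TABLE_SIZE - 2)
    frac = pos - i
    return _TABLE[i] * (1.0 - frac) + _TABLE[i + 1] * frac
```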

    Design, deployment and functional tests of the on-line Event Filter for the ATLAS experiment at LHC

    The Event Filter selection stage is a fundamental component of the ATLAS Trigger and Data Acquisition architecture. Its primary function is the reduction of the data flow and rate to values acceptable for mass storage operations and for the subsequent offline data reconstruction and analysis steps. The EF computing resources are organized as a set of independent sub-farms, each connected to one output of the Event Builder switch fabric. Each sub-farm comprises a number of processors analyzing several complete events in parallel. This paper describes the design of the ATLAS EF system and its deployment in the 2004 ATLAS combined test beam, together with some examples of integrated selection and monitoring algorithms. Since the processing algorithms are not specially designed for the EF but are inherited as much as possible from the offline ones, special emphasis is placed on system reliability and data security, in particular in the case of failures in the processing algorithms. Other key design elements are modularity and scalability: the EF must be able to follow technology evolution and should allow additional, possibly remotely located, processing resources to be used.
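
    One way to read the reliability requirement is that a faulty inherited algorithm must never cause data loss; the sketch below isolates algorithm failures and routes the affected events to a quarantine stream. This is an assumed pattern for illustration, not the documented EF design.

```python
# Illustrative sketch only: shielding the data flow from failures in inherited
# offline algorithms. The quarantine pattern is an assumption made for this
# sketch, not the documented ATLAS EF design.
from typing import Callable, Iterable

def filter_events(events: Iterable[bytes],
                  algorithm: Callable[[bytes], bool],
                  accept: Callable[[bytes], None],
                  quarantine: Callable[[bytes], None]) -> None:
    for event in events:
        try:
            if algorithm(event):   # selection code inherited from offline
                accept(event)      # forward the event towards mass storage
        except Exception:
            quarantine(event)      # never lose an event to an algorithm crash
```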

    Portable Gathering System for Monitoring and Online Calibration at ATLAS

    During the runtime of any experiment, a central monitoring system that detects problems as soon as they appear plays an essential role. In a large experiment like ATLAS, the online data acquisition system is distributed across the nodes of large farms, each of them running several processes that analyse a fraction of the events. In this architecture, it is necessary to have a central process that collects all the monitoring data from the different nodes, produces full-statistics histograms and analyses them. In this paper we present the design of such a system, called the gatherer. It collects any monitoring object, such as histograms, from the farm nodes, from any process in the DAQ, trigger and reconstruction chain. It also adds up the statistics, if required, and runs user-defined algorithms to analyse the monitoring data. The results are sent to a centralized display, which shows the information online, and to the archiving system, raising alarms in case of problems. The novelty of this system is that it abstracts the underlying communication protocols, so it can talk to different processes using different protocols at the same time, providing maximum flexibility. The software is easily adaptable to any trigger-DAQ system. The first prototype of the gathering system has been implemented for ATLAS and ran during this year's combined test beam. An evaluation of this first prototype is also presented.
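
    The protocol abstraction can be sketched as a single receive interface that each concrete transport implements, letting the gatherer merge histograms from heterogeneous sources in one loop; the Transport interface and the merge-by-summing rule below are assumptions for illustration, not the actual gatherer API.

```python
# Illustrative sketch only: hiding several communication protocols behind one
# receive interface so the gatherer can merge histograms from heterogeneous
# sources in a single loop. The Transport interface and merge-by-summing rule
# are assumptions, not the actual gatherer API.
from typing import Dict, Iterable, List, Protocol, Tuple

class Transport(Protocol):
    def receive(self) -> Tuple[str, List[float]]:
        """Return (histogram name, bin contents) from one farm node."""
        ...

def gather(transports: Iterable[Transport],
           totals: Dict[str, List[float]]) -> None:
    # Merge per-node histograms into full-statistics ones by summing bins.
    for t in transports:  # each transport may wrap a different protocol
        name, bins = t.receive()
        if name in totals:
            totals[name] = [a + b for a, b in zip(totals[name], bins)]
        else:
            totals[name] = list(bins)
```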

    Implementation and Performance of the Seeded Reconstruction for the ATLAS Event Filter Selection Software

    ATLAS is one of the four LHC experiments that will start data taking in 2007, designed to cover a wide range of physics topics. The ATLAS trigger system has to cope with an input rate of 40 MHz and 23 interactions per bunch crossing. It is divided into three levels. The first (hardware based) provides a signature that is confirmed by the following trigger levels (software based), which run a sequence of algorithms and validate the signal step by step, examining only the region of the detector indicated by the first trigger level (seeding). This paper presents the performance of one such sequence, which runs at the Event Filter level (the third level) and is composed of calorimeter clustering, track reconstruction and cluster-track matching.
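
    A minimal sketch of the final matching step follows: an electron hypothesis survives if a reconstructed track points at the calorimeter cluster with compatible momentum. The matching windows and the E/p range are invented values, not the ATLAS Event Filter cuts.

```python
# Illustrative sketch only: the final cluster-track matching step of the seeded
# electron sequence. The matching windows and E/p range are invented values,
# not the ATLAS Event Filter cuts.
from dataclasses import dataclass

@dataclass
class Cluster:
    eta: float
    phi: float
    et_gev: float

@dataclass
class Track:
    eta: float
    phi: float
    pt_gev: float

def matches(cluster: Cluster, track: Track,
            deta_max: float = 0.05, dphi_max: float = 0.05) -> bool:
    """Accept the electron hypothesis if a track points at the cluster and the
    cluster energy is compatible with the track momentum (E/p near one).
    Phi wraparound at +/-pi is ignored for brevity."""
    e_over_p = cluster.et_gev / track.pt_gev
    return (abs(cluster.eta - track.eta) < deta_max
            and abs(cluster.phi - track.phi) < dphi_max
            and 0.7 < e_over_p < 1.7)
```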