
    The Topological Processor for the future ATLAS Level-1 Trigger: from design to commissioning

    The ATLAS detector at the LHC will require a trigger system to efficiently select events down to a manageable event storage rate of about 400 Hz. By 2015 the LHC instantaneous luminosity will be increased to 3 × 10^34 cm^-2 s^-1, an unprecedented challenge for the ATLAS trigger system. To cope with the higher event rate and efficiently select events that are relevant from a physics point of view, a new element will be included in the Level-1 trigger scheme after 2015: the Topological Processor (L1Topo). The L1Topo system, currently being developed at CERN, will initially consist of an ATCA crate and two L1Topo modules. A high-density opto-electrical converter (AVAGO miniPOD) drives up to 1.6 Tb/s of data from the calorimeter and muon detectors into two high-end FPGAs (Virtex7-690), to be processed in about 200 ns. The design has been optimized to guarantee excellent signal integrity of the high-speed links and low-latency data transmission on the Real Time Data Path (RTDP). L1Topo receives data in a standalone protocol from the calorimeters and muon detectors and processes them with several topological algorithms implemented in VHDL. These algorithms perform geometrical cuts and correlations and calculate complex observables such as the invariant mass. The output of such topological cuts is sent to the Central Trigger Processor. This talk focuses on the high-density design of L1Topo, which allows several hundred optical links (up to 13 Gb/s each) to be processed using ordinary PCB material. Test results obtained on the L1Topo prototypes to characterize the high-speed links (eye diagrams, bit error rate, margin analysis) and the logic resource utilization of the algorithms are discussed. Comment: 5 pages, 6 figures
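
    As an illustration of the kind of observable such topological algorithms compute, the sketch below evaluates the invariant mass of two trigger objects from their transverse energies and (eta, phi) coordinates. The formula is the standard massless two-body approximation; the object values and the threshold are hypothetical, and this is a Python sketch of the arithmetic, not the VHDL firmware implementation.

```python
import math

def invariant_mass(et1, eta1, phi1, et2, eta2, phi2):
    """Invariant mass of two (approximately massless) trigger objects,
    using M^2 = 2 * ET1 * ET2 * (cosh(eta1 - eta2) - cos(phi1 - phi2))."""
    m2 = 2.0 * et1 * et2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
    return math.sqrt(max(m2, 0.0))

# Hypothetical example: two jets with ET of 50 and 40 GeV.
m = invariant_mass(50.0, 1.2, 0.3, 40.0, -0.9, 2.8)
MASS_CUT_GEV = 200.0  # illustrative threshold, not an actual trigger-menu value
print(f"m = {m:.1f} GeV, accept = {m > MASS_CUT_GEV}")
```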

    Data processing and online reconstruction

    In the upcoming upgrades for Run 3 and Run 4, the LHC will significantly increase the Pb-Pb and pp interaction rates. This goes along with upgrades of all experiments, ALICE, ATLAS, CMS, and LHCb, related to both the detectors and the computing. The online processing farms must employ faster, more efficient reconstruction algorithms to cope with the increased data rates, and data compression factors must increase to fit the data into the affordable capacity for permanent storage. Due to different operating conditions and aims, the experiments follow different approaches, but there are several common trends, such as more extensive online computing and the adoption of hardware accelerators. This paper gives an overview of and compares the data processing approaches and the online computing farms of the LHC experiments today in Run 2 and for the upcoming Runs 3 and 4. Comment: 6 pages, 0 figures, contribution to the LHCP2018 conference
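
    As a rough illustration of the compression requirement described above, the short calculation below derives the compression factor needed to fit a given detector readout rate into a given storage bandwidth. The numbers are illustrative assumptions, not figures from the paper.

```python
def required_compression_factor(readout_tb_per_s: float,
                                storage_gb_per_s: float) -> float:
    """Compression factor needed so the readout stream fits the storage bandwidth."""
    readout_gb_per_s = readout_tb_per_s * 1000.0
    return readout_gb_per_s / storage_gb_per_s

# Illustrative assumption: a few TB/s out of the detector,
# of order 100 GB/s affordable for permanent storage.
factor = required_compression_factor(readout_tb_per_s=3.5, storage_gb_per_s=100.0)
print(f"required compression factor ~ {factor:.0f}x")
```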

    GPU-based Real-time Triggering in the NA62 Experiment

    Over the last few years the GPGPU (General-Purpose computing on Graphics Processing Units) paradigm has represented a remarkable development in the world of computing. Computing for high-energy physics is no exception: several works have demonstrated the effectiveness of integrating GPU-based systems into the high-level triggers of different experiments. On the other hand, the use of GPUs in low-level trigger systems, characterized by stringent real-time constraints such as a tight time budget and high throughput, poses several challenges. In this paper we focus on the low-level trigger of the CERN NA62 experiment, investigating the use of real-time computing on GPUs in this synchronous system. Our approach aims at harvesting the GPU computing power to build, in real time, refined physics-related trigger primitives for the RICH detector, as the knowledge of the Cerenkov ring parameters allows stringent conditions to be placed on data selection at trigger level. The latencies of all components of the trigger chain have been analyzed, showing that networking is the most critical one. To keep the latency of the data transfer task under control, we devised NaNet, an FPGA-based PCIe Network Interface Card (NIC) with GPUDirect capabilities. For the processing task, we developed dedicated multiple-ring trigger algorithms to leverage the parallel architecture of GPUs and increase the processing throughput to keep up with the high event rate. Results obtained during the first months of the 2016 NA62 run are presented and discussed.
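
    To give a flavour of the kind of trigger primitive involved, the sketch below performs a simple algebraic least-squares circle fit (Kasa method) to a set of hit positions, as one might do for a single Cerenkov ring. It is a toy single-ring fit with made-up numbers, not the NA62 multi-ring GPU algorithm.

```python
import numpy as np

def fit_ring(x, y):
    """Algebraic least-squares circle fit (Kasa method) to hit positions.

    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense and
    returns (center_x, center_y, radius)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, radius

# Toy event: 20 hits on a ring of radius 110 mm centred at (5, -3) mm,
# smeared by 2 mm (illustrative numbers only).
rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, 20)
x = 5.0 + 110.0 * np.cos(phi) + rng.normal(0, 2.0, 20)
y = -3.0 + 110.0 * np.sin(phi) + rng.normal(0, 2.0, 20)
print(fit_ring(x, y))  # ~ (5, -3, 110)
```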

    FPGA-accelerated machine learning inference as a service for particle physics computing

    New heterogeneous computing paradigms on dedicated hardware with increased parallelization, such as Field Programmable Gate Arrays (FPGAs), offer exciting solutions with large potential gains. The growing applications of machine learning algorithms in particle physics for simulation, reconstruction, and analysis are naturally deployed on such platforms. We demonstrate that the acceleration of machine learning inference as a web service represents a heterogeneous computing solution for particle physics experiments that potentially requires minimal modification to the current computing model. As examples, we retrain the ResNet-50 convolutional neural network to demonstrate state-of-the-art performance for top quark jet tagging at the LHC and apply a ResNet-50 model with transfer learning for neutrino event classification. Using Project Brainwave by Microsoft to accelerate the ResNet-50 image classification model, we achieve average inference times of 60 (10) milliseconds with our experimental physics software framework using Brainwave as a cloud (edge or on-premises) service, representing an improvement by a factor of approximately 30 (175) in model inference latency over traditional CPU inference in current experimental hardware. A single FPGA service accessed by many CPUs achieves a throughput of 600-700 inferences per second using an image batch of one, comparable to large batch-size GPU throughput and significantly better than small batch-size GPU throughput. Deployed as an edge or cloud service for the particle physics computing model, coprocessor accelerators can have a higher duty cycle and are potentially much more cost-effective. Comment: 16 pages, 14 figures, 2 tables
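
    The pattern described here amounts to sending preprocessed inputs to a remote accelerator and receiving classification scores back. The sketch below shows a minimal client of that kind against a hypothetical HTTP endpoint; the URL, payload schema, and response format are assumptions and do not reflect the Brainwave interface or the authors' experiment software framework.

```python
import numpy as np
import requests  # third-party HTTP client

# Hypothetical inference endpoint; not an actual service used by the authors.
ENDPOINT = "http://fpga-service.example.org/v1/models/resnet50:predict"

def classify(image: np.ndarray) -> list[float]:
    """Send one preprocessed image (e.g. a jet image) to the remote
    accelerator service and return the class scores (batch size of one)."""
    payload = {"instances": [image.astype(np.float32).tolist()]}
    reply = requests.post(ENDPOINT, json=payload, timeout=1.0)
    reply.raise_for_status()
    return reply.json()["predictions"][0]

# Example call (requires a running service):
# scores = classify(np.zeros((224, 224, 3)))
```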

    Trigger and data acquisition

    The lectures address some of the issues of triggering and data acquisition in large high-energy physics experiments. Emphasis is placed on hadron-collider experiments, which present a particularly challenging environment for event selection and data collection. However, the lectures also explain how T/DAQ systems have evolved over the years to meet new challenges. Some examples are given from early experience with LHC T/DAQ systems during the 2008 single-beam operations. Comment: 32 pages, lectures given at the 5th CERN-Latin-American School of High-Energy Physics, Recinto Quirama, Colombia, 15-28 Mar 2009

    Triggering at High Luminosity Colliders

    This article discusses the techniques used to select promising events online at high-energy and high-luminosity colliders. After a brief introduction explaining some general aspects of triggering, the more specific implementation options for well-established machines like the Tevatron and the Large Hadron Collider are presented. An outlook is given on the difficulties that need to be met when designing trigger systems for the Super Large Hadron Collider or the International Linear Collider. Comment: Accepted for publication in New Journal of Physics
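
    As a reminder of the scale of the online selection problem at such machines, the short estimate below derives the inelastic interaction rate and pileup from the instantaneous luminosity, and the overall rejection implied by a recorded-event rate of a few hundred Hz (as quoted for ATLAS above). The cross-section and luminosity values are rough, commonly quoted numbers used only for illustration.

```python
# Rough rate estimate at a high-luminosity hadron collider (illustrative numbers).
LUMINOSITY = 1e34          # cm^-2 s^-1
SIGMA_INEL = 80e-27        # ~80 mb inelastic pp cross-section, in cm^2
BUNCH_SPACING = 25e-9      # s

interaction_rate = LUMINOSITY * SIGMA_INEL        # interactions per second
pileup = interaction_rate * BUNCH_SPACING         # interactions per bunch crossing
storage_rate = 400.0                              # Hz, recorded-event rate
rejection = (1.0 / BUNCH_SPACING) / storage_rate  # bunch crossings per stored event

print(f"rate ~ {interaction_rate:.1e} Hz, pileup ~ {pileup:.0f}, "
      f"rejection ~ 1 in {rejection:.0e}")
```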