
    GPU-based Real-time Triggering in the NA62 Experiment

    Over the last few years the GPGPU (General-Purpose computing on Graphics Processing Units) paradigm has represented a remarkable development in the world of computing. Computing for High-Energy Physics is no exception: several works have demonstrated the effectiveness of integrating GPU-based systems into the high-level triggers of different experiments. On the other hand, the use of GPUs in low-level trigger systems, characterized by stringent real-time constraints such as a tight time budget and high throughput, poses several challenges. In this paper we focus on the low-level trigger of the CERN NA62 experiment, investigating the use of real-time computing on GPUs in this synchronous system. Our approach aims at harvesting the GPU computing power to build, in real time, refined physics-related trigger primitives for the RICH detector, as knowledge of the Cherenkov ring parameters allows stringent conditions for data selection to be applied at trigger level. The latencies of all components of the trigger chain have been analyzed, showing that networking is the most critical one. To keep the latency of the data transfer task under control, we devised NaNet, an FPGA-based PCIe Network Interface Card (NIC) with GPUDirect capabilities. For the processing task, we developed dedicated multiple-ring trigger algorithms that leverage the parallel architecture of GPUs and increase the processing throughput to keep up with the high event rate. Results obtained during the first months of the 2016 NA62 run are presented and discussed.
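    The abstract does not spell out the online multi-ring algorithms, but the kind of per-event arithmetic involved can be illustrated. The sketch below, in CUDA, fits a single Cherenkov ring per event with the closed-form Kasa least-squares circle fit, one thread per event; the flat hit layout, the max_hits padding, and all names are assumptions made for illustration, not the NA62/NaNet data format.

    // Hedged sketch: single-ring Kasa (algebraic least-squares) circle fit,
    // one GPU thread per event. Data layout and names are hypothetical.
    #include <cmath>
    #include <cuda_runtime.h>

    struct Ring { float xc, yc, r; };   // fitted centre and radius

    __global__ void fit_rings(const float* __restrict__ hx,   // hit x, padded
                              const float* __restrict__ hy,   // hit y, padded
                              const int*   __restrict__ nhits,
                              Ring* out, int n_events, int max_hits)
    {
        int ev = blockIdx.x * blockDim.x + threadIdx.x;
        if (ev >= n_events) return;

        const float* x = hx + (size_t)ev * max_hits;
        const float* y = hy + (size_t)ev * max_hits;
        int n = nhits[ev];
        if (n < 3) { out[ev] = {0.f, 0.f, -1.f}; return; }  // underdetermined

        // Centre the hits on their mean to condition the linear system.
        float mx = 0.f, my = 0.f;
        for (int i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;

        // Accumulate the second and third moments used by the Kasa fit.
        float suu = 0, svv = 0, suv = 0, suuu = 0, svvv = 0, suvv = 0, svuu = 0;
        for (int i = 0; i < n; ++i) {
            float u = x[i] - mx, v = y[i] - my;
            suu  += u * u;     svv  += v * v;     suv += u * v;
            suuu += u * u * u; svvv += v * v * v;
            suvv += u * v * v; svuu += v * u * u;
        }

        // Solve the 2x2 normal equations for the centred ring centre.
        float det = suu * svv - suv * suv;
        if (fabsf(det) < 1e-12f) { out[ev] = {0.f, 0.f, -1.f}; return; }
        float r1 = 0.5f * (suuu + suvv);
        float r2 = 0.5f * (svvv + svuu);
        float uc = (r1 * svv - r2 * suv) / det;
        float vc = (r2 * suu - r1 * suv) / det;

        out[ev] = { mx + uc, my + vc,
                    sqrtf(uc * uc + vc * vc + (suu + svv) / n) };
    }

    In the real trigger, multiple rings per event must be seeded and fitted within a fixed latency budget, and event batches arrive over NaNet directly into GPU memory via GPUDirect; the sketch shows only the per-ring arithmetic, which parallelizes trivially across events.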

    Off-line computing for experimental high-energy physics

    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.

    Computational Particle Physics for Event Generators and Data Analysis

    High-energy physics data analysis relies heavily on the comparison between experimental and simulated data, as stressed lately by the Higgs search at the LHC and the recent identification of a Higgs-like new boson. The first link in the full simulation chain is event generation, both for backgrounds and for expected signals. Nowadays event generators are based on the automatic computation of the matrix element, or amplitude, for each process of interest. Moreover, recent analysis techniques based on the matrix element likelihood method assign to every event a probability of belonging to any of a given set of possible processes. This method, originally used for the top mass measurement, although computing intensive, has shown its power at the LHC in extracting the new boson signal from the background. Serving both needs, the automatic calculation of matrix elements is therefore more than ever of prime importance for particle physics. Initiated in the eighties, the techniques have matured for lowest-order (tree-level) calculations, but become complex and CPU-time consuming when higher-order calculations involving loop diagrams are necessary, as for QCD processes at the LHC. New calculation techniques for next-to-leading order (NLO) have surfaced, making possible the generation of processes with many final-state particles (up to 6). While NLO calculations are in many cases under control, although not yet fully automatic, even higher-precision calculations involving processes at two loops or more remain a big challenge. After a short introduction to particle physics and to the related theoretical framework, we review some of the computing techniques that have been developed to make these calculations automatic. The main available packages and some of the most important applications for simulation and data analysis, in particular at the LHC, are also summarized.
    Comment: 19 pages, 11 figures, Proceedings of CCP (Conference on Computational Physics), Oct. 2012, Osaka, Japan, in IOP Journal of Physics: Conference Series
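    As a reference for the matrix element likelihood method mentioned above, a standard form of the per-event probability under a process hypothesis and of the discriminant separating signal from background reads as follows (this is the conventional textbook notation, not taken from the paper):

    % Per-event probability: parton-level configuration y, phase-space
    % measure d\Phi(y), parton distribution functions f_{1,2}, transfer
    % function W(x|y) mapping y to the reconstructed event x, and the
    % cross section \sigma_\alpha as normalization.
    P(x \mid \alpha) = \frac{1}{\sigma_\alpha}
        \int \mathrm{d}\Phi(y)\, f_1(q_1)\, f_2(q_2)\,
        \bigl|\mathcal{M}_\alpha(y)\bigr|^2\, W(x \mid y),
    \qquad
    D(x) = \frac{P(x \mid S)}{P(x \mid S) + P(x \mid B)}

    The integral over the parton-level configuration is what makes the method computing intensive, as noted in the abstract: every event requires a phase-space integration of the squared matrix element for each hypothesis.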

    Status of CMS and preparations for first physics

    The status of the CMS experiment is described. After a brief review of the detector design and a short overview of the first 5 years of assembly, the focus of this presentation is the parallel activities of completion and commissioning over the last 2 years and the readiness of CMS for the exciting prospect of first LHC operation.
    Comment: ICHEP08, Philadelphia, USA, July 2008. 9 pages