
    Dethinning Extensive Air Shower Simulations

    We describe a method for restoring information lost during statistical thinning in extensive air shower simulations. By converting the weighted particles from thinned simulations into swarms of particles with similar characteristics, we obtain a result that is essentially identical to the thinned shower and very similar to non-thinned simulations of showers. We call this method dethinning. Using non-thinned showers on a large scale is impossible because of unrealistic CPU time requirements, but with thinned showers that have been dethinned, it is possible to carry out large-scale simulation studies of the detector response for ultra-high-energy cosmic ray surface arrays. The dethinning method is described in detail, and comparisons are presented with parent thinned showers and with non-thinned showers.
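    The abstract does not spell out the conversion step; the minimal sketch below illustrates one way a weighted particle could be replaced by a swarm of unit-weight particles. The function name, the Poisson sampling of the swarm size, and the position/time spread parameters are illustrative assumptions, not the authors' actual prescription.

```python
import numpy as np

rng = np.random.default_rng(42)

def dethin_particle(x, y, t, energy, weight, pos_sigma=5.0, time_sigma=2.0):
    """Replace one weighted particle with a swarm of unit-weight particles.

    Illustrative sketch only: the swarm size is Poisson-sampled so that its
    expectation equals the thinning weight, and each clone is smeared around
    the parent's ground position (pos_sigma, metres) and arrival time
    (time_sigma, nanoseconds); these spreads are assumed values.
    """
    n_clones = rng.poisson(weight)
    clones = []
    for _ in range(n_clones):
        clones.append({
            "x": x + rng.normal(0.0, pos_sigma),
            "y": y + rng.normal(0.0, pos_sigma),
            "t": t + rng.normal(0.0, time_sigma),
            "energy": energy,  # the clone keeps the parent particle's energy
            "weight": 1.0,
        })
    return clones

# Example: a thinned particle of weight 37.2 becomes roughly 37 unit-weight clones.
swarm = dethin_particle(x=120.0, y=-45.0, t=350.0, energy=12.5, weight=37.2)
print(len(swarm))
```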

    Computational Particle Physics for Event Generators and Data Analysis

    High-energy physics data analysis relies heavily on the comparison between experimental and simulated data, as stressed lately by the Higgs search at the LHC and the recent identification of a Higgs-like new boson. The first link in the full simulation chain is event generation, both for backgrounds and for expected signals. Nowadays event generators are based on the automatic computation of the matrix element or amplitude for each process of interest. Moreover, recent analysis techniques based on the matrix element likelihood method assign to every event probabilities of belonging to each of a given set of possible processes. This method, originally used for the top mass measurement, is computing intensive but has shown its power at the LHC in extracting the new boson signal from the background. Serving both needs, the automatic calculation of matrix elements is therefore more than ever of prime importance for particle physics. Initiated in the eighties, the techniques have matured for the lowest-order (tree-level) calculations, but they become complex and CPU-time consuming when higher-order calculations involving loop diagrams are necessary, as for QCD processes at the LHC. New calculation techniques for next-to-leading order (NLO) have surfaced, making possible the generation of processes with many final-state particles (up to 6). While NLO calculations are in many cases under control, although not yet fully automatic, even higher-precision calculations involving processes at two loops or more remain a big challenge. After a short introduction to particle physics and to the related theoretical framework, we review some of the computing techniques that have been developed to make these calculations automatic. The main available packages and some of the most important applications for simulation and data analysis, in particular at the LHC, are also summarized.
    Comment: 19 pages, 11 figures, Proceedings of CCP (Conference on Computational Physics), Oct. 2012, Osaka (Japan), in IOP Journal of Physics: Conference Series
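    As a rough illustration of the matrix element likelihood idea, the sketch below turns per-process densities into normalized per-event probabilities. In a real analysis each density would come from integrating the squared matrix element |M|^2 over parton configurations folded with detector transfer functions; the Gaussian stand-ins here are purely assumed for illustration.

```python
import numpy as np

def event_probabilities(event, processes):
    """Assign an event normalized probabilities of belonging to each process.

    `processes` maps a process name to a density p_i(event); here the caller
    supplies toy stand-ins for what would be matrix-element-based densities.
    """
    densities = {name: density(event) for name, density in processes.items()}
    total = sum(densities.values())
    return {name: d / total for name, d in densities.items()}

# Toy densities for a "signal" and a "background" hypothesis: Gaussian shapes
# in a single observable (e.g. an invariant mass), purely illustrative.
processes = {
    "signal":     lambda m: np.exp(-0.5 * ((m - 125.0) / 2.0) ** 2),
    "background": lambda m: np.exp(-0.5 * ((m - 110.0) / 15.0) ** 2),
}

print(event_probabilities(124.3, processes))
```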

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze these large and complex datasets. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
    Comment: 77 pages, 13 figures; draft report, subject to further revision

    Monte Carlo simulations for the Pierre Auger Observatory using the VO auger grid resources

    The Pierre Auger Observatory, located near Malargüe, Argentina, is the world’s largest cosmic-ray detector. It comprises a 3000 km² surface detector and 27 fluorescence telescopes, which measure the lateral and longitudinal distributions of the many millions of air-shower particles produced in the interactions initiated by a cosmic ray in the Earth’s atmosphere. The determination of the nature of cosmic rays and studies of the detector performance rely on extensive Monte Carlo simulations describing the physics processes occurring in extensive air showers and the detector responses. The aim of the Monte Carlo simulations task is to produce and provide the Auger Collaboration with reference libraries used in a wide variety of analyses. All multipurpose detector simulations are currently produced in local clusters using Slurm and HTCondor. The bulk of the shower simulations are produced on the grid, via the Virtual Organization auger, using the DIRAC middleware. Job submission is done via Python scripts using the DIRAC API. The Auger site is undergoing a major upgrade, which includes the installation of new types of detectors and demands increased simulation resources. The novel detection of the radio component of extensive air showers is the most challenging endeavor, requiring dedicated shower simulations with very long computation times that are not optimized for grid production. For data redundancy, the simulations are stored on the Lyon server and the grid Disk Pool Manager and are accessible to Auger members via iRODS and DIRAC, respectively. The CernVM File System is used for software distribution, and the Auger Offline software will soon also be made available there.
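    As an aside, a minimal sketch of what job submission through the DIRAC Python API can look like is given below. The wrapper script, sandbox files, job group, and CPU-time value are placeholders, not the collaboration's actual production scripts.

```python
# Sketch of submitting one shower-simulation job through the DIRAC python API.
# All file names and parameter values below are illustrative placeholders.
from DIRAC.Core.Base import Script
Script.parseCommandLine(ignoreErrors=True)  # initialize the DIRAC environment

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("auger_shower_sim_000001")
job.setJobGroup("auger_mc_production")  # hypothetical production group label
job.setExecutable("run_shower.sh",      # placeholder wrapper around the shower code
                  arguments="steering_000001.inp",
                  logFile="shower_000001.log")
job.setInputSandbox(["run_shower.sh", "steering_000001.inp"])
job.setOutputSandbox(["shower_000001.log"])
job.setCPUTime(86400)  # one day of CPU time, illustrative

result = Dirac().submitJob(job)
if result["OK"]:
    print("Submitted job", result["Value"])
else:
    print("Submission failed:", result["Message"])
```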

    Computational Methods in Science and Engineering: Proceedings of the Workshop SimLabs@KIT, November 29-30, 2010, Karlsruhe, Germany

    In this proceedings volume we provide a compilation of article contributions equally covering applications from different research fields, ranging from capacity up to capability computing. Besides classical computing aspects such as parallelization, the focus of these proceedings is on multi-scale approaches and methods for tackling algorithm and data complexity. Practical aspects regarding the usage of the HPC infrastructure and the tools and software available at the SCC are also presented.

    Fast algorithm for real-time rings reconstruction

    The GAP project is dedicated to studying the application of GPUs in several contexts in which a real-time response is needed to take decisions. The definition of real-time depends on the application under study, with response times ranging from microseconds up to several hours in the case of very computing-intensive tasks. During this conference we presented our work on low-level triggers [1][2] and high-level triggers [3] in high-energy physics experiments, and on specific applications to nuclear magnetic resonance (NMR) [4][5] and cone-beam CT [6]. Apart from the study of dedicated solutions to decrease the latency due to data transport and preparation, the computing algorithms play an essential role in any GPU application. In this contribution, we show an original algorithm, developed for trigger applications, to accelerate ring reconstruction in RICH detectors when it is not possible to obtain reconstruction seeds from external trackers.
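    The original seedless trigger algorithm is not reproduced here; as a generic point of reference, the sketch below performs a standard algebraic (Kasa) single-ring fit to photodetector hit coordinates, one simple way of reconstructing a ring without seeds from external trackers. The multi-ring, GPU-parallel aspects of the GAP work are not covered by this sketch.

```python
import numpy as np

def fit_ring(x, y):
    """Algebraic (Kasa) single-ring fit to hit coordinates.

    Writes the circle equation as x^2 + y^2 = A*x + B*y + C and solves the
    linear least-squares system; the centre is (A/2, B/2) and the radius is
    sqrt(C + A^2/4 + B^2/4). Generic fit, not the GAP trigger algorithm.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    design = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    (A, B, C), *_ = np.linalg.lstsq(design, rhs, rcond=None)
    xc, yc = A / 2.0, B / 2.0
    radius = np.sqrt(C + xc**2 + yc**2)
    return xc, yc, radius

# Toy usage: noisy hits on a ring of radius 10 centred at (3, -2).
theta = np.random.uniform(0.0, 2.0 * np.pi, 30)
hits_x = 3.0 + 10.0 * np.cos(theta) + np.random.normal(0.0, 0.3, 30)
hits_y = -2.0 + 10.0 * np.sin(theta) + np.random.normal(0.0, 0.3, 30)
print(fit_ring(hits_x, hits_y))
```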