753 research outputs found

    The Virtual Monte Carlo

    The concept of the Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changes to the user code, such as the geometry definition, the detector response simulation, or the input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework so that they can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA, and simple examples of usage, will be described.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 8 pages, LaTeX, 6 eps figures. PSN THJT006. See http://root.cern.ch/root/vmc/VirtualMC.htm

    GeantV: Results from the prototype of concurrent vector particle transport simulation in HEP

    Full detector simulation was among the largest CPU consumers in all CERN experiment software stacks for the first two runs of the Large Hadron Collider (LHC). In the early 2010s, the projections were that simulation demands would scale linearly with the luminosity increase, compensated only partially by an increase of computing resources. The extension of fast simulation approaches to more use cases, covering a larger fraction of the simulation budget, is only part of the solution, due to intrinsic precision limitations. The remainder corresponds to speeding up the simulation software by several factors, which is out of reach using simple optimizations on the current code base. In this context, the GeantV R&D project was launched, aiming to redesign the legacy particle transport codes so that they benefit from fine-grained parallelism features such as vectorization, but also from increased code and data locality. This paper presents in detail the results and achievements of this R&D, as well as the conclusions and lessons learnt from the beta prototype.
    Comment: 34 pages, 26 figures, 24 tables

    ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze these data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, ROOT offers packages for complex data modeling and fitting, as well as multivariate classification based on machine learning techniques. A central piece of these analysis tools is the set of histogram classes, which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like PostScript and PDF, or in bitmap formats like JPG or GIF. The result can also be stored into ROOT macros that allow a full recreation and rework of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once the development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation, or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks - e.g. data mining in HEP - by using PROOF, which will take care of optimally distributing the work over the available resources in a transparent way.
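The "vertical data storage" mentioned above refers to a columnar layout: each variable (branch) is stored contiguously, so an analysis reading one variable does not have to touch the others. A minimal sketch of that idea in Python; the names `ColumnTree`, `branch` and `fill` are invented for illustration, loosely echoing ROOT's TTree Branch/Fill pattern rather than the actual ROOT API:

```python
class ColumnTree:
    """Toy columnar ("vertical") event store. Invented names; this is
    not the real ROOT TTree interface, only the storage idea."""

    def __init__(self):
        self._columns = {}

    def branch(self, name):
        """Declare a variable, stored as its own contiguous column."""
        self._columns[name] = []

    def fill(self, **values):
        """Append one event; each value lands in its variable's column."""
        for name, value in values.items():
            self._columns[name].append(value)

    def column(self, name):
        """Return the full column for one variable."""
        return self._columns[name]


t = ColumnTree()
t.branch("pt")
t.branch("eta")
t.fill(pt=1.2, eta=0.3)
t.fill(pt=4.5, eta=-0.7)
# A per-variable statistic touches a single column, not the full
# event records -- the point of vertical storage for analysis.
mean_pt = sum(t.column("pt")) / len(t.column("pt"))
```

In a row-wise layout the same computation would scan every event record in full; the columnar layout is what makes scanning one variable over very large data sets cheap.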

    Offloading electromagnetic shower transport to GPUs

    Making general particle transport simulation for high-energy physics (HEP) single-instruction-multiple-thread (SIMT) friendly, to take advantage of accelerator hardware, is an important alternative for boosting the throughput of simulation applications. To date, this challenge is not yet resolved, due to difficulties in mapping the complexity of Geant4 components and workflow to the massive parallelism features exposed by graphics processing units (GPU). The AdePT project is one of the R&D initiatives tackling this limitation and exploring GPUs as potential accelerators for offloading some part of the CPU simulation workload. Our main target is to implement a complete electromagnetic shower demonstrator working on the GPU. The project is the first to create a full prototype of a realistic electron, positron, and gamma electromagnetic shower simulation on GPU, implemented either as a standalone application or as an extension of the standard Geant4 CPU workflow. Our prototype currently provides a platform to explore many optimisations and different approaches. We present the most recent results and initial conclusions of our work, using both a standalone GPU performance analysis and a first implementation of a hybrid workflow based on Geant4 on the CPU and AdePT on the GPU.
    Comment: 8 pages, 4 figures, 20th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2021), to be published in Journal of Physics: Conference Series, editor Andrei Gheata

    Detector Simulation Challenges for Future Accelerator Experiments

    Detector simulation is a key component for studies on prospective future high-energy colliders, the design, optimization, testing and operation of particle physics experiments, and the analysis of the data collected to perform physics measurements. This review starts from the current state-of-the-art technology applied to detector simulation in high-energy physics and elaborates on the evolution of software tools developed to address the challenges posed by future accelerator programs beyond the HL-LHC era, into the 2030–2050 period. New accelerator, detector, and computing technologies set the stage for an exercise in how detector simulation will serve the needs of the high-energy physics programs of the mid-21st century, and its potential impact on other research domains.

    Multiplicity dependence of jet-like two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV

    Two-particle angular correlations between unidentified charged trigger and associated particles are measured by the ALICE detector in p-Pb collisions at a nucleon-nucleon centre-of-mass energy of 5.02 TeV. The transverse-momentum range $0.7 < p_{\rm T,assoc} < p_{\rm T,trig} < 5.0$ GeV/$c$ is examined, to include correlations induced by jets originating from low momentum-transfer scatterings (minijets). The correlations, expressed as associated yield per trigger particle, are obtained in the pseudorapidity range $|\eta|<0.9$. The near-side long-range pseudorapidity correlations observed in high-multiplicity p-Pb collisions are subtracted from both near-side short-range and away-side correlations in order to remove the non-jet-like components. The yields in the jet-like peaks are found to be invariant with event multiplicity, with the exception of events with low multiplicity. This invariance is consistent with the particles being produced via the incoherent fragmentation of multiple parton-parton scatterings, while the yield related to the previously observed ridge structures is not jet-related. The number of uncorrelated sources of particle production is found to increase linearly with multiplicity, suggesting no saturation of the number of multi-parton interactions even in the highest-multiplicity p-Pb collisions. Further, in the intermediate-multiplicity region this number scales with the number of binary nucleon-nucleon collisions estimated with a Glauber Monte Carlo simulation.
    Comment: 23 pages, 6 captioned figures, 1 table, authors from page 17, published version, figures at http://aliceinfo.cern.ch/ArtSubmission/node/161

    Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV

    The elliptic ($v_2$), triangular ($v_3$), and quadrangular ($v_4$) azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions and (anti-)protons in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV with the ALICE detector at the Large Hadron Collider. Results obtained with the event plane and four-particle cumulant methods are reported for the pseudo-rapidity range $|\eta|<0.8$ at different collision centralities and as a function of transverse momentum, $p_{\rm T}$, out to $p_{\rm T}=20$ GeV/$c$. The observed non-zero elliptic and triangular flow depends only weakly on transverse momentum for $p_{\rm T}>8$ GeV/$c$. The small $p_{\rm T}$ dependence of the difference between elliptic flow results obtained from the event plane and four-particle cumulant methods suggests a common origin of flow fluctuations up to $p_{\rm T}=8$ GeV/$c$. The magnitude of the (anti-)proton elliptic and triangular flow is larger than that of pions out to at least $p_{\rm T}=8$ GeV/$c$, indicating that the particle-type dependence persists out to high $p_{\rm T}$.
    Comment: 16 pages, 5 captioned figures, authors from page 11, published version, figures at http://aliceinfo.cern.ch/ArtSubmission/node/186
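As a toy illustration of what an azimuthal flow coefficient measures (not the event-plane or cumulant analyses used in the paper), the sketch below samples particle angles from a single-harmonic distribution dN/dphi proportional to 1 + 2 v_2 cos(2 phi) and recovers v_2 as the mean of cos(2 phi); the coefficient, sample size and seed are all invented:

```python
import math
import random

def sample_phi(v2, n, rng):
    """Rejection-sample angles from p(phi) ~ 1 + 2*v2*cos(2*phi)."""
    phis = []
    pmax = 1.0 + 2.0 * v2  # maximum of the unnormalised density
    while len(phis) < n:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.uniform(0.0, pmax) < 1.0 + 2.0 * v2 * math.cos(2.0 * phi):
            phis.append(phi)
    return phis

rng = random.Random(12345)
true_v2 = 0.10                      # invented input coefficient
phis = sample_phi(true_v2, 100_000, rng)
# For this single-harmonic toy, <cos 2phi> is an unbiased v2 estimator:
# integrating cos(2phi) against 1 + 2*v2*cos(2phi) over a period gives v2.
v2_est = sum(math.cos(2.0 * p) for p in phis) / len(phis)
```

Real measurements use the event-plane angle or multi-particle cumulants precisely because non-flow correlations and fluctuations, ignored in this toy, bias the simple average.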

    Centrality dependence of charged particle production at large transverse momentum in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV

    The inclusive transverse momentum ($p_{\rm T}$) distributions of primary charged particles are measured in the pseudo-rapidity range $|\eta|<0.8$ as a function of event centrality in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV with ALICE at the LHC. The data are presented in the $p_{\rm T}$ range $0.15<p_{\rm T}<50$ GeV/$c$ for nine centrality intervals from 70-80% to 0-5%. The Pb-Pb spectra are presented in terms of the nuclear modification factor $R_{\rm AA}$ using a pp reference spectrum measured at the same collision energy. We observe that the suppression of high-$p_{\rm T}$ particles strongly depends on event centrality. In central collisions (0-5%) the yield is most suppressed, with $R_{\rm AA}\approx0.13$ at $p_{\rm T}=6$-$7$ GeV/$c$. Above $p_{\rm T}=7$ GeV/$c$, there is a significant rise in the nuclear modification factor, which reaches $R_{\rm AA}\approx0.4$ for $p_{\rm T}>30$ GeV/$c$. In peripheral collisions (70-80%), the suppression is weaker, with $R_{\rm AA}\approx0.7$ almost independently of $p_{\rm T}$. The measured nuclear modification factors are compared to other measurements and model calculations.
    Comment: 17 pages, 4 captioned figures, 2 tables, authors from page 12, published version, figures at http://aliceinfo.cern.ch/ArtSubmission/node/284
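The nuclear modification factor used in the abstract is defined per $p_{\rm T}$ bin as R_AA = (dN_AA/dp_T) / (<N_coll> * dN_pp/dp_T): values below one mean the Pb-Pb yield falls short of a scaled superposition of independent pp collisions. A minimal sketch of the arithmetic, with invented per-bin yields and an invented <N_coll>:

```python
# Hypothetical per-pT-bin charged-particle yields (arbitrary units);
# none of these numbers come from the measurement.
yield_pbpb = [1300.0, 130.0, 40.0]   # toy central Pb-Pb spectrum
yield_pp = [2.0, 0.4, 0.08]          # toy pp reference spectrum
n_coll = 1600.0                      # toy <N_coll> from a Glauber model

def r_aa(aa, pp, ncoll):
    """Per-bin nuclear modification factor N_AA / (<N_coll> * N_pp)."""
    return [a / (ncoll * p) for a, p in zip(aa, pp)]

raa = r_aa(yield_pbpb, yield_pp, n_coll)
# All toy bins come out below 1, i.e. suppressed relative to
# binary-collision scaling.
```

In the real analysis both spectra carry efficiency corrections and <N_coll> carries a Glauber-model uncertainty, all of which propagate into R_AA.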

    Effective Rheology of Bubbles Moving in a Capillary Tube

    We calculate the average volumetric flux versus pressure drop of bubbles moving in a single capillary tube with varying diameter, finding a square-root relation by mapping the flow equations onto those of a driven overdamped pendulum. The calculation is based on a derivation of the equation of motion of a bubble train from the capillary forces and the entropy production associated with the viscous flow. We also calculate the configurational probability of the positions of the bubbles.
    Comment: 4 pages, 1 figure
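The pendulum mapping behind the square-root relation can be checked numerically: a driven overdamped pendulum dtheta/dt = a - b*sin(theta) with a > b has the exact average winding rate sqrt(a^2 - b^2), since its period is 2*pi/sqrt(a^2 - b^2). A quick sanity check of that closed form, with invented parameters standing in for the pressure drop and capillary threshold:

```python
import math

def average_rate(a, b, dt=1e-3, t_total=1000.0):
    """Integrate dtheta/dt = a - b*sin(theta) with forward Euler and
    return the long-time average winding rate (theta(T) - theta(0)) / T."""
    theta = 0.0
    for _ in range(int(t_total / dt)):
        theta += dt * (a - b * math.sin(theta))
    return theta / t_total

a, b = 2.0, 1.0                      # invented drive and pinning strengths
rate = average_rate(a, b)
exact = math.sqrt(a * a - b * b)     # closed-form average rate, sqrt(3) here
```

In the paper's mapping the drive plays the role of the applied pressure drop and the pinning term that of the capillary forces, so the average flux inherits the same square-root dependence above threshold.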