
    GPGPU for track finding in High Energy Physics

The LHC experiments are designed to detect a large number of physics events produced at a very high rate. With the future upgrades, the data acquisition rate will become even higher, and new computing paradigms must be adopted for fast data processing: General-Purpose computing on Graphics Processing Units (GPGPU) is a novel approach based on massively parallel computing. The intense computational power provided by Graphics Processing Units (GPU) is expected to reduce the computation time and to speed up the low-latency applications used for fast decision making. In particular, this approach could hence be used for high-level triggering in very complex environments, like the typical inner tracking systems of the multi-purpose experiments at the LHC, where a large number of charged-particle tracks will be produced with the luminosity upgrade. In this article we discuss a track pattern recognition algorithm based on the Hough Transform, where a parallel approach is expected to dramatically reduce the execution time.

    Comment: 6 pages, 4 figures; proceedings prepared for the GPU-HEP 2014 conference, submitted to DESY-PROC-201
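    The abstract gives no code, but the voting stage of a Hough-transform track finder is easy to sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes straight-line tracks parametrized by (theta, r) with r = x cos(theta) + y sin(theta), and the binning parameters n_theta, n_r and r_max are made up for the example. The per-bin histogram increments are exactly the operations that would map onto GPU threads with atomic adds.

```python
import numpy as np

def hough_vote(hits, n_theta=180, n_r=128, r_max=10.0):
    """Fill a Hough accumulator for straight-line track candidates.

    Each hit (x, y) votes for every (theta, r) bin satisfying
    r = x*cos(theta) + y*sin(theta). The voting is vectorized over
    hits for each theta slice; on a GPU each slice (or each hit)
    would become an independent thread.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_r), dtype=np.int32)
    x, y = hits[:, 0], hits[:, 1]
    for i, t in enumerate(thetas):
        r = x * np.cos(t) + y * np.sin(t)                # signed distance per hit
        bins = np.floor((r + r_max) / (2 * r_max) * n_r).astype(int)
        valid = (bins >= 0) & (bins < n_r)
        np.add.at(acc[i], bins[valid], 1)                # atomic-add analogue
    return acc, thetas

# Toy usage: hits scattered around the line y = 0.5*x plus noise.
rng = np.random.default_rng(0)
xs = rng.uniform(-5, 5, 40)
hits = np.column_stack([xs, 0.5 * xs + rng.normal(0, 0.05, 40)])
acc, thetas = hough_vote(hits)
i, j = np.unravel_index(acc.argmax(), acc.shape)
print(f"best candidate: theta={thetas[i]:.2f} rad, r bin {j}")
```

    Peaks in the accumulator correspond to track candidates; since every hit votes independently, the fill is embarrassingly parallel, which is where the expected GPU speed-up comes from.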

    The ALICE experiment at the CERN LHC

ALICE (A Large Ion Collider Experiment) is a general-purpose, heavy-ion detector at the CERN LHC which focuses on QCD, the strong-interaction sector of the Standard Model. It is designed to address the physics of strongly interacting matter and the quark-gluon plasma at extreme values of energy density and temperature in nucleus-nucleus collisions. Besides running with Pb ions, the physics programme includes collisions with lighter ions, lower-energy running and dedicated proton-nucleus runs. ALICE will also take data with proton beams at the top LHC energy to collect reference data for the heavy-ion programme and to address several QCD topics for which ALICE is complementary to the other LHC detectors. The ALICE detector has been built by a collaboration currently including over 1000 physicists and engineers from 105 institutes in 30 countries. Its overall dimensions are 16 × 16 × 26 m³, with a total weight of approximately 10 000 t. The experiment consists of 18 different detector systems, each with its own specific technology choice and design constraints, driven both by the physics requirements and by the experimental conditions expected at the LHC. The most stringent design constraint is to cope with the extreme particle multiplicity anticipated in central Pb-Pb collisions. The different subsystems were optimized to provide high momentum resolution as well as excellent Particle Identification (PID) over a broad range in momentum, up to the highest multiplicities predicted for the LHC. This will allow comprehensive studies of hadrons, electrons, muons, and photons produced in the collisions of heavy nuclei. Most detector systems are scheduled to be installed and ready for data taking by mid-2008, when the LHC is scheduled to start operation, with the exception of parts of the Photon Spectrometer (PHOS), the Transition Radiation Detector (TRD) and the Electromagnetic Calorimeter (EMCal). These detectors will be completed for the high-luminosity ion run expected in 2010. This paper describes in detail the detector components as installed for the first data taking in the summer of 2008.


We have studied a lattice spin model of nematic liquid crystal-polymer composite films by means of extensive Monte Carlo simulations over a distributed computing network, using the Condor processing system installed on the Italian Nuclear Physics Institute (INFN) computer network. The use of several geometries and boundary conditions allowed us to investigate a wide range of different realistic or speculative models. Many of the simulations differ only by a small number of parameters, so they can be effectively performed in parallel. The results can be analyzed globally once all the computations are completed and then, for example, employed to map out phase diagrams or other complex physical quantities. They provide an effective example of a wide-area distributed computing application which could also be implemented in future GRID approaches.
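    No code is given in the abstract; the sketch below only illustrates the workflow it describes, namely many independent runs differing by a few parameters, farmed out in parallel and analyzed globally at the end. run_sim, the temperatures and the polymer fractions are hypothetical placeholders, and the actual jobs were dispatched through Condor rather than a local process pool.

```python
import itertools
from multiprocessing import Pool

def run_sim(params):
    """Hypothetical stand-in for one Monte Carlo run of the
    composite-film model; in the paper, each such job would be a
    full lattice simulation dispatched to a Condor node."""
    temperature, polymer_fraction = params
    # ... equilibrate the lattice, measure order parameters ...
    return {"T": temperature, "c": polymer_fraction, "order": 0.0}

if __name__ == "__main__":
    # The sweep: many runs differing only by a few parameters.
    grid = list(itertools.product(
        [0.9, 1.0, 1.1, 1.2],   # reduced temperatures (illustrative)
        [0.0, 0.1, 0.2],        # polymer fractions (illustrative)
    ))
    with Pool() as pool:
        results = pool.map(run_sim, grid)   # independent jobs in parallel
    # Global analysis once everything is done, e.g. a phase diagram.
    for r in results:
        print(r)
```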

    Monte Carlo simulation of the hedgehog defect core in spin systems

The core structures of hedgehog point disclinations in Heisenberg magnets and Lebwohl-Lasher nematic liquid crystals have been investigated by Monte Carlo simulation. We find qualitative agreement with the theoretical predictions of Schopohl and Sluckin that the magnetic defects have smaller cores. We also find some evidence for the picture of Penzenstadler and Trebin, in which the spherical symmetry of the nematic hedgehog core is broken and the core region can be considered as a disclination line of index 1/2. Finally, we make some general comments about the significance of our results in the context of other recent theoretical speculations.
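    As with the previous entry, the underlying machinery is a Metropolis simulation; for the Lebwohl-Lasher model the Hamiltonian is H = -Σ_<ij> P2(u_i · u_j) over nearest-neighbour unit vectors. The sketch below is a minimal single-site Metropolis sweep in reduced units (temperature T in units of the coupling); the lattice size and update details are illustrative assumptions, and a defect-core study would additionally pin the boundary spins in a radial hedgehog configuration u(r) = r/|r|.

```python
import numpy as np

def p2(c):
    """Second Legendre polynomial, P2(x) = (3x^2 - 1)/2.
    Even in c, so u and -u are equivalent (nematic symmetry)."""
    return 1.5 * c * c - 0.5

def metropolis_sweep(spins, T, rng):
    """One Metropolis sweep of the Lebwohl-Lasher model,
    H = -sum_<ij> P2(u_i . u_j), on an L^3 cubic lattice with
    periodic boundaries; spins has shape (L, L, L, 3), unit vectors."""
    L = spins.shape[0]
    for _ in range(L ** 3):
        i, j, k = rng.integers(0, L, size=3)
        old = spins[i, j, k].copy()
        new = rng.normal(size=3)
        new /= np.linalg.norm(new)          # random trial orientation
        dE = 0.0
        for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = spins[(i + di) % L, (j + dj) % L, (k + dk) % L]
            dE += p2(old @ nb) - p2(new @ nb)   # E_new - E_old per bond
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j, k] = new
    return spins

# Toy usage: random initial configuration, a few sweeps in the
# nematic phase (the L-L transition is near T ~ 1.12 in these units).
rng = np.random.default_rng(1)
L = 10
spins = rng.normal(size=(L, L, L, 3))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)
for _ in range(100):
    metropolis_sweep(spins, T=0.8, rng=rng)
```

    In a core study one would hold the outer shell fixed in the hedgehog texture and measure the order parameter profile toward the centre, which is how core sizes of the kind compared here are extracted.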