How stable are transport model results to changes of resonance parameters? A UrQMD model study
The Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model is widely used
to simulate heavy ion collisions in broad energy ranges. It consists of various
components to implement the different physical processes underlying the
transport approach. A major building block is the set of shared tables of constants
implementing the baryon masses and widths. Unfortunately, many of these input
parameters are not well known experimentally. In view of the upcoming physics
program at FAIR, it is therefore of fundamental interest to explore the
stability of the model results when these parameters are varied. We perform a
systematic variation of particle masses and widths within the limits proposed
by the Particle Data Group (or by up to 10%). We find that the model results
depend only weakly on the variation of these input parameters. Thus, we
conclude that the present implementation is stable with respect to the
modification of not yet well-determined particle parameters.
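The systematic variation described above can be sketched as follows. This is a minimal illustration, not the UrQMD code: the resonance names and baseline values are hypothetical stand-ins for PDG central values, and the sampling simply draws each mass and width uniformly within a ±10% band.

```python
import random

# Hypothetical baseline resonance parameters (GeV); names and values are
# illustrative placeholders, not the actual UrQMD input tables.
baseline = {
    "Delta(1232)": {"mass": 1.232, "width": 0.117},
    "N(1440)":     {"mass": 1.440, "width": 0.350},
}

def vary(params, rel=0.10, rng=random.Random(42)):
    """Return a copy with each mass and width shifted uniformly within +-rel."""
    return {
        name: {key: val * (1 + rng.uniform(-rel, rel))
               for key, val in entry.items()}
        for name, entry in params.items()
    }

# One sampled parameter set; a stability study would rerun the transport
# model for many such sets and compare the resulting observables.
varied = vary(baseline)
```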
Lattice QCD based on OpenCL
We present an OpenCL-based Lattice QCD application using a heatbath algorithm
for the pure gauge case and Wilson fermions in the twisted mass formulation.
The implementation is platform independent and can be used on AMD or NVIDIA
GPUs, as well as on classical CPUs. On the AMD Radeon HD 5870 our double
precision dslash implementation performs at 60 GFLOPS over a wide range of
lattice sizes. The hybrid Monte Carlo implementation presented reaches a speedup
of four over the reference code running on a server CPU.
Comment: 19 pages, 11 figures
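A back-of-the-envelope reading of the reported 60 GFLOPS figure: the Wilson dslash is commonly quoted at roughly 1320 floating-point operations per lattice site (exact counts vary with the implementation), which lets one convert the sustained rate into sites processed per second. The lattice volume below is an assumed example, not one taken from the paper.

```python
# Commonly quoted operation count for one Wilson dslash application per site;
# the exact number depends on the implementation details.
FLOPS_PER_SITE = 1320
gflops = 60e9  # reported double-precision dslash rate on the Radeon HD 5870

sites_per_second = gflops / FLOPS_PER_SITE

# For an assumed 32^3 x 64 lattice, one full dslash application would take:
volume = 32**3 * 64
t_dslash = volume / sites_per_second  # seconds per dslash call
```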
Fast TPC Online Tracking on GPUs and Asynchronous Data Processing in the ALICE HLT to facilitate Online Calibration
ALICE (A Large Ion Collider Experiment) is one of the four major experiments at
the Large Hadron Collider (LHC) at CERN, today the most powerful
particle accelerator worldwide. The High Level Trigger (HLT) is an online
compute farm of about 200 nodes, which reconstructs events measured by the
ALICE detector in real-time. The HLT uses a custom online data-transport
framework to distribute data and workload among the compute nodes. ALICE
employs several calibration-sensitive subdetectors, e.g. the TPC (Time
Projection Chamber). For a precise reconstruction, the HLT has to perform the
calibration online. Online calibration can make certain offline calibration
steps obsolete and can thus speed up offline analysis. Looking forward to ALICE
Run III starting in 2020, online calibration becomes a necessity. The main
detector used for track reconstruction is the TPC. Reconstructing the
trajectories in the TPC is the most compute-intensive step during event
reconstruction. Therefore, a fast tracking implementation is of great
importance. Reconstructed TPC tracks form the basis for the calibration, making
fast online tracking mandatory. We present several components developed for
the ALICE High Level Trigger to perform fast event reconstruction and to
provide features required for online calibration. As a first topic, we present
our TPC tracker, which employs GPUs to speed up the processing and which is based
on a Cellular Automaton and the Kalman filter. Our TPC tracking algorithm
has been successfully used in 2011 and 2012 in the lead-lead and the
proton-lead runs. We have improved it to leverage features of newer GPUs and we
have ported it to support OpenCL, CUDA, and CPUs with a single common source
code. This makes us vendor-independent. As a second topic, we present framework
extensions required for online calibration. ...
Comment: 8 pages, 6 figures, contribution to the CHEP 2015 conference
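The Kalman filter mentioned above refines a predicted track state with each new measurement. The following is a deliberately simplified scalar sketch of the measurement-update step, not the ALICE implementation (which works on multi-dimensional track states with full covariance matrices).

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update.

    x, P: predicted state and its variance.
    z, R: measurement and its variance.
    Returns the filtered state and its (reduced) variance.
    """
    K = P / (P + R)          # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)  # blend prediction and measurement
    P_new = (1 - K) * P      # uncertainty shrinks after each update
    return x_new, P_new

# Feeding in successive measurements pulls the state toward them while the
# variance decreases monotonically.
x, P = 0.0, 1.0
for z in (0.9, 1.1, 1.0):
    x, P = kalman_update(x, P, z, 0.5)
```

With equal prediction and measurement variances, the update lands exactly halfway between the two, which makes the gain easy to check by hand.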
BioEM: GPU-accelerated computing of Bayesian inference of electron microscopy images
In cryo-electron microscopy (EM), molecular structures are determined from
large numbers of projection images of individual particles. To harness the full
power of this single-molecule information, we use the Bayesian inference of EM
(BioEM) formalism. By ranking structural models using posterior probabilities
calculated for individual images, BioEM in principle addresses the challenge of
working with highly dynamic or heterogeneous systems not easily handled in
traditional EM reconstruction. However, the calculation of these posteriors for
large numbers of particles and models is computationally demanding. Here we
present highly parallelized, GPU-accelerated computer software that performs
this task efficiently. Our flexible formulation employs CUDA, OpenMP, and MPI
parallelization combined with both CPU and GPU computing. The resulting BioEM
software scales nearly ideally both on pure CPU and on CPU+GPU architectures,
thus enabling Bayesian analysis of tens of thousands of images in a reasonable
time. The general mathematical framework and robust algorithms are not limited
to cryo-electron microscopy but can be generalized for electron tomography and
other imaging experiments.
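The model-ranking idea described above can be sketched in a few lines. This is an illustration of ranking by accumulated posterior probability, not the BioEM code: the model names and log-likelihood values are hypothetical, and a flat prior is assumed so the posterior reduces to the summed per-image log-likelihoods.

```python
# Hypothetical log-likelihoods log p(image | model) for two candidate
# structural models over four particle images; values are illustrative.
log_lik = {
    "model_A": [-10.0, -11.0, -10.5, -9.8],
    "model_B": [-12.0, -12.5, -11.9, -13.0],
}

def log_posterior(logs, log_prior=0.0):
    # Independent images: the joint log-likelihood is the sum over images;
    # a flat prior (log_prior = 0) is assumed here.
    return log_prior + sum(logs)

# Rank models by posterior, best first.
ranked = sorted(log_lik, key=lambda m: log_posterior(log_lik[m]), reverse=True)
```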