Primordial Nucleosynthesis for the New Cosmology: Determining Uncertainties and Examining Concordance
Big bang nucleosynthesis (BBN) and the cosmic microwave background (CMB) have
a long history together in the standard cosmology. The general concordance
between the predicted and observed light element abundances provides a direct
probe of the universal baryon density. Recent CMB anisotropy measurements,
particularly the observations performed by the WMAP satellite, examine this
concordance by independently measuring the cosmic baryon density. Key to this
test of concordance is a quantitative understanding of the uncertainties in the
BBN light element abundance predictions. These uncertainties are dominated by
systematic errors in nuclear cross sections. We critically analyze the cross
section data, producing representations that describe this data and its
uncertainties, taking into account the correlations among data, and explicitly
treating the systematic errors between data sets. Using these updated nuclear
inputs, we compute the new BBN abundance predictions, and quantitatively
examine their concordance with observations. Depending on which deuterium
observations are adopted, one obtains the following constraints on the baryon
density: Ω_B h^2 = 0.0229 ± 0.0013 or Ω_B h^2 = 0.0216 (+0.0020, −0.0021) at
68% confidence, fixing N_{ν,eff} = 3.0. Concerns over systematics in the helium
and lithium observations limit the confidence that constraints based on these
data can provide. With new nuclear cross section data, light element abundance
observations, and the ever-increasing resolution of the CMB anisotropy, tighter
constraints can be placed on nuclear and particle astrophysics. (Abridged)
Comment: 54 pages, 20 figures, 5 tables. v2: reflects PRD version; minor changes to text and references.
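The abstract's central technical step is propagating correlated nuclear cross-section uncertainties into the predicted light-element abundances. As a rough, hedged illustration of that kind of Monte Carlo propagation (not the paper's actual fits), the Python sketch below samples correlated normalisation factors for three deuterium-burning rates and pushes them through a toy D/H model; the fractional errors, correlation matrix, power-law scaling, and sensitivity coefficients are all invented for illustration.

    import numpy as np

    # Hedged sketch: Monte Carlo propagation of correlated nuclear-rate
    # uncertainties into a BBN abundance prediction. All numbers below are
    # illustrative assumptions, not the paper's fitted values.
    rng = np.random.default_rng(42)

    # assumed fractional errors on the d(p,g)3He, d(d,n)3He, d(d,p)t normalisations
    frac_err = np.array([0.07, 0.03, 0.03])
    corr = np.array([[1.0, 0.2, 0.2],      # assumed correlations, e.g. from
                     [0.2, 1.0, 0.5],      # shared systematics between the
                     [0.2, 0.5, 1.0]])     # underlying data sets
    cov = np.outer(frac_err, frac_err) * corr

    def dh_prediction(omega_b_h2, rate_scale):
        """Toy D/H model: a power law in the baryon density times linearised
        sensitivities to the three rate normalisations (coefficients made up)."""
        base = 2.6e-5 * (omega_b_h2 / 0.022) ** (-1.6)
        sens = np.array([-0.3, -0.4, -0.3])   # assumed d ln(D/H) / d ln(rate)
        return base * np.prod(rate_scale ** sens, axis=-1)

    # sample correlated rate normalisations at a fixed baryon density
    scales = rng.multivariate_normal(np.ones(3), cov, size=100_000)
    dh = dh_prediction(0.022, scales)
    print(f"D/H = {dh.mean():.2e} +/- {dh.std():.2e}  (theory error from rates)")

A prediction like this, with its rate-driven theory error, can then be compared with an observed D/H to form a likelihood for Ω_B h^2.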
Measurements of and production in proton–proton interactions at in the NA61/SHINE experiment
Double-differential yields of and resonances produced in p+p interactions
were measured at a laboratory beam momentum of 158 GeV/c. This measurement is
the first of its kind in p+p interactions below LHC energies. It was performed
at the CERN SPS by the NA61/SHINE collaboration. Double-differential
distributions in rapidity and transverse momentum were obtained from a sample
of 2610 inelastic events. The spectra are extrapolated to full phase space,
resulting in mean multiplicities of (6.73 ± 0.25 ± 0.67) and (2.71 ± 0.18 ± 0.18).
The rapidity and transverse momentum spectra and mean multiplicities were
compared to predictions of string-hadronic and statistical model calculations.
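A hedged sketch of the extrapolation step mentioned above: fit the measured transverse-momentum spectrum in a rapidity bin with a thermal-like transverse-mass exponential, integrate the fitted form over the full pT range, and sum the resulting dn/dy values over rapidity to obtain a mean multiplicity. The bin edges, yields, mass value, and fit function below are illustrative assumptions, not the NA61/SHINE analysis.

    import numpy as np
    from scipy.optimize import curve_fit

    M = 0.896                                   # assumed particle mass in GeV (placeholder)
    pt_bins = np.linspace(0.0, 1.5, 7)          # toy measured pT range, GeV/c
    pt_mid = 0.5 * (pt_bins[:-1] + pt_bins[1:])

    def mt_exp(pt, A, T):
        """dn/dpT ~ A * pT * exp(-mT/T), a common thermal-like parameterisation."""
        mt = np.sqrt(pt**2 + M**2)
        return A * pt * np.exp(-mt / T)

    def dn_dy(yields, errors):
        """Fit one rapidity bin and integrate the fit well beyond the measured pT range."""
        popt, _ = curve_fit(mt_exp, pt_mid, yields, sigma=errors, p0=[1.0, 0.15])
        pt_full = np.linspace(0.0, 5.0, 2000)
        return np.trapz(mt_exp(pt_full, *popt), pt_full)

    # toy "measured" spectrum in a single rapidity bin, with 5% scatter
    toy = mt_exp(pt_mid, 1.0, 0.16) * (1 + 0.05 * np.random.default_rng(1).normal(size=pt_mid.size))
    print(f"dn/dy from extrapolated fit: {dn_dy(toy, 0.05 * toy):.3f}")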
Measurements of and production in proton–proton interactions at in the NA61/SHINE experiment
The production of and hyperons in inelastic p+p interactions is studied in a fixed-target experiment at a beam momentum of 158 GeV/c. Double-differential distributions in rapidity and transverse momentum are obtained from a sample of 33M inelastic events. They allow extrapolation of the spectra to full phase space and determination of the mean multiplicity of both and . The rapidity and transverse momentum spectra are compared to transport model predictions. The mean multiplicity in inelastic p+p interactions at 158 GeV/c is used to quantify the strangeness enhancement in A+A collisions at the same centre-of-mass energy per nucleon pair.
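One common way such a p+p mean multiplicity enters a strangeness-enhancement comparison is as the baseline of a yield-per-participant ratio. The minimal sketch below shows that arithmetic only; the function and every input value are placeholders, not measured NA61/SHINE or A+A numbers.

    # E = (<Y>_AA / <N_W>) / (<Y>_pp / 2): strange-particle yield per wounded
    # nucleon in A+A, normalised to the yield per participant in p+p.
    def enhancement(yield_aa, n_wounded, yield_pp):
        return (yield_aa / n_wounded) / (yield_pp / 2.0)

    # invented inputs, purely to show the shape of the calculation
    print(enhancement(yield_aa=40.0, n_wounded=350.0, yield_pp=0.12))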
Highly-parallelized simulation of a pixelated LArTPC on a GPU
The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
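As a minimal sketch of the Numba pattern described above (a hypothetical toy, not the DUNE Near Detector simulation code), the kernel below assigns one GPU thread per (track segment, pixel) pair and accumulates a made-up induced-charge contribution with an atomic add; the response function, array shapes, and launch configuration are all assumptions for illustration.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def induce_current(segments, pixels, currents):
        # one thread per (segment, pixel) pair
        i, j = cuda.grid(2)
        if i < segments.shape[0] and j < pixels.shape[0]:
            # toy response: segment charge weighted by inverse-square distance
            dx = segments[i, 0] - pixels[j, 0]
            dy = segments[i, 1] - pixels[j, 1]
            q = segments[i, 2]
            cuda.atomic.add(currents, j, q / (1.0 + dx * dx + dy * dy))

    rng = np.random.default_rng(0)
    segments = rng.random((10_000, 3))          # toy (x, y, charge) per segment
    pixels = rng.random((1_000, 2))             # toy pixel centres (x, y)
    currents = np.zeros(pixels.shape[0])

    threads = (16, 16)
    blocks = ((segments.shape[0] + threads[0] - 1) // threads[0],
              (pixels.shape[0] + threads[1] - 1) // threads[1])
    # host NumPy arrays are transferred to the GPU and back automatically
    induce_current[blocks, threads](segments, pixels, currents)
    print(currents[:5])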
- …