Towards the 3D-Imaging of Sources
Geometric details of a nuclear reaction zone, at the time of particle
emission, can be restored from low relative-velocity particle correlations
by imaging. Some source details are erased in the measured correlations and
can destabilize the imaging; these instabilities can be handled with a
method of discretized optimization for the restored sources. So far it has
been possible to produce 1-dimensional emission-source images,
corresponding to reactions averaged over all spatial directions. Efforts
are currently in progress to restore angular details.
Comment: Talk given at the Int. Workshop on Hot and Dense Matter in
Relativistic Heavy Ion Collisions, March 24-27, 2004, Budapest; 10 pages, 6
figures
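The imaging step alluded to above inverts the correlation-source relation on a discrete grid. Below is a minimal, self-contained sketch of that idea, assuming a schematic free-wave kernel and Tikhonov regularization as a stand-in for the discretized-optimization method of the talk; the grid sizes, kernel, and regularization strength are all illustrative assumptions.

```python
import numpy as np

# Toy 1D source imaging: discretize the correlation-source relation
#   C(q) - 1 = 4*pi * int dr r^2 K(q, r) S(r)
# on grids in q and r, then restore S by regularized least squares.
# The kernel below is a schematic free-wave stand-in, not the true
# final-state-interaction kernel used in the actual analysis.

nq, nr = 40, 30
q = np.linspace(5.0, 200.0, nq)   # relative momentum (MeV/c)
r = np.linspace(0.5, 30.0, nr)    # pair separation (fm)
dr = r[1] - r[0]
hbarc = 197.327                   # MeV fm

# Schematic angle-averaged kernel ~ sin(2qr)/(2qr)
K = np.sinc(2.0 * np.outer(q, r) / (np.pi * hbarc))
A = 4.0 * np.pi * K * r**2 * dr   # discretized integral operator

# Synthetic "measured" correlation from a Gaussian source of radius R0
R0 = 5.0
S_true = np.exp(-r**2 / (4 * R0**2)) / (2 * np.sqrt(np.pi) * R0) ** 3
Rq = A @ S_true + np.random.normal(0.0, 1e-4, nq)

# Naive inversion is unstable; Tikhonov regularization tames it,
# mirroring the stabilization described in the abstract.
AtA = A.T @ A
lam = 1e-4 * np.trace(AtA) / nr   # scale-aware regularization strength
S_img = np.linalg.solve(AtA + lam * np.eye(nr), A.T @ Rq)

i5 = np.argmin(abs(r - 5.0))
print("r = 5 fm: restored %.3e  true %.3e" % (S_img[i5], S_true[i5]))
```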
Exploring Lifetime Effects in Femtoscopy
We investigate the role of lifetime effects from resonances and emission
duration tails in femtoscopy at RHIC in two Blast-Wave models. We find that
the non-Gaussian components compare well with published imaged-source data,
but that the value of R_out obtained from Gaussian fits is sensitive to the
non-Gaussian contributions when realistic acceptance cuts are applied to
the models.
Comment: 5 pages, 2 figures
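To see why a Gaussian fit can be sensitive to non-Gaussian contributions, consider this toy fit: a Gaussian correlation form is fit to a synthetic correlation function carrying an exponential-like low-q tail of the kind long-lived resonances produce. The functional forms and numbers are illustrative assumptions, not the Blast-Wave models of the paper; the fitted radius shifts away from the 5 fm core value once the tail is included.

```python
import numpy as np
from scipy.optimize import curve_fit

hbarc = 197.327                     # MeV fm
q = np.linspace(1.0, 120.0, 60)     # relative momentum (MeV/c)

def gauss_corr(q, lam, R):
    """Gaussian correlation: C(q) = 1 + lam * exp(-(q R / hbarc)^2)."""
    return 1.0 + lam * np.exp(-(q * R / hbarc) ** 2)

# Toy "true" correlation: Gaussian core (R = 5 fm) plus a narrow
# exponential component mimicking a resonance/emission-duration tail.
C_true = (1.0 + 0.4 * np.exp(-(q * 5.0 / hbarc) ** 2)
              + 0.1 * np.exp(-q * 12.0 / hbarc))

popt, _ = curve_fit(gauss_corr, q, C_true, p0=[0.5, 5.0])
print(f"fitted R = {popt[1]:.2f} fm vs core radius 5.00 fm")
```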
Ferrenberg-Swendsen Analysis of LLNL and NYBlue BG/L p4rhmc Data
These results are from the continuing Lattice Quantum Chromodynamics runs on BG/L, specifically the Ferrenberg-Swendsen analysis [?] of the combined data from the LLNL and NYBlue BG/L 32^3 x 8 runs with p4rhmc v2.0 QMP-MPI.X (semi-optimized p4 code using qmp over mpi). The jobs include beta values ranging from 3.525 to 3.535, with an alternate analysis extending to 3.540. The NYBlue data sets are from 9k trajectories from Oct 2007, and the LLNL data are from two independent streams of ~5k each, taken from the July 2007 runs. The following outputs are produced by the fs-2+1-chiub.c program. All outputs have had checksums produced by addCks.pl and checked by the checkCks.pl perl script after scanning.
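The Ferrenberg-Swendsen step itself is histogram reweighting of observables to nearby couplings. Below is a minimal single-histogram sketch, assuming per-trajectory observables and action values as inputs; the sign convention and the actual inputs of the fs-2+1-chiub.c program are not specified above, so treat both as assumptions.

```python
import numpy as np

def reweight(O, S, beta0, beta):
    """Estimate <O> at coupling beta from samples generated at beta0,
    via single-histogram Ferrenberg-Swendsen reweighting."""
    w = (beta - beta0) * S          # log-weights; assumes weight exp(+beta*S)
    w -= w.max()                    # stabilize the exponentials
    wts = np.exp(w)
    return np.sum(O * wts) / np.sum(wts)

# Fake samples standing in for per-trajectory action and observable values
rng = np.random.default_rng(0)
S = rng.normal(1000.0, 5.0, 10_000)
O = 0.01 * S + rng.normal(0.0, 0.1, S.size)

beta0 = 3.530
for beta in (3.525, 3.530, 3.535):
    print(f"beta = {beta}: <O> = {reweight(O, S, beta0, beta):.4f}")
```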
Kaon and Λ Production vs Participants in Nuclear Collisions
Data on kaon and Λ production in nuclear collisions as a function of
centrality are analysed at both AGS and SPS energies. We compare the
results of several experiments, looking for a common trend in `participant
scaling' of production yields. We find a smooth description of scaled kaon
and Λ yields as a function of participant density. We also show the
participant-density dependence of kaons and Λs produced in the forward
hemisphere in proton-nucleus collisions.
Comment: Proceedings of the International Conference on Strangeness in Quark
Matter, 20-25 July 2000, Berkeley, CA. To appear in Journal of Physics G:
Nuclear and Particle Physics
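A toy version of the `participant scaling' comparison, with placeholder numbers rather than the AGS/SPS data discussed above: yields are divided by the number of participants and inspected against participant density.

```python
import numpy as np

# Placeholder inputs (illustrative only, not experimental values)
n_part = np.array([60.0, 120.0, 240.0, 350.0])    # mean participants
yields = np.array([3.1, 6.9, 15.2, 23.5])         # raw kaon yields
area = np.array([95.0, 150.0, 210.0, 260.0])      # overlap area (fm^2)

rho_part = n_part / area          # participant density (fm^-2)
scaled = yields / n_part          # yield per participant
for r, s in zip(rho_part, scaled):
    print(f"rho_part = {r:.2f} fm^-2  ->  yield/participant = {s:.3f}")
```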
Review of Systematic Investigations of the R_out/R_side Ratio in HBT at RHIC
We review the significant difference in the ratio R_out/R_side between experiment and theory in heavy-ion collisions at RHIC. This ratio is expected to be strongly correlated with the pion emission duration. Hydrodynamic models typically calculate a value approximately equal to 1.5 that depends moderately on k_T, whereas the experiments report a value close to unity and independent of k_T. We review those calculations in which systematic variations of the theoretical assumptions were reported. We find that the scenario of a second-order phase transition or cross-over has been given insufficient attention and may play an important role in resolving this discrepancy.
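The expectation that R_out/R_side tracks the emission duration comes from the standard relation R_out^2 ~ R_side^2 + beta_T^2 (Delta tau)^2, valid under common smoothness and boost-invariance assumptions. A quick sketch of the implied duration, with round illustrative numbers rather than any specific measurement:

```python
import numpy as np

# R_out^2 ~ R_side^2 + beta_T^2 * (delta tau)^2  =>  solve for delta tau
R_out, R_side = 6.0, 5.0   # fm (illustrative)
beta_T = 0.8               # pair transverse velocity in units of c

dtau = np.sqrt(max(R_out**2 - R_side**2, 0.0)) / beta_T
print(f"implied emission duration ~ {dtau:.1f} fm/c")
```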
First LQCD Physics Runs with MILC and P4RHMC
An initial series of physics LQCD runs was submitted to the BG/L science bank with the milc and p4rhmc codes. Both runs used lattice dimensions of 32^3 x 8. The p4 calculation was performed with v2.0 QMP_MPI.X (semi-optimized p4 code using qmp over mpi) and milc v7.2, also using RHMC but not specifically optimized for BlueGene. Calculations were performed along lines of constant physics, with the light quark masses 2-3 times their physical values and the light-to-strange ratio set by m_ud = 0.1 m_s. Job submission was performed using the standard milc and p4 scripts provided on the ubgl cluster. Initial thermalized lattices for each code were also provided in this way. The only modifications for running on BG/L were to the directory names and the mT parameter, which determines job durations (24 hrs on BG/L vs. 4 hrs on ubgl). The milc scripts were set to resubmit themselves 10 times, and the p4 scripts were submitted serially using the "psub -d" job dependency option; runp4rhmc.tcsh could not be used to resubmit due to the 30m time limit imposed on interactive jobs. Most jobs were submitted to the smallest, 512-node partitions, but both codes could also run on the 1024-node partitions with a gain of only 30-50%. The majority of jobs ran without error. Stalled jobs were often indicative of a communication gap within a partition that LC was able to fix quickly. On some occasions a zero-length lattice file was deleted to allow jobs to restart successfully. Approximately 1000 trajectories were calculated for each beta value (see the accompanying table). The analysis was performed with the standard analysis scripts for each code, make_summary.pl for milc and analysis.tcsh for p4rhmc. All lattices, log files, and job submission scripts have been archived to permanent storage for subsequent analysis.
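The serial chaining described above can be pictured with a small driver. The sketch below assumes the dependency syntax is `psub -d <jobid>` and that psub prints the job id on submission; both details are assumptions to check against the LC documentation, since only the option's existence is stated above.

```python
import subprocess

# Chain 10 run segments so each starts only after the previous finishes,
# mirroring the milc scripts' 10x self-resubmission and the p4 "psub -d"
# serial submission described in the report.
prev_jobid = None
for i in range(10):
    cmd = ["psub"]
    if prev_jobid:
        cmd += ["-d", prev_jobid]        # assumed dependency syntax
    cmd += ["runp4rhmc.tcsh"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    prev_jobid = out.stdout.split()[-1]  # assumes job id is printed last
    print("submitted segment", i + 1, "as job", prev_jobid)
```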
Constraining the initial temperature and shear viscosity in a hybrid hydrodynamic model of √s_NN = 200 GeV Au+Au collisions using pion spectra, elliptic flow, and femtoscopic radii
A new framework for evaluating hydrodynamic models of relativistic heavy ion
collisions has been developed. This framework, a Comprehensive Heavy Ion
Model Evaluation and Reporting Algorithm (CHIMERA), has been implemented by
augmenting the UVH2+1D viscous hydrodynamic model with eccentricity
fluctuations, pre-equilibrium flow, and the Ultra-relativistic Quantum
Molecular Dynamics (UrQMD) hadronic cascade. A range of initial temperatures
and shear-viscosity-to-entropy ratios was evaluated for four initial density
profiles, with and without pre-equilibrium flow. The model results were
compared to pion spectra, elliptic flow, and femtoscopic radii from 200 GeV
Au+Au collisions for the 0-20% centrality range. Two sets of initial density
profiles, one with pre-equilibrium flow and one without, were shown to
provide a consistent description of all three measurements.
Comment: 21 pages, 32 figures; version 3 includes additional text for
clarification, division of figures into more manageable units, and placement
of chi-squared values in tables for ease of viewing
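The per-parameter-point comparison that produces those chi-squared tables reduces to a combined chi-squared over the three observables. A schematic sketch, with placeholder arrays standing in for real model output and measured points (not CHIMERA's actual code):

```python
import numpy as np

def chi2(model, data, err):
    """Chi-squared of model points against data with errors."""
    return np.sum(((model - data) / err) ** 2)

# Placeholder (model, data, error) triplets for one (T_i, eta/s) grid point
observables = {
    "spectra": (np.array([1.02, 0.98]), np.array([1.0, 1.0]), np.array([0.05, 0.05])),
    "v2":      (np.array([0.061]),      np.array([0.060]),    np.array([0.004])),
    "hbt":     (np.array([5.9, 4.8]),   np.array([6.1, 4.9]), np.array([0.2, 0.2])),
}

total = sum(chi2(*vals) for vals in observables.values())
ndof = sum(len(v[1]) for v in observables.values())
print(f"combined chi2/ndof = {total / ndof:.2f}")
```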
Lattice QCD Thermodynamics: First 5000 Trajectories
These results represent the first LQCD analysis of approximately 5000 trajectories with each of the p4rhmc and milc codes, with some of the lower temperature runs having fewer. Both runs used lattice dimensions of 32^3 x 8; some 32^4 T=0 jobs were also run for p4rhmc. The p4 calculation was performed with v2.0 QMP_MPI.X (semi-optimized p4 code using qmp over mpi), and the milc calculation with the version of the su3 rhmc susc eos executable dated Mar 1, 2007 on ubgl in the /usr/gapps/hip/qcd/milc/bin subdirectory (svn revision 28). As with previous runs, calculations were performed along lines of constant physics, with the light quark masses 2-3 times their physical values and the light-to-strange ratio set by m_ud = 0.1 m_s. Job submissions were performed using a new subSet.pl job submission script that locates current jobs and submits additional jobs with the same beta value as pending. Note that after reaching a limit of about 35 jobs, subsequent submissions are delayed and will not be submitted directly from that state. The job submission script was used to submit revised versions of the milc and p4rhmc csh scripts. Initial thermalized lattices were provided for milc (taken from the firstPhys runs), while the p4rhmc runs include thermalization. The only modifications for running on BG/L were to the directory names and the mT parameter, which determines job durations (24 hrs on BG/L vs. 4 hrs on ubgl). All finite temperature jobs were submitted to the 512-node partitions, and all T=0 runs were submitted to 2048-node partitions. The set of runs was plagued by filesystem errors on lscratch1 and lscratch2. Many jobs had to be reset (deleting the most recent output file for milc and/or lattice for p4) and resubmitted. The analysis was performed with a new set of scripts that produce a more condensed output for scanning. All scans were verified with checksums, which have been retained in the output along with the line numbers. All lattices, log files, and job submission scripts have been archived to permanent storage in the 5k subdirectory for subsequent analysis.
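The checksum bookkeeping mentioned here (and the addCks.pl/checkCks.pl pair mentioned in the Ferrenberg-Swendsen report) can be pictured as tagging each output line with a line number and CRC at write time, then re-verifying on scan. A Python sketch with an invented tag format, since the perl scripts' actual format isn't given:

```python
import zlib

def add_cks(lines):
    """Tag each line with a 1-based line number and CRC32 of its payload."""
    return [f"{i:06d} {zlib.crc32(s.encode()):08x} {s}"
            for i, s in enumerate(lines, 1)]

def check_cks(tagged):
    """Re-verify every tagged line; raise on any corruption."""
    for t in tagged:
        num, crc, payload = t.split(" ", 2)
        assert zlib.crc32(payload.encode()) == int(crc, 16), f"line {num} corrupt"

tagged = add_cks(["plaq 0.53124", "pbp 0.02117"])   # placeholder outputs
check_cks(tagged)
print("all checksums ok")
```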
Second update to the Gordon Bell Competition entry gb110s2
Since the update to our entry of October 20th, we have made a significant improvement. We understand that this is past the deadline for updates and very close to the conference date. However, Lawrence Livermore National Laboratory has just updated the system software on their full 64-rack BG/L supercomputer to IBM-BGL Release 3. As we discussed in our update of October 20, this release includes our custom L1 and SRAM access functions that allow us to achieve higher sustained performance. Just a few hours ago we got access to the full system and obtained the fastest sustained performance point. On the full 131,072 CPU-core system, QCD sustains 70.9 teraflops for the Dirac operator and 67.9 teraflops for the full Conjugate Gradient inverter. This is about 20% faster than our last update. We attach the corresponding speedup figure. As you can tell, the speedup is perfect. This figure is the same as Figure 1 of our October 20th update except that it now includes the 131,072 CPU-core point.
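As a sanity check on those figures: assuming BG/L's standard 2.8 GF/s peak per core (700 MHz, 4 flops/cycle, stated here as an assumption rather than taken from the entry), the quoted rates correspond to roughly 19% of machine peak.

```python
# Quick arithmetic on the quoted sustained rates vs assumed machine peak
cores = 131_072
peak = cores * 2.8e9   # ~3.67e14 flops/s, assuming 2.8 GF/s per BG/L core

for name, sustained in [("Dirac operator", 70.9e12), ("CG inverter", 67.9e12)]:
    print(f"{name}: {sustained / peak:.1%} of peak, "
          f"{sustained / cores / 1e6:.0f} MF/s per core")
```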