137 research outputs found

    Explicit finite-difference and direct-simulation Monte Carlo method for the dynamics of mixed Bose-condensate and cold-atom clouds

    We present a new numerical method for studying the dynamics of quantum fluids composed of a Bose-Einstein condensate and a cloud of bosonic or fermionic atoms in a mean-field approximation. It combines an explicit time-marching algorithm, previously developed for Bose-Einstein condensates in a harmonic or optical-lattice potential, with a particle-in-cell Monte Carlo approach to the equation of motion for the one-body Wigner distribution function in the cold-atom cloud. The method is tested against known analytical results on the free expansion of a fermion cloud from a cylindrical harmonic trap and is validated by examining how the expansion of the fermionic cloud is affected by the simultaneous expansion of a condensate. We then present wholly original calculations on a condensate and a thermal cloud inside a harmonic well and a superposed optical lattice, by addressing the free expansion of the two components and their oscillations under an applied harmonic force. These results are discussed in the light of relevant theories and experiments. Comment: 33 pages, 13 figures, 1 table
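    As a rough illustration of the explicit time-marching ingredient of such a scheme, the sketch below advances a 1D Gross-Pitaevskii wavefunction in a harmonic trap using a leapfrog update and a central-difference Laplacian. This is not the authors' algorithm; the grid, time step, coupling constant, initial state, and periodic boundaries are illustrative assumptions only.

```python
# Minimal 1D sketch of explicit time-marching for a trapped condensate
# (Gross-Pitaevskii equation, units hbar = m = 1). Illustrative assumptions:
# grid size/spacing, time step, coupling g, Gaussian start, periodic boundaries.
import numpy as np

nx, dx, dt = 512, 0.05, 1e-4          # assumed grid and (small) time step
g = 1.0                               # assumed mean-field coupling
x = (np.arange(nx) - nx // 2) * dx
V = 0.5 * x**2                        # harmonic trap

def apply_H(psi):
    # kinetic term via central differences (periodic via np.roll) + trap + mean field
    lap = (np.roll(psi, 1) + np.roll(psi, -1) - 2.0 * psi) / dx**2
    return -0.5 * lap + (V + g * np.abs(psi)**2) * psi

psi_old = np.exp(-x**2 / 2).astype(complex)          # assumed initial Gaussian
psi_old /= np.sqrt(np.sum(np.abs(psi_old)**2) * dx)  # normalize

# bootstrap with one forward-Euler step, then leapfrog:
# psi(t+dt) = psi(t-dt) - 2i*dt*H psi(t)  (explicit, conditionally stable)
psi = psi_old - 1j * dt * apply_H(psi_old)
for _ in range(1000):
    psi_new = psi_old - 2j * dt * apply_H(psi)
    psi_old, psi = psi, psi_new

print("norm after evolution:", np.sum(np.abs(psi)**2) * dx)
```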

    Experimental progress in positronium laser physics


    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
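    The pattern described above, plain Python functions compiled into CUDA kernels with Numba's @cuda.jit decorator and launched with one GPU thread per simulated quantity, can be sketched as follows. The pixel geometry, array names, and binning logic are illustrative assumptions, not the actual DUNE Near Detector simulator interface; running it requires an NVIDIA GPU with a CUDA-capable Numba install.

```python
# Minimal sketch of the Numba/CUDA approach: one thread per ionization deposit,
# each thread bins its charge onto a 2D pixel plane. All names and the geometry
# (pixel pitch, plane size) are hypothetical illustrations.
import numpy as np
from numba import cuda

PIXEL_PITCH = 0.4   # cm, assumed pixel pitch
N_PIX = 100         # assumed pixels per side

@cuda.jit
def deposit_charge(xs, ys, qs, pixel_grid):
    """Bin one ionization deposit per thread onto the pixel plane."""
    i = cuda.grid(1)                      # absolute thread index
    if i < xs.size:
        ix = int(xs[i] / PIXEL_PITCH)
        iy = int(ys[i] / PIXEL_PITCH)
        if 0 <= ix < N_PIX and 0 <= iy < N_PIX:
            # atomic add avoids races when many deposits land on the same pixel
            cuda.atomic.add(pixel_grid, (iy, ix), qs[i])

# host side: stage arrays on the GPU and launch one thread per deposit
n = 100_000
xs = cuda.to_device(np.random.uniform(0, N_PIX * PIXEL_PITCH, n).astype(np.float32))
ys = cuda.to_device(np.random.uniform(0, N_PIX * PIXEL_PITCH, n).astype(np.float32))
qs = cuda.to_device(np.full(n, 1.0, dtype=np.float32))
grid = cuda.to_device(np.zeros((N_PIX, N_PIX), dtype=np.float32))

threads = 256
blocks = (n + threads - 1) // threads
deposit_charge[blocks, threads](xs, ys, qs, grid)
charge_map = grid.copy_to_host()
```

    The atomic add is the key design choice in this kind of kernel: many threads may target the same pixel, so unsynchronized writes would silently drop charge.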