40 research outputs found

    Single-photon Transistors Based on the Interaction of an Emitter and Surface Plasmons

    A symmetric approach has been suggested (Chang DE et al., Nat Phys 3:807, 2007) to realize a single-photon transistor, in which the presence (or absence) of a single incident photon in a ‘gate’ field is sufficient to allow (or prevent) the propagation of a subsequent ‘signal’ photon along a nanowire, on the condition that the ‘gate’ field is incident symmetrically from both sides of the emitter simultaneously. We present a scheme for single-photon transistors based on the strong emitter–surface-plasmon interaction. In this scheme, coherent absorption of an incoming ‘gate’ photon, incident along a nanotip, by an emitter located near the apex of the nanotip flips the state of the emitter, which in turn controls the subsequent propagation of a ‘signal’ photon in a nanowire perpendicular to the axis of the nanotip.

    Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE) Conceptual Design Report Volume 2: The Physics Program for DUNE at LBNF

    The Physics Program for the Deep Underground Neutrino Experiment (DUNE) at the Fermilab Long-Baseline Neutrino Facility (LBNF) is described.

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for simulating a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented as an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version: simulating the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
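    The Numba workflow described in the abstract — ordinary Python/NumPy code compiled just-in-time into CUDA kernels, one thread per simulated element — can be sketched as below. This is an illustrative toy, not the DUNE simulator code: the kernel and function names (`deposit_charge`, `simulate`) and the uniform charge deposits are hypothetical, and the sketch merely bins point charges onto a pixel grid with one GPU thread per charge cluster. The CUDA simulator is enabled as a fallback so the sketch also runs on machines without a GPU.

    ```python
    import os
    # Assumption for portability: fall back to Numba's CPU-based CUDA simulator
    # when no GPU is available (must be set before importing numba).
    os.environ.setdefault("NUMBA_ENABLE_CUDASIM", "1")

    import numpy as np
    from numba import cuda


    @cuda.jit
    def deposit_charge(x, y, q, pitch, pixel_grid):
        """One thread per charge cluster: bin its charge onto the pixel grid."""
        i = cuda.grid(1)
        if i < x.size:
            ix = int(x[i] // pitch)
            iy = int(y[i] // pitch)
            if 0 <= ix < pixel_grid.shape[0] and 0 <= iy < pixel_grid.shape[1]:
                # Atomic add: many clusters may land on the same pixel in parallel.
                cuda.atomic.add(pixel_grid, (ix, iy), q[i])


    def simulate(n_clusters=1024, nx=32, ny=32, pitch=0.4):
        """Scatter n_clusters unit charges uniformly over an nx-by-ny pixel plane."""
        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, nx * pitch, n_clusters).astype(np.float32)
        y = rng.uniform(0.0, ny * pitch, n_clusters).astype(np.float32)
        q = np.ones(n_clusters, dtype=np.float32)
        grid = np.zeros((nx, ny), dtype=np.float32)

        threads = 128
        blocks = (n_clusters + threads - 1) // threads
        deposit_charge[blocks, threads](x, y, q, np.float32(pitch), grid)
        return grid
    ```

    The same decorated function runs unchanged on a real GPU or in the simulator; the paper's four-orders-of-magnitude speed-up comes from launching many such lightweight threads concurrently, which suits the per-channel independence of a pixelated readout.
    
    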