57 research outputs found

    Identification and reconstruction of low-energy electrons in the ProtoDUNE-SP detector

    Measurements of electrons from νe interactions are crucial for the Deep Underground Neutrino Experiment (DUNE) neutrino oscillation program, as well as for searches for physics beyond the standard model, supernova neutrino detection, and solar neutrino measurements. This article describes the selection and reconstruction of low-energy (Michel) electrons in the ProtoDUNE-SP detector. ProtoDUNE-SP is one of the prototypes for the DUNE far detector, built and operated at CERN as a charged-particle test beam experiment. A sample of low-energy electrons produced by the decay of cosmic muons is selected with a purity of 95%. This sample is used to calibrate the low-energy electron energy scale with two techniques. An electron energy calibration based on a cosmic-ray muon sample uses calibration constants derived from measured and simulated cosmic-ray muon events. Another calibration technique makes use of the theoretically well-understood Michel electron energy spectrum to convert reconstructed charge to electron energy. In addition, the effects of detector response on the low-energy electron energy scale and its resolution, including readout electronics threshold effects, are quantified. Finally, the relation between the theoretical and reconstructed low-energy electron energy spectra is derived, and the energy resolution is characterized. The low-energy electron selection presented here accounts for about 75% of the total electron deposited energy. After the addition of lost energy using a Monte Carlo simulation, the energy resolution improves from about 40% to 25% at 50 MeV. These results are used to validate the expected capabilities of the DUNE far detector to reconstruct low-energy electrons.
    Comment: 19 pages, 10 figures
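The Michel spectrum that anchors the second calibration technique has a simple closed form. As a hedged illustration (tree-level, massless-electron approximation, ignoring detector response, radiative corrections, and muon capture effects), its shape and ~52.8 MeV endpoint can be sketched as:

```python
import numpy as np

M_MU = 105.66        # muon mass in MeV/c^2
E_MAX = M_MU / 2.0   # Michel spectrum endpoint, ~52.8 MeV

def michel_spectrum(e):
    """Unnormalized tree-level Michel electron energy spectrum in the
    massless-electron approximation: dN/dE ~ x^2 (3 - 2x), x = E/E_MAX."""
    x = np.asarray(e, dtype=float) / E_MAX
    return np.where((x >= 0.0) & (x <= 1.0), x**2 * (3.0 - 2.0 * x), 0.0)

# The spectrum rises monotonically and cuts off sharply at the endpoint,
# which is what makes it useful as an absolute energy-scale reference.
energies = np.linspace(0.0, 60.0, 1201)
weights = michel_spectrum(energies)
mean_e = np.sum(energies * weights) / np.sum(weights)  # analytic mean: 0.7 * E_MAX
```

The analytic mean of this shape is 0.7 of the endpoint (about 37 MeV), a quick consistency check when comparing a reconstructed charge spectrum against the theoretical one.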

    Impact of cross-section uncertainties on supernova neutrino spectral parameter fitting in the Deep Underground Neutrino Experiment

    A primary goal of the upcoming Deep Underground Neutrino Experiment (DUNE) is to measure the O(10) MeV neutrinos produced by a Galactic core-collapse supernova, should one occur during the lifetime of the experiment. The liquid-argon-based detectors planned for DUNE are expected to be uniquely sensitive to the νe component of the supernova flux, enabling a wide variety of physics and astrophysics measurements. A key requirement for a correct interpretation of these measurements is a good understanding of the energy-dependent total cross section σ(Eν) for charged-current νe absorption on argon. In the context of a simulated extraction of supernova νe spectral parameters from a toy analysis, we investigate the impact of σ(Eν) modeling uncertainties on DUNE's supernova neutrino physics sensitivity for the first time. We find that the currently large theoretical uncertainties on σ(Eν) must be substantially reduced before the νe flux parameters can be extracted reliably: in the absence of external constraints, a measurement of the integrated neutrino luminosity with less than 10% bias with DUNE requires σ(Eν) to be known to about 5%. The neutrino spectral shape parameters can be known to better than 10% for a 20% uncertainty on the cross-section scale, although they will be sensitive to uncertainties on the shape of σ(Eν). A direct measurement of low-energy νe-argon scattering would be invaluable for improving the theoretical precision to the needed level.
    Comment: 25 pages, 21 figures

    Performance of a modular ton-scale pixel-readout liquid argon time projection chamber

    The Module-0 Demonstrator is a single-phase 600 kg liquid argon time projection chamber operated as a prototype for the DUNE liquid argon near detector. Based on the ArgonCube design concept, Module-0 features a novel 80k-channel pixelated charge readout and an advanced high-coverage photon detection system. In this paper, we present an analysis of an eight-day data set consisting of 25 million cosmic ray events collected in the spring of 2021. We use this sample to demonstrate the imaging performance of the charge and light readout systems, as well as the signal correlations between the two. We also report argon purity and detector uniformity measurements and provide comparisons to detector simulations.

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
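The core parallelization pattern, independent per-pixel waveform computation, can be sketched in plain NumPy. This is an illustrative stand-in with a hypothetical exponential response shape; the actual chain compiles per-pixel loops like this into CUDA kernels via Numba's `cuda.jit`, mapping one GPU thread to each pixel.

```python
import numpy as np

def induced_current(t, t0, q, tau=2.0):
    """Toy induced-current template: charge q arriving at time t0, with an
    exponential response of time constant tau (microseconds). A hypothetical
    shape, not the detector's measured field response."""
    dt = t[None, :] - t0[:, None]              # (n_pixels, n_samples)
    return np.where(dt >= 0.0, (q[:, None] / tau) * np.exp(-dt / tau), 0.0)

n_pixels, n_samples = 1000, 256
t = np.arange(n_samples) * 0.1                 # 0.1 us sampling grid
rng = np.random.default_rng(0)
t0 = rng.uniform(2.0, 20.0, n_pixels)          # per-pixel charge arrival times
q = rng.uniform(10.0, 50.0, n_pixels)          # per-pixel deposited charge

# Every row is computed independently of every other row -- exactly the
# structure that maps one pixel to one GPU thread and, at the full channel
# count, yields the reported four-orders-of-magnitude speed-up.
waveforms = induced_current(t, t0, q)
```

The design point is that the per-pixel independence, not any cleverness inside the kernel, is what makes the pixelated readout a good GPGPU target.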

    A new quality control procedure based on non-linear autoregressive neural network for validating raw river stage data

    The main purpose of this work is the development of a new quality control method based on non-linear autoregressive neural networks (NARNN) for validating hydrological information, specifically 10-min river stage data, through automatic detection of incorrect records. To assess the effectiveness of this new approach, a comparison with adapted conventional validation tests extensively used for hydro-meteorological data was carried out. Different NARNN parameters and their stability were also analyzed in order to select the most appropriate configuration for optimal performance. A set of errors of different magnitudes was artificially introduced into the dataset to evaluate detection efficiency. The NARNN method detected more than 90% of altered records when the magnitude of the introduced error was very high, while conventional tests detected only around 13%. In addition, the NARNN method maintained a similar efficiency at intermediate and lower error ratios, while the conventional tests were not able to detect more than 6% of erroneous data. © 2013. Peer Reviewed
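The QC logic, predicting each record from its recent past and flagging large residuals, can be sketched with a linear autoregressive model standing in for the paper's neural network (the synthetic series, injected spike size, and threshold are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 10-min river-stage series: a slow seasonal swell plus sensor noise.
n = 2000
t = np.arange(n)
stage = 1.5 + 0.5 * np.sin(2 * np.pi * t / 500) + rng.normal(0.0, 0.01, n)

# Inject gross errors (sensor spikes), mimicking the artificially altered records.
bad_idx = np.array([300, 900, 1500])
stage[bad_idx] += 1.0

# Stand-in for the NARNN: a least-squares linear AR(p) one-step-ahead predictor.
# A linear model keeps the sketch dependency-free while showing the same logic:
# records the model cannot predict from their own recent history get flagged.
p = 6
X = np.column_stack([stage[i : n - p + i] for i in range(p)])
y = stage[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = np.abs(y - X @ coef)

# Flag records whose residual exceeds a robust (MAD-based) threshold.
mad = np.median(np.abs(resid - np.median(resid)))
flags = resid > np.median(resid) + 8.0 * 1.4826 * mad
flagged = np.where(flags)[0] + p   # shift back to original sample indices
```

Note that a spike also perturbs the few predictions that use it as an input, so the flagged set can include a handful of samples immediately after each injected error; a production QC pass would de-duplicate these into one event.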

    Optimized design of neural networks for a river water level prediction system

    In this paper, a Multi-Objective Genetic Algorithm (MOGA) framework for the design of Artificial Neural Network (ANN) models is used to design 1-step-ahead prediction models of river water levels. The design procedure is a near-automatic method that, given the data at hand, can partition it into datasets and determine a near-optimal model with the right topology and inputs, offering good performance on unseen data, i.e., data not used for model design. An example using more than 11 years of water level data (593,178 samples) from the Carrión river, collected at the Villoldo gauge station, shows that the MOGA framework can obtain low-complexity models with excellent performance on unseen data, achieving an RMSE of 2.5 × 10−3, which compares favorably with results obtained by alternative designs.
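The two competing objectives, model complexity versus error on unseen data, can be sketched with a toy stand-in: linear AR predictors of increasing order in place of evolved ANNs, and an exhaustive Pareto filter in place of the genetic search (the series, split point, and candidate orders are all synthetic assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic water-level series standing in for the real gauge data.
n = 3000
t = np.arange(n)
level = (np.sin(2 * np.pi * t / 300)
         + 0.3 * np.sin(2 * np.pi * t / 47)
         + rng.normal(0.0, 0.02, n))

def fit_ar(series, p, split):
    """Fit a linear AR(p) 1-step-ahead predictor on series[:split] and
    return its RMSE on the unseen remainder (the design/test partition)."""
    X = np.column_stack([series[i : len(series) - p + i] for i in range(p)])
    y = series[p:]
    cut = split - p
    coef, *_ = np.linalg.lstsq(X[:cut], y[:cut], rcond=None)
    resid = y[cut:] - X[cut:] @ coef
    return float(np.sqrt(np.mean(resid**2)))

# Objective 1: number of inputs p (complexity). Objective 2: unseen-data RMSE.
# Keep only the non-dominated (Pareto-optimal) candidates, which is the set a
# MOGA run would converge toward.
candidates = [(p, fit_ar(level, p, split=2400)) for p in (1, 2, 4, 8, 16)]
pareto = [c for c in candidates
          if not any(o[0] <= c[0] and o[1] < c[1] for o in candidates)]
```

The point of the multi-objective view is that the "best" model is not the one with the lowest error but the cheapest one on the Pareto front whose error is acceptable.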

    Finance and Treasury memoirs of Nueva Granada and Colombia in the nineteenth century

    In this digital book, the Banco de la República makes available to the country's researchers and libraries the complete set of Colombian Finance and Treasury memoirs published in the nineteenth century. It is a bibliographic tool that we hope will be of enormous utility to professionals, students, and lovers of Colombian history. The celebrated Austrian economist Joseph Schumpeter argued that public finances are one of the best starting points for investigating a society: they reflect its wealth, the territorial and sectoral distribution of that wealth, the course of economic activity, spending priorities, and their beneficiaries. Because of their relevance, and because citizens want to know how the taxes they have paid are spent, democratic governments have been careful to render periodic reports on revenues, expenditures, surpluses or deficits in government finances, and the means of covering the latter. To that end, reports are presented with some regularity to the bodies of political or administrative control. For that reason, at the Banco de la República we have considered that a bibliographic contribution of great relevance to the study of national economic history is to make this digital edition of the nineteenth-century Finance and Treasury memoirs available to the country's researchers and libraries.