13 research outputs found

    Neutrino interaction classification with a convolutional neural network in the DUNE far detector

    Get PDF
    [Note: this document was written by a large number of authors; only the first-listed author and the authors affiliated with UC3M are referenced.] The Deep Underground Neutrino Experiment is a next-generation neutrino oscillation experiment that aims to measure CP violation in the neutrino sector as part of a wider physics program. A deep learning approach based on a convolutional neural network has been developed to provide highly efficient and pure selections of electron neutrino and muon neutrino charged-current interactions. The electron neutrino (antineutrino) selection efficiency peaks at 90% (94%) and exceeds 85% (90%) for reconstructed neutrino energies between 2 and 5 GeV. The muon neutrino (antineutrino) event selection is found to have a maximum efficiency of 96% (97%) and exceeds 90% (95%) efficiency for reconstructed neutrino energies above 2 GeV. When considering all electron neutrino and antineutrino interactions as signal, a selection purity of 90% is achieved. These event selections are critical to maximize the sensitivity of the experiment to CP-violating effects.

    This document was prepared by the DUNE Collaboration using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. This work was supported by CNPq, FAPERJ, FAPEG and FAPESP, Brazil; CFI, Institute of Particle Physics and NSERC, Canada; CERN; MŠMT, Czech Republic; ERDF, H2020-EU and MSCA, European Union; CNRS/IN2P3 and CEA, France; INFN, Italy; FCT, Portugal; NRF, South Korea; Comunidad de Madrid, Fundación "La Caixa" and MICINN, Spain; State Secretariat for Education, Research and Innovation and SNSF, Switzerland; TÜBITAK, Turkey; The Royal Society and UKRI/STFC, United Kingdom; DOE and NSF, United States of America.
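The efficiency and purity figures quoted above are simple ratios over the selection's event counts. A minimal sketch of how these two metrics are defined (the counts below are hypothetical illustrations, not DUNE results):

```python
def efficiency(selected_signal, total_signal):
    """Selection efficiency: fraction of true signal events that pass the selection."""
    return selected_signal / total_signal

def purity(selected_signal, selected_total):
    """Selection purity: fraction of selected events that are true signal."""
    return selected_signal / selected_total

# Hypothetical counts: 900 of 1000 true nu_e CC events are selected,
# and 100 background events also pass the selection.
eff = efficiency(900, 1000)      # 0.90
pur = purity(900, 900 + 100)     # 0.90
```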

    ComBoS: a complete simulator of volunteer computing and desktop grids

    Get PDF
    Volunteer computing is a type of distributed computing in which ordinary people donate their idle computer time to science projects such as SETI@Home, Climateprediction.net and many others. In a similar way, desktop grid computing is a form of distributed computing in which an organization uses its existing computers to handle its own long-running computational tasks. BOINC is the main middleware that provides a software platform for volunteer computing and desktop grid computing, and it has become generalized as a platform for distributed applications in areas as diverse as mathematics, medicine, molecular biology, climatology, environmental science, and astrophysics. In this paper we present a complete simulator of BOINC infrastructures, called ComBoS. Although there are other BOINC simulators, none of them can simulate the complete BOINC infrastructure. Our goal was to create a simulator that, unlike the existing ones, could simulate realistic scenarios taking into account the parts of the BOINC infrastructure that other simulators do not consider: projects, servers, network, redundant computing, scheduling, and volunteer nodes. The outputs of the simulations allow us to analyze a wide range of statistical results, such as the throughput of each project, the number of jobs executed by the clients, the total credit granted, and the average occupation of the BOINC servers. The paper describes the design of ComBoS and the results of the validation performed. This validation compares the results obtained with ComBoS against the real ones of three different BOINC projects (Einstein@Home, SETI@Home and LHC@Home). In addition, we analyze the performance of the simulator in terms of memory usage and execution time. The paper also shows that our simulator can guide the design of BOINC projects, describing some case studies using ComBoS that could help designers verify the feasibility of BOINC projects. (C) 2017 Elsevier B.V. All rights reserved.

    This work has been partially supported by the Spanish MINISTERIO DE ECONOMÍA Y COMPETITIVIDAD under the project grant TIN2016-79637-P TOWARDS UNIFICATION OF HPC AND BIG DATA PARADIGMS.
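A back-of-the-envelope version of the throughput estimate such a simulator refines: project throughput is bounded by the aggregate compute the volunteer pool delivers. The function and all parameter values below are illustrative assumptions, not part of the ComBoS model:

```python
def project_throughput(n_hosts, host_flops, availability, job_flops, seconds=86400):
    """Rough estimate of jobs completed per day by a volunteer pool.

    n_hosts: number of volunteer clients attached to the project
    host_flops: average sustained FLOPS per host
    availability: fraction of time a host is on and working (0..1)
    job_flops: floating-point operations per workunit
    """
    total_flops = n_hosts * host_flops * availability * seconds
    return total_flops / job_flops

# e.g. 10,000 hosts at 10 GFLOPS, 25% available, 1e14 FLOP workunits
jobs_per_day = project_throughput(10_000, 10e9, 0.25, 1e14)  # 21600.0
```

A full simulator like ComBoS goes well beyond this bound by modeling the network, redundant computing, and scheduling, which all reduce effective throughput.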

    A new volunteer computing model for data-intensive applications

    Get PDF
    Volunteer computing is a type of distributed computing in which ordinary people donate computing resources to scientific projects. BOINC is the main middleware system for this type of distributed computing. The aim of volunteer computing is to let organizations attain large computing power, thanks to the participation of volunteer clients, without a high investment in infrastructure. There are projects, like the ATLAS@Home project, in which the number of running jobs has reached a plateau due to a high load on data servers caused by file transfer. This is why we have designed an alternative, using the same BOINC infrastructure, to improve the performance of BOINC projects that have reached their limit because of the I/O bottleneck in data servers. This alternative involves having a percentage of the volunteer clients run as data servers, called data volunteers, which improve the performance of the system by reducing the load on the data servers. In addition, our solution takes advantage of data locality, leveraging the low network latencies of closer machines. This paper describes our alternative in detail and shows the performance of the solution, applied to three different BOINC projects, using a simulator of our own, ComBoS.

    Spanish MINISTERIO DE ECONOMÍA Y COMPETITIVIDAD, Grant/Award Number: TIN2016-79637-
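The data-locality idea above can be sketched as a client choosing the lowest-latency data volunteer that holds its input file, falling back to the central data server otherwise. The function name, latency values, and node labels are hypothetical illustrations, not part of the paper's protocol:

```python
def pick_data_source(client_latencies, data_volunteers, fallback="data_server"):
    """Choose where a client downloads input files from: the lowest-latency
    data volunteer holding the file, or the central data server if none do.

    client_latencies: dict mapping node name -> measured latency (ms)
    data_volunteers: names of data volunteers that hold the requested file
    """
    candidates = {dv: client_latencies[dv]
                  for dv in data_volunteers if dv in client_latencies}
    if not candidates:
        return fallback               # no data volunteer reachable
    return min(candidates, key=candidates.get)

latencies = {"dv1": 40.0, "dv2": 12.5, "dv3": 80.0}  # ms, made-up values
src = pick_data_source(latencies, ["dv1", "dv2"])    # -> "dv2"
```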

    WepSIM: an online interactive educational simulator integrating microdesign, microprogramming, and assembly language programming

    Get PDF
    Our educational project has three primary goals. First, we want to provide a robust vision of how hardware and software interact, by integrating the design of an instruction set (through microprogramming) and the use of that instruction set for assembly programming. Second, we wish to offer a versatile and interactive tool in which this integrated vision can be tested. The tool we have developed to achieve this is called WepSIM, and it provides the view of an elemental processor together with a microprogrammed subset of the MIPS instruction set. In addition, WepSIM is flexible enough to be adapted to other instruction sets or hardware components (e.g., ARM or x86). Third, we want to extend the activities of our university courses, labs, and lectures (fixed hours in a fixed place), so that students may learn using their mobile devices at any location and at any time of day. This article presents how WepSIM has improved the teaching of Computer Structure courses by empowering students with a more dynamic and guided learning process. We show the results obtained from using the simulator in the Computer Structure course of the Bachelor's Degree in Computer Science and Engineering at University Carlos III of Madrid.
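The core idea of microprogramming, where each machine instruction is defined by a routine of smaller micro-operations executed by a generic control unit, can be sketched in a few lines. This toy two-instruction ISA is a hypothetical illustration in the spirit of WepSIM's approach, not WepSIM's actual microcode format:

```python
def make_cpu():
    """A minimal CPU state: two registers and a program counter."""
    return {"regs": {"R0": 0, "R1": 0}, "pc": 0}

# Each machine instruction maps to a microroutine: an ordered list of
# register-transfer micro-operations. Defining an instruction means
# writing its microroutine, not changing the control unit.
MICROCODE = {
    "LI":  [lambda cpu, r, imm: cpu["regs"].__setitem__(r, imm)],
    "ADD": [lambda cpu, a, b: cpu["regs"].__setitem__(a, cpu["regs"][a] + cpu["regs"][b])],
}

def run(cpu, program):
    """Generic control unit: fetch, decode via MICROCODE, execute, advance."""
    while cpu["pc"] < len(program):
        op, *args = program[cpu["pc"]]
        for micro_op in MICROCODE[op]:   # execute the instruction's microroutine
            micro_op(cpu, *args)
        cpu["pc"] += 1
    return cpu

cpu = run(make_cpu(), [("LI", "R0", 2), ("LI", "R1", 3), ("ADD", "R0", "R1")])
# cpu["regs"]["R0"] == 5
```

Adapting the "ISA" to another architecture amounts to swapping the MICROCODE table, which mirrors why WepSIM can be retargeted to ARM or x86 subsets.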

    Graph neural network for 3D classification of ambiguities and optical crosstalk in scintillator-based neutrino detectors

    Get PDF
    Deep learning tools are being used extensively in high energy physics and are becoming central to the reconstruction of neutrino interactions in particle detectors. In this work, we report on the performance of a graph neural network in assisting with particle flow event reconstruction. The three-dimensional reconstruction of particle tracks produced in neutrino interactions can be subject to ambiguities due to high-multiplicity signatures in the detector or leakage of signal between neighboring active detector volumes. Graph neural networks potentially have the capability of identifying all these features to boost the reconstruction performance. As an example case study, we tested a graph neural network, inspired by the GraphSAGE algorithm, on a novel 3D-granular plastic-scintillator detector that will be used to upgrade the near detector of the T2K experiment. The developed neural network has been trained and tested on diverse neutrino interaction samples, showing very promising results: the classification of particle track voxels produced in the detector can be done with efficiencies and purities of 94-96% per event, and most of the ambiguities can be identified and rejected, while remaining robust against systematic effects.
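The GraphSAGE idea the network is inspired by is that each node (here, a detector voxel) updates its features by combining its own features with an aggregate of its neighbors' features. A minimal NumPy sketch of one mean-aggregation layer, assuming dense adjacency and ReLU activation (the real network's architecture and weights are not reproduced here):

```python
import numpy as np

def sage_mean_layer(h, adj, W_self, W_neigh):
    """One GraphSAGE-style layer with mean aggregation.

    h: (n_nodes, n_feat) node feature matrix
    adj: (n_nodes, n_nodes) binary adjacency matrix (no self-loops)
    W_self, W_neigh: weight matrices for own and aggregated features
    """
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1                       # isolated nodes keep a zero aggregate
    neigh_mean = (adj @ h) / deg            # mean of each node's neighbours
    return np.maximum(0, h @ W_self + neigh_mean @ W_neigh)  # ReLU

# Tiny path graph 0-1-2 with identity weights, for illustration:
h = np.array([[1., 0.], [0., 1.], [1., 1.]])
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
out = sage_mean_layer(h, adj, np.eye(2), np.eye(2))
```

Stacking such layers lets information flow over multi-hop neighborhoods, which is how ambiguities and crosstalk spread across neighboring voxels can be recognized.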

    Neutrino interaction event filtering at liquid argon time projection chambers using neural networks with minimal input model bias

    No full text
    In current and future neutrino oscillation experiments using liquid argon time projection chambers (LAr-TPCs), a key challenge is identifying neutrino interactions from the pervasive cosmic-ray background. Rejection of such background is often possible using traditional cut-based selections, but this typically requires the prior use of computationally expensive reconstruction algorithms. This work demonstrates an alternative approach of using 3D convolutional neural networks (CNNs) trained on low-level timing information from only the scintillation light signal of interactions inside LAr-TPCs. We further present a means of mitigating biases from imperfect simulations by applying domain adversarial neural networks (DANNs). These techniques are applied to example simulations from the ICARUS detector, the far detector of the Short Baseline Neutrino experiment at Fermilab. The results show that cosmic background is reduced by up to 74% whilst neutrino interaction selection efficiency remains over 94%, even in cases where the simulation poorly describes the data.
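The mechanism at the heart of a DANN is the gradient reversal layer: it is the identity in the forward pass, but flips (and scales) gradients flowing back from a domain classifier, pushing the feature extractor toward features that do not distinguish simulation from data. A minimal conceptual sketch, assuming scalar values; in practice this lives inside an autodiff framework rather than standalone code:

```python
class GradientReversal:
    """Identity in the forward pass; multiplies gradients by -lam in the
    backward pass. Features upstream are thus trained to *confuse* the
    domain classifier, yielding domain-invariant representations."""

    def __init__(self, lam=1.0):
        self.lam = lam          # trade-off between task and domain losses

    def forward(self, x):
        return x                # no change to activations

    def backward(self, grad_output):
        return -self.lam * grad_output   # reversed, scaled gradient

grl = GradientReversal(lam=0.5)
y = grl.forward(3.0)        # 3.0: activations pass through unchanged
g = grl.backward(2.0)       # -1.0: gradient is reversed and scaled
```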

    Adversarial methods to reduce simulation bias in neutrino interaction event filtering at Liquid Argon Time Projection Chambers

    No full text
    For current and future neutrino oscillation experiments using large liquid argon time projection chambers (LAr-TPCs), a key challenge is identifying neutrino interactions from the pervasive cosmic-ray background. Rejection of such background is often possible using traditional cut-based selections, but this typically requires the prior use of computationally expensive reconstruction algorithms. This work demonstrates an alternative approach of using a 3D submanifold sparse convolutional network trained on low-level information from the scintillation light signal of interactions inside LAr-TPCs. This technique is applied to example simulations from ICARUS, the far detector of the Short Baseline Neutrino (SBN) program at Fermilab. The results of the network show that cosmic background is reduced by up to 76.3% whilst neutrino interaction selection efficiency remains over 98.9%. We further present a way to mitigate potential biases from imperfect input simulations by applying domain adversarial neural networks (DANNs), for which modified simulated samples are introduced to imitate real data and a small portion of them are used for adversarial training. A series of mock-data studies are performed and demonstrate the effectiveness of using DANNs to mitigate biases, showing neutrino interaction selection efficiency significantly better than that achieved without the adversarial training.
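The background-rejection figure quoted above is one minus the fraction of cosmic events surviving the selection. A small sketch of the metric (the event counts are hypothetical, chosen only to reproduce a 76.3% rejection):

```python
def background_rejection(bkg_before, bkg_after):
    """Fraction of background events removed by the selection."""
    return 1.0 - bkg_after / bkg_before

# Hypothetical counts: 1000 cosmic events before the cut, 237 survive.
rej = background_rejection(1000, 237)   # 0.763, i.e. 76.3% rejected
```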

    Datasets and model checkpoints for overlapping-sparse neutrino interaction decomposition

    No full text
    Datasets and model checkpoints corresponding to the article "Deep-learning-based decomposition of overlapping-sparse images: application at the vertex of neutrino interactions" (https://arxiv.org/abs/2310.19695):
    - dataset_p.tar.gz: proton dataset.
    - dataset_mu.tar.gz: muon dataset.
    - dataset_D+.tar.gz: deuterium dataset.
    - dataset_T+.tar.gz: tritium dataset.
    - metadata.tar.gz: datasets metadata.
    - checkpoints.tar.gz: checkpoints for the trained neural network models.
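Once downloaded, the listed archives can be unpacked with the standard library; the `data/` output directory is an arbitrary choice, and archives not yet present are simply skipped:

```python
import tarfile
from pathlib import Path

# Archive names exactly as published in the record above.
ARCHIVES = [
    "dataset_p.tar.gz", "dataset_mu.tar.gz",
    "dataset_D+.tar.gz", "dataset_T+.tar.gz",
    "metadata.tar.gz", "checkpoints.tar.gz",
]

out = Path("data")
out.mkdir(exist_ok=True)
for name in ARCHIVES:
    if Path(name).exists():                  # skip archives not downloaded yet
        with tarfile.open(name, "r:gz") as tar:
            tar.extractall(out)              # unpack into ./data
```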