
    Interrelation of equivariant Gaussian processes and convolutional neural networks

    Currently there exists a rather promising new trend in machine learning (ML) based on the relationship between neural networks (NNs) and Gaussian processes (GPs), including many related subtopics, e.g., signal propagation in NNs, theoretical derivation of learning curves for NNs, QFT methods in ML, etc. An important feature of convolutional neural networks (CNNs) is their equivariance (consistency) with respect to symmetry transformations of the input data. In this work we establish a relationship between the many-channel limit of CNNs with vector-valued neuron activations that are equivariant with respect to the two-dimensional Euclidean group and the corresponding independently introduced equivariant Gaussian processes.

    Comment: 5 pages. Presented at ACAT 2021: 20th International Workshop on Advanced Computing and Analysis Techniques in Physics Research, Daejeon, Korea, 29 Nov - 3 Dec 2021.
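As background, the NN-GP correspondence invoked in this abstract rests on a kernel recursion over layers in the many-channel (infinite-width) limit. A minimal sketch of the standard, non-equivariant form of that recursion (the paper's contribution is the equivariant, E(2)-symmetric, vector-valued analogue, which is not reproduced here) is:

```latex
% Standard NNGP kernel recursion in the infinite-width limit:
% layer-(l+1) covariance from the layer-l covariance, for weight and
% bias variances \sigma_w^2, \sigma_b^2 and nonlinearity \varphi.
K^{(l+1)}(x, x') = \sigma_b^2
  + \sigma_w^2 \, \mathbb{E}_{f \sim \mathcal{GP}\left(0,\, K^{(l)}\right)}
    \left[ \varphi\bigl(f(x)\bigr)\, \varphi\bigl(f(x')\bigr) \right]
```

Intuitively, as the number of channels grows, the pre-activations at each layer converge to a Gaussian process whose covariance is given by this recursion; the equivariant case constrains the kernel to respect the symmetry group of the inputs.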

    Processing Images from Multiple IACTs in the TAIGA Experiment with Convolutional Neural Networks

    Extensive air showers created by high-energy particles interacting with the Earth's atmosphere can be detected using imaging atmospheric Cherenkov telescopes (IACTs). The IACT images can be analyzed to distinguish between the events caused by gamma rays and by hadrons and to infer the parameters of the event, such as the energy of the primary particle. We use convolutional neural networks (CNNs) to analyze Monte Carlo-simulated images from the telescopes of the TAIGA experiment. The analysis includes selection of the images corresponding to the showers caused by gamma rays and estimation of the energy of the gamma rays. We compare the performance of the CNNs using images from a single telescope with that of the CNNs using images from two telescopes as inputs.

    Comment: In Proceedings of the 5th International Workshop on Deep Learning in Computational Physics (DLCP2021), 28-29 June 2021, Moscow, Russia.
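The kind of pipeline described above can be sketched in miniature as follows. This is a hypothetical illustration, not the authors' network: a single hand-rolled convolution, ReLU, global average pooling, and a sigmoid "gamma score"; all sizes and weights are placeholders, and a real analysis would use a trained multi-layer CNN on the TAIGA camera geometry.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def gamma_score(image, kernel, weight, bias):
    """Toy classifier: conv -> ReLU -> global average pool -> sigmoid."""
    feature_map = np.maximum(conv2d(image, kernel), 0.0)  # ReLU
    pooled = feature_map.mean()                           # global average pool
    return 1.0 / (1.0 + np.exp(-(weight * pooled + bias)))  # sigmoid score

rng = np.random.default_rng(0)
image = rng.random((16, 16))          # stand-in for a simulated camera image
kernel = rng.standard_normal((3, 3))  # placeholder (untrained) filter
score = gamma_score(image, kernel, weight=1.0, bias=0.0)
print(score)  # a value in (0, 1), read as the probability of a gamma event
```

A two-telescope variant, as compared in the abstract, would simply feed both images through convolutional branches and combine the pooled features before the final score.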

    Using conditional variational autoencoders to generate images from atmospheric Cherenkov telescopes

    High-energy particles hitting the upper atmosphere of the Earth produce extensive air showers that can be detected from ground level using imaging atmospheric Cherenkov telescopes. The images recorded by Cherenkov telescopes can be analyzed to separate gamma-ray events from the background hadron events. Many of the methods of analysis require simulation of massive amounts of events and the corresponding images by the Monte Carlo method. However, Monte Carlo simulation is computationally expensive. The data simulated by the Monte Carlo method can be augmented by images generated using faster machine learning methods such as generative adversarial networks or conditional variational autoencoders. We use a conditional variational autoencoder to generate images of gamma events from a Cherenkov telescope of the TAIGA experiment. The variational autoencoder is trained on a set of Monte Carlo events with the image size, or the sum of the amplitudes of the pixels, used as the conditional parameter. We used the trained variational autoencoder to generate new images with the same distribution of the conditional parameter as the size distribution of the Monte Carlo-simulated images of gamma events. The generated images are similar to the Monte Carlo images: a classifier neural network trained on gamma and proton events assigns them an average gamma score of 0.984, with less than 3% of the events being assigned a gamma score below 0.999. At the same time, the sizes of the generated images do not match the conditional parameter used in their generation, with an average error of 0.33.
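The generation step of a conditional VAE of the kind described above can be sketched as follows. This is a hypothetical illustration: the encoder output and decoder here are placeholder linear maps with random weights, whereas in the experiment they would be trained networks, with `size` (the sum of pixel amplitudes) playing the role of the conditional parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim, image_dim = 4, 16  # illustrative dimensions only

def reparameterize(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z, size, weights):
    """Placeholder decoder: maps [z, condition] to a flat 'image'."""
    conditioned = np.concatenate([z, [size]])  # append conditional parameter
    return weights @ conditioned

weights = rng.standard_normal((image_dim, latent_dim + 1))
mu = np.zeros(latent_dim)       # at generation time, sample from the prior:
log_var = np.zeros(latent_dim)  # zero mean, unit variance
z = reparameterize(mu, log_var)
generated = decode(z, size=100.0, weights=weights)
print(generated.shape)  # (16,) -- one flattened generated image
```

Conditioning by concatenating the size to the latent vector is one standard design; the mismatch reported in the abstract (average error 0.33) reflects the fact that the decoder is not forced to reproduce the conditioning value exactly.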

    German-Russian Astroparticle Data Life Cycle Initiative to foster Big Data Infrastructure for Multi-Messenger Astronomy

    Challenges faced by researchers in multi-messenger astroparticle physics include: computing-intensive search and preprocessing related to the diversity of content and formats of the data from different observatories, as well as to data fragmentation over separate storage locations; inconsistencies in user interfaces for data retrieval; and the lack of unified infrastructure solutions suitable for both data gathering and online analysis, e.g. analyses employing deep neural networks. In order to address these issues, the German-Russian Astroparticle Data Life Cycle Initiative (GRADLCI) was created. In addition, we support activities for communicating our research field to the public. The approaches proposed by the project are based on the concept of the data life cycle, which assumes a particular pipeline of data curation applied to every unit of data from the moment of its retrieval or creation through the stages of preprocessing, analysis, publishing, and archival. The movement towards unified data curation schemes is essential to increase the benefits gained in the analysis of geographically distributed or content-diverse data. Within the project, an infrastructure for effective astroparticle data curation and online analysis was developed. Using it, first results of deep-learning-based analysis were obtained.