A citizen-science approach to muon events in imaging atmospheric Cherenkov telescope data: the Muon Hunter
Event classification is a common task in gamma-ray astrophysics. It can be
treated with rapidly-advancing machine learning algorithms, which have the
potential to outperform traditional analysis methods. However, a major
challenge for machine learning models is extracting reliably labelled training
examples from real data. Citizen science offers a promising approach to tackle
this challenge.
We present "Muon Hunter", a citizen science project hosted on the Zooniverse
platform, where VERITAS data are classified multiple times by individual users
in order to select and parameterize muon events, a product from cosmic ray
induced showers. We use this dataset to train and validate a convolutional
neural-network model to identify muon events for use in monitoring and
calibration. The results of this work and our experience of using the
Zooniverse are presented.
Comment: 8 pages, 3 figures, in Proceedings of the 35th International Cosmic Ray Conference (ICRC 2017), Busan, South Korea
Identifying muon rings in VERITAS data using convolutional neural networks trained on images classified with Muon Hunters 2
Muons from extensive air showers appear as rings in images taken with imaging atmospheric Cherenkov telescopes, such as VERITAS. These muon-ring images are used for the calibration of the VERITAS telescopes; however, the calibration accuracy can be improved with a more efficient muon-identification algorithm. Convolutional neural networks (CNNs) are used in many state-of-the-art image-recognition systems and are ideal for muon image identification, once trained on a suitable dataset with labels for muon images. However, a CNN trained on a dataset labelled by existing algorithms would be limited by the suboptimal muon-identification efficiency of those algorithms.
Muon Hunters 2 is a citizen science project that asks users to label grids of VERITAS telescope images, stating which images contain muon rings. Each image is labelled 10 times by independent volunteers, and the votes are aggregated and used to assign a 'muon' or 'non-muon' label to the corresponding image. An analysis was performed using an expert-labelled dataset in order to determine the optimal vote-percentage cut-offs for assigning labels to each image for CNN training. This was optimised so as to identify as many muon images as possible while avoiding false positives.
The performance of this model greatly improves on existing muon-identification algorithms, identifying approximately 30 times the number of muon images identified by the current algorithm implemented in VEGAS (VERITAS Gamma-ray Analysis Suite), and roughly 2.5 times the number identified by the Hough transform method, along with significantly outperforming a CNN trained on VEGAS-labelled data.
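The vote-aggregation step described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the cut-off values (80% for 'muon', 20% for 'non-muon') are hypothetical placeholders standing in for the optimised cut-offs determined from the expert-labelled dataset.

```python
def label_from_votes(muon_votes, total_votes=10,
                     muon_cutoff=0.8, non_muon_cutoff=0.2):
    """Assign 'muon', 'non-muon', or None (ambiguous, excluded from training).

    Each image receives `total_votes` independent volunteer classifications;
    the fraction voting 'muon' is compared against percentage cut-offs.
    Cut-off values here are illustrative assumptions only.
    """
    frac = muon_votes / total_votes
    if frac >= muon_cutoff:
        return "muon"
    if frac <= non_muon_cutoff:
        return "non-muon"
    return None  # ambiguous images are left out of the CNN training set

# Example: 9/10, 1/10, and 5/10 'muon' votes
labels = [label_from_votes(v) for v in (9, 1, 5)]
```

Excluding the ambiguous middle band is one way to trade training-set size for label purity, matching the stated goal of avoiding false positives.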
Processing Images from Multiple IACTs in the TAIGA Experiment with Convolutional Neural Networks
Extensive air showers created by high-energy particles interacting with the
Earth atmosphere can be detected using imaging atmospheric Cherenkov telescopes
(IACTs). The IACT images can be analyzed to distinguish between the events
caused by gamma rays and by hadrons and to infer the parameters of the event
such as the energy of the primary particle. We use convolutional neural
networks (CNNs) to analyze Monte Carlo-simulated images from the telescopes of
the TAIGA experiment. The analysis includes selection of the images
corresponding to the showers caused by gamma rays and estimating the energy of
the gamma rays. We compare performance of the CNNs using images from a single
telescope and the CNNs using images from two telescopes as inputs.
Comment: In Proceedings of 5th International Workshop on Deep Learning in Computational Physics (DLCP2021), 28-29 June, 2021, Moscow, Russia
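One common way to present images from two telescopes to a single CNN, as opposed to one telescope at a time, is to stack them along the channel axis so the network sees both views of the same event at once. The sketch below only illustrates the input-tensor layout; the image size and the stacking scheme are assumptions, not details taken from the TAIGA analysis.

```python
import numpy as np

# Hypothetical camera image size and batch of Monte Carlo events
H, W = 31, 31
n_events = 4

# Single-telescope input: one image per event, one channel
single_tel = np.zeros((n_events, H, W, 1), dtype=np.float32)

# Two-telescope input: the two telescope images of the same event
# stacked along the channel axis, giving the CNN both views at once
tel_a = np.zeros((n_events, H, W), dtype=np.float32)
tel_b = np.zeros((n_events, H, W), dtype=np.float32)
two_tel = np.stack([tel_a, tel_b], axis=-1)

print(single_tel.shape, two_tel.shape)
```

With channel stacking, the first convolutional layer can correlate the two views directly; an alternative design (also compatible with the comparison described above) is two separate convolutional branches merged before the dense layers.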