Constraints on the optical precursor to the naked-eye burst GRB080319B from Pi of the Sky observations
I present the results of a search for an optical precursor to the naked-eye
burst GRB080319B, which reached an optical peak of 5.87 mag in the "Pi of
the Sky" data. A burst of such high brightness could have been preceded by an
optical precursor luminous enough to be within the detection range of our
experiment. The "Pi of the Sky" cameras observed the coordinates of the GRB
for about 20 minutes prior to the explosion, thus providing crucial data for
the precursor search. No signal above the 3 sigma limit was found. A limit of
12 mag (V-band equivalent) was set based on the combined data from two
cameras, the most robust limit for this precursor to my knowledge.
Comment: Accepted for publication in Astronomy and Astrophysics on 07 February
201
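The limit-setting logic can be illustrated with a minimal sketch, assuming hypothetical calibration numbers (the zero point and sky-noise values below are not the actual Pi of the Sky calibration): a 3-sigma flux excess is converted to a magnitude, and averaging two cameras lowers the effective noise by sqrt(2).

```python
import math

def limiting_magnitude(zero_point, sky_sigma, n_sigma=3.0):
    """Magnitude corresponding to an n-sigma flux excess over background noise."""
    return zero_point - 2.5 * math.log10(n_sigma * sky_sigma)

def combined_limit(zero_point, sigma_cam1, sigma_cam2, n_sigma=3.0):
    """Averaging two cameras: the signal adds linearly while the noise adds
    in quadrature, so for equal cameras the effective sigma drops by sqrt(2)."""
    sigma_comb = math.sqrt(sigma_cam1**2 + sigma_cam2**2) / 2.0
    return limiting_magnitude(zero_point, sigma_comb, n_sigma)

# Hypothetical calibration: photometric zero point and sky-noise counts.
zp = 15.0
print(round(limiting_magnitude(zp, 10.0), 2))    # single camera
print(round(combined_limit(zp, 10.0, 10.0), 2))  # two cameras, ~0.38 mag deeper
```

The ~0.38 mag gain is just 2.5 log10(sqrt(2)), which is why combining the two cameras yields a deeper limit than either alone.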
The format for GRAND data storage and related Python interfaces
The vast amounts of data to be collected by the Giant Radio Array for
Neutrino Detection (GRAND) and its prototype, GRANDProto300, require a
data format that is very efficient in terms of I/O speed and compression. At
the same time, the data should be easily accessible, without knowledge of the
intricacies of the format, both for bulk processing and for detailed
event-by-event analysis and reconstruction. We present the format and the
structure prepared for GRAND data, the concept of the data-processing chain,
and data-oriented and analysis-oriented interfaces written in Python.
Comment: Proceedings of the 38th International Cosmic Ray Conference
(ICRC2023)
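The two access styles mentioned above can be sketched as follows. All names here are hypothetical illustrations, not the actual GRAND interfaces, and a plain in-memory dict stands in for the compressed on-disk format: a data-oriented layer exposes whole columns for bulk processing, while an analysis-oriented layer yields event objects for reconstruction.

```python
from dataclasses import dataclass
from typing import Iterator

# Hypothetical stand-in for the on-disk store: in reality a compressed,
# I/O-efficient format; here simply a dict of column arrays.
RAW = {
    "event_id": [1, 2, 3],
    "n_antennas": [12, 7, 20],
    "traces": [[0.1, 0.2], [0.3], [0.4, 0.5, 0.6]],
}

# Data-oriented interface: column-wise access for fast bulk processing.
def column(name):
    return RAW[name]

# Analysis-oriented interface: event-by-event objects for reconstruction.
@dataclass
class Event:
    event_id: int
    n_antennas: int
    traces: list

def events() -> Iterator[Event]:
    for i in range(len(RAW["event_id"])):
        yield Event(*(RAW[k][i] for k in ("event_id", "n_antennas", "traces")))

print(max(column("n_antennas")))   # bulk query over a column: 20
print(next(events()).event_id)     # per-event access: 1
```

The point of the split is that both layers read the same underlying store, so users never need to touch the format's internals directly.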
A search for Elves in Mini-EUSO data using CNN-based one-class classifier
Mini-EUSO is a small, near-UV telescope observing the Earth and its
atmosphere from the International Space Station. Its time resolution of 2.5
microseconds and its instantaneous ground coverage of about
km allow it to detect some Transient Luminous Events, including Elves.
Elves, with their almost circular shape and a radius expanding in time, form
cone-like structures in space-time that are usually easy to recognise by
eye, but not simple to filter out from the myriad of other events, many of
them not yet categorised. In this work, we present a fast and efficient
approach for detecting Elves in the data using a 3D CNN-based one-class
classifier.
Comment: ICRC 2023 Proceedings
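The one-class idea can be illustrated without the network itself. In this sketch, hand-made features (total brightness and growth of the lit area, since Elves expand as rings) stand in for the 3D CNN embedding; the classifier is fit on Elve examples only and flags anything far from their centroid. All data and thresholds are hypothetical.

```python
import math

def features(stack):
    """Toy embedding of a (time, x, y) pixel stack: total brightness and
    how much the lit area grows over time (Elves expand as rings)."""
    sizes = [sum(1 for row in frame for v in row if v > 0) for frame in stack]
    total = sum(v for frame in stack for row in frame for v in row)
    return (total, sizes[-1] - sizes[0])

def fit_centroid(train_stacks):
    """One-class fit: only positive (Elve) examples are used."""
    vecs = [features(s) for s in train_stacks]
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

def score(stack, centroid):
    """Anomaly score: distance to the one-class centroid in feature space."""
    return math.dist(features(stack), centroid)

# Hypothetical 3-frame, 3x3 stacks: "elve-like" = expanding lit area.
elve = [[[1, 0, 0], [0, 0, 0], [0, 0, 0]],
        [[1, 1, 0], [1, 0, 0], [0, 0, 0]],
        [[1, 1, 1], [1, 1, 0], [1, 0, 0]]]
noise = [[[1, 1, 1], [1, 1, 1], [1, 1, 1]]] * 3

c = fit_centroid([elve])
print(score(elve, c) < score(noise, c))  # True: noise scores as anomalous
```

A real 3D CNN would learn such spatio-temporal features from the pixel stacks instead of having them hard-coded, but the one-class decision rule is the same: train on the known class, threshold on the score.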
Luiza: Analysis Framework for GLORIA
The Luiza analysis framework for GLORIA is based on the Marlin package, which was originally developed for data analysis in the new High Energy Physics (HEP) project, the International Linear Collider (ILC). HEP experiments have to deal with enormous amounts of data, and distributed data analysis is therefore essential. The Marlin framework concept seems well suited to the needs of GLORIA. The idea (and large parts of the code) taken from Marlin is that every computing task is implemented as a processor (module) that analyzes the data stored in an internal data structure, and the additional output is also added to that collection. The advantage of this modular approach is that it keeps things as simple as possible. Each step of the full analysis chain, e.g. from raw images to light curves, can be processed step by step, and the output of each step is still self-consistent and can be fed into the next step without any manipulation.
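The processor concept described above can be sketched in a few lines (with hypothetical names, not the actual Luiza or Marlin API): each module reads from and appends to a shared data structure, so the steps chain together without manual glue between them.

```python
class Processor:
    """One step of the analysis chain; reads/writes the shared collection."""
    def process(self, event):
        raise NotImplementedError

class Calibrate(Processor):
    def process(self, event):
        # Output is appended to the collection, the raw input is kept intact.
        event["calibrated"] = [v * 2.0 for v in event["raw"]]

class LightCurve(Processor):
    def process(self, event):
        event["lightcurve"] = sum(event["calibrated"])

def run_chain(processors, event):
    for p in processors:
        p.process(event)  # each step's output feeds the next, unmodified
    return event

evt = run_chain([Calibrate(), LightCurve()], {"raw": [1.0, 2.0, 3.0]})
print(evt["lightcurve"])  # 12.0
```

Because every processor only touches the shared collection, steps can be reordered, swapped, or tested in isolation, which is the simplicity the modular design aims for.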