
    FACT - Performance of the first Cherenkov telescope observing with SiPMs

    The First G-APD Cherenkov Telescope (FACT) is pioneering the use of silicon photomultipliers (SiPMs, also known as G-APDs) for the imaging atmospheric Cherenkov technique. It is located at the Observatorio del Roque de los Muchachos on the Canary island of La Palma. Since first light in October 2011, it has been monitoring bright TeV blazars in the northern sky. By now, FACT is the only imaging atmospheric Cherenkov telescope operating with SiPMs on a nightly basis. Over the course of the last five years, FACT has demonstrated their reliability and excellent performance. Moreover, their robustness has allowed for an increased duty cycle, including nights with strong moonlight, without the need for UV filters. In this contribution, we present the performance of the first Cherenkov telescope using solid-state photosensors, determined from an analysis of data of the Crab Nebula, the so-called standard candle of gamma-ray astronomy. The presented analysis chain utilizes modern data mining methods and unfolding techniques to obtain the energy spectrum of this source. The characteristic results of such an analysis are reported, providing, e.g., the angular and energy resolution of FACT as well as the energy spectrum of the Crab Nebula. Furthermore, these results are discussed in the context of the performance of coexisting Cherenkov telescopes.

    M. Noethe, J. Adam, M.L. Ahnen, D. Baack, M. Balbo, A. Biland, M. Blank, T. Bretz, K. Bruegge, J. Buss, A. Dmytriiev, D. Dorner, S. Einecke, D. Elsaesser, C. Hempfling, T. Herbst, D. Hildebrand, L. Kortmann, L. Linhoff, M. Mahlke, K. Mannheim, S. Mueller, D. Neise, A. Neronov, J. Oberkirch, A. Paravac, F. Pauss, W. Rhode, B. Schleicher, F. Schulz, A. Shukla, V. Sliusar, F. Temme, J. Thaele, R. Walte
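    As an illustration of the unfolding step mentioned above, the following is a minimal sketch of a Tikhonov-regularized unfolding of a binned energy spectrum. The migration matrix A, the observed counts g, and the regularization strength tau are illustrative placeholders, not FACT's actual implementation.

        import numpy as np

        def unfold_tikhonov(A, g, tau):
            """Minimal Tikhonov-regularized unfolding sketch.

            A   : (n_obs, n_true) migration matrix, P(observed bin | true bin)
            g   : (n_obs,) observed, background-subtracted counts
            tau : regularization strength penalizing curvature of the solution

            Solves min_f ||A f - g||^2 + tau ||C f||^2, where C is a
            second-difference (curvature) matrix acting as a smoothness prior.
            """
            n_true = A.shape[1]
            C = np.diag(np.full(n_true, -2.0))
            C += np.diag(np.ones(n_true - 1), k=1) + np.diag(np.ones(n_true - 1), k=-1)
            # Closed-form solution of the regularized least-squares problem.
            return np.linalg.solve(A.T @ A + tau * (C.T @ C), A.T @ g)

        # Toy usage: five true-energy bins smeared into five observed bins.
        rng = np.random.default_rng(0)
        f_true = np.array([1000.0, 600.0, 300.0, 120.0, 40.0])
        A = 0.7 * np.eye(5) + 0.15 * np.eye(5, k=1) + 0.15 * np.eye(5, k=-1)
        g = rng.poisson(A @ f_true).astype(float)
        print(unfold_tikhonov(A, g, tau=0.5))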

    Long-term monitoring of bright blazars in the multi-GeV to TeV range with FACT

    Blazars like Markarian 421 or Markarian 501 are active galactic nuclei (AGN) with their jets oriented towards the observer. They are among the brightest objects in the very high energy (VHE) gamma-ray regime (>100 GeV). Their emitted gamma-ray fluxes are extremely variable, with activity levels changing on timescales of minutes, months, and even years. Several open questions drive current research, such as the location of the emission regions, the nature of the AGN engine, and the mechanisms of particle acceleration. A dedicated long-term monitoring program is necessary to investigate the properties of blazars in detail. A densely sampled and unbiased light curve allows for observation of both high and low states of the sources, and the combination with multi-wavelength observations can contribute to answering the questions mentioned above. FACT (First G-APD Cherenkov Telescope) is the first operational telescope using silicon photomultipliers (SiPMs, also known as Geiger-mode Avalanche Photo Diodes, G-APDs) as photon detectors. SiPMs have a very homogeneous and stable long-term performance and allow operation even during full moon without any filter, leading to a maximal duty cycle for an Imaging Air Cherenkov Telescope (IACT). Hence, FACT is an ideal device for such long-term monitoring of bright blazars. A small set of sources (e.g., Markarian 421, Markarian 501, 1ES 1959+650, and 1ES 2344+514) is currently being monitored. In this contribution, the FACT telescope and the concept of long-term monitoring of bright blazars are introduced. The results of the monitoring program are shown, and the advantages of densely sampled and unbiased light curves are discussed.
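    To make the light-curve concept concrete, the following sketch shows how a nightly binned light curve can be built from on- and off-region counts. The function name, the exposure ratio alpha, and the effective observation times are illustrative assumptions, not FACT's actual pipeline.

        import numpy as np

        def nightly_excess_rates(n_on, n_off, alpha, t_eff):
            """Nightly excess rates with Poisson error propagation (sketch).

            n_on  : on-region counts per night
            n_off : off-region counts per night
            alpha : exposure ratio of on to off regions (e.g. 0.2 for five off regions)
            t_eff : effective observation time per night
            """
            excess = n_on - alpha * n_off
            # Gaussian propagation of the Poisson uncertainties of both counts.
            error = np.sqrt(n_on + alpha**2 * n_off)
            return excess / t_eff, error / t_eff

        # Toy usage: three nights, five off regions (alpha = 0.2), times in hours.
        rate, rate_err = nightly_excess_rates(
            n_on=np.array([120, 95, 210]),
            n_off=np.array([400, 410, 395]),
            alpha=0.2,
            t_eff=np.array([3.5, 2.8, 4.1]),
        )
        print(rate, rate_err)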

    Monitoring the high energy universe

    High energy gamma-ray astronomy probes the most extreme phenomena in our universe: supernovae and their remnants, as well as supermassive black holes at the centers of faraway galaxies. The First G-APD Cherenkov Telescope (FACT) is a small, prototype Imaging Air Cherenkov Telescope (IACT) observing since October 2011 at the Roque de los Muchachos, La Palma, Spain. It specializes in continuously monitoring the brightest known sources of gamma rays. In this thesis, I present a new, open analysis chain for the data recorded by FACT, with a major focus on ensuring reproducibility and on relying on modern, well-tested tools with widespread adoption. The integral sensitivity of FACT was improved by 45 % compared to previous analyses, mainly through an improved algorithm for the reconstruction of the origin of the gamma rays and many smaller improvements in the preprocessing. Sensitivity is evaluated both on simulated datasets and on observations of the Crab Nebula, the “standard candle” of gamma-ray astronomy. Another major advantage of the new analysis chain is that the event reconstruction no longer depends on a known point-source position, enabling the creation of skymaps, the analysis of observations where the source position is not exactly known, and the sharing of reconstructed events in the now standardized format for open gamma-ray astronomy. This has led to the first published joint, multi-instrument analysis on open data of four currently operating Cherenkov telescopes. A smaller second part of this thesis is concerned with enabling robotic operation of FACT, which is now the first Cherenkov telescope that requires no operators during regular observations.
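    A widely used approach to reconstructing the gamma-ray origin with a single IACT is the disp method, in which the source position is predicted to lie on the shower main axis at a learned distance from the image's center of gravity. The following geometric sketch assumes this kind of algorithm; the parameter names and the already-trained |disp| regressor and head-tail classifier are illustrative, not the thesis's actual code.

        import numpy as np

        def reconstruct_origin(cog_x, cog_y, delta, disp_abs, disp_sign):
            """Disp-style origin reconstruction (geometric sketch).

            cog_x, cog_y : center of gravity of the image in camera coordinates
            delta        : orientation angle of the shower main axis
            disp_abs     : predicted |disp| distance, e.g. from a regression model
            disp_sign    : predicted head-tail sign (+1 or -1), from a classifier
            """
            disp = disp_sign * disp_abs
            # The source lies on the main axis, a distance |disp| from the cog.
            return cog_x + disp * np.cos(delta), cog_y + disp * np.sin(delta)

        # Toy usage with made-up camera coordinates (mm) and axis angle (rad).
        print(reconstruct_origin(cog_x=50.0, cog_y=-20.0, delta=0.3,
                                 disp_abs=80.0, disp_sign=-1))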

    Machine learning for acquiring knowledge in astro-particle physics

    This thesis explores the fundamental aspects of machine learning that are involved in acquiring knowledge in the research field of astro-particle physics. This field relies substantially on machine learning methods that reconstruct the properties of astro-particles from the raw data that specialized telescopes record. These methods are typically trained on resource-intensive simulations, which reflect the existing knowledge about the particles, knowledge that physicists strive to expand. We study three fundamental machine learning tasks that emerge from this goal.

    First, we address ordinal quantification, the task of estimating the prevalences of ordered classes in sets of unlabeled data. This task emerges from the need to test the agreement of astrophysical theories with the class prevalences that a telescope observes. To this end, we unify existing quantification methods, propose an alternative optimization process, and develop regularization techniques to address ordinality in quantification problems, both within and outside of astro-particle physics. These advancements provide more accurate reconstructions of the energy spectra of cosmic gamma-ray sources and hence support physicists in drawing conclusions from their telescope data.

    Second, we address learning under class-conditional label noise. More specifically, we focus on a novel setting in which one of the class-wise noise rates is known and one is not. This setting emerges from a data acquisition protocol through which astro-particle telescopes simultaneously observe a region of interest and several background regions. We enable learning under this type of label noise with algorithms for consistent, noise-aware decision thresholding. These algorithms yield binary classifiers that outperform the existing state of the art in gamma-hadron classification with the FACT telescope. Moreover, unlike the state of the art, our classifiers are entirely trained on real telescope data and thus do not require any resource-intensive simulation.

    Third, we address active class selection, the task of actively finding those class proportions that optimize classification performance. In astro-particle physics, this task emerges from the simulation, which can produce training data in any desired class proportions. We clarify the implications of this setting from two theoretical perspectives, one of which provides us with bounds on the resulting classification performance. We employ these bounds in a certificate of model robustness, which declares a set of class proportions for which the model is accurate with high probability. We also employ these bounds in an active strategy for class-conditional data acquisition. Our strategy uniquely considers existing uncertainties about the class proportions that have to be handled during the deployment of the classifier, while being theoretically well justified.
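    As a concrete instance of the quantification task described above, the following sketch implements Adjusted Classify and Count (ACC), one of the classic quantification methods that such work builds on; it is not the thesis's own proposal, and the inputs are placeholders.

        import numpy as np

        def adjusted_classify_and_count(y_val, y_val_pred, y_target_pred, n_classes):
            """Adjusted Classify and Count (ACC) quantification sketch.

            Estimates class prevalences in an unlabeled target set by inverting
            the classifier's confusion behaviour, estimated on validation data.

            y_val         : true labels of a labeled validation set
            y_val_pred    : classifier predictions on the validation set
            y_target_pred : classifier predictions on the unlabeled target set
            """
            # M[i, j] = P(predicted class i | true class j) on validation data.
            M = np.zeros((n_classes, n_classes))
            for j in range(n_classes):
                predictions_for_class_j = y_val_pred[y_val == j]
                for i in range(n_classes):
                    M[i, j] = np.mean(predictions_for_class_j == i)
            # q[i] = fraction of target items predicted as class i.
            q = np.bincount(y_target_pred, minlength=n_classes) / len(y_target_pred)
            # Solve M p = q, then clip and renormalize to valid prevalences.
            p, *_ = np.linalg.lstsq(M, q, rcond=None)
            p = np.clip(p, 0.0, None)
            return p / p.sum()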

    On the hunt for photons: analysis of Crab Nebula data obtained by the first G-APD Cherenkov telescope

    In this thesis, the analysis of Crab Nebula data obtained by the First G-APD Cherenkov Telescope (FACT) is presented. An analysis chain using modern machine learning methods for energy estimation and background suppression is developed. The generation and application of the machine learning models are validated using Monte Carlo simulated events. The Crab Nebula is detected as a source of very high energy gamma rays with a significance of 39.89 σ, and its energy spectrum is reconstructed between 250 GeV and 16 TeV. The results of the analysis are used to evaluate the performance of the telescope: the energy bias and energy resolution are evaluated, as well as the effective collection area and the sensitivity. The positive energy bias for energies below 1 TeV can be corrected by applying an unfolding method; for higher energies, the bias is negligible. The energy resolution is about 22 % over most of the energy range. The effective collection area increases monotonically, reaching about 3 × 10⁎ mÂČ around 1 TeV. A sensitivity of 15.5 % of the flux of the Crab Nebula is calculated. These performance values are comparable to those of current experiments. Taking into account the small reflector surface of FACT, the performance is very promising.
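    Detection significances like the one quoted above are conventionally computed with Eq. (17) of Li & Ma (1983), the standard test statistic of gamma-ray astronomy. A minimal sketch, with placeholder counts rather than the thesis's actual numbers:

        import numpy as np

        def li_ma_significance(n_on, n_off, alpha):
            """Detection significance after Eq. (17) of Li & Ma (1983).

            n_on  : counts in the on (source) region
            n_off : counts in the off (background) regions
            alpha : exposure ratio of on to off regions
            """
            term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
            term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
            return np.sqrt(2 * (term_on + term_off))

        # Toy usage: five off regions (alpha = 0.2) and a clear source excess.
        print(li_ma_significance(n_on=1500, n_off=5000, alpha=0.2))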

    Discovery in Physics

    Volume 2 covers knowledge discovery in particle and astroparticle physics. Instruments gather petabytes of data, and machine learning is used to process these vast amounts of data and to detect relevant examples efficiently. The physical knowledge is encoded in simulations that are used to train the machine learning models. The interpretation of the learned models serves to expand the physical knowledge, resulting in a cycle of theory enhancement.

    Distributed analysis of vertically partitioned sensor measurements under communication constraints

    Nowadays, large amounts of data are automatically generated by devices and sensors. They measure, for instance, parameters of production processes, environmental conditions of transported goods, energy consumption of smart homes, traffic volume, air pollution and water consumption, or pulse and blood pressure of individuals. The collection and transmission of data is enabled by electronics, software, sensors, and network connectivity embedded into physical objects. These objects and the infrastructure connecting them are called the Internet of Things (IoT). In 2010, already 12.5 billion devices were connected to the IoT, about twice as many as the world's population at that time. The IoT provides us with data about our physical environment at a level of detail never known before in human history. Understanding such data creates opportunities to improve our way of living, learning, working, and entertaining. For instance, the information obtained from data analysis modules embedded into existing processes could help optimize them, leading to more sustainable systems that save resources in sectors such as manufacturing, logistics, energy and utilities, the public sector, or healthcare.

    The IoT's inherently distributed nature, the resource constraints and dynamism of its networked participants, as well as the amounts and diverse types of data collected challenge even the most advanced automated data analysis methods known today. Currently, there is a strong research focus on centralizing all data in the cloud and processing it according to the paradigm of parallel high-performance computing. However, the resources of devices and sensors on the data-generating side might not suffice to transmit all data. For instance, pervasive distributed systems such as wireless sensor networks are highly communication-constrained, as are streaming high-throughput applications, or those where data masses are simply too large to be sent over existing communication lines, like satellite connections. Hence, the IoT requires a new generation of distributed algorithms that are resource-aware and intelligently reduce the amount of data transmitted and processed throughout the analysis chain.

    This thesis deals with the distributed analysis of vertically partitioned sensor measurements under communication constraints, a particularly challenging scenario: it is not the observations that are distributed over the nodes, but their feature values. Learning accurate prediction models may require combining information from different nodes, necessarily leading to communication. The main question is how to design communication-efficient algorithms for this scenario while preserving sufficient accuracy.

    The first part of the thesis introduces fundamental concepts. An overview of the IoT and its many applications is given, with a special focus on data analysis, the vertically partitioned data scenario, and the accompanying research questions. Then, basic notions of machine learning and data mining are introduced, and a selection of existing distributed data mining approaches is presented and discussed in more detail. Distributed learning in the vertically partitioned data scenario is then motivated by a smart manufacturing case study. In a hot rolling mill, different machines assess parameters describing the processing of single steel blocks, whose quality should be predicted as early as possible by analyzing the distributed measurements. Each machine creates not a single value series, but many of them. Their heterogeneity leads to challenging questions concerning preprocessing and finding a good representation for learning, for which solutions are proposed. Another problem is that quality information is not given for individual blocks, but only for whole charges of blocks. How can the quality of individual blocks nevertheless be predicted? Time constraints lead to questions typical for the vertically partitioned data scenario: which data should be analyzed locally, to meet the constraints, and which should be sent to a central server?

    Learning from aggregated label information is a relatively novel problem in machine learning research. A new algorithm for this task is developed and evaluated, the Learning from Label Proportions by Clustering (LLPC) algorithm. Its performance is compared to three other state-of-the-art approaches in terms of accuracy and running time. It can be shown that LLPC achieves its results with lower running time, while its accuracy is comparable to, or significantly higher than, that of its competitors. The proposed algorithm comes with further benefits, like ease of implementation and a small memory footprint.

    For highly decentralized systems, the Training of Local Models from (Label) Counts (TLMC) algorithm is proposed. The method builds on LLPC, reducing communication by transferring only label counts for batches of observations between nodes. The feasibility of the approach is demonstrated by evaluating the algorithm's performance in the context of traffic flow prediction. It is shown that TLMC is much more communication-efficient than centralizing all data, while its accuracy can nevertheless compete with that of a centrally trained global model.

    Finally, a communication-efficient distributed algorithm for anomaly detection is proposed, the Vertically Distributed Core Vector Machine (VDCVM). It can be shown that the proposed algorithm communicates up to an order of magnitude less data during learning than another state-of-the-art approach or training a global model by centralizing all data. Nevertheless, in many relevant cases, the VDCVM achieves similar or even higher accuracy on several controlled and benchmark datasets.

    A main result of the thesis is that communication-efficient learning is possible in cases where the features of different nodes are conditionally independent given the target value to be predicted. Most efficient are local models that exchange label information between nodes. In comparison to consensus algorithms, which transmit labels repeatedly, TLMC sends labels only once between nodes. Communication can be reduced even further by learning from counts of labels. In the context of traffic flow prediction, the accuracy achieved this way is still sufficient in comparison to centralizing all data and training a global model. In the case of anomaly detection, similar results can be achieved by a sampling approach that draws only as many observations as needed to reach a (1+Δ)-approximation of the minimum enclosing ball (MEB). The developed approaches have many applications in communication-constrained settings in the sectors mentioned above. It has been shown that data can be reduced and learned from before it even enters the cloud. Decentralized processing might thus enable the analysis of big data masses as ever more devices are connected to the IoT.
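    The (1 + Δ)-approximation of the minimum enclosing ball mentioned above can be obtained with the classic Badoiu-Clarkson iteration, on which Core Vector Machines are built. The sketch below shows this building block only; it is not the distributed VDCVM itself.

        import numpy as np

        def meb_badoiu_clarkson(X, eps):
            """(1 + eps)-approximate minimum enclosing ball (Badoiu-Clarkson).

            Runs ceil(1 / eps^2) iterations; in each one, the center takes a
            shrinking step towards the point currently farthest away from it.

            X   : (n_points, n_dims) data matrix
            eps : approximation parameter
            """
            center = X[0].astype(float)
            for t in range(1, int(np.ceil(1.0 / eps**2)) + 1):
                # Farthest point from the current center.
                farthest = X[np.argmax(np.linalg.norm(X - center, axis=1))]
                # Convex-combination update keeps the center in the hull of X.
                center = center + (farthest - center) / (t + 1)
            radius = np.linalg.norm(X - center, axis=1).max()
            return center, radius

        # Toy usage: points on the unit circle; the center converges to ~(0, 0).
        rng = np.random.default_rng(1)
        angles = rng.uniform(0.0, 2.0 * np.pi, 200)
        X = np.column_stack([np.cos(angles), np.sin(angles)])
        print(meb_badoiu_clarkson(X, eps=0.1))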