27 research outputs found

    Modern middleware for the data acquisition of the Cherenkov Telescope Array

    The data acquisition system (DAQ) of the future Cherenkov Telescope Array (CTA) must be efficient, modular and robust to cope with the very large data rate of up to 550 Gbps coming from many telescopes with different characteristics. The use of modern middleware, namely ZeroMQ and Protocol Buffers, can help to achieve these goals while keeping the development effort at a reasonable level. Protocol Buffers are used as an on-line data format, while ZeroMQ is employed to communicate between processes. The DAQ will be controlled and monitored by the Alma Common Software (ACS). Protocol Buffers, from Google, are a way to define high-level data structures through an interface description language (IDL) and a meta-compiler. ZeroMQ is a middleware that augments the capabilities of TCP/IP sockets. It does not implement very high-level features like those found in, for example, CORBA, but it makes the use of sockets easier, more robust and almost as efficient as raw TCP. The use of these two middlewares enabled us to rapidly develop a robust prototype of the DAQ, including data persistence to compressed FITS files. Comment: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions at arXiv:1508.0589
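    The practical benefit the abstract attributes to ZeroMQ — making sockets "easier and more robust" — is largely that it delivers whole messages instead of an unframed byte stream. As a rough illustration (not CTA code; all names here are invented for the sketch), this is the length-prefix framing one would otherwise implement by hand on a raw TCP socket:

    ```python
    import socket
    import struct

    def send_msg(sock, payload):
        """Send one framed message: 4-byte big-endian length, then the body."""
        sock.sendall(struct.pack(">I", len(payload)) + payload)

    def recv_exact(sock, n):
        """Read exactly n bytes, looping over partial reads."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the connection")
            buf += chunk
        return buf

    def recv_msg(sock):
        """Receive one framed message and return its body."""
        (size,) = struct.unpack(">I", recv_exact(sock, 4))
        return recv_exact(sock, size)

    # Round-trip over an in-process socket pair (stands in for a TCP link).
    a, b = socket.socketpair()
    send_msg(a, b"serialized camera event")   # e.g. a Protocol Buffers blob
    received = recv_msg(b)
    a.close(); b.close()
    ```

    With ZeroMQ the two helpers collapse to the library's send/receive calls, and Protocol Buffers would supply the payload via a message's SerializeToString(); the framing above only shows the work such middleware removes.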

    The On-Site Analysis of the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) observatory will be one of the largest ground-based very high-energy gamma-ray observatories. The On-Site Analysis will be the first CTA scientific analysis of data acquired from the array of telescopes, at both the northern and southern sites. The On-Site Analysis will have two pipelines: the Level-A pipeline (also known as Real-Time Analysis, RTA) and the Level-B one. The RTA performs data quality monitoring and must be able to issue automated alerts on variable and transient astrophysical sources within 30 seconds of the last acquired Cherenkov event that contributes to the alert, with a sensitivity no worse than that of the final pipeline by more than a factor of 3. The Level-B Analysis has a better sensitivity (no worse than the final one by a factor of 2), and its results should be available within 10 hours of the acquisition of the data; for this reason this analysis could be performed at the end of an observation or the next morning. The latency (in particular for the RTA) and sensitivity requirements are challenging because of the large data rate, a few GByte/s. The remote connection to the CTA candidate site, with rather limited network bandwidth, makes the size of the exported data extremely critical and prevents any kind of real-time processing of the data outside the site of the telescopes. For these reasons the analysis will be performed on-site, with infrastructures co-located with the telescopes, limited electrical power availability and a reduced possibility of human intervention. This means, for example, that the on-site hardware infrastructure should have low power consumption. A substantial effort towards the optimization of high-throughput computing services is envisioned to provide hardware and software solutions with high throughput and low power consumption at low cost. Comment: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions at arXiv:1508.0589
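    The claim that limited bandwidth "prevents any kind of processing in real-time outside the site" can be made concrete with a back-of-the-envelope sketch. All numbers below are assumptions chosen for illustration (a mid-range of the quoted "a few GByte/s" and a hypothetical 1 Gbit/s uplink), not actual CTA figures:

    ```python
    # Why exporting raw data off-site is impractical at the quoted data rates.
    data_rate_gb_s = 3.0        # assumed mid-range of "a few GByte/s"
    night_s = 8 * 3600          # one 8-hour observing night, in seconds
    link_gbit_s = 1.0           # hypothetical 1 Gbit/s site uplink

    night_data_gb = data_rate_gb_s * night_s        # GB produced per night
    export_s = night_data_gb * 8 / link_gbit_s      # seconds to ship it off-site
    backlog_factor = export_s / night_s             # >1 means the link falls behind
    # Under these assumptions the link would need 24 nights to export one
    # night of raw data, so the reduction must happen on-site.
    ```

    Even generous changes to the assumed numbers leave the backlog factor far above 1, which is the point the abstract makes.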

    A prototype for the real-time analysis of the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) observatory will be one of the biggest ground-based very-high-energy (VHE) γ-ray observatories. CTA will achieve a factor of 10 improvement in sensitivity, from some tens of GeV to beyond 100 TeV, with respect to existing telescopes. The CTA observatory will be capable of issuing alerts on variable and transient sources to maximize the scientific return. To capture these phenomena during their evolution and for effective communication to the astrophysical community, speed is crucial. This requires a system with a reliable automated trigger that can issue alerts immediately upon detection of γ-ray flares. This will be accomplished by means of a Real-Time Analysis (RTA) pipeline, a key system of the CTA observatory. The latency and sensitivity requirements of the alarm system pose a challenge because of the anticipated large data rate, between 0.5 and 8 GB/s. As a consequence, substantial efforts toward the optimization of high-throughput computing services are envisioned. For these reasons our working group has started the development of a prototype of the Real-Time Analysis pipeline. The main goals of this prototype are to test: (i) a set of frameworks and design patterns useful for the inter-process communication between software processes running in memory; (ii) the sustainability of the foreseen CTA data rate in terms of data throughput with different hardware (e.g. accelerators) and software configurations; (iii) the reuse of non-real-time algorithms, or how much we need to simplify algorithms to be compliant with CTA requirements; and (iv) interface issues between the different CTA systems. In this work we focus on goals (i) and (ii).
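    Goal (ii), measuring sustainable data throughput for a given software configuration, can be probed with a micro-benchmark of the kind such a prototype would run. The sketch below (illustrative only; the real prototype's frameworks are not shown) streams dummy event data from a producer thread to a consumer through a pipe and reports MB/s:

    ```python
    import os
    import threading
    import time

    def write_all(fd, data):
        """os.write may be partial for large buffers on a pipe; loop until done."""
        view = memoryview(data)
        while view:
            n = os.write(fd, view)
            view = view[n:]

    def measure_pipe_throughput(total_mb=64, chunk_kb=256):
        """Stream total_mb of dummy event data through a pipe; return (bytes, MB/s)."""
        r, w = os.pipe()
        chunk = b"\x00" * (chunk_kb * 1024)
        n_chunks = (total_mb * 1024) // chunk_kb

        def producer():
            for _ in range(n_chunks):
                write_all(w, chunk)
            os.close(w)               # signals end-of-stream to the reader

        t0 = time.perf_counter()
        t = threading.Thread(target=producer)
        t.start()
        received = 0
        while True:
            buf = os.read(r, 1 << 20)
            if not buf:               # writer closed: stream is done
                break
            received += len(buf)
        t.join()
        os.close(r)
        elapsed = time.perf_counter() - t0
        return received, received / (1024 * 1024) / elapsed

    received, mb_s = measure_pipe_throughput()
    ```

    Swapping the pipe for shared memory, sockets, or an accelerator transfer, and the dummy bytes for real camera events, turns the same harness into a comparison across the hardware and software configurations the abstract mentions.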

    Status and plans for the Array Control and Data Acquisition System of the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) is the next-generation atmospheric Cherenkov gamma-ray observatory. CTA will consist of two installations, one in the northern and the other in the southern hemisphere, containing tens of telescopes of different sizes. The CTA performance requirements and the inherent complexity associated with the operation, control and monitoring of such a large distributed multi-telescope array lead to new challenges in the field of gamma-ray astronomy. The ACTL (array control and data acquisition) system will consist of the hardware and software necessary to control and monitor the CTA arrays, as well as to time-stamp, read out, filter and store, at aggregated rates of a few GB/s, the scientific data. The ACTL system must be flexible enough to permit the simultaneous automatic operation of multiple sub-arrays of telescopes with minimal personnel effort on site. One of the challenges of the system is to provide a reliable integration of the control of a large and heterogeneous set of devices. Moreover, the system is required to be able to adapt the observation schedule, on timescales of a few tens of seconds, to account for changing environmental conditions or to prioritize incoming scientific alerts from time-critical transient phenomena such as gamma-ray bursts. This contribution provides a summary of the main design choices and plans for building the ACTL system.
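    The scheduling behaviour described — queued observations that an incoming science alert can preempt within tens of seconds — reduces, at its core, to a priority queue. The class and target names below are invented for illustration and are not part of the ACTL design:

    ```python
    import heapq

    class SubarrayScheduler:
        """Toy priority scheduler: lower number = higher priority."""

        def __init__(self):
            self._heap = []
            self._seq = 0        # tie-breaker: preserves submission order

        def submit(self, target, priority):
            heapq.heappush(self._heap, (priority, self._seq, target))
            self._seq += 1

        def next_observation(self):
            return heapq.heappop(self._heap)[2] if self._heap else None

    sched = SubarrayScheduler()
    sched.submit("Crab Nebula survey block", priority=5)
    sched.submit("AGN monitoring block", priority=5)
    # A time-critical alert (e.g. a gamma-ray burst) arrives and jumps the queue:
    sched.submit("GRB follow-up", priority=0)
    order = [sched.next_observation() for _ in range(3)]
    ```

    The real system must additionally preempt an observation already in progress and coordinate several sub-arrays at once, but the queue-with-priorities core is the same.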

    FAIR high level data for Cherenkov astronomy

    We highlight here several solutions developed to make high-level Cherenkov data FAIR: Findable, Accessible, Interoperable and Reusable. The first three FAIR principles may be ensured by properly indexing the data and using community standards, protocols and services, for example those provided by the International Virtual Observatory Alliance (IVOA). However, the reusability principle is particularly subtle, as the question of trust is raised. Provenance information, which describes the data origin and all transformations performed, is essential to ensure this trust, and it should come with the proper granularity and level of detail. We developed a prototype platform to make the first H.E.S.S. public test data findable and accessible through the Virtual Observatory (VO). The exposed high-level data follow the gamma-ray astronomy data format (GADF), proposed as a community standard to ensure wider interoperability. We also designed a provenance management system in connection with the development of pipelines and analysis tools for CTA (ctapipe and gammapy), in order to collect rich and detailed provenance information, as recommended by the FAIR reusability principle. The prototype platform thus implements the main functionalities of a science gateway, including data search and access, online processing, and traceability of the various actions performed by a user.
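    A provenance record of the kind described links each processing activity to the entities it used and generated, plus the parameters applied. The sketch below is a minimal stand-in in that spirit; the field names, the calibration step, and its parameters are illustrative, not the actual ctapipe provenance schema:

    ```python
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class Activity:
        """Minimal provenance record: one processing step, the entities it
        used and generated, and the parameters it was run with."""
        name: str
        software: str
        version: str
        used: list = field(default_factory=list)       # input entity IDs
        generated: list = field(default_factory=list)  # output entity IDs
        parameters: dict = field(default_factory=dict)

    calibrate = Activity(
        name="image-calibration",
        software="ctapipe",          # tool named in the text; step is invented
        version="x.y.z",             # placeholder version
        used=["raw-run-00123"],
        generated=["dl1-run-00123"],
        parameters={"cleaning": "tailcuts", "picture_thresh": 7},
    )
    record = json.dumps(asdict(calibrate), sort_keys=True)
    ```

    Chaining such records, each output entity becoming an input of the next activity, yields the end-to-end data-origin trail the reusability principle asks for.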


    Application of Complex Event Processing Software to Error Detection and Recovery for Arrays of Cherenkov Telescopes

    Data acquisition (DAQ) and control systems for arrays of Cherenkov telescopes comprise hundreds of distributed software processes that implement the readout, control and monitoring of various hardware devices. A multitude of different error conditions (malfunctioning detector hardware, crashing software, failures of network and computing equipment, etc.) can occur and must be dealt with to ensure the speedy continuation of observations and an efficient use of dark time. Flexible, fast and configurable methods for automatic and centralized error detection and recovery are therefore highly desirable for the current generation of ground-based Cherenkov experiments (H.E.S.S., MAGIC, VERITAS) and will be important for the Cherenkov Telescope Array (CTA), a more complex observatory with O(100) telescopes. This contribution describes a Java-based software demonstrator that was developed for the High Energy Stereoscopic System (H.E.S.S.) and uses the complex event processing engine Esper for error detection and recovery. The software demonstrator analyses streams of error messages in the time domain and aims to apply recovery procedures that reflect the knowledge of DAQ and detector experts.
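    The core pattern an engine like Esper evaluates here — "the same error type N times within a sliding time window triggers a recovery action" — can be sketched by hand. The rule below is a simplified Python stand-in, not the demonstrator's Esper EPL; the error type, threshold and window are invented numbers:

    ```python
    from collections import defaultdict, deque

    class ErrorRule:
        """Fire a recovery action if the same error type occurs `threshold`
        times within a sliding window of `window_s` seconds."""

        def __init__(self, threshold=3, window_s=10.0):
            self.threshold = threshold
            self.window_s = window_s
            self._times = defaultdict(deque)   # error type -> recent timestamps

        def on_error(self, err_type, t):
            q = self._times[err_type]
            q.append(t)
            while q and t - q[0] > self.window_s:
                q.popleft()                    # drop events outside the window
            if len(q) >= self.threshold:
                q.clear()                      # don't re-fire on the same burst
                return f"recover:{err_type}"   # e.g. restart the readout process
            return None

    rule = ErrorRule()
    # Three timeouts within 10 s trigger a recovery; a later isolated one does not.
    actions = [rule.on_error("camera_timeout", t) for t in (0.0, 2.0, 4.0, 30.0)]
    ```

    A CEP engine generalizes this to many concurrent rules over correlated streams, with the patterns written declaratively rather than coded by hand, which is what makes it attractive for a system with hundreds of processes.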

    Detailed spectral and morphological analysis of the shell type supernova remnant RCW 86

    Aim. We aim for an understanding of the morphological and spectral properties of the supernova remnant RCW 86 and for insights into the production mechanism leading to the RCW 86 very high-energy γ-ray emission.
    Methods. We analyzed High Energy Spectroscopic System (H.E.S.S.) data with increased sensitivity compared to the observations presented in the RCW 86 H.E.S.S. discovery publication. Studies of the morphological correlation between the 0.5–1 keV X-ray band, the 2–5 keV X-ray band, radio, and γ-ray emission have been performed, as well as broadband modeling of the spectral energy distribution with two different emission models.
    Results. Based on our morphological studies, we present the first conclusive evidence that the TeV γ-ray emission region is shell-like. The comparison with 2–5 keV X-ray data reveals a correlation with the 0.4–50 TeV γ-ray emission. The spectrum of RCW 86 is best described by a power law with an exponential cutoff at E_cut = (3.5 ± 1.2_stat) TeV and a spectral index of Γ ≈ 1.6 ± 0.2. A static leptonic one-zone model adequately describes the measured spectral energy distribution of RCW 86, with the resulting total kinetic energy of the electrons above 1 GeV equivalent to ~0.1% of the initial kinetic energy of a Type Ia supernova explosion (10^51 erg). When using a hadronic model, a magnetic field of B ≈ 100 μG is needed to reproduce the measured data. Although this is comparable to formerly published estimates, a standard E^-2 spectrum for the proton distribution cannot describe the γ-ray data. Instead, a spectral index of Γ_p ≈ 1.7 would be required, which implies that ~7 × 10^49 / n_cm^-3 erg has been transferred into high-energy protons, with the effective density n_cm^-3 = n / (1 cm^-3). This is about 10% of the kinetic energy of a typical Type Ia supernova under the assumption of a density of 1 cm^-3.
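    The fitted spectral shape quoted above, a power law with an exponential cutoff, is easy to evaluate numerically. The sketch below uses the quoted best-fit values (Γ = 1.6, E_cut = 3.5 TeV); the normalization and reference energy of 1 TeV are arbitrary choices for illustration, as the abstract does not quote them:

    ```python
    import math

    def dnde(e_tev, gamma=1.6, e_cut_tev=3.5, n0=1.0):
        """Power law with exponential cutoff, as fitted to RCW 86:
        dN/dE = N0 * (E / 1 TeV)^-Gamma * exp(-E / E_cut).
        N0 and the 1 TeV reference energy are illustrative only."""
        return n0 * e_tev ** (-gamma) * math.exp(-e_tev / e_cut_tev)

    # Above the cutoff the spectrum falls faster than the pure power law:
    ratio_1_to_10 = dnde(1.0) / dnde(10.0)   # drop from 1 to 10 TeV, with cutoff
    pl_ratio = 10.0 ** 1.6                   # drop for the power law alone
    ```

    The extra suppression factor between 1 and 10 TeV is exp(9/3.5) ≈ 13, which is why a cutoff at a few TeV is distinguishable from a pure power law over the 0.4–50 TeV range quoted.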