Calibration of a wide angle stereoscopic system
This paper was published in Optics Letters and is made available as an electronic reprint with the permission of OSA: http://dx.doi.org/10.1364/OL.36.003064.
Inaccuracies in the calibration of a stereoscopic system arise from errors in the point correspondences between the two images and from inexact point localization in each image. These errors increase if the stereoscopic system is composed of wide-angle lens cameras. We propose a technique in which the detected points in both images are corrected before the fundamental matrix and the lens distortion models are estimated. Since the points are corrected first, errors in point correspondences and point localization are avoided. To correct the point locations in both images, geometrical and epipolar constraints are imposed in a nonlinear minimization problem. Geometrical constraints define a point's localization in relation to its neighbors in the same image, and epipolar constraints relate the location of a point to that of its corresponding point in the other image. © 2011 Optical Society of America.
Ricolfe-Viala, C.; Sánchez-Salmerón, A. J.; Martínez-Berti, E. (2011). Calibration of a wide angle stereoscopic system. Optics Letters, 36(16), 3064-3067. doi:10.1364/OL.36.003064
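The nonlinear minimization itself is not reproduced here, but the epipolar part of the constraint can be illustrated with a first-order (Sampson) update that moves a matched point pair onto the surface where x2^T F x1 = 0. The sketch below is a simplification for illustration only; the paper's full method also imposes geometrical (neighborhood) constraints and lens distortion models, which are omitted here.

```python
import numpy as np

def epipolar_residual(x1, x2, F):
    """Algebraic epipolar error x2^T F x1 for homogeneous 3-vectors."""
    return float(x2 @ F @ x1)

def sampson_correct(p1, p2, F):
    """First-order (Sampson) correction of a matched point pair.
    p1, p2: 2D points in each image; F: 3x3 fundamental matrix.
    Returns the corrected pair, minimally displaced (in L2) so that
    the linearized epipolar constraint is satisfied."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    e = epipolar_residual(x1, x2, F)
    Fx1 = F @ x1        # epipolar line of p1 in image 2
    Ftx2 = F.T @ x2     # epipolar line of p2 in image 1
    # Jacobian of e with respect to (p1x, p1y, p2x, p2y)
    J = np.array([Ftx2[0], Ftx2[1], Fx1[0], Fx1[1]])
    delta = -e * J / (J @ J)   # smallest update that zeroes the residual
    q1 = np.array(p1, dtype=float) + delta[:2]
    q2 = np.array(p2, dtype=float) + delta[2:]
    return q1, q2
```

For a pure horizontal translation (F = [t]_x with t = (1, 0, 0)), the epipolar lines are horizontal and the correction simply equalizes the y coordinates of the two points.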
4-Dimensional deformation part model for pose estimation using Kalman filter constraints
The main goal of this article is to analyze the effect on pose estimation accuracy of adding a Kalman filter to the partial solutions of a 4-dimensional deformation part model. Experiments run on two data sets show that this method improves pose estimation accuracy compared with state-of-the-art methods and that the Kalman filter helps to increase this accuracy.
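The article's filter operates on part-model pose solutions; as a minimal illustration of the underlying idea, the sketch below applies a constant-velocity Kalman filter to a single noisy scalar pose measurement (e.g. one joint coordinate per frame). The state model and noise values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def kalman_smooth_pose(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over scalar pose measurements.
    q: process-noise variance, r: measurement-noise variance
    (both illustrative). Returns the filtered position estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([measurements[0], 0.0])    # initial state
    P = np.eye(2)                           # initial covariance
    filtered = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - (H @ x)[0]                  # innovation
        S = H @ P @ H.T + R
        K = (P @ H.T) / S[0, 0]             # Kalman gain, shape (2, 1)
        x = x + K[:, 0] * y
        P = (np.eye(2) - K @ H) @ P
        filtered.append(x[0])
    return np.array(filtered)
```

Fed an oscillating measurement around a constant true pose, the filter converges toward the true value with much smaller variance than the raw measurements.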
Mapping the Demand Side of Computational Social Science for Policy
This report aims to collect novel and pressing policy issues that can be addressed by Computational Social Science (CSS), an emerging discipline that is rooted in the increasing availability of digital trace data and computational resources and that seeks to apply data science methods to the social sciences. The questions were sourced from researchers at the European Commission who work at the interface between science and policy and who are well positioned to formulate research questions that are likely to anticipate future policy needs.
The attempt is to identify possible directions for Computational Social Science starting from the demand side: an effort to consider not only how science can ultimately provide policy support (“Science for Policy”) but also how policymakers can be involved in the process of defining and co-creating the CSS4P agenda from the outset (“Policy for Science”). The report is expected to raise awareness of the latest scientific advances in Computational Social Science and of its potential for policy, integrating the knowledge of policymakers and stimulating further questions in the context of future developments of this initiative.
JRC.A.5 - Scientific Development
Status and results of the prototype LST of CTA
The Large-Sized Telescopes (LSTs) of the Cherenkov Telescope Array (CTA) are designed for gamma-ray studies with a low energy threshold, high flux sensitivity, rapid telescope repositioning and a large field of view. Once the CTA array is complete, the LSTs will dominate the CTA performance between 20 GeV and 150 GeV. During most of the CTA Observatory construction phase, however, the LSTs will dominate the array performance up to several TeV. In this presentation we will report on the status of the LST-1 telescope, inaugurated in La Palma, Canary Islands, Spain, in 2018. We will show the progress of the telescope commissioning, compare the expectations with the achieved performance, and give a glimpse of the first physics results.
First follow-up of transient events with the CTA Large Size Telescope prototype
When very-high-energy gamma rays interact high in the Earth’s atmosphere, they produce cascades of particles that induce flashes of Cherenkov light. Imaging Atmospheric Cherenkov Telescopes (IACTs) detect these flashes and convert them into shower images that can be analyzed to extract the properties of the primary gamma ray. The dominant background for IACTs is composed of air shower images produced by cosmic hadrons, with typical noise-to-signal ratios of several orders of magnitude. The standard technique adopted to differentiate between images initiated by gamma rays and those initiated by hadrons is based on classical machine learning algorithms, such as Random Forests, that operate on a set of handcrafted parameters extracted from the images. Likewise, the inference of the energy and the arrival direction of the primary gamma ray is performed using those parameters. State-of-the-art deep learning techniques based on convolutional neural networks (CNNs) have the potential to enhance the event reconstruction performance, since they are able to autonomously extract features from raw images, exploiting the pixel-wise information washed out during the parametrization process.
Here we present the results obtained by applying deep learning techniques to the reconstruction of Monte Carlo simulated events from a single, next-generation IACT, the Large-Sized Telescope (LST) of the Cherenkov Telescope Array (CTA). We use CNNs to separate the gamma-ray-induced events from hadronic events and to reconstruct the properties of the former, comparing their performance to the standard reconstruction technique. Three independent implementations of CNN-based event reconstruction models have been utilized in this work, producing consistent results.
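The core operation that lets a CNN extract features directly from raw camera pixels is the convolution. The toy numpy sketch below (illustrative only; the networks used in this work are full deep architectures, not shown here) applies a single convolution and ReLU activation to a small image containing a bright diagonal track, loosely mimicking the elongated shape of a gamma-ray shower image:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the basic CNN-layer operation."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0.0)

# Toy 5x5 "camera image" with a bright diagonal track (illustrative values).
image = np.zeros((5, 5))
for k in range(5):
    image[k, k] = 1.0

# A diagonal kernel responds strongly where the track matches its orientation.
kernel = np.array([[ 1.0, 0.0, -1.0],
                   [ 0.0, 1.0,  0.0],
                   [-1.0, 0.0,  1.0]])

feature_map = relu(conv2d(image, kernel))
```

The resulting feature map peaks along the diagonal, showing how orientation-sensitive filters can be learned from pixel data instead of being hand-designed as image parameters.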
Reconstruction of extensive air shower images of the Large Size Telescope prototype of CTA using a novel likelihood technique
Ground-based gamma-ray astronomy aims at reconstructing the energy and direction of gamma rays from the extensive air showers they initiate in the atmosphere. Imaging Atmospheric Cherenkov Telescopes (IACTs) collect the Cherenkov light induced by secondary charged particles in extensive air showers (EAS), creating an image of the shower in a camera positioned in the focal plane of optical systems. This image is used to evaluate the type, energy and arrival direction of the primary particle that initiated the shower. This contribution shows the results of a novel reconstruction method based on likelihood maximization. The novelty with respect to previous likelihood reconstruction methods lies in the definition of a likelihood per single camera pixel, accounting not only for the total measured charge, but also for its development over time. This leads to more precise reconstruction of shower images. The method is applied to observations of the Crab Nebula acquired with the Large Size Telescope prototype (LST-1) deployed at the northern site of the Cherenkov Telescope Array.
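A per-pixel likelihood that combines charge and time can be sketched as follows. The shower model here is a deliberately simplified stand-in (a Gaussian charge profile along one camera coordinate and a linear time gradient), not the actual template used in the contribution; the point is only to show a joint charge-and-time fit maximized pixel by pixel.

```python
import numpy as np
from scipy.optimize import minimize

# 1D pixel coordinates of a toy camera (illustrative).
x = np.linspace(-1.0, 1.0, 21)

def model(params):
    """Hypothetical shower model: Gaussian charge profile around x0,
    pulse time varying linearly along the camera as t = t0 + g * x."""
    amp, x0, t0, g = params
    q = amp * np.exp(-0.5 * ((x - x0) / 0.3) ** 2)
    t = t0 + g * x
    return q, t

def neg_log_likelihood(params, q_meas, t_meas, sq=0.5, st=1.0):
    """Sum of per-pixel Gaussian -log L terms in both charge and time."""
    q, t = model(params)
    return np.sum(((q_meas - q) / sq) ** 2 + ((t_meas - t) / st) ** 2)

# Simulated noiseless event with known truth; the fit should recover it.
truth = np.array([10.0, 0.2, 5.0, 2.0])
q_meas, t_meas = model(truth)

fit = minimize(neg_log_likelihood, x0=[5.0, 0.0, 0.0, 0.0],
               args=(q_meas, t_meas), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12,
                        "maxiter": 10000, "maxfev": 10000})
```

Because each pixel contributes both a charge term and a time term, the time development of the shower constrains the fit in addition to the total measured charge, which is the essence of the method described above.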
Development of an advanced SiPM camera for the Large Size Telescope of the Cherenkov Telescope Array Observatory
Silicon photomultipliers (SiPMs) have become the baseline choice for cameras of the small-sized telescopes (SSTs) of the Cherenkov Telescope Array (CTA).
On the other hand, SiPMs are relatively new to the field, and covering large surfaces and operating at high data rates remain challenges to be overcome before they outperform photomultipliers (PMTs). Their higher sensitivity in the near infrared and longer signals compared to PMTs result in a higher night-sky background rate for SiPMs. However, the robustness of SiPMs represents a unique opportunity to ensure long-term operation with low maintenance and a better duty cycle than PMTs. The proposed camera for large size telescopes will feature 0.05-degree pixels, low-power and fast front-end electronics, and a fully digital readout. In this work, we present the status of dedicated simulations and data analysis for the performance estimation. The design features and the strategies identified so far to tackle the demanding requirements and deliver improved performance are described.
Analysis of the Cherenkov Telescope Array first Large Size Telescope real data using convolutional neural networks
The Cherenkov Telescope Array (CTA) is the future ground-based gamma-ray observatory and will be composed of two arrays of imaging atmospheric Cherenkov telescopes (IACTs) located in the Northern and Southern Hemispheres, respectively. The first CTA prototype telescope built on-site, the Large-Sized Telescope (LST-1), is under commissioning in La Palma and has already taken data on numerous known sources. IACTs detect the faint flash of Cherenkov light indirectly produced after a very energetic gamma-ray photon has interacted with the atmosphere and generated an atmospheric shower. Reconstruction of the characteristics of the primary photons is usually done using a parameterization up to the third order of the light distribution of the images. In order to go beyond this classical method, new approaches are being developed using state-of-the-art methods based on convolutional neural networks (CNN) to reconstruct the properties of each event (incoming direction, energy and particle type) directly from the telescope images. While promising, these methods are notoriously difficult to apply to real data due to differences (such as different levels of night sky background) between the Monte Carlo (MC) data used to train the network and real data. The GammaLearn project, based on these CNN approaches, has already shown an increase in sensitivity on MC simulations for LST-1 as well as a lower energy threshold. This work applies the GammaLearn network to real data acquired by LST-1 and compares the results to the classical approach that uses random forests trained on extracted image parameters. The improvements in background rejection, event direction, and energy reconstruction are discussed in this contribution.
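The classical baseline mentioned above, a random forest trained on extracted image parameters, can be sketched on synthetic data. The parameter distributions below are invented stand-ins for illustration (real analyses use many Hillas-style parameters with data-driven distributions), but they show the shape of the approach: tabular features in, gamma/hadron classification out.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for two extracted image parameters (illustrative):
# gamma-ray images tend to be narrower than hadronic ones.
n = 1000
length_g = rng.normal(0.30, 0.05, n); width_g = rng.normal(0.10, 0.03, n)
length_h = rng.normal(0.35, 0.10, n); width_h = rng.normal(0.22, 0.06, n)

X = np.column_stack([
    np.concatenate([length_g, length_h]),
    np.concatenate([width_g, width_h]),
])
y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = gamma, 0 = hadron

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[::2], y[::2])                          # even rows as training set
accuracy = clf.score(X[1::2], y[1::2])           # odd rows as test set
```

On these well-separated toy distributions the classifier reaches high accuracy; the CNN approach discussed in this work replaces the handcrafted feature table with the raw pixel images themselves.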
Commissioning of the camera of the first Large Size Telescope of the Cherenkov Telescope Array
The first Large Size Telescope (LST-1) of the Cherenkov Telescope Array has been operational since October 2018 at La Palma, Spain. We report on the results obtained during the camera commissioning. The noise level of the readout is determined to be at the 0.2 p.e. level. The gains of the PMTs are equalized to within a 2% variation using the calibration flash system. The effects of the night sky background on the signal readout noise and on the PMT gain estimation are also evaluated. Trigger thresholds are optimized for the lowest possible gamma-ray energy threshold, and trigger distribution synchronization has been achieved with 1 ns precision. Automatic rate control achieves stable observation with a 1.5% rate variation over 3 hours. The novel DAQ system demonstrates less than 10% dead time at a 15 kHz trigger rate, even with sophisticated online data correction.
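To put the quoted dead-time figure in context, a simple nonparalyzable readout model (an assumption for illustration; the real DAQ behaviour is more complex) relates the dead-time fraction f to the trigger rate R and the per-event busy time tau via f = R*tau / (1 + R*tau). Inverting this gives the largest busy time compatible with less than 10% dead time at 15 kHz:

```python
def dead_time_fraction(rate_hz, tau_s):
    """Dead-time fraction of a nonparalyzable readout that is busy for
    tau seconds after each accepted trigger (simplified model)."""
    return rate_hz * tau_s / (1.0 + rate_hz * tau_s)

rate = 15_000.0                          # trigger rate from the abstract
tau_max = 0.10 / (rate * (1.0 - 0.10))   # invert f = R*tau/(1+R*tau) at f=0.10
```

Under this model, tau_max comes out at roughly 7.4 microseconds of busy time per event.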
Joint Observation of the Galactic Center with MAGIC and CTA-LST-1
MAGIC is a system of two Imaging Atmospheric Cherenkov Telescopes (IACTs) designed to detect very-high-energy gamma rays, and it has been operating in stereoscopic mode since 2009 at the Observatorio del Roque de los Muchachos in La Palma, Spain. In 2018, the prototype IACT of the Large-Sized Telescope (LST-1) for the Cherenkov Telescope Array, a next-generation ground-based gamma-ray observatory, was inaugurated at the same site, approximately 100 meters from the MAGIC telescopes. Using joint observations of MAGIC and LST-1, we developed a dedicated analysis pipeline and established the threefold telescope system in software, achieving the highest sensitivity in the Northern Hemisphere. Based on this enhanced performance, MAGIC and LST-1 have been jointly and regularly observing the Galactic Center, a region of paramount importance and complexity for IACTs. In particular, the gamma-ray emission from the dynamical center of the Milky Way is under debate. Although previous measurements suggested that the supermassive black hole Sagittarius A* plays a primary role, its radiation mechanism remains unclear, mainly due to limited angular resolution and sensitivity. The enhanced sensitivity of our novel approach is thus expected to provide new insights into this question. We present here the current status of the data analysis for the joint MAGIC and LST-1 observations of the Galactic Center.
