The On-Site Analysis of the Cherenkov Telescope Array
The Cherenkov Telescope Array (CTA) observatory will be one of the largest
ground-based very high-energy gamma-ray observatories. The On-Site Analysis
will be the first CTA scientific analysis of data acquired from the array of
telescopes, in both northern and southern sites. The On-Site Analysis will have
two pipelines: the Level-A pipeline (also known as Real-Time Analysis, RTA) and
the Level-B one. The RTA performs data quality monitoring and must be able to
issue automated alerts on variable and transient astrophysical sources within
30 seconds of the last acquired Cherenkov event contributing to the
alert, with a sensitivity no more than a factor of 3 worse than that achieved by
the final pipeline. The Level-B Analysis has a better sensitivity (no more than
a factor of 2 worse than the final one), and its results should be available
within 10 hours of data acquisition; for this reason this analysis could be
performed at the end of an observation or the next morning.
The latency (in particular for the RTA) and the sensitivity requirements are
challenging because of the large data rate, a few GByte/s. The remote
connection to the CTA candidate site with a rather limited network bandwidth
makes the issue of the exported data size extremely critical and prevents any
kind of real-time processing of the data outside the telescope site.
For these reasons the analysis will be performed on-site with infrastructures
co-located with the telescopes, with limited electrical power availability and
with a reduced possibility of human intervention. This means, for example, that
the on-site hardware infrastructure should have low-power consumption. A
substantial effort towards the optimization of high-throughput computing
service is envisioned to provide hardware and software solutions with
high-throughput, low-power consumption at a low cost.

Comment: In Proceedings of the 34th International Cosmic Ray Conference
(ICRC2015), The Hague, The Netherlands. All CTA contributions at
arXiv:1508.0589
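The scale of the problem can be illustrated with a back-of-the-envelope sketch. The raw rate of a few GByte/s comes from the abstract; the night length and the remote-link speed below are assumed purely for illustration, not CTA specifications:

```python
# Back-of-the-envelope data-volume sketch (illustrative numbers only;
# the "few GByte/s" raw rate is from the abstract, the 8 h night and
# 1 Gbit/s link are assumptions for the example).

def nightly_volume_tb(rate_gbytes_per_s, hours):
    """Raw data volume accumulated in one observing night, in TByte."""
    return rate_gbytes_per_s * hours * 3600 / 1000.0

def export_time_hours(volume_tb, link_gbit_per_s):
    """Time to ship that volume over a remote link, in hours."""
    volume_gbit = volume_tb * 1000 * 8  # TByte -> Gbit
    return volume_gbit / link_gbit_per_s / 3600

raw = nightly_volume_tb(3.0, 8)  # ~3 GByte/s sustained for 8 h
print(f"raw volume per night: {raw:.0f} TByte")
print(f"export over a 1 Gbit/s link: {export_time_hours(raw, 1.0):.0f} h")
```

Under these assumed numbers a single night produces tens of TBytes, and exporting it would take about a week per night of data, which is why any real-time analysis must run on-site.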
Neural Network Methods for Radiation Detectors and Imaging
Recent advances in image data processing through machine learning and
especially deep neural networks (DNNs) allow for new optimization and
performance-enhancement schemes for radiation detectors and imaging hardware
through data-endowed artificial intelligence. We give an overview of data
generation at photon sources, deep learning-based methods for image processing
tasks, and hardware solutions for deep learning acceleration. Most existing
deep learning approaches are trained offline, typically using large amounts of
computational resources. However, once trained, DNNs can achieve fast inference
speeds and can be deployed to edge devices. A new trend is edge computing with
less energy consumption (hundreds of watts or less) and real-time analysis
potential. While popularly used for edge computing, electronic-based hardware
accelerators ranging from general purpose processors such as central processing
units (CPUs) to application-specific integrated circuits (ASICs) are constantly
reaching performance limits in latency, energy consumption, and other physical
constraints. These limits give rise to next-generation analog neuromorphic
hardware platforms, such as optical neural networks (ONNs), for highly parallel,
low-latency, and low-energy computing to boost deep learning acceleration.
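The train-offline/infer-at-the-edge split described above can be illustrated with a toy forward pass: once training has produced a set of weights, edge inference reduces to a few matrix-vector products. All weights below are made-up stand-ins for parameters assumed to have been trained offline:

```python
import math

# Toy edge-inference sketch: the parameters below stand in for a network
# trained offline; on the edge device only this forward pass runs.

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v, b):
    # Dense layer: W @ v + b, in plain Python.
    return [sum(w * x for w, x in zip(row, v)) + bi for row, bi in zip(W, b)]

# Hypothetical pretrained parameters (2 inputs -> 3 hidden units -> 1 output).
W1 = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
b1 = [0.0, 0.1, -0.1]
W2 = [[1.0, -0.5, 0.25]]
b2 = [0.05]

def forward(x):
    h = relu(matvec(W1, x, b1))
    y = matvec(W2, h, b2)
    return [1.0 / (1.0 + math.exp(-z)) for z in y]  # sigmoid output

print(forward([1.0, 2.0]))
```

The point of the sketch is that inference involves no gradient computation at all, which is why trained DNNs can meet tight latency and power budgets on edge hardware.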
Real-time people tracking in a camera network
Visual tracking is fundamental to the recognition and analysis of human behaviour.
In this thesis we present an approach to track several subjects using multiple
cameras in real time. The tracking framework employs a numerical Bayesian estimator,
also known as a particle filter, which has been developed for parallel implementation on
a Graphics Processing Unit (GPU). In order to integrate multiple cameras into a single
tracking unit we represent the human body by a parametric ellipsoid in a 3D world.
The elliptical boundary can be projected rapidly, several hundred times per subject per
frame, onto any image for comparison with the image data within a likelihood model.
Adding variables to encode visibility and persistence into the state vector, we tackle the
problems of distraction and short-period occlusion. However, subjects may also disappear
for longer periods due to blind spots between cameras' fields of view. To recognise
a desired subject after such a long period, we add coloured texture to the ellipsoid surface,
which is learnt and retained during the tracking process. This texture signature
improves the recall rate from 60% to 70-80% when compared to state-only data association.
Compared to a standard Central Processing Unit (CPU) implementation, there
is a significant speed-up ratio.
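As an illustration of the particle-filter machinery underlying the tracker, here is a minimal 1-D bootstrap filter in plain Python. The thesis implementation runs over 3-D ellipsoid states in parallel on a GPU; this scalar toy, with assumed Gaussian motion and observation noise, only sketches the predict/weight/resample cycle:

```python
import math
import random

random.seed(0)

# Minimal 1-D bootstrap particle filter: predict with a random-walk motion
# model, weight by a Gaussian observation likelihood, then resample.
# (Noise parameters and observations are illustrative assumptions.)

N = 500
SIGMA_MOTION, SIGMA_OBS = 0.5, 1.0

def step(particles, z):
    # Predict: diffuse each particle under the random-walk motion model.
    particles = [p + random.gauss(0.0, SIGMA_MOTION) for p in particles]
    # Weight: Gaussian likelihood of measurement z given each particle.
    w = [math.exp(-0.5 * ((z - p) / SIGMA_OBS) ** 2) for p in particles]
    # Resample: draw N particles with replacement, proportionally to weight.
    return random.choices(particles, weights=w, k=N)

particles = [random.uniform(-10, 10) for _ in range(N)]
for z in [1.0, 1.2, 1.1, 0.9, 1.0]:  # noisy observations near position 1
    particles = step(particles, z)

estimate = sum(particles) / N  # posterior mean, should settle near 1
print(round(estimate, 2))
```

Each particle's predict/weight computation is independent of the others, which is exactly what makes the filter amenable to the GPU parallelisation described in the thesis.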
Towards quantum 3D imaging devices
We review the advancement of the research toward the design and implementation of quantum plenoptic cameras, radically novel 3D imaging devices that exploit both momentum–position entanglement and photon–number correlations to provide the typical refocusing and ultra-fast, scanning-free, 3D imaging capability of plenoptic devices, along with dramatically enhanced performances, unattainable in standard plenoptic cameras: diffraction-limited resolution, large depth of focus, and ultra-low noise. To further increase the volumetric resolution beyond the Rayleigh diffraction limit, and achieve the quantum limit, we are also developing dedicated protocols based on quantum Fisher information. However, for the quantum advantages of the proposed devices to be effective and appealing to end-users, two main challenges need to be tackled. First, due to the large number of frames required for correlation measurements to provide an acceptable signal-to-noise ratio, quantum plenoptic imaging (QPI) would require, if implemented with commercially available high-resolution cameras, acquisition times ranging from tens of seconds to a few minutes. Second, processing this large amount of data, in order to retrieve 3D images or refocused 2D images, requires high-performance and time-consuming computation. To address these challenges, we are developing high-resolution single-photon avalanche photodiode (SPAD) arrays and high-performance low-level programming of ultra-fast electronics, combined with compressive sensing and quantum tomography algorithms, with the aim of reducing both the acquisition and the processing time by two orders of magnitude. Routes toward exploitation of the QPI devices will also be discussed.
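The acquisition-time bottleneck follows from simple statistics: frame-averaged correlation measurements gain signal-to-noise roughly as the square root of the number of frames, so the required frame count grows quadratically with the target SNR. A sketch with purely illustrative parameters (not measured QPI figures):

```python
import math

# SNR ~ sqrt(N) scaling sketch for frame-averaged correlation measurements.
# The per-frame SNR, target SNR, and frame rates are illustrative assumptions.

def frames_needed(target_snr, snr_per_frame):
    """Frames required when SNR grows as sqrt(N)."""
    return math.ceil((target_snr / snr_per_frame) ** 2)

def acquisition_seconds(target_snr, snr_per_frame, frame_rate_hz):
    """Wall-clock acquisition time at a given camera frame rate."""
    return frames_needed(target_snr, snr_per_frame) / frame_rate_hz

# E.g. per-frame SNR of 0.1 and a target SNR of 10 need 10,000 frames:
print(frames_needed(10.0, 0.1))
print(acquisition_seconds(10.0, 0.1, 1000.0))   # at a 1 kHz camera
print(acquisition_seconds(10.0, 0.1, 100000.0))  # at a 100 kHz SPAD array
```

Under these assumed numbers, raising the frame rate from 1 kHz to 100 kHz cuts the acquisition time by two orders of magnitude, which mirrors the motivation for the dedicated high-speed SPAD arrays described above.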