Image-based deep learning for classification of noise transients in gravitational wave detectors
The detection of gravitational waves has inaugurated the era of gravitational
astronomy and opened new avenues for the multimessenger study of cosmic
sources. Thanks to their sensitivity, the Advanced LIGO and Advanced Virgo
interferometers will probe a much larger volume of space and expand the
capability of discovering new gravitational wave emitters. The characterization
of these detectors is a primary task for recognizing the main sources of
noise and optimizing the sensitivity of the interferometers. Glitches are transient
noise events that can impact the data quality of the interferometers, and their
classification is an important task for detector characterization. Deep
learning techniques are a promising tool for the recognition and classification
of glitches. We present a classification pipeline that exploits convolutional
neural networks to classify glitches starting from their time-frequency
evolution represented as images. We evaluated the classification accuracy on
simulated glitches, showing that the proposed algorithm can automatically
classify glitches on very fast timescales and with high accuracy, thus
providing a promising tool for online detector characterization.
Comment: 25 pages, 8 figures, accepted for publication in Classical and Quantum Gravity
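To make the approach concrete, here is a minimal PyTorch sketch of such a classifier. The architecture, the 128x128 input size, and the six glitch classes are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of a CNN glitch classifier operating on time-frequency
# images. Architecture, image size, and class count are assumptions.
import torch
import torch.nn as nn

class GlitchCNN(nn.Module):
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                        # 128x128 -> 64x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 64x64 -> 32x32
        )
        self.classifier = nn.Linear(32 * 32 * 32, n_classes)

    def forward(self, x):                           # x: (batch, 1, 128, 128)
        return self.classifier(self.features(x).flatten(1))

# One time-frequency image (e.g., a spectrogram of a whitened strain
# segment), normalized to [0, 1].
spectrogram = torch.rand(1, 1, 128, 128)
logits = GlitchCNN()(spectrogram)
predicted_class = logits.argmax(dim=1)
```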
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the sensors currently
available, and the tasks they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
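As an illustration of the output format described above, the following sketch accumulates a stream of (t, x, y, polarity) events into a 2D frame, one of the simplest event-processing steps. The NumPy layout and the 180x240 resolution are assumptions for the example, not any particular camera's interface.

```python
# Each event encodes time (microseconds), pixel location, and the sign
# of the brightness change (+1 or -1); resolution is an assumed example.
import numpy as np

HEIGHT, WIDTH = 180, 240

events = np.array([
    (1_000, 120, 90, +1),
    (1_012, 121, 90, +1),
    (1_050,  60, 30, -1),
], dtype=[("t", np.int64), ("x", np.int16), ("y", np.int16), ("p", np.int8)])

def accumulate(events, t0, t1):
    """Sum event polarities per pixel over the time window [t0, t1)."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
    window = events[(events["t"] >= t0) & (events["t"] < t1)]
    np.add.at(frame, (window["y"], window["x"]), window["p"])
    return frame

frame = accumulate(events, 0, 2_000)
```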
LSST: Comprehensive NEO Detection, Characterization, and Orbits
(Abridged) The Large Synoptic Survey Telescope (LSST) is currently by far the
most ambitious proposed ground-based optical survey. Solar System mapping is
one of the four key scientific design drivers, with emphasis on efficient
Near-Earth Object (NEO) and Potentially Hazardous Asteroid (PHA) detection,
orbit determination, and characterization. In a continuous observing campaign
of pairs of 15-second exposures with its 3,200-megapixel camera, LSST will cover
the entire available sky every three nights in two photometric bands to a depth
of V=25 per visit (two exposures), with exquisitely accurate astrometry and
photometry. Over the proposed survey lifetime of 10 years, each sky location
would be visited about 1000 times. The baseline design satisfies strong
constraints on the cadence of observations mandated by PHAs, such as closely
spaced pairs of observations to link different detections and short exposures
to avoid trailing losses. Equally important, thanks to frequent repeat visits, LSST
will effectively provide its own follow-up to derive orbits for detected moving
objects. Detailed modeling of LSST operations, incorporating real historical
weather and seeing data from the LSST site at Cerro Pachón, shows that LSST using
its baseline design cadence could find 90% of the PHAs with diameters larger
than 250 m, and 75% of those larger than 140 m, within ten years. However,
ongoing simulations suggest that, by optimizing sky coverage, the LSST system,
with its first light in 2013, could reach the Congressional mandate of cataloging
90% of PHAs larger than 140 m by 2020.
Comment: 10 pages, color figures, presented at IAU Symposium 23
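As a rough sanity check of the quoted visit count (my own back-of-envelope arithmetic, not from the paper), revisiting the whole available sky every three nights for ten years, with a crude allowance for weather and downtime, indeed gives on the order of 1000 visits per sky location:

```python
# Back-of-envelope estimate; the usable-night fraction is an assumption.
survey_years = 10
nights_per_year = 365
revisit_interval_nights = 3      # full-sky coverage cadence
usable_fraction = 0.8            # rough allowance for weather and maintenance

visits = (survey_years * nights_per_year / revisit_interval_nights
          * usable_fraction)
print(f"~{visits:.0f} visits per sky location")   # ~973
```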
GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA
This work presents an updated and extended guide on methods of a proper
acceleration of the Monte Carlo integration of stochastic differential
equations with the commonly available NVIDIA Graphics Processing Units using
the CUDA programming environment. We outline the general aspects of the
scientific computing on graphics cards and demonstrate them with two models of
the well-known phenomenon of noise-induced transport of Brownian motors in
periodic structures. As the source of fluctuations in the considered systems we
selected the three most commonly occurring noises: Gaussian white noise,
white Poissonian noise, and the dichotomous process, also known as a random
telegraph signal. A detailed discussion of various aspects of the applied
numerical schemes is also presented. The measured speedup can reach an
astonishing factor of about 3000 when compared to a typical CPU. This number
significantly expands the range of problems solvable by stochastic
simulations, allowing even interactive research in some cases.
Comment: 21 pages, 5 figures; Comput. Phys. Commun., accepted, 201
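The core computation being accelerated is Euler-Maruyama Monte Carlo integration of many independent trajectories. Below is a minimal NumPy sketch for the Gaussian-white-noise case; the potential, parameters, and ensemble size are illustrative assumptions, and on a GPU each trajectory would map to one CUDA thread.

```python
# Euler-Maruyama integration of an overdamped Brownian motor in a tilted
# periodic potential, driven by Gaussian white noise. All parameter
# values below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_traj = 100_000                 # independent trajectories (GPU threads)
dt, n_steps = 1e-3, 10_000
bias, temperature = 0.1, 0.5

def force(x):
    # Deterministic force: -V'(x) + bias, for V(x) = -cos(2*pi*x)/(2*pi)
    return -np.sin(2 * np.pi * x) + bias

x = np.zeros(n_traj)
for _ in range(n_steps):
    noise = rng.standard_normal(n_traj)
    x += force(x) * dt + np.sqrt(2 * temperature * dt) * noise

# Ensemble-averaged drift velocity, the usual transport observable.
v_avg = x.mean() / (n_steps * dt)
print(f"mean velocity ~ {v_avg:.4f}")
```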
Photonic Delay Systems as Machine Learning Implementations
Nonlinear photonic delay systems present interesting implementation platforms
for machine learning models. They can be extremely fast, offer great degrees of
parallelism and potentially consume far less power than digital processors. So
far they have been successfully employed for signal processing using the
Reservoir Computing paradigm. In this paper we show that their range of
applicability can be greatly extended if we use gradient descent with
backpropagation through time on a model of the system to optimize the input
encoding of such systems. We perform physical experiments that demonstrate that
the obtained input encodings work well in reality, and we show that optimized
systems perform significantly better than the common Reservoir Computing
approach. The results presented here demonstrate that common gradient descent
techniques from machine learning may well be applicable to physical
neuro-inspired analog computers.
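The following PyTorch sketch illustrates the idea on a highly simplified differentiable model of a delay system: the input mask (the encoding) is made trainable and optimized by backpropagation through the unrolled simulation. The sine nonlinearity, parameter values, and toy recall task are assumptions for illustration, not the authors' experimental setup.

```python
# Optimize the input encoding of a (simulated) delay reservoir with
# gradient descent and backpropagation through time.
import torch

torch.manual_seed(0)
N = 50                               # virtual nodes along the delay line
eta, phi = 0.5, 0.4                  # feedback strength, input scaling (assumed)

mask = torch.randn(N, requires_grad=True)    # trainable input encoding
w_out = torch.zeros(N, requires_grad=True)   # linear readout weights

def run_reservoir(u):
    """Simulate the delay loop over input sequence u; return final states."""
    state = torch.zeros(N)
    for u_t in u:                    # BPTT unrolls this loop
        state = torch.sin(eta * state + phi * mask * u_t)
    return state

opt = torch.optim.Adam([mask, w_out], lr=0.01)
inputs = torch.randn(200, 20)        # 200 sequences of length 20
targets = inputs[:, -3]              # toy task: recall the input 3 steps back

for epoch in range(100):
    opt.zero_grad()
    preds = torch.stack([run_reservoir(u) @ w_out for u in inputs])
    loss = torch.mean((preds - targets) ** 2)
    loss.backward()                  # gradients flow through the unrolled model
    opt.step()
```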