Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz), resulting in
reduced motion blur. Hence, event cameras have great potential for robotics and
computer vision in scenarios that are challenging for traditional cameras, such
as those requiring low latency, high speed, and high dynamic range. However,
novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available, and the tasks they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
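To make the event representation described above concrete, the following sketch
(plain Python with NumPy; field names and the accumulation scheme are assumptions
for illustration, not any particular camera's SDK) models events as
(timestamp, x, y, polarity) records and shows one basic way to accumulate a
stream of events into a frame-like image for conventional vision pipelines.

# Minimal sketch (assumed representation): each event carries a timestamp,
# pixel coordinates, and the sign of the brightness change.
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    t: float       # timestamp in seconds (microsecond-scale resolution)
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for brightness increase, -1 for decrease

def accumulate_events(events, height, width):
    """Sum event polarities per pixel to form a simple 2D 'event frame'.

    This is one common, basic way to convert the asynchronous event stream
    into a frame-like representation for conventional vision pipelines.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for ev in events:
        frame[ev.y, ev.x] += ev.polarity
    return frame

# Example: a handful of synthetic events on a 4x4 sensor
events = [Event(1e-6, 0, 0, +1), Event(2e-6, 1, 2, -1), Event(3e-6, 0, 0, +1)]
print(accumulate_events(events, height=4, width=4))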
Two-photon imaging and analysis of neural network dynamics
The glow of a starry night sky, the smell of a freshly brewed cup of coffee
or the sound of ocean waves breaking on the beach are representations of the
physical world that have been created by the dynamic interactions of thousands
of neurons in our brains. How the brain mediates perceptions, creates thoughts,
stores memories and initiates actions remains one of the most profound puzzles
in biology, if not all of science. A key to a mechanistic understanding of how
the nervous system works is the ability to analyze the dynamics of neuronal
networks in the living organism in the context of sensory stimulation and
behaviour. Dynamic brain properties have been fairly well characterized on the
microscopic level of individual neurons and on the macroscopic level of whole
brain areas largely with the help of various electrophysiological techniques.
However, our understanding of the mesoscopic level comprising local populations
of hundreds to thousands of neurons (so called 'microcircuits') remains
comparably poor. In large parts, this has been due to the technical
difficulties involved in recording from large networks of neurons with
single-cell spatial resolution and near-millisecond temporal resolution in the
brain of living animals. In recent years, two-photon microscopy has emerged as
a technique which meets many of these requirements and thus has become the
method of choice for the interrogation of local neural circuits. Here, we
review the state of research in the field of two-photon imaging of neuronal
populations, covering the topics of microscope technology, suitable fluorescent
indicator dyes, staining techniques, and in particular analysis techniques for
extracting relevant information from the fluorescence data. We expect that
functional analysis of neural networks using two-photon imaging will help to
decipher fundamental operational principles of neural microcircuits.
Comment: 36 pages, 4 figures, accepted for publication in Reports on Progress
in Physics
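One analysis step commonly covered when extracting activity-related signals from
population fluorescence data is the computation of dF/F0. The sketch below
(Python/NumPy) illustrates only the general idea; the sliding-percentile
baseline and the parameter values are assumptions chosen for illustration, not
the specific pipeline advocated in the review.

# Illustrative sketch of a standard dF/F0 computation for a fluorescence trace.
# The baseline F0 is estimated as a low percentile over a sliding window, which
# tracks slow drift while ignoring transient activity (assumed choice).
import numpy as np

def delta_f_over_f(trace, window=200, percentile=10):
    """Compute dF/F0 for a 1D fluorescence trace."""
    n = len(trace)
    f0 = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2)
        f0[i] = np.percentile(trace[lo:hi], percentile)
    return (trace - f0) / f0

# Example: noisy baseline with a simulated calcium-transient-like bump
rng = np.random.default_rng(0)
trace = 100 + rng.normal(0, 1, 1000)
trace[300:320] += 30  # simulated transient
dff = delta_f_over_f(trace)
print(dff[290:330].max())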
Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection
Over the past decade, deep neural networks (DNNs) have demonstrated
remarkable performance in a variety of applications. As we try to solve more
advanced problems, increasing demands for computing and power resources have
become inevitable. Spiking neural networks (SNNs) have attracted widespread
interest as the third generation of neural networks due to their event-driven
and low-power nature. SNNs, however, are difficult to train, mainly owing to
the complex dynamics of their neurons and non-differentiable spike operations.
Furthermore, their applications have been limited to relatively simple tasks
such as image classification. In this study, we investigate the performance
degradation of SNNs in a more challenging regression problem (i.e., object
detection). Through our in-depth analysis, we introduce two novel methods:
channel-wise normalization and signed neuron with imbalanced threshold, both of
which provide fast and accurate information transmission for deep SNNs.
Consequently, we present the first spike-based object detection model, called
Spiking-YOLO. Our experiments show that Spiking-YOLO achieves remarkable
results that are comparable (up to 98%) to those of Tiny YOLO on non-trivial
datasets, PASCAL VOC and MS COCO. Furthermore, Spiking-YOLO on a neuromorphic
chip consumes approximately 280 times less energy than Tiny YOLO and converges
2.3 to 4 times faster than previous SNN conversion methods.
Comment: Accepted to AAAI 2020
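To make the channel-wise normalization idea mentioned above more concrete, here
is a minimal single-layer sketch in Python/NumPy: each output channel's weights
are rescaled by that channel's maximum activation observed on a calibration set,
so that no single channel saturates the spiking neurons' firing rate. Function
and variable names are assumptions, the interaction with the preceding layer's
scale factor is omitted, and the actual Spiking-YOLO implementation may differ.

# Sketch of the *idea* of channel-wise (rather than layer-wise) weight
# normalization during ANN-to-SNN conversion: scale each output channel by its
# own maximum activation. Assumed, simplified single-layer version.
import numpy as np

def channel_wise_normalize(weights, activations):
    """Normalize convolution weights per output channel.

    weights:     (out_ch, in_ch, kh, kw) kernel of the layer being converted
    activations: (num_samples, out_ch, H, W) activations recorded with the
                 original ANN on a calibration set
    Returns the rescaled weights and the per-channel scale factors.
    """
    max_act = activations.max(axis=(0, 2, 3))          # per-channel maximum
    scale = np.where(max_act > 0, max_act, 1.0)        # avoid division by zero
    normalized = weights / scale[:, None, None, None]  # per-channel rescaling
    return normalized, scale

# Example with random calibration data
w = np.random.randn(8, 3, 3, 3)
acts = np.abs(np.random.randn(16, 8, 32, 32))
w_norm, s = channel_wise_normalize(w, acts)
print(s.shape, w_norm.shape)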