Bio-Inspired Stereo Vision Calibration for Dynamic Vision Sensors
Many advances have been made in the field of computer vision. Several recent research trends
have focused on mimicking human vision by using a stereo vision system. In multi-camera systems, a
calibration process is usually implemented to improve the accuracy of the results. However, these systems generate
a large amount of data to be processed; therefore, a powerful computer is required and, in many cases,
processing cannot be done in real time. Neuromorphic engineering attempts to create bio-inspired systems that
mimic the information processing that takes place in the human brain. This information is encoded using
pulses (or spikes), and the resulting systems are much simpler (in computational operations and resources),
which allows them to perform similar tasks with much lower power consumption; these processes
can thus be implemented on specialized hardware with real-time processing. In this work, a bio-inspired stereo vision
system is presented, and a calibration mechanism for this system is implemented and evaluated
using several tests. The result is a novel calibration technique for a neuromorphic stereo vision system,
implemented on specialized hardware (an FPGA, Field-Programmable Gate Array), which achieves
reduced latencies in hardware for stand-alone systems and works in real time.
Ministerio de Economía y Competitividad TEC2016-77785-P; Ministerio de Economía y Competitividad TIN2016-80644-
Improved Contrast Sensitivity DVS and its Application to Event-Driven Stereo Vision
This paper presents a new DVS sensor with
one order of magnitude improved contrast sensitivity over
previously reported DVSs. This sensor has been applied to a
bio-inspired event-based binocular system that performs
3D event-driven reconstruction of a scene. Events from two
DVS sensors are matched by using precise timing
information of their occurrence. To improve matching
reliability, satisfaction of the epipolar geometry constraint is
required, and simultaneously available orientation information
is used as an additional matching constraint.
Ministerio de Economía y Competitividad PRI-PIMCHI-2011-0768; Ministerio de Economía y Competitividad TEC2009-10639-C04-01; Junta de Andalucía TIC-609
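The matching idea described above — pairing left- and right-sensor events by precise timing while enforcing the epipolar constraint — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes rectified sensors (so epipolar lines are image rows), and all names and thresholds are hypothetical.

```python
from collections import namedtuple

# Minimal event record: timestamp (s), pixel coordinates, polarity (+1/-1)
Event = namedtuple("Event", "t x y p")

def match_events(left, right, dt_max=1e-4, row_tol=1):
    """For each left-sensor event, find the right-sensor event that is
    closest in time, has the same polarity, and satisfies the (rectified)
    epipolar constraint: it lies on nearly the same image row."""
    pairs = []
    for el in left:
        best, best_dt = None, dt_max
        for er in right:
            dt = abs(el.t - er.t)
            if dt <= best_dt and el.p == er.p and abs(el.y - er.y) <= row_tol:
                best, best_dt = er, dt
        if best is not None:
            pairs.append((el, best))
    return pairs

left = [Event(1.0e-6, 10, 5, 1)]
right = [Event(1.2e-6, 7, 5, 1),   # close in time, same row -> match
         Event(5.0e-4, 7, 5, 1)]   # same pixel, but too far apart in time
pairs = match_events(left, right)
disparity = pairs[0][0].x - pairs[0][1].x  # horizontal disparity: 10 - 7 = 3
```

The matched pair's horizontal disparity is what a stereo system would then turn into depth via triangulation.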
Stereo Matching in Address-Event-Representation (AER) Bio-Inspired Binocular Systems in a Field-Programmable Gate Array (FPGA)
In stereo-vision processing, the image-matching step is essential to the results, although it
involves a very high computational cost. Moreover, the more information is processed, the more time
the matching algorithm spends, and the less efficient it becomes. Spike-based processing is a relatively
new approach that implements processing methods by manipulating spikes one by one at the time
they are transmitted, as a human brain does. The mammalian nervous system can solve much more complex
problems, such as visual recognition, by manipulating neuron spikes. The spike-based philosophy
for visual information processing based on the neuro-inspired address-event-representation (AER)
is currently achieving very high performance. The aim of this work was to study the viability of a
matching mechanism for stereo-vision systems using AER coding, and its implementation in
a field-programmable gate array (FPGA). Earlier studies monitored AER data
on a computer; however, this kind of mechanism had not been implemented
directly in hardware. To this end, epipolar geometry was studied and applied to AER systems,
together with other restrictions, in order to achieve good results in a real-time scenario.
The results and conclusions are shown, and the viability of the implementation is proven.
Ministerio de Economía y Competitividad TEC2016-77785-
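In AER, each event travels as an address word on a shared bus. As a rough illustration of the encoding (the bit layout below is hypothetical — real sensors define their own; a 128x128 resolution is assumed here):

```python
# Hypothetical bit layout for a 128x128 AER sensor: the address word is
# [ y (7 bits) | x (7 bits) | polarity (1 bit) ]. Real devices differ.
X_BITS = 7
Y_BITS = 7

def pack_aer(x, y, p):
    """Encode one event as a single AER address word."""
    return (y << (X_BITS + 1)) | (x << 1) | (1 if p > 0 else 0)

def unpack_aer(addr):
    """Decode an AER address word back into (x, y, polarity)."""
    p = 1 if addr & 1 else -1
    x = (addr >> 1) & ((1 << X_BITS) - 1)
    y = (addr >> (X_BITS + 1)) & ((1 << Y_BITS) - 1)
    return x, y, p

addr = pack_aer(37, 90, -1)
```

Because each event is a compact fixed-width word, an FPGA pipeline can decode and process it in a handful of clock cycles, which is what makes real-time spike-by-spike matching feasible.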
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
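The event stream described in this survey — time, location, and sign of each brightness change — can be modeled as a sequence of tuples. A minimal sketch (all names hypothetical) of one common way to bridge event data and frame-based algorithms, by accumulating polarities per pixel:

```python
from collections import namedtuple

# One event: timestamp, pixel location, sign of the brightness change
Event = namedtuple("Event", "t x y p")

def accumulate(events, width, height):
    """Sum event polarities per pixel to build a simple 'event frame' --
    a basic event representation usable by frame-based vision code."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.p
    return frame

stream = [Event(1, 2, 1, +1), Event(2, 2, 1, +1), Event(3, 0, 0, -1)]
frame = accumulate(stream, width=4, height=3)
```

Note that this particular representation discards the microsecond timing that gives event cameras their advantage; the survey covers representations (e.g. time surfaces, voxel grids) that preserve it.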
On the use of orientation filters for 3D reconstruction in event-driven stereo vision
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.
ERANET PRI-PIMCHI-2011-0768; Ministerio de Economía y Competitividad TEC2009-10639-C04-01, TEC2012-37868-C04-01; Junta de Andalucía TIC-609
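The idea of using Gabor filters to tag events with an edge orientation can be sketched as below: apply a small bank of oriented Gabor kernels to an accumulated event frame and, per pixel, keep the orientation with the strongest response. This is an illustrative sketch only — the kernel parameters and the plain-NumPy correlation are assumptions, not the paper's implementation.

```python
import numpy as np

def gabor_kernel(theta, size=7, sigma=2.0, lam=4.0):
    """Real part of a Gabor filter whose carrier is oriented at angle
    theta (radians); parameter values here are illustrative."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    return (np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def edge_orientation(frame, thetas):
    """Per pixel, return the index of the Gabor orientation with the
    strongest response on an accumulated event frame."""
    h, w = frame.shape
    responses = []
    for theta in thetas:
        k = gabor_kernel(theta)
        pad = k.shape[0] // 2
        padded = np.pad(frame, pad)  # zero-pad for same-size output
        resp = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                resp[i, j] = np.sum(padded[i:i + k.shape[0],
                                           j:j + k.shape[1]] * k)
        responses.append(np.abs(resp))
    return np.argmax(np.stack(responses), axis=0)

frame = np.zeros((11, 11))
frame[:, 5] = 1.0  # events along a vertical edge
ori = edge_orientation(frame, [0.0, np.pi / 2])
```

A matcher can then require corresponding left/right events to share the same orientation label, on top of the timing and epipolar constraints.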
Event-driven stereo vision with orientation filters
The recently developed Dynamic Vision Sensors
(DVS) sense dynamic visual information asynchronously and
code it into trains of events with sub-microsecond temporal
resolution. This high temporal precision makes the output of
these sensors especially suited for dynamic 3D visual
reconstruction, by matching corresponding events generated by
two different sensors in a stereo setup. This paper explores the
use of Gabor filters to extract information about the orientation
of the object edges that produce the events, applying the
matching algorithm to the events generated by the Gabor filters
and not to those produced by the DVS. This strategy provides
more reliably matched pairs of events, improving the final 3D
reconstruction.
European Union PRI-PIMCHI-2011-0768; Ministerio de Economía y Competitividad TEC2009-10639-C04-01; Ministerio de Economía y Competitividad TEC2012-37868-C04-01; Junta de Andalucía TIC-609