Improved Contrast Sensitivity DVS and its Application to Event-Driven Stereo Vision
This paper presents a new DVS sensor with one order of magnitude better contrast sensitivity than previously reported DVSs. This sensor has been applied to a bio-inspired event-based binocular system that performs 3D event-driven reconstruction of a scene. Events from the two DVS sensors are matched using the precise timing information of their occurrence. To improve matching reliability, satisfaction of the epipolar geometry constraint is required, and simultaneously available orientation information is used as an additional matching constraint.
Funding: Ministerio de Economía y Competitividad PRI-PIMCHI-2011-0768; Ministerio de Economía y Competitividad TEC2009-10639-C04-01; Junta de Andalucía TIC-609
Event-Driven Stereo Visual Tracking Algorithm to Solve Object Occlusion
Object tracking is a major problem for many computer vision applications, but it continues to be computationally expensive. The use of bio-inspired neuromorphic event-driven dynamic vision sensors (DVSs) has heralded new methods for vision processing, exploiting the reduced amount of data and the very precise timing resolution they provide. Previous studies have shown these neural spiking sensors to be well suited to implementing single-sensor object tracking systems, although they experience difficulties when solving ambiguities caused by object occlusion. DVSs have also performed well in 3-D reconstruction, in which event matching techniques are applied in stereo setups. In this paper, we propose a new event-driven stereo object tracking algorithm that simultaneously integrates 3-D reconstruction and cluster tracking, introducing feedback information in both tasks to improve their respective performances. This algorithm, inspired by human vision, identifies objects and learns their position and size in order to solve ambiguities. This strategy has been validated in four different experiments where the 3-D positions of two objects were tracked in a stereo setup even when occlusion occurred. The objects studied in the experiments were: 1) two swinging pens, the distance between which during movement was measured with an error of less than 0.5%; 2) a pen and a box, to confirm the correctness of the results obtained with a more complex object; 3) two straws attached to a fan and rotating at 6 revolutions per second, to demonstrate the high-speed capabilities of this approach; and 4) two people walking in a real-world environment.
Funding: Ministerio de Economía y Competitividad TEC2012-37868-C04-01; Ministerio de Economía y Competitividad TEC2015-63884-C2-1-P; Junta de Andalucía TIC-609
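A minimal sketch of the cluster-tracking idea (learning an object's position and size from incoming events) is given below; the EventCluster class, its update rule, and all parameters are illustrative assumptions rather than the algorithm evaluated in the paper.

```python
import numpy as np

class EventCluster:
    """Toy cluster tracker: follows one object's position and size from events."""
    def __init__(self, x, y, radius=10.0, alpha=0.05):
        self.center = np.array([x, y], dtype=float)
        self.radius = radius      # learned object "size"
        self.alpha = alpha        # update rate

    def accepts(self, x, y):
        return np.linalg.norm(self.center - (x, y)) < 2 * self.radius

    def update(self, x, y):
        d = np.array([x, y]) - self.center
        self.center += self.alpha * d                                  # learn position
        self.radius += self.alpha * (np.linalg.norm(d) - self.radius)  # learn size

def track(events, clusters):
    """Assign each event (t, x, y) to the nearest accepting cluster and update it."""
    for _, x, y in events:
        candidates = [c for c in clusters if c.accepts(x, y)]
        if candidates:
            nearest = min(candidates, key=lambda c: np.linalg.norm(c.center - (x, y)))
            nearest.update(x, y)
```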
Bioinspired event-driven collision avoidance algorithm based on optic flow
Milde MB, Bertrand O, Benosman R, Egelhaaf M, Chicca E. Bioinspired event-driven collision avoidance algorithm based on optic flow. In: 2015 International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP). IEEE; 2015.
Any mobile agent, whether biological or robotic, needs to avoid collisions with obstacles. Insects, such as bees and flies, use optic flow to estimate the relative nearness to obstacles. Optic flow induced by ego-motion is composed of a translational and a rotational component. The segregation of both components is computationally and thus energetically expensive. Flies and bees actively separate the rotational and translational optic flow components via behaviour, i.e. by employing a saccadic strategy of flight and gaze control. Although robotic systems are able to mimic this gaze strategy, the calculation of optic-flow fields from standard camera images remains time and energy consuming. To overcome this problem, we use a dynamic vision sensor (DVS), which provides event-based information about changes in contrast over time at each pixel location. To extract optic flow from this information, a plane-fitting algorithm estimating the relative velocity in a small spatio-temporal cuboid is used. The depth structure is derived from the translational optic flow by using local properties of the retina. A collision avoidance direction is then computed from the event-based depth structure of the environment. The system has been successfully tested on a robotic platform in open loop.
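The plane-fitting step can be sketched as a least-squares fit of t = a·x + b·y + c to the events inside one spatio-temporal cuboid, with the normal flow read off the fitted gradient; the function below is a simplified illustration under that assumption, not the implementation used on the robot.

```python
import numpy as np

def plane_fit_flow(events):
    """Estimate local normal flow from DVS events by fitting a plane t = a*x + b*y + c.

    `events` is assumed to be an (N, 3) array of (x, y, t) for one small
    spatio-temporal cuboid. The velocity follows from the plane's spatial
    gradient: speed = 1 / |(a, b)|, direction = (a, b) / |(a, b)|.
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)   # least-squares plane fit
    grad = np.array([a, b])
    norm = np.linalg.norm(grad)
    if norm == 0:
        return 0.0, grad                                 # no measurable motion
    return 1.0 / norm, grad / norm
```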
Event-driven stereo vision with orientation filters
The recently developed Dynamic Vision Sensors (DVS) sense dynamic visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, applying the matching algorithm to the events generated by the Gabor filters rather than to those produced by the DVS. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.
Funding: European Union PRI-PIMCHI-2011-0768; Ministerio de Economía y Competitividad TEC2009-10639-C04-01; Ministerio de Economía y Competitividad TEC2012-37868-C04-01; Junta de Andalucía TIC-609
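As an illustration of extracting edge orientation with Gabor filters, the sketch below accumulates recent events into a 2-D map and labels each pixel with the orientation of the strongest filter response; the frame-based accumulation, kernel parameters, and function names are assumptions, since the paper applies the filters event by event.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, size=9, sigma=2.0, wavelength=4.0):
    """Real part of a Gabor filter tuned to edge orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def label_orientation(event_map, orientations=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Return, per pixel, the index of the orientation with the strongest response.

    `event_map` is assumed to be a 2-D histogram of recent events (an
    illustrative simplification of event-by-event filtering).
    """
    responses = np.stack([convolve(event_map, gabor_kernel(t)) for t in orientations])
    return np.argmax(np.abs(responses), axis=0)
```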
On the use of orientation filters for 3D reconstruction in event-driven stereo vision
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.
Funding: ERANET PRI-PIMCHI-2011-0768; Ministerio de Economía y Competitividad TEC2009-10639-C04-01, TEC2012-37868-C04-01; Junta de Andalucía TIC-609
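A tiny helper illustrating how such an orientation label could act as an extra matching constraint alongside the timing and epipolar checks; the tolerance of one neighbouring orientation bin is an illustrative assumption.

```python
def orientation_consistent(label_left, label_right, num_orientations=4):
    """Extra matching constraint (illustrative): left and right events must carry
    the same, or a neighbouring, orientation label from the Gabor stage."""
    d = abs(label_left - label_right) % num_orientations
    return min(d, num_orientations - d) <= 1
```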
A Fisher-Rao metric for paracatadioptric images of lines
In a central paracatadioptric imaging system a perspective camera takes an image of a scene reflected in a paraboloidal mirror. A 360° field of view is obtained, but the image is severely distorted. In particular, straight lines in the scene project to circles in the image. These distortions make it difficult to detect projected lines using standard image processing algorithms. The distortions are removed using a Fisher-Rao metric which is defined on the space of projected lines in the paracatadioptric image. The space of projected lines is divided into subsets such that on each subset the Fisher-Rao metric is closely approximated by the Euclidean metric. Each subset is sampled at the vertices of a square grid and values are assigned to the sampled points using an adaptation of the trace transform. The result is a set of digital images to which standard image processing algorithms can be applied. The effectiveness of this approach to line detection is illustrated using two algorithms, both of which are based on the Sobel edge operator. The task of line detection is reduced to the task of finding isolated peaks in a Sobel image. An experimental comparison is made between these two algorithms and a third algorithm, taken from the literature, which is based on the Hough transform.
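As a rough illustration of reducing line detection to peak finding in a Sobel image, the sketch below marks isolated local maxima of the gradient magnitude; the thresholds and the use of SciPy filters are assumptions, not the algorithms compared in the paper.

```python
import numpy as np
from scipy.ndimage import sobel, maximum_filter

def sobel_peaks(image, threshold=0.5, neighborhood=5):
    """Find isolated peaks in the Sobel gradient magnitude of an image.

    `image` is assumed to be one of the rectified digital images produced by
    the Fisher-Rao resampling step; threshold and neighbourhood are illustrative.
    """
    gx, gy = sobel(image, axis=1), sobel(image, axis=0)
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()
    # A pixel is a peak if it equals the local maximum and exceeds the threshold.
    peaks = (mag == maximum_filter(mag, size=neighborhood)) & (mag > threshold)
    return np.argwhere(peaks)
```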
Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform
Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is, however, a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on the time lag in the activation of nearby retinal neurons. Mimicking ganglion cells, our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Here we describe the architectural aspects, discuss the latency, scalability, and robustness properties of the system, and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how the precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together to solve specific tasks.
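The time-lag principle can be illustrated with a simple coincidence check between a delayed input and its neighbour; this software sketch is a strong simplification of the analog integrate-and-fire circuit described here, and the delay and window values are assumptions.

```python
def direction_selective_response(spikes_a, spikes_b, delay=5e-3, window=1e-3):
    """Time-lag coincidence check between two neighbouring inputs.

    A unit preferring motion from A to B fires when a spike from A, delayed by
    `delay`, coincides (within `window`) with a spike from B. Spike times are
    assumed to be sorted lists of seconds; the parameters are illustrative.
    """
    out = []
    j = 0
    for t_a in spikes_a:
        t_expected = t_a + delay
        while j < len(spikes_b) and spikes_b[j] < t_expected - window:
            j += 1
        if j < len(spikes_b) and abs(spikes_b[j] - t_expected) <= window:
            out.append(spikes_b[j])   # coincidence: motion in the preferred direction
    return out
```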
A Motion-Based Feature for Event-Based Pattern Recognition
This paper introduces an event-based, luminance-free feature computed from the output of asynchronous event-based neuromorphic retinas. The feature consists of mapping the distribution of the optical flow along the contours of the moving objects in the visual scene into a matrix. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each of them asynchronously generating "spiking" events that encode relative changes in pixel illumination at high temporal resolution. The optical flow is computed at each event, and is integrated locally or globally in a grid defined over a speed and direction coordinate frame, using speed-tuned temporal kernels. The latter ensures that the resulting feature equitably represents the distribution of the normal motion along the current moving edges, whatever their respective dynamics. The usefulness and the generality of the proposed feature are demonstrated in pattern recognition applications: local corner detection and global gesture recognition.
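A rough sketch of accumulating per-event normal flow into a direction-by-speed matrix is given below; the bin counts, the 1/speed weighting used as a stand-in for the speed-tuned temporal kernels, and the function name are illustrative assumptions.

```python
import numpy as np

def flow_feature(flow_events, num_dirs=8, num_speeds=4, max_speed=1000.0):
    """Accumulate per-event normal flow into a direction x speed matrix.

    `flow_events` is assumed to be a list of (vx, vy) normal-flow estimates,
    one per event. The 1/speed factor is a crude stand-in for speed-tuned
    kernels, so fast edges (which generate more events) are not over-represented.
    """
    feature = np.zeros((num_dirs, num_speeds))
    for vx, vy in flow_events:
        speed = np.hypot(vx, vy)
        if speed == 0:
            continue
        d = int((np.arctan2(vy, vx) % (2 * np.pi)) / (2 * np.pi) * num_dirs) % num_dirs
        s = min(int(speed / max_speed * num_speeds), num_speeds - 1)
        feature[d, s] += 1.0 / speed
    return feature / feature.sum() if feature.sum() > 0 else feature
```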
Event-Based Tone Mapping for Asynchronous Time-Based Image Sensor
The asynchronous time-based neuromorphic image sensor ATIS is an array of autonomously operating pixels able to encode luminance information with an exceptionally high dynamic range (>143 dB). This paper introduces an event-based methodology to display data from this type of event-based imager, taking into account the large dynamic range and high temporal accuracy that go beyond available mainstream display technologies. We introduce an event-based tone mapping methodology for asynchronously acquired, time-encoded gray-level data. A global and a local tone mapping operator are proposed. Both are designed to operate on a stream of incoming events rather than on time-frame windows. Experimental results on real outdoor scenes are presented to evaluate the performance of the tone mapping operators in terms of quality, temporal stability, adaptation capability, and computational time.
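The behaviour of a global operator can be illustrated by a per-event log compression with slowly adapting bounds, as sketched below; this is an assumption-laden simplification, not the operator proposed in the paper.

```python
import math

class GlobalToneMapper:
    """Per-event global tone mapping sketch: log-compress each incoming gray-level
    measurement into an 8-bit display value, with slowly adapting bounds.

    The incoming `luminance` values are assumed to be per-pixel exposure
    measurements decoded from the sensor; the adaptation rate is illustrative.
    """
    def __init__(self, adapt=0.001):
        self.log_min, self.log_max = None, None
        self.adapt = adapt

    def map(self, luminance):
        log_l = math.log10(max(luminance, 1e-12))
        if self.log_min is None:
            self.log_min = self.log_max = log_l
        # Bounds expand slowly so the mapping stays temporally stable.
        self.log_min += self.adapt * (min(log_l, self.log_min) - self.log_min)
        self.log_max += self.adapt * (max(log_l, self.log_max) - self.log_max)
        span = max(self.log_max - self.log_min, 1e-6)
        return int(255 * min(max((log_l - self.log_min) / span, 0.0), 1.0))
```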
Event-Based Color Segmentation With a High Dynamic Range Sensor
This paper introduces a color asynchronous neuromorphic event-based camera and a methodology to process color output from the device to perform color segmentation and tracking at the native temporal resolution of the sensor (down to one microsecond). Our color vision sensor prototype is a combination of three Asynchronous Time-based Image Sensors, sensitive to absolute color information. We devise a color processing algorithm that leverages this information. It is designed to be computationally cheap, thus showing how low-level processing benefits from asynchronous acquisition and high temporal resolution data. The resulting color segmentation and tracking performance is assessed both with an indoor controlled scene and two outdoor uncontrolled scenes. The tracking's mean error with respect to the ground truth for the objects of the outdoor scenes ranges from two to twenty pixels.
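A minimal sketch of per-event color segmentation and centroid tracking is given below, assuming each event carries an (r, g, b) measurement decoded from the three stacked sensors; the nearest-cluster assignment and all parameters are illustrative.

```python
import numpy as np

def segment_color_events(events, seeds, threshold=30.0, rate=0.01):
    """Assign each color event to the nearest color cluster and track its centroid.

    Each event is assumed to be (t, x, y, r, g, b); `seeds` are initial (r, g, b)
    references and the threshold/rate values are illustrative.
    """
    clusters = [{"color": np.array(s, dtype=float), "position": None} for s in seeds]
    labels = []
    for t, x, y, r, g, b in events:
        c = np.array([r, g, b], dtype=float)
        dists = [np.linalg.norm(c - cl["color"]) for cl in clusters]
        i = int(np.argmin(dists))
        if dists[i] > threshold:
            labels.append(None)                      # unassigned background event
            continue
        cl = clusters[i]
        cl["color"] += rate * (c - cl["color"])      # slow color adaptation
        p = np.array([x, y], dtype=float)
        cl["position"] = p if cl["position"] is None else cl["position"] + rate * (p - cl["position"])
        labels.append(i)
    return labels, clusters
```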