
    Improved Contrast Sensitivity DVS and its Application to Event-Driven Stereo Vision

    This paper presents a new DVS sensor with contrast sensitivity improved by one order of magnitude over previously reported DVSs. The sensor has been applied to a bio-inspired event-based binocular system that performs 3D event-driven reconstruction of a scene. Events from the two DVS sensors are matched using precise timing information about their occurrence. To improve matching reliability, satisfaction of the epipolar geometry constraint is required, and available orientation information is used simultaneously as an additional matching constraint.
    Funding: Ministerio de Economía y Competitividad PRI-PIMCHI-2011-0768; Ministerio de Economía y Competitividad TEC2009-10639-C04-01; Junta de Andalucía TIC-609
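
    The matching procedure summarised above (precise event timing plus epipolar and orientation constraints) can be illustrated with a toy implementation. The sketch below is a minimal Python version that assumes rectified cameras, so the epipolar constraint reduces to comparing image rows, and events that already carry a local orientation estimate; the function name, tuple layout and thresholds are illustrative, not taken from the paper.

```python
def match_events(left_events, right_events,
                 dt_max=1e-3, row_tol=1, ori_tol=0.2):
    """Greedily pair left/right events that are close in time, share
    polarity, satisfy a rectified epipolar constraint (same image row),
    and carry a similar local orientation.

    Each event is a tuple (t_seconds, x, y, polarity, orientation_rad).
    Returns (left_idx, right_idx, disparity) triples; the disparity is
    what a 3D reconstruction step would consume.
    """
    matches = []
    used_right = set()
    for i, (tl, xl, yl, pl, ol) in enumerate(left_events):
        best = None
        best_dt = dt_max
        for j, (tr, xr, yr, pr, o_r) in enumerate(right_events):
            if j in used_right or pl != pr:
                continue
            if abs(yl - yr) > row_tol:        # epipolar constraint (rectified)
                continue
            if abs(ol - o_r) > ori_tol:       # orientation consistency
                continue
            dt = abs(tl - tr)                 # precise event timing is the main cue
            if dt <= best_dt:
                best, best_dt = (j, xl - xr), dt
        if best is not None:
            j, disparity = best
            used_right.add(j)
            matches.append((i, j, disparity))
    return matches
```

    Filtering candidates first on timing and then pruning with the geometric and orientation checks keeps the per-event search small, which is the point of exploiting the sensors' precise timestamps.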

    Event-based neuromorphic stereo vision


    A multi-chip implementation of cortical orientation hypercolumns

    This paper describes a neuromorphic implementation of the orientation hypercolumns found in the mammalian primary visual cortex. A hypercolumn contains a group of neurons that respond to the same retinal location, but with different orientation preferences. The system consists of a single silicon retina feeding multiple orientation-selective chips, each of which contains neurons tuned to the same orientation, but with different receptive field centers and spatial phases. All chips operate in continuous time, and communicate with each other using spikes transmitted by the asynchronous digital Address Event Representation communication protocol. This enables us to implement recurrent interactions between neurons within one hypercolumn, even though they are located on different chips. We demonstrate this by measuring shifts in orientation selectivity due to changes in the feedback.
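
    As a rough software analogue of the multi-chip arrangement described above, the sketch below treats an AER spike as nothing more than the address of the pixel that fired and models the mapper that fans retina events out to several orientation-selective chips. The class and function names are hypothetical, and the analog neuron dynamics and recurrent feedback between chips are omitted.

```python
from collections import namedtuple

# An AER spike carries only the address of the neuron that fired;
# timing is implicit in when the event is delivered on the bus.
AddressEvent = namedtuple("AddressEvent", ["x", "y"])

class OrientationChip:
    """Stand-in for one orientation-selective chip in the hypercolumn."""
    def __init__(self, preferred_orientation_deg):
        self.preferred = preferred_orientation_deg
        self.received = []

    def receive(self, event):
        # A real chip would drive analog neurons tuned to self.preferred;
        # here we only record the incoming address-events.
        self.received.append(event)

def fan_out(retina_events, chips):
    """Deliver each retina address-event to every orientation chip,
    mimicking the shared asynchronous AER communication between chips."""
    for ev in retina_events:
        for chip in chips:
            chip.receive(ev)

chips = [OrientationChip(theta) for theta in (0, 45, 90, 135)]
fan_out([AddressEvent(3, 7), AddressEvent(4, 7)], chips)
```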

    Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites

    This paper presents a spike-based model which employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model utilizes sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple, hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems and its performance is compared against that achieved using Support Vector Machine (SVM) and Extreme Learning Machine (ELM) techniques. Our proposed method attains comparable performance while utilizing 10% to 50% fewer computational resources than the other reported techniques.
    Comment: Accepted for publication in Neural Computation.
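
    The two-stage neuron described above, where each dendritic branch applies a nonlinearity to its summed binary inputs and the soma adds the branch outputs linearly, can be written down compactly. In the sketch below the square nonlinearity, the branch wiring and the input size are assumptions chosen for illustration, and the structural-plasticity learning rule that rewires correlated inputs onto the same branch is not shown.

```python
import numpy as np

def dendritic_neuron_output(x, connections, branch_nonlinearity=np.square):
    """Two-stage neuron: every dendritic branch sums the binary synaptic
    inputs wired to it, applies a nonlinearity, and the soma linearly
    sums the branch outputs.

    x           : binary input pattern, shape (n_inputs,)
    connections : one index array per branch, listing which inputs are
                  connected to that branch (binary, sparse synapses)
    """
    branch_outputs = [branch_nonlinearity(x[idx].sum()) for idx in connections]
    return float(np.sum(branch_outputs))

# Illustrative use: 8 binary inputs spread over 3 branches.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=8)                 # binary input pattern
connections = [np.array([0, 3, 5]),            # branch 1 synapses
               np.array([1, 2]),               # branch 2 synapses
               np.array([4, 6, 7])]            # branch 3 synapses
print(dendritic_neuron_output(x, connections))
```

    Because the only learnable quantity is which input lands on which branch, a hardware implementation only needs to rewrite connection tables rather than store analog weights, which is why the rule maps onto event-based systems with little overhead.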

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low-latency, high-speed, and high-dynamic-range settings. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
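
    The description above of what an event stream encodes (time, pixel location, and sign of the brightness change) corresponds to the generative model commonly used in the event-vision literature: a pixel fires when its log-intensity has changed by more than a contrast threshold since its last event. The frame-based toy simulator below illustrates that model; the function name and threshold value are illustrative, and real sensors are asynchronous and noisy rather than frame-driven.

```python
import numpy as np

def events_from_frames(frames, timestamps, contrast_threshold=0.2):
    """Toy event-camera simulator: a pixel emits an event
    (t, x, y, polarity) whenever its log-intensity has changed by more
    than the contrast threshold since its last event. Frames are 2D
    arrays of positive intensities sampled at the given timestamps."""
    log_ref = np.log(frames[0].astype(np.float64) + 1e-6)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_frame = np.log(frame.astype(np.float64) + 1e-6)
        diff = log_frame - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= contrast_threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, x, y, polarity))
            log_ref[y, x] = log_frame[y, x]   # reset reference at fired pixels
    return events
```

    Working in log-intensity is what gives the sensor its large dynamic range: the same relative brightness change triggers an event in dark and bright regions alike.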

    An orientation selective 2D AER transceiver

    This paper describes an address event representation (AER) transceiver chip that accepts 2D images and produces 2D output images equal to the input filtered by even- and odd-symmetric orientation-selective spatial filters. Both input and output are encoded as spike trains using a differential ON/OFF representation, conserving energy and AER bandwidth. The spatial filtering is performed by symmetric analog circuits that operate on input currents obtained by integrating the input spike trains, and which preserve the ON/OFF representation. This chip is a key component of a multi-chip system we are constructing that is inspired by the visual cortex. We present measured results from a 32 × 64 pixel prototype, which was fabricated in the TSMC 0.25 μm process on a 3.84 mm by 2.54 mm die. Quiescent power dissipation was 3 mW.
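
    A software stand-in for the filtering path described above is sketched below: even- and odd-symmetric Gabor-like kernels applied to the difference of the integrated ON and OFF rates, with each filtered response split back into an ON/OFF pair. The kernels, parameters and function names are assumptions chosen for illustration (the chip realizes its filters with symmetric analog circuits, not digital convolution), and the sketch relies on NumPy and SciPy.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_pair(size=9, wavelength=4.0, sigma=2.0, theta=0.0):
    """Even- and odd-symmetric orientation-selective kernels (Gabor-like),
    used here only as a convenient stand-in with the same symmetry as the
    chip's analog spatial filters."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    even = envelope * np.cos(2.0 * np.pi * xr / wavelength)
    odd = envelope * np.sin(2.0 * np.pi * xr / wavelength)
    return even, odd

def filter_on_off(on_rate, off_rate, theta=0.0):
    """Filter the differential ON/OFF input (rates obtained by integrating
    the input spike trains) and split each response back into ON and OFF
    halves, mirroring the transceiver's differential output encoding."""
    signal = on_rate - off_rate                  # differential input
    even, odd = gabor_pair(theta=theta)
    responses = {}
    for name, kernel in (("even", even), ("odd", odd)):
        r = convolve2d(signal, kernel, mode="same")
        responses[name] = (np.maximum(r, 0.0), np.maximum(-r, 0.0))  # (ON, OFF)
    return responses
```

    Rectifying each filtered response into separate positive and negative halves mirrors the differential ON/OFF representation that keeps energy use and AER bandwidth low.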