    On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and encode it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. ERANET PRI-PIMCHI-2011-0768. Ministerio de Economía y Competitividad TEC2009-10639-C04-01, TEC2012-37868-C04-01. Junta de Andalucía TIC-609.
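
    As an illustration of the idea, the following minimal Python sketch builds a small bank of Gabor kernels, assigns each event the orientation that responds most strongly on a patch of recently accumulated events around it, and accepts a left/right candidate pair only if the two orientations agree. All function names and parameter values here are hypothetical; the paper's actual filter bank and matching constraints may differ.

        import numpy as np

        def gabor_kernel(theta, size=9, sigma=2.0, wavelength=4.0):
            # Real-valued 2D Gabor kernel at orientation theta (radians).
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinate
            envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
            return envelope * np.cos(2 * np.pi * xr / wavelength)

        THETAS = np.linspace(0, np.pi, 8, endpoint=False)  # 8 orientation channels

        def dominant_orientation(patch):
            # Index of the orientation whose Gabor filter responds most strongly
            # on a (size x size) patch of accumulated events around one event.
            responses = [abs(np.sum(patch * gabor_kernel(t))) for t in THETAS]
            return int(np.argmax(responses))

        def orientation_consistent(patch_left, patch_right, tol=1):
            # Additional stereo-matching constraint: the local edge orientations
            # must agree (circular distance over the orientation channels).
            a, b = dominant_orientation(patch_left), dominant_orientation(patch_right)
            n = len(THETAS)
            return min(abs(a - b), n - abs(a - b)) <= tol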

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
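
    For readers new to the format, such a stream is often handled as a structured array of (time, x, y, polarity) records. The sketch below, whose field names and layout are chosen for illustration rather than taken from any camera vendor's API, accumulates the signed events in a time window into a 2D frame:

        import numpy as np

        # One record per event: timestamp (microseconds), pixel location, and
        # the sign of the brightness change. This dtype is an assumption for
        # illustration, not a specific camera's native format.
        events = np.array(
            [(1_000, 12, 7, +1), (1_050, 13, 7, -1), (1_900, 12, 8, +1)],
            dtype=[('t', 'i8'), ('x', 'i4'), ('y', 'i4'), ('p', 'i1')])

        def accumulate(events, shape, t0, t1):
            # Sum event polarities per pixel over [t0, t1) into an "event frame".
            frame = np.zeros(shape, dtype=np.int32)
            w = events[(events['t'] >= t0) & (events['t'] < t1)]
            np.add.at(frame, (w['y'], w['x']), w['p'])
            return frame

        frame = accumulate(events, shape=(16, 16), t0=0, t1=2_000)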

    Orientation-Selective VLSI Retina

    In both biological and artificial pattern-recognition systems, the detection of oriented light-intensity edges is an important preprocessing step. We have constructed a silicon VLSI device containing an array of photoreceptors with additional hardware for computing a center-surround (edge-enhanced) response as well as the edge orientation at every point in the receptor lattice. Because computing the edge orientations locally at each photoreceptor would have made each pixel-computation unit too large (thereby reducing the resolution of the device), we devised a novel technique for computing the orientations outside of the array. All the transducers and computational elements are analog circuits made with a conventional CMOS process.
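
    In software, the center-surround (edge-enhanced) response that the chip computes in analog hardware is commonly modeled as a difference of Gaussians. The sketch below uses illustrative parameter values, not the device's actual circuit constants:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def center_surround(image, sigma_center=1.0, sigma_surround=3.0):
            # Difference-of-Gaussians model of a center-surround receptive
            # field: a narrow excitatory center minus a broad inhibitory
            # surround, which enhances edges and suppresses uniform regions.
            img = image.astype(float)
            return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)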

    Neuromorphic Implementation of Orientation Hypercolumns

    Neurons in the mammalian primary visual cortex are selective along multiple stimulus dimensions, including retinal position, spatial frequency, and orientation. Neurons tuned to different stimulus features but the same retinal position are grouped into retinotopic arrays of hypercolumns. This paper describes a neuromorphic implementation of orientation hypercolumns, which consists of a single silicon retina feeding multiple chips, each of which contains an array of neurons tuned to the same orientation and spatial frequency but different retinal locations. All chips operate in continuous time and communicate with each other using spikes transmitted via the address-event representation protocol. The system is modular in the sense that orientation coverage can be increased simply by adding more chips, and expandable in the sense that its output can be used to construct neurons tuned to other stimulus dimensions. We present measured results from the system, demonstrating neuronal selectivity along position, spatial frequency, and orientation. We also demonstrate that the system supports recurrent feedback between neurons within one hypercolumn, even though they reside on different chips. The measured results are in excellent agreement with theoretical predictions.
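
    The address-event representation mentioned above transmits each spike as the address of the neuron that fired, with timing carried implicitly by when the word appears on the bus. A minimal sketch of such address packing follows, with a bit layout chosen purely for illustration, not the chips' actual bus format:

        # Pack (x, y) neuron coordinates into one address word; 7 bits per
        # coordinate is an assumed width for this example.
        def encode_address(x, y, coord_bits=7):
            return (y << coord_bits) | x

        def decode_address(addr, coord_bits=7):
            return addr & ((1 << coord_bits) - 1), addr >> coord_bits

        # A spike train on the bus is then a time-ordered sequence of words:
        spikes = [encode_address(12, 7), encode_address(40, 33)]
        assert [decode_address(a) for a in spikes] == [(12, 7), (40, 33)]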

    A recurrent model of orientation maps with simple and complex cells

    We describe a neuromorphic chip that exploits transistor heterogeneity, introduced by the fabrication process, to generate orientation maps similar to those imaged in vivo. Our model consists of a recurrent network of excitatory and inhibitory cells in parallel with a push-pull stage. As in a previous model, the recurrent network displays hotspots of activity that give rise to visual feature maps. Unlike in previous work, however, the map for orientation does not depend on the sign of contrast. Instead, sign-independent cells driven by both ON and OFF channels anchor the map, while push-pull interactions give rise to sign-preserving cells. These two groups of orientation-selective cells are analogous to the complex and simple cells observed in V1.
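
    To make the model's structure concrete, here is a minimal rate-model sketch of a recurrent orientation ring with local excitation and broad inhibition, driven by the sum of ON and OFF channels so that the resulting cells are sign-independent, as described above. The connectivity shape and all parameter values are illustrative, not the chip's measured circuit values.

        import numpy as np

        N = 64                                       # orientation-selective cells
        theta = np.linspace(0, np.pi, N, endpoint=False)
        d = np.abs(theta[:, None] - theta[None, :])
        d = np.minimum(d, np.pi - d)                 # circular orientation distance
        W = 8.0 * np.exp(-d**2 / 0.1) - 4.0          # local excitation, broad inhibition

        def simulate(on_drive, off_drive, steps=500, dt=0.05):
            # Euler integration of r' = -r + [W r / N + drive]_+ ; summing the
            # ON and OFF drives makes the response independent of contrast sign.
            drive = on_drive + off_drive
            r = np.zeros(N)
            for _ in range(steps):
                r += dt * (-r + np.maximum(W @ r / N + drive, 0.0))
            return r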

    A low-power integrated smart sensor with on-chip real-time image processing capabilities

    A low-power CMOS retina with real-time, pixel-level processing capabilities is presented. Feature extraction and edge enhancement are implemented with fully programmable 1D Gabor convolutions. An equivalent computation rate of 3 GOPS is obtained at very low power consumption ( W per pixel), providing real-time performance ( microseconds for overall computation). Experimental results from the first realized prototype show very good agreement between measurements and expected outputs.
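
    The programmable 1D Gabor convolution that the pixel array implements can be sketched in a few lines; the kernel length and parameters below are placeholders, not the prototype's actual coefficient values.

        import numpy as np

        def gabor_1d(length=15, sigma=3.0, wavelength=6.0, phase=0.0):
            # 1D Gabor kernel: Gaussian envelope times a cosine carrier.
            x = np.arange(length) - length // 2
            return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * x / wavelength + phase)

        def convolve_rows(image, kernel):
            # Apply the 1D kernel along each image row, as the pixel array does
            # in parallel for feature extraction and edge enhancement.
            return np.apply_along_axis(
                lambda row: np.convolve(row, kernel, mode='same'), 1, image)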