    Integrated 2-D Optical Flow Sensor

    I present a new focal-plane analog VLSI sensor that estimates optical flow in two visual dimensions. The chip significantly improves on previous approaches, both in the applied model of optical flow estimation and in the actual hardware implementation. Its distributed computational architecture consists of an array of locally connected motion units that collectively solve for the unique optimal optical flow estimate. The novel gradient-based motion model assumes visual motion to be translational, smooth and biased. The model guarantees that the estimation problem is computationally well-posed regardless of the visual input. Model parameters can be globally adjusted, leading to a rich output behavior. Varying the smoothness strength, for example, can provide a continuous spectrum of motion estimates, ranging from normal to global optical flow. Unlike approaches that rely on the explicit matching of brightness edges in space or time, the applied gradient-based model only assumes spatiotemporal continuity of the visual information. The non-linear coupling of the individual motion units improves the resulting optical flow estimate because it reduces spatial smoothing across large velocity differences. Extended measurements of a 30x30 array prototype sensor under real-world conditions demonstrate the validity of the model and the robustness and functionality of the implementation.
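    In software terms, such a model corresponds to a regularized gradient-based (Horn-Schunck-like) estimator with an added bias term. The sketch below is a minimal illustration under assumed names (alpha for smoothness strength, beta for bias strength, v_ref for the bias flow), with Jacobi-style relaxation standing in for the chip's analog network dynamics; it is not the sensor's actual circuit equations.

    import numpy as np

    def neighbor_avg(f):
        # 4-neighbor average; np.roll wraps at the borders, acceptable for a sketch
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))

    def optical_flow(I1, I2, alpha=1.0, beta=0.1, v_ref=(0.0, 0.0), iters=200):
        I1 = I1.astype(float); I2 = I2.astype(float)
        Ix = np.gradient(I1, axis=1)   # spatial brightness gradients
        Iy = np.gradient(I1, axis=0)
        It = I2 - I1                   # temporal brightness derivative
        u = np.full(I1.shape, v_ref[0])
        v = np.full(I1.shape, v_ref[1])
        for _ in range(iters):
            # Smoothness couples each motion unit to its neighbors; the bias
            # pulls the estimate toward v_ref, keeping the problem well-posed
            # even where the brightness gradient vanishes.
            u_bar = (alpha * neighbor_avg(u) + beta * v_ref[0]) / (alpha + beta)
            v_bar = (alpha * neighbor_avg(v) + beta * v_ref[1]) / (alpha + beta)
            t = (Ix * u_bar + Iy * v_bar + It) / (alpha + beta + Ix**2 + Iy**2)
            u = u_bar - Ix * t
            v = v_bar - Iy * t
        return u, v

    Sweeping alpha reproduces the spectrum the abstract mentions: a small alpha yields nearly local (normal) flow, while a large alpha approaches a single global flow estimate shared by the whole array.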

    A modified model for the Lobula Giant Movement Detector and its FPGA implementation

    The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the lobula layer of the locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and its proximity. It has been found to respond to looming stimuli very quickly and to trigger avoidance reactions. It has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for the LGMD that provides additional depth-direction information for the movement. The proposed model retains the simplicity of the previous model, adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector.
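    For reference, a generic frame-based LGMD model of the kind this work builds on can be sketched as a stack of retinotopic layers: photoreceptor (P) cells compute luminance change, inhibition (I) cells spread a delayed copy laterally, summation (S) cells subtract the two, and the LGMD cell sums the result through a sigmoid. The layer structure follows published LGMD models; the weight, gain, and threshold values below are illustrative, and the modified model's depth-direction cells are not reproduced here.

    import numpy as np

    def neighbor_blur(f):
        # 3x3 box blur via shifted copies (borders wrap; fine for a sketch)
        out = f.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy or dx:
                    out += np.roll(np.roll(f, dy, 0), dx, 1)
        return out / 9.0

    def lgmd_step(frame, prev_frame, prev_P, w_i=0.7, gain=50.0, threshold=0.88):
        # P layer: absolute luminance change between consecutive frames
        P = np.abs(frame.astype(float) - prev_frame.astype(float)) / 255.0
        # I layer: one-frame-delayed excitation, spread over neighbors
        I = neighbor_blur(prev_P)
        # S layer: excitation minus lateral inhibition, half-wave rectified
        S = np.maximum(P - w_i * I, 0.0)
        # LGMD output: normalized global sum through a sigmoid; it grows
        # rapidly as an approaching object's edges expand across the field.
        # gain and threshold are tuning knobs, not published constants.
        k = S.sum() / S.size
        membrane = 1.0 / (1.0 + np.exp(-gain * k))
        return membrane > threshold, P   # spike flag; P feeds the next step's I

    # Usage: initialize prev_P = np.zeros(frame.shape) and call lgmd_step once
    # per video frame; a sustained run of spike flags signals imminent collision.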

    Embodied neuromorphic intelligence

    The design of robots that interact autonomously with the environment and exhibit complex behaviours is an open challenge that can benefit from understanding what makes living beings fit to act in the world. Neuromorphic engineering studies neural computational principles to develop technologies that can provide a computing substrate for building compact and low-power processing systems. We discuss why endowing robots with neuromorphic technologies, from perception to motor control, represents a promising approach for the creation of robots that can seamlessly integrate into society. We present initial attempts in this direction, highlight open challenges, and propose actions required to overcome current limitations.

    Selective Change Driven Imaging: A Biomimetic Visual Sensing Strategy

    Selective Change Driven (SCD) Vision is a biologically inspired strategy for acquiring, transmitting and processing images that significantly speeds up image sensing. SCD vision is based on a new CMOS image sensor that delivers the pixels that have changed since they were last read out, ordered by the absolute magnitude of their change. As part of this biomimetic approach, the traditional full-frame processing hardware and programming methodology must also change, to a new paradigm in which pixels are processed individually in a dataflow manner rather than as full frames.
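    The readout policy can be emulated in a few lines of ordinary software, which also illustrates the pixel-by-pixel dataflow style of processing the abstract calls for. The event tuple layout, the per-call budget, and the in-place reset of the reference frame are illustrative assumptions, not the sensor's actual interface.

    import numpy as np

    def scd_readout(frame, last_read, budget=64):
        """Yield (row, col, value) for the `budget` pixels with largest change."""
        delta = np.abs(frame.astype(int) - last_read.astype(int))
        order = np.argsort(delta, axis=None)[::-1][:budget]  # biggest change first
        for idx in order:
            r, c = np.unravel_index(idx, frame.shape)
            if delta[r, c] == 0:
                break                        # nothing left that changed
            last_read[r, c] = frame[r, c]    # a pixel resets once it is read out
            yield r, c, frame[r, c]

    # Dataflow-style consumer: update a running model one pixel event at a time,
    # never touching a full frame.
    #   for r, c, v in scd_readout(sensor_frame, last_read):
    #       model[r, c] = v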

    A low-power integrated smart sensor with on-chip real-time image processing capabilities

    A low-power CMOS retina with real-time, pixel-level processing capabilities is presented. Feature extraction and edge enhancement are implemented with fully programmable 1D Gabor convolutions. An equivalent computation rate of 3 GOPS is obtained at very low power consumption (W per pixel), providing real-time performance (microseconds for overall computation). Experimental results from the first realized prototype show very good agreement between measurements and expected outputs.
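    For readers who want the signal-processing core in software, here is a plain sketch of a 1D Gabor convolution of the kind the retina computes at pixel level. The parameterization (sigma, wavelength, phase) and kernel length are generic Gabor conventions, not the chip's programming interface.

    import numpy as np

    def gabor_kernel_1d(sigma=2.0, wavelength=6.0, phase=0.0, half_width=6):
        # Gaussian envelope times a sinusoidal carrier: the 1D Gabor function
        x = np.arange(-half_width, half_width + 1, dtype=float)
        return (np.exp(-x**2 / (2 * sigma**2))
                * np.cos(2 * np.pi * x / wavelength + phase))

    def gabor_filter_rows(image, **kwargs):
        # Convolve every row with the same programmable kernel, mirroring the
        # row-parallel computation performed on the pixel array.
        k = gabor_kernel_1d(**kwargs)
        return np.apply_along_axis(
            lambda row: np.convolve(row, k, mode="same"), 1, image)

    With phase = pi/2 the kernel is odd-symmetric and responds strongly to vertical edges, which is what makes this filter family useful for edge enhancement.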

    Miniature curved artificial compound eyes.

    In most animal species, vision is mediated by compound eyes, which offer lower resolution than vertebrate single-lens eyes but significantly larger fields of view with negligible distortion and spherical aberration, as well as high temporal resolution in a tiny package. Compound eyes are ideally suited for fast panoramic motion perception. Engineering a miniature artificial compound eye is challenging because it requires accurate alignment of photoreceptive and optical components on a curved surface. Here, we describe a unique design method for biomimetic compound eyes featuring a panoramic, undistorted field of view in a very thin package. The design consists of three planar layers of separately produced arrays, namely a microlens array, a neuromorphic photodetector array, and a flexible printed circuit board, which are stacked, cut, and curved to produce a mechanically flexible imager. Following this method, we have prototyped and characterized an artificial compound eye bearing a hemispherical field of view with embedded and programmable low-power signal processing, high temporal resolution, and local adaptation to illumination. The prototyped artificial compound eye possesses several characteristics similar to the eye of the fruit fly Drosophila and other arthropod species. This design method opens up additional vistas for a broad range of applications in which wide-field motion detection is at a premium, such as collision-free navigation of terrestrial and aerospace vehicles, and for the experimental testing of insect vision theories.

    Neuromorphic object localization using resistive memories and ultrasonic transducers

    Real-world sensory-processing applications require compact, low-latency, and low-power computing systems. Enabled by their in-memory, event-driven computing abilities, hybrid memristive-CMOS neuromorphic architectures provide an ideal hardware substrate for such tasks. To demonstrate the full potential of such systems, we propose and experimentally demonstrate an end-to-end sensory processing solution for a real-world object localization application. Drawing inspiration from the barn owl's neuroanatomy, we developed a bio-inspired, event-driven object localization system that couples state-of-the-art piezoelectric micromachined ultrasound transducer sensors to a neuromorphic, resistive-memory-based computational map. We present measurement results from the fabricated system comprising resistive-memory-based coincidence detectors, delay-line circuits, and a full-custom ultrasound sensor. We use these experimental results to calibrate our system-level simulations, which are then used to estimate the angular resolution and energy efficiency of the object localization model. The results reveal the potential of our approach, which achieves orders of magnitude greater energy efficiency than a microcontroller performing the same task.
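    At its core, the barn-owl-inspired computation is a Jeffress-style map of delay lines feeding coincidence detectors: the inter-aural delay at which left and right spike trains coincide most often encodes the source azimuth. The discrete-time sketch below illustrates that principle in software; the sample-based delays, 0/1 spike encoding, and search range are illustrative assumptions, not the parameters of the fabricated resistive-memory circuits.

    import numpy as np

    def locate(left_spikes, right_spikes, max_delay=8):
        """left_spikes / right_spikes: binary (0/1) integer arrays of echo spikes.
        Returns the inter-channel delay (in samples) with the highest
        coincidence count; its sign gives the side, its magnitude the azimuth."""
        best_delay, best_count = 0, -1
        for d in range(-max_delay, max_delay + 1):
            # Each coincidence-detector "neuron" along the map sees one channel
            # delayed by d and counts coincident spikes (np.roll wraps at the
            # ends, which is acceptable for a sketch).
            shifted = np.roll(right_spikes, d)
            count = int(np.sum(left_spikes & shifted))
            if count > best_count:
                best_delay, best_count = d, count
        return best_delay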

    Bio-Inspired Optic Flow Sensors for Artificial Compound Eyes.

    Compound eyes in flying insects have been studied to reveal the cues underlying vision-based flight in the smallest flying creatures in nature. In particular, researchers in robotics have made efforts to transfer these findings into their less-than-palm-sized unmanned air vehicles, micro air vehicles (MAVs). The miniaturized artificial compound eye is one of the key components in such a system, providing visual information for navigation. Multi-directional sensing and motion estimation capabilities can give wide field-of-view (FoV) optic flow over up to a 360° solid angle. By deciphering the wide-FoV optic flow, relevant information about the vehicle's flight status is parsed and used to generate flight commands. In this work, we realize wide-field optic flow sensing in a pseudo-hemispherical configuration by mounting a number of 2D-array optic flow sensors on a flexible PCB module. The flexible PCBs can be bent into a compound eye shape by origami packaging. In this scheme, the multiple 2D optic flow sensors provide a modular, expandable configuration that meets low-power constraints. The 2D optic flow sensors satisfy the low-power constraint by employing a novel bio-inspired algorithm: we have modified the conventional elementary motion detector (EMD), known to be a basic operational unit in the insect's visual pathway, and implemented a bio-inspired, time-stamp-based algorithm in mixed-mode circuits for robust operation. By optimally partitioning the analog and digital signal domains, we realize the algorithm mostly in the digital domain, in column-parallel circuits; only the feature extraction stage is incorporated inside the pixel, in analog circuits. In addition, the sensors integrate digital peripheral circuits to provide modular expandability. The on-chip data compressor reduces the data rate by a factor of 8, so that a total of 25 optic flow sensors can share a four-wire Serial Peripheral Interface (SPI) bus. The packaged compound eye can transmit full-resolution optic flow data through the single 3 MB/s SPI bus. The fabricated 2D optic flow prototype sensor achieves an energy cost of 243.3 pJ/pixel and a maximum detectable optic flow of 1.96 rad/s at 120 fps and a 60° FoV.
    PhD thesis, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/108841/1/sssjpark_1.pd
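    The time-stamp-based motion detection described in this thesis abstract can be illustrated schematically: each pixel latches the time at which a feature (for example, a sufficiently large brightness change) crosses it, and velocity follows from the time-stamp differences between neighboring pixels. The threshold, pixel pitch, and feature test below are illustrative assumptions, not the chip's analog front end.

    import numpy as np

    def update_timestamps(frame, prev_frame, timestamps, t_now, threshold=15):
        """Latch the current time at every pixel whose brightness change exceeds
        the feature threshold (the job of the in-pixel analog feature extractor
        in the abstract's mixed-mode scheme)."""
        changed = np.abs(frame.astype(int) - prev_frame.astype(int)) > threshold
        timestamps[changed] = t_now
        return timestamps

    def flow_x(timestamps, pixel_pitch=1.0):
        """1D horizontal velocity from time-stamp gradients along rows:
        v = pitch / dt between horizontally adjacent pixels."""
        dt = np.diff(timestamps, axis=1)
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(dt != 0, pixel_pitch / dt, 0.0)

    Encoding motion as time stamps rather than analog correlations is what lets most of the pipeline move into digital, column-parallel circuits, as the abstract describes.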