    Computing motion using analog VLSI vision chips: An experimental comparison among different approaches

    We have designed, built and tested a number of analog CMOS VLSI circuits for computing 1-D motion from the time-varying intensity values provided by an array of on-chip phototransistors. We present experimental data for two such circuits and discuss their relative performance. One circuit approximates the correlation model while a second chip uses resistive grids to compute zero-crossings to be tracked over time by a separate digital processor. Both circuits integrate image acquisition with image processing functions and compute velocity in real time. For comparison, we also describe the performance of a simple motion algorithm using off-the-shelf digital components. We conclude that analog circuits implementing various correlation-like motion algorithms are more robust than our previous analog circuits implementing gradient-like motion algorithms.
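
    A minimal software sketch of the correlation principle (illustrative code and parameters, not the chip's circuit): 1-D velocity is recovered by finding the spatial shift that maximizes the correlation between two intensity snapshots taken a known interval apart.

    ```python
    import numpy as np

    def correlation_velocity(frame_t0, frame_t1, dt, max_shift=8):
        """Velocity in pixels/second from two 1-D intensity snapshots."""
        best_shift, best_score = 0, -np.inf
        for s in range(-max_shift, max_shift + 1):
            score = np.dot(frame_t0, np.roll(frame_t1, -s))  # correlation at shift s
            if score > best_score:
                best_shift, best_score = s, score
        return best_shift / dt

    # Example: a Gaussian intensity bump that moves 3 pixels in 10 ms.
    x = np.arange(64)
    f0 = np.exp(-0.5 * ((x - 20) / 3.0) ** 2)
    f1 = np.exp(-0.5 * ((x - 23) / 3.0) ** 2)
    print(correlation_velocity(f0, f1, dt=0.01))  # -> 300.0 pixels/s
    ```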

    A micropower centroiding vision processor

    Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    We present a CMOS imager integrated circuit that combines compressive sensing with bio-inspired shape-feature detection, integrating the corresponding functions and algorithms within a novel hardware architecture that enables an efficient on-chip implementation.
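
    As generic background on compressive acquisition (a textbook sketch under assumed parameters, not the paper's architecture), a k-sparse n-pixel scene can be captured with m << n random projections and recovered greedily, for example with orthogonal matching pursuit:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5                         # pixels, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)  # k-sparse scene

    Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
    y = Phi @ x                                  # compressive readout, m << n

    def omp(Phi, y, k):
        """Orthogonal matching pursuit: greedy recovery of a k-sparse signal."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x_hat = np.zeros(Phi.shape[1])
        x_hat[support] = coef
        return x_hat

    print(np.allclose(omp(Phi, y, k), x, atol=1e-6))  # recovery succeeds w.h.p.
    ```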

    Sun Sensor Based on a Luminance Spiking Pixel Array

    We present a novel sun sensor concept: the first sun sensor built with an address event representation (AER) spiking pixel matrix, whose pixels spike at a frequency proportional to illumination. It offers remarkable advantages over conventional digital sun sensors based on active pixel sensor (APS) arrays: the output data flow is greatly reduced, the sun position can be resolved from a single event when operating in time-to-first-spike mode, the latency is on the order of milliseconds, and the dynamic range is higher than that of APS image sensors (above 100 dB). A custom algorithm to compute the centroid of the illuminated pixels is presented, and experimental results are provided.

    Funding: Universidad de CĂĄdiz PR2016-072; Ministerio de EconomĂ­a y Competitividad TEC2015-66878-C3-1-R; Junta de AndalucĂ­a TIC 2012-2338; Office of Naval Research (USA) N00014141035.
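
    The centroiding idea lends itself to a short software sketch: since pixels spike at a rate proportional to illumination, counting events per address over a window and taking the count-weighted centroid localizes the sun. The event format and all parameters below are illustrative assumptions, not the paper's custom algorithm.

    ```python
    import numpy as np

    def sun_centroid(events, width, height):
        """events: iterable of (x, y) addresses collected in one time window."""
        counts = np.zeros((height, width))
        for x, y in events:
            counts[y, x] += 1                    # brighter pixels spike more often
        ys, xs = np.mgrid[0:height, 0:width]
        total = counts.sum()
        return (xs * counts).sum() / total, (ys * counts).sum() / total

    # Example: a bright spot near (12, 7) dominates the event stream.
    rng = np.random.default_rng(0)
    spot = [(12 + rng.integers(-1, 2), 7 + rng.integers(-1, 2)) for _ in range(500)]
    noise = [(rng.integers(0, 32), rng.integers(0, 32)) for _ in range(20)]
    print(sun_centroid(spot + noise, 32, 32))    # close to (12, 7)
    ```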

    A micropower vision processor for parallel object positioning and sizing

    Algorithms for VLSI stereo vision circuits applied to autonomous robots

    Since the inception of robotics, visual information has been incorporated to allow robots to perform tasks that require interaction with their environment, particularly when that environment changes. Depth perception is among the most useful information for a mobile robot navigating and interacting with its surroundings. Among the different methods capable of measuring the distance to objects in the scene, stereo vision is the most advantageous for a small mobile robot with limited energy and computational power: it implies low power consumption because it uses passive sensors and does not require the robot to move, and it is more robust because it does not require a complex optical system with moving elements. On the other hand, stereo vision is computationally intensive, since objects in the scene have to be detected and matched across images.

    Biological sensory systems are based on simple computational elements that process information in parallel and communicate among themselves. Analog VLSI chips are an ideal substrate to mimic the massive parallelism and collective computation present in biological nervous systems; for mobile robotics they have the added advantages of low power consumption and high computational power, freeing the CPU for other tasks.

    This dissertation discusses two stereoscopic methods based on simple, parallel calculations that require communication only among neighboring processing units (local communication). Algorithms with these properties are easy to implement in analog VLSI and are also very convenient for digital systems. The first algorithm is phase-based: disparity, i.e., the spatial shift between left and right images, is recovered as a phase shift in the spatial-frequency domain. GĂĄbor functions are used to recover the frequency spectrum of the image because of their optimal joint spatial and spatial-frequency localization. The GĂĄbor-based algorithm is discussed and tested on a Khepera miniature mobile robot, and two further approximations are introduced to ease the analog VLSI and digital implementations. The second stereoscopic algorithm is difference-based: disparity is recovered by a simple calculation using the image differences and their spatial derivatives. This algorithm is simulated on a digital system, and an analog VLSI implementation is proposed and discussed.

    The thesis concludes with a description of some tools used in this research project. A stereo vision system has been developed for the Webots mobile robotics simulator to simplify the testing of different stereo algorithms, and two stereo vision turrets have been built for the Khepera robot.
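
    The phase-based principle can be made concrete with a short sketch (illustrative filter parameters, not those of the dissertation): filter the left and right scanlines with a complex GĂĄbor filter and recover disparity as the local phase difference divided by the local frequency, d(x) = Δφ(x)/k(x).

    ```python
    import numpy as np

    def gabor_response(signal, omega=0.4, sigma=8.0):
        """Convolve with a complex Gabor: Gaussian window times complex carrier."""
        t = np.arange(-25, 26)
        g = np.exp(-0.5 * (t / sigma) ** 2) * np.exp(1j * omega * t)
        return np.convolve(signal, g, mode="same")

    rng = np.random.default_rng(0)
    left = np.convolve(rng.normal(size=300), np.ones(5) / 5, mode="same")
    right = np.roll(left, 4)                      # right view shifted by 4 pixels

    rl, rr = gabor_response(left), gabor_response(right)
    dphi = np.angle(rl * np.conj(rr))             # wrapped phase difference
    k = np.gradient(np.unwrap(np.angle(rl)))      # local frequency of left view
    valid = np.abs(rl) > 0.2 * np.abs(rl).max()   # keep confident responses only
    print(np.median(dphi[valid] / k[valid]))      # ~ 4, the true disparity
    ```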

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
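
    A common entry point to processing this output is to accumulate the asynchronous (t, x, y, polarity) events of a short time window into a signed event frame; the sketch below assumes an in-memory event list rather than any particular camera SDK.

    ```python
    import numpy as np

    def events_to_frame(events, width, height, t0, t1):
        """Sum signed polarities of events with t0 <= t < t1 into an image."""
        frame = np.zeros((height, width), dtype=np.int32)
        for t, x, y, p in events:
            if t0 <= t < t1:
                frame[y, x] += 1 if p else -1    # ON events +1, OFF events -1
        return frame

    # Example: a dot moving right yields ON events at its leading edge and
    # OFF events at its trailing edge.
    events = [(0.001 * i, 10 + i, 20, 1) for i in range(10)] + \
             [(0.001 * i, 9 + i, 20, 0) for i in range(10)]
    print(events_to_frame(events, 32, 32, 0.0, 0.01).sum())  # 0: ON/OFF balance
    ```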

    Vector Disparity Sensor with Vergence Control for Active Vision Systems

    This paper presents an architecture for computing vector disparity for active vision systems, as used in robotics applications. The control of the vergence angle of a binocular system allows us to explore dynamic environments efficiently, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and a fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engine are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system.
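
    For intuition about the gradient-based (luminance) engine, here is a textbook single-scale linearization rather than the paper's FPGA pipeline: approximating I_R(x) ≈ I_L(x - d(x)) to first order gives d(x) ≈ (I_L(x) - I_R(x)) / I_L'(x).

    ```python
    import numpy as np

    def gradient_disparity(left_row, right_row, eps=1e-2):
        """Per-pixel disparity from the linearized brightness-constancy relation."""
        dI = left_row - right_row              # inter-ocular intensity difference
        Ix = np.gradient(left_row)             # horizontal intensity derivative
        valid = np.abs(Ix) > eps               # avoid dividing in flat regions
        d = np.zeros_like(left_row)
        d[valid] = dI[valid] / Ix[valid]
        return d, valid

    x = np.arange(200, dtype=float)
    left = np.sin(2 * np.pi * x / 50)          # smooth synthetic scanline
    right = np.roll(left, 2)                   # right view shifted by 2 pixels
    d, valid = gradient_disparity(left, right)
    print(np.median(d[valid]))                 # ~ 2 (small-disparity regime)
    ```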

    Software Defined Multi-Spectral Imaging for Arctic Sensor Networks

    Availability of off-the-shelf infrared sensors combined with high-definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry-Riddle Aeronautical University, working with the University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder, have built several versions of a low-cost, drop-in-place SDMSI to test alternatives for power-efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on-camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, the power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power-efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, and animal and marine vessel detection and tracking. The goal is to select the most power-efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.
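
    A representative minimal fusion workload of the kind benchmarked in such comparisons (a generic stand-in, not the team's actual pipeline) is an alpha blend of registered visible and long-wave infrared frames, timed here to relate throughput to a power budget:

    ```python
    import time
    import numpy as np

    def fuse(visible, lwir, alpha=0.5):
        """Pixel-wise weighted fusion of two registered, equal-size frames."""
        v = visible.astype(np.float32) / 255.0
        ir = lwir.astype(np.float32) / 255.0
        return (1.0 - alpha) * v + alpha * ir

    # Rough throughput check of the kind used when sizing a power budget:
    # fused frames per second at VGA resolution on the host CPU.
    vis = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    ir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    t0 = time.perf_counter()
    n = 200
    for _ in range(n):
        fuse(vis, ir)
    print(n / (time.perf_counter() - t0), "fusions/s")
    ```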