    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been applied to, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
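    The event stream described above has a simple structure. Below is a minimal sketch (our illustration, not code from the survey; the names Event and accumulate are assumptions) of an event record carrying time, location, and polarity, and of integrating a slice of the stream into a 2-D polarity image, a common first step before applying frame-based algorithms:

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float  # timestamp in seconds (microsecond resolution in practice)
    x: int    # pixel column
    y: int    # pixel row
    p: int    # polarity: +1 for brightness increase, -1 for decrease

def accumulate(events, width, height):
    """Integrate a slice of the event stream into a 2-D polarity image."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.p
    return frame

# Example: three events at two pixels, microseconds apart.
stream = [Event(1e-6, 10, 5, +1), Event(4e-6, 10, 5, +1), Event(7e-6, 11, 5, -1)]
img = accumulate(stream, width=32, height=32)
```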

    Insect inspired visual motion sensing and flying robots

    Flying insects have mastered visual motion sensing, using dedicated motion-processing circuits at low energy and computational cost. Drawing on observations of insect visual guidance, we developed visual motion sensors and bio-inspired autopilots dedicated to flying robots. Optic flow-based visuomotor control systems have been implemented on an increasingly large number of sighted autonomous robots. In this chapter, we present how we designed and constructed local motion sensors and how we implemented bio-inspired visual guidance schemes on board several micro-aerial vehicles. A hyperacute sensor, in which retinal micro-scanning movements are performed by a small piezo-bender actuator, was mounted on a miniature aerial robot. The OSCAR II robot is able to track a moving target accurately by exploiting the micro-scanning movement imposed on its eye's retina. We also present two interdependent control schemes: one drives the eye's angular position relative to the robot's body, and the other drives the body's angular position with respect to a visual target, without any knowledge of the robot's orientation in the global frame. This "steering-by-gazing" control strategy, implemented on a lightweight (100 g) miniature sighted aerial robot, demonstrates the effectiveness of biomimetic visual/inertial heading control.
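    As a rough illustration of the "steering-by-gazing" idea, the sketch below (ours, not the authors' controller; the gains and structure are assumptions) couples a fast gaze loop that nulls the retinal error with a slower body loop that steers toward the gaze direction. Both loops use only sensor-relative quantities, so no global orientation is needed:

```python
def steering_by_gazing_step(retinal_error, eye_angle, dt, k_eye=8.0, k_body=2.0):
    """One control step. retinal_error: target offset on the retina (rad);
    eye_angle: eye orientation relative to the body (rad); k_eye, k_body:
    assumed proportional gains. Returns the updated eye angle and a body
    yaw-rate command."""
    eye_rate = k_eye * retinal_error          # fast loop: null the retinal error
    body_rate = k_body * eye_angle            # slow loop: steer body toward gaze
    eye_angle += (eye_rate - body_rate) * dt  # body rotation carries the eye
    return eye_angle, body_rate
```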

    A Foveated Silicon Retina for Two-Dimensional Tracking

    A silicon retina chip with a central foveal region for smooth-pursuit tracking and a peripheral region for saccadic target acquisition is presented. The foveal region contains a dense 9 x 9 array of large-dynamic-range photoreceptors and edge detectors. The two-dimensional direction of foveal motion is computed outside the imaging array. The peripheral region contains a sparse 19 x 17 array of similar, but larger, photoreceptors with in-pixel edge and temporal ON-set detection. The coordinates of moving or flashing targets are computed with two one-dimensional centroid localization circuits located on the outskirts of the peripheral region. The chip is operational for ambient intensities ranging over six orders of magnitude, target contrasts as low as 10%, foveal speeds ranging from 1.5 to 10,000 pixels/s, and peripheral ON-set frequencies from <0.1 to 800 kHz. The chip is implemented in a 2-μm N-well CMOS process and consumes 15 mW (Vdd = 4 V) in normal indoor light (25 μW/cm²). It has been used as a person tracker in a smart surveillance system and a road follower in an autonomous navigation system.
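    The two one-dimensional centroid computations can be sketched in software as follows (our illustration, not the chip's analog circuitry): peripheral activity is summed along rows and along columns, and each resulting 1-D profile yields one coordinate of the target:

```python
def centroid_1d(profile):
    """Centre of mass of a 1-D activity profile; None if there is no activity."""
    total = sum(profile)
    if total == 0:
        return None
    return sum(i * v for i, v in enumerate(profile)) / total

def locate_target(activity):
    """activity: 2-D list of per-pixel ON-set responses (peripheral array).
    Returns the (x, y) target coordinates from two 1-D centroids."""
    rows = [sum(r) for r in activity]        # activity summed along each row
    cols = [sum(c) for c in zip(*activity)]  # activity summed along each column
    return centroid_1d(cols), centroid_1d(rows)
```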

    Development of a miniature robot for swarm robotic application

    Biological swarming is a fascinating natural behavior that has been successfully applied to solving human problems, especially in robotics. The high economic cost and large area required to execute swarm robotics scenarios often rule out experimentation with real robots, while modelling and simulation of such large numbers of robots are extremely complex and often inaccurate. This paper describes the design decisions behind, and presents the development of, an autonomous miniature mobile robot (AMiR) for swarm robotics research and education. The large number of robots in such systems allows an individual AMiR unit to be designed with simple perception and mobility, so that a large number of robots can be replicated easily and economically. AMiR has been designed as a complete platform, with supporting software development tools, for robotics education and research in the Department of Computer and Communication Systems Engineering, UPM. The experimental results demonstrate the feasibility of using this robot to implement swarm robotic applications.

    A Robust Analog VLSI Reichardt Motion Sensor

    Silicon imagers with integrated motion-detection circuitry have been developed and tested for the past 15 years. Many previous circuits estimate motion by identifying and tracking spatial or temporal features. These approaches are prone to failure under low-SNR conditions, where feature detection becomes unreliable. An alternative approach to motion detection is an intensity-based spatiotemporal correlation algorithm, such as the one proposed by Hassenstein and Reichardt in 1956 to explain aspects of insect vision. We implemented a Reichardt motion sensor with integrated photodetectors in a standard CMOS process. Our circuit operates at sub-microwatt power levels, the lowest reported for any motion sensor. We measure the effects of device mismatch on these parallel, analog circuits to show that they are suitable for constructing 2-D VLSI arrays. Traditional correlation-based sensors suffer from strong contrast dependence; we introduce a circuit architecture that lessens this dependence. We also demonstrate robust performance of our sensor on complex stimuli in the presence of spatial and temporal noise.
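    The Hassenstein-Reichardt correlator referred to above admits a compact discrete-time sketch (ours, not the authors' analog circuit; the low-pass coefficient is an assumption): each photoreceptor signal is low-pass filtered, standing in for a delay, and multiplied with the undelayed signal of its neighbour; subtracting the two mirror-image products yields a direction-selective output:

```python
def reichardt(left, right, alpha=0.3):
    """left, right: equal-length time series from two neighbouring
    photoreceptors; alpha: first-order low-pass coefficient (assumed),
    acting as the delay. Returns the motion signal per time step,
    positive for left-to-right motion."""
    lp_l = lp_r = 0.0
    out = []
    for l, r in zip(left, right):
        out.append(lp_l * r - lp_r * l)  # delayed-vs-direct cross products
        lp_l += alpha * (l - lp_l)       # update the two low-pass (delay) states
        lp_r += alpha * (r - lp_r)
    return out
```

    Because the output is a product of two input signals, it scales roughly with the square of stimulus contrast; this is the contrast dependence the proposed circuit architecture is designed to lessen.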

    Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review

    Motion perception is a critical capability underpinning many aspects of insects' lives, including predator avoidance and foraging. A good number of motion detectors have been identified in insect visual pathways. Computational modelling of these motion detectors has not only provided effective solutions for artificial intelligence, but has also benefited the understanding of complicated biological visual systems. These biological mechanisms, shaped over millions of years of evolution, form solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research on insect visual systems. These motion perception models or neural networks comprise the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hoverflies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple-system integration and the hardware realisation of these bio-inspired motion perception models.
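    Of the model families mentioned, the looming-sensitive LGMD is perhaps the simplest to sketch. The following is a loose illustration (ours, following published LGMD models only in outline; the blur kernel and threshold are assumptions): excitation is the rectified luminance change, inhibition is a delayed, spatially blurred copy of it, and expanding edges of an approaching object outrun the inhibition, so the summed response grows rapidly for looming stimuli:

```python
def box_blur(layer):
    """3 x 3 mean filter used as the lateral-inhibition spread (assumed kernel)."""
    h, w = len(layer), len(layer[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [layer[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def lgmd_step(prev_frame, frame, prev_excite, threshold=50.0):
    """One frame of processing. prev_excite is the previous step's excitation
    layer, providing the one-frame inhibition delay.
    Returns (looming spike?, new excitation layer)."""
    h, w = len(frame), len(frame[0])
    excite = [[abs(frame[y][x] - prev_frame[y][x]) for x in range(w)]
              for y in range(h)]
    inhibit = box_blur(prev_excite)  # delayed, spread lateral inhibition
    s = sum(max(excite[y][x] - inhibit[y][x], 0.0)
            for y in range(h) for x in range(w))
    return s > threshold, excite
```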

    Selective Change Driven Imaging: A Biomimetic Visual Sensing Strategy

    Selective Change Driven (SCD) Vision is a biologically inspired strategy for acquiring, transmitting and processing images that significantly speeds up image sensing. SCD vision is based on a new CMOS image sensor which delivers the pixels that have changed since the last time they were read out, ordered by the absolute magnitude of their change. Moreover, as part of this biomimetic approach, the traditional full-frame processing hardware and programming methodology have to be replaced by a new processing paradigm based on pixel-by-pixel processing in a data-flow manner, instead of full-frame image processing.
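    As a software analogue of this read-out policy, the sketch below (ours, not the sensor's circuitry; names and structure are assumptions) delivers pixels in decreasing order of absolute change since each pixel was last read out:

```python
import heapq

def scd_readout(current, last_read, n):
    """current, last_read: dicts mapping (x, y) -> intensity; last_read holds
    each pixel's value at its last read-out. Delivers up to n pixels, most
    changed first, updating last_read for each pixel delivered."""
    changes = [(-abs(current[p] - last_read[p]), p) for p in current]
    heapq.heapify(changes)  # min-heap on negated magnitude = largest change first
    delivered = []
    for _ in range(min(n, len(changes))):
        neg_mag, p = heapq.heappop(changes)
        if neg_mag == 0:
            break  # no remaining changes to report
        delivered.append((p, current[p]))
        last_read[p] = current[p]  # this pixel has now been read out
    return delivered
```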