2,382 research outputs found

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
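    A minimal sketch of the event representation described in this abstract: each event is a (timestamp, x, y, polarity) tuple, and a common first step is to accumulate event polarities into a frame-like image. The Event class and accumulate_events helper below are illustrative names assumed for this sketch, not an API from the survey.

```python
# Sketch of an event stream: timestamp, pixel location, and sign of the
# brightness change. Names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Iterable
import numpy as np

@dataclass
class Event:
    t: float       # timestamp in seconds (microsecond resolution in practice)
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def accumulate_events(events: Iterable[Event], height: int, width: int) -> np.ndarray:
    """Sum event polarities per pixel into a 2D image, one common way to
    turn the asynchronous stream into a frame-like representation."""
    img = np.zeros((height, width), dtype=np.int32)
    for e in events:
        img[e.y, e.x] += e.polarity
    return img

# Example: three events at microsecond-scale timestamps.
stream = [Event(1e-6, 10, 5, +1), Event(4e-6, 10, 5, +1), Event(9e-6, 3, 2, -1)]
print(accumulate_events(stream, height=8, width=16))
```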

    A micropower centroiding vision processor


    A Foveated Silicon Retina for Two-Dimensional Tracking

    A silicon retina chip with a central foveal region for smooth-pursuit tracking and a peripheral region for saccadic target acquisition is presented. The foveal region contains a dense 9 × 9 array of large-dynamic-range photoreceptors and edge detectors. The two-dimensional direction of foveal motion is computed outside the imaging array. The peripheral region contains a sparse 19 × 17 array of similar, but larger, photoreceptors with in-pixel edge and temporal ON-set detection. The coordinates of moving or flashing targets are computed by two one-dimensional centroid localization circuits located on the outskirts of the peripheral region. The chip is operational for ambient intensities ranging over six orders of magnitude, target contrasts as low as 10%, foveal speeds from 1.5 to 10K pixels/s, and peripheral ON-set frequencies from <0.1 to 800 kHz. The chip is implemented in a 2-μm N-well CMOS process and consumes 15 mW (Vdd = 4 V) in normal indoor light (25 μW/cm²). It has been used as a person tracker in a smart surveillance system and a road follower in an autonomous navigation system.
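    The two one-dimensional centroid localizations that the chip performs in analog circuitry can be sketched in software: collapse the ON-set activity onto each axis, then take an activity-weighted mean along each. The onset_map, centroid_1d, and locate_target names below are assumptions for illustration, not the chip's actual circuit description.

```python
# Software sketch of the chip's two 1D centroid localizations: target
# coordinates as activity-weighted means along each axis.
import numpy as np

def centroid_1d(activity: np.ndarray) -> float:
    """Weighted mean position of activity along one axis."""
    positions = np.arange(activity.size)
    total = activity.sum()
    return float((positions * activity).sum() / total) if total > 0 else float("nan")

def locate_target(onset_map: np.ndarray) -> tuple[float, float]:
    """Collapse a 2D ON-set activity map into row and column profiles,
    then localize the target with two independent 1D centroids."""
    row_profile = onset_map.sum(axis=1)  # activity per row    -> y centroid
    col_profile = onset_map.sum(axis=0)  # activity per column -> x centroid
    return centroid_1d(col_profile), centroid_1d(row_profile)

# Example: a flashing target near (x=12, y=4) on a 19 x 17 peripheral array.
onset = np.zeros((17, 19))
onset[3:6, 11:14] = 1.0
print(locate_target(onset))  # approximately (12.0, 4.0)
```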

    Human behavioural analysis with self-organizing map for ambient assisted living

    This paper presents a system for automatically classifying the resting location of a moving object in an indoor environment. The system uses an unsupervised neural network (a Self-Organising Feature Map, SOFM) fully implemented on a low-cost, low-power, automated home-based surveillance system capable of monitoring the activity level of elders living alone. The proposed system runs on an embedded platform with a specialised ceiling-mounted video sensor for intelligent activity monitoring. The system is able to learn resting locations, to measure overall activity levels, and to detect specific events such as potential falls. First-order motion information, including first-order moving-average smoothing, is generated from the 2D image coordinates (trajectories). A novel edge-based object detection algorithm capable of running at a reasonable speed on the embedded platform has been developed. The classification is dynamic and achieved in real time, using a SOFM combined with a probabilistic model. Experimental results show a classification error below 20%, demonstrating the robustness of our approach compared with others in the literature, with minimal power consumption. The head location of the subject is also estimated by a novel approach capable of running on any resource-limited platform with power constraints.
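    The trajectory pre-processing described above (first-order moving-average smoothing of the 2D image coordinates, followed by first-order motion extraction) can be sketched as follows; the function names and window size are assumptions for illustration, not details from the paper.

```python
# Sketch of the trajectory pre-processing: smooth 2D image coordinates with a
# first-order moving average, then take first-order differences as motion
# (displacement) features for the SOFM classifier. Window size is assumed.
import numpy as np

def smooth_trajectory(points: np.ndarray, window: int = 3) -> np.ndarray:
    """First-order moving average over an (N, 2) array of (x, y) coordinates."""
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(points[:, i], kernel, mode="valid")
                            for i in range(2)])

def motion_features(points: np.ndarray) -> np.ndarray:
    """First-order motion information: frame-to-frame displacement vectors."""
    smoothed = smooth_trajectory(points)
    return np.diff(smoothed, axis=0)

# Example: a short trajectory drifting right and slightly down.
traj = np.array([[10, 50], [12, 50], [15, 51], [19, 52], [24, 54]], dtype=float)
print(motion_features(traj))
```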

    A micropower vision processor for parallel object positioning and sizing
