4,451 research outputs found
EDFLOW: Event Driven Optical Flow Camera with Keypoint Detection and Adaptive Block Matching
Event cameras such as the Dynamic Vision Sensor (DVS) are useful because of their low latency, sparse output, and high dynamic range. In this paper, we propose a DVS+FPGA camera platform and use it to demonstrate a hardware implementation of event-based corner keypoint detection and adaptive block-matching optical flow. To adapt the sample rate dynamically, events are accumulated into event slices using the area event count slice exposure method. The area event count is feedback-controlled by the average optical flow matching distance. Corners are detected by streaks of accumulated events on event-slice rings of radius 3 and 4 pixels. Corner detection takes about 6 clock cycles (a 16 MHz event rate at the 100 MHz clock frequency). At the corners, flow vectors are computed in 100 clock cycles (a 1 MHz event rate). The multiscale block-match size is 25x25 pixels and the flow vectors span up to a 30-pixel match distance. The FPGA processes the sum-of-absolute-distance block matching at 123 GOp/s, the equivalent of 1230 Op/clock cycle. EDFLOW is several times more accurate on MVSEC drone and driving optical flow benchmarking sequences than the previous best DVS FPGA optical flow implementation, and achieves accuracy similar to the CNN-based EV-FlowNet while consuming about 100 times less power. The EDFLOW design and benchmarking videos are available at https://sites.google.com/view/edflow21/home
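The adaptive slicing idea the abstract describes can be sketched in software. The following is a minimal, illustrative Python sketch, not the paper's hardware design: class and parameter names are ours, and all numeric values (area size, thresholds, gain) are placeholders.

```python
class AdaptiveSlicer:
    """Accumulate DVS events into 2D event slices, rotating a slice when
    any sub-area reaches an event-count threshold (the "area event count"
    exposure method). The threshold is feedback-controlled so the average
    block-matching distance tracks a setpoint, as EDFLOW does in hardware.
    All parameter values here are illustrative, not the paper's."""

    def __init__(self, width, height, area=64, count_threshold=100,
                 target_distance=3.0, gain=1.0):
        self.w, self.h, self.area = width, height, area
        self.count_threshold = count_threshold  # events per area before rotation
        self.target = target_distance           # desired avg. match distance (px)
        self.gain = gain                        # proportional feedback gain
        self._new_slice()

    def _new_slice(self):
        self.slice = [[0] * self.w for _ in range(self.h)]
        self.area_counts = {}

    def add_event(self, x, y):
        """Add one event; return the finished slice when exposure is met."""
        self.slice[y][x] += 1
        key = (x // self.area, y // self.area)
        self.area_counts[key] = self.area_counts.get(key, 0) + 1
        if self.area_counts[key] >= self.count_threshold:
            finished = self.slice
            self._new_slice()
            return finished
        return None

    def feedback(self, avg_match_distance):
        """If matches are longer than the target, slices span too much
        motion: lower the count threshold so slices rotate sooner."""
        error = avg_match_distance - self.target
        self.count_threshold = max(1, int(self.count_threshold - self.gain * error))
```

The feedback loop is what makes the sample rate scene-dependent: fast motion lengthens the matching distance, which shrinks the exposure threshold, which in turn produces shorter (higher-rate) slices.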
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as those demanding low latency, high speed, or high dynamic range.
However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
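The asynchronous output described above, a stream of (time, location, sign) tuples, is the first thing any event-based method must handle. A minimal, illustrative Python sketch of one common low-level representation, accumulating a window of events into a signed event frame (names are ours, not from the survey):

```python
from collections import namedtuple

# One DVS event: timestamp (microseconds), pixel coordinates, and sign of
# the brightness change (+1 = ON event, -1 = OFF event).
Event = namedtuple("Event", ["t", "x", "y", "polarity"])

def events_to_frame(events, width, height, t_start, t_end):
    """Accumulate events in [t_start, t_end) into a signed 2D histogram.
    Collapsing time like this sacrifices the microsecond resolution,
    which is why many methods prefer event-by-event processing."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        if t_start <= e.t < t_end:
            frame[e.y][e.x] += e.polarity
    return frame
```

Frame-like representations such as this let conventional vision pipelines (including CNNs) consume event data, at the cost of the temporal precision the survey highlights.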
Parametric Dense Stereovision Implementation on a System-on-Chip (SoC)
This paper proposes a novel hardware implementation of dense recovery of stereovision 3D measurements. Traditionally, 3D stereo systems have imposed a maximum number of stereo correspondences, placing a significant restriction on artificial vision algorithms. The proposed system-on-chip (SoC) provides high performance and efficiency, with a scalable architecture adaptable to many different situations, addressing real-time processing of the stereo image flow. By combining double-buffering techniques with pipelined processing, the use of reconfigurable hardware yields a parametrisable SoC that gives the designer the opportunity to decide its proper dimensions and features. The proposed architecture needs no external memory because processing is done as the image flow arrives, so the SoC provides 3D data directly without storing whole stereo images. Our goal is to obtain high processing speed while maintaining the accuracy of the 3D data using minimum resources. Configurable parameters may be controlled by later or parallel stages of the vision algorithm executed on an embedded processor. With an FPGA clock of 100 MHz, dense stereo maps of more than 30,000 depth points can be obtained at up to 50 frames per second (fps) from 2 Mpix images, with minimal initial latency. Implementing computer vision algorithms, especially low-level processing, on reconfigurable hardware opens up the prospect of use in autonomous systems, where the SoC can act as a coprocessor that reconstructs 3D images with high-density information in real time.
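The per-pixel computation such a dense-stereo pipeline performs can be sketched in a few lines. This is a simplified software analogue of the streaming hardware, assuming rectified grayscale scanlines; the window size and disparity range are illustrative, not the SoC's parameters:

```python
def sad_disparity_row(left, right, window=2, max_disp=16):
    """Winner-takes-all SAD matching on one rectified scanline pair.
    For each left-image pixel, slide a 1D window over candidate
    disparities and keep the shift with the smallest sum of absolute
    differences (SAD)."""
    n = len(left)
    disparities = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            cost = 0
            for k in range(-window, window + 1):
                xl, xr = x + k, x - d + k
                if 0 <= xl < n and 0 <= xr < n:
                    cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities
```

The bounded disparity search is exactly what makes the computation streamable: each output pixel depends only on a small, fixed neighborhood of the incoming scanlines, so no full-image storage is needed.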
CED: Color Event Camera Dataset
Event cameras are novel, bio-inspired visual sensors, whose pixels output
asynchronous and independent timestamped spikes at local intensity changes,
called 'events'. Event cameras offer advantages over conventional frame-based
cameras in terms of latency, high dynamic range (HDR) and temporal resolution.
Until recently, event cameras have been limited to outputting events in the
intensity channel, however, recent advances have resulted in the development of
color event cameras, such as the Color-DAVIS346. In this work, we present and
release the first Color Event Camera Dataset (CED), containing 50 minutes of
footage with both color frames and events. CED features a wide variety of
indoor and outdoor scenes, which we hope will help drive forward event-based
vision research. We also present an extension of the event camera simulator
ESIM that enables simulation of color events. Finally, we present an evaluation
of three state-of-the-art image reconstruction methods that can be used to
convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to
visualise the event stream, and for use in downstream vision applications. Comment: Conference on Computer Vision and Pattern Recognition Workshop
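A simulator like ESIM generates an event whenever the log intensity at a pixel changes by more than a contrast threshold; extending this to color amounts to applying the same rule per channel. Below is a heavily simplified, frame-pair Python sketch of that principle (the real simulator interpolates in continuous time and models sensor noise; the function name and threshold value are ours):

```python
import math

def generate_events(prev_frame, curr_frame, t, threshold=0.2):
    """Emit (t, x, y, polarity) events wherever log intensity changed by
    more than `threshold` between two frames. Frames are 2D lists of
    positive intensities (one channel; run once per channel to mimic a
    color event camera such as the Color-DAVIS346)."""
    events = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            delta = math.log(c) - math.log(p)
            n = int(abs(delta) / threshold)  # one event per threshold crossing
            polarity = 1 if delta > 0 else -1
            events.extend((t, x, y, polarity) for _ in range(n))
    return events
```

The logarithmic trigger is what gives event cameras their HDR behavior: the same relative contrast change fires an event regardless of absolute brightness.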
Fusing Frame and Event Vision for High-speed Optical Flow for Edge Application
Optical flow computation with frame-based cameras provides high accuracy but
the speed is limited either by the model size of the algorithm or by the frame
rate of the camera. This makes it inadequate for high-speed applications. Event
cameras provide continuous asynchronous event streams overcoming the frame-rate
limitation. However, algorithms for processing the data either borrow a
frame-like setup, which limits speed, or suffer from lower accuracy. We fuse
the complementary accuracy and speed advantages of the frame- and event-based
pipelines to provide high-speed optical flow while maintaining a low error
rate. Our bio-mimetic network is validated on the MVSEC dataset, showing 19%
error degradation at a 4x speed-up. We then demonstrate the system in a
high-speed drone flight scenario where a high-speed event camera computes the
flow even before the optical camera sees the drone, making it suited for
applications like tracking and segmentation. This work shows the fundamental
trade-offs in frame-based processing may be overcome by fusing data from other
modalities.
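One simple way to realize such a fusion, not necessarily the paper's network, is to let the low-latency event-based estimate fill the gaps between slow but accurate frame-based estimates, blending by the age of the frame estimate. A hypothetical Python sketch, with names and the decay constant chosen by us:

```python
def fuse_flow(frame_flow, frame_age, event_flow, max_age=0.1):
    """Blend a frame-based flow vector with an event-based one.
    The frame estimate is more accurate but goes stale between frames;
    its weight decays linearly with age (seconds) until the continuously
    updated event estimate takes over entirely."""
    w = max(0.0, 1.0 - frame_age / max_age)  # confidence in the frame estimate
    return tuple(w * f + (1.0 - w) * e for f, e in zip(frame_flow, event_flow))
```

Any scheme of this shape preserves the key property the abstract claims: between frames the output is never older than the event pipeline's latency.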
Night vision obstacle detection and avoidance based on Bio-Inspired Vision Sensors
Moving towards autonomy, unmanned vehicles rely heavily on state-of-the-art
collision avoidance systems (CAS). However, the detection of obstacles
especially during night-time is still a challenging task since the lighting
conditions are not sufficient for traditional cameras to function properly.
Therefore, we exploit the powerful attributes of event-based cameras to perform
obstacle detection in low lighting conditions. Event cameras trigger events
asynchronously at a high temporal rate with a high dynamic range of up to
120 dB. The algorithm filters background activity noise and extracts objects
using robust Hough transform technique. The depth of each detected object is
computed by triangulating 2D features extracted utilising LC-Harris. Finally,
asynchronous adaptive collision avoidance (AACA) algorithm is applied for
effective avoidance. Qualitative evaluation compares the event camera against
a traditional camera. Comment: Accepted to IEEE SENSORS 202
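The background-activity filtering step mentioned above is commonly implemented as a spatiotemporal correlation test: an event is kept only if a neighboring pixel fired recently, since real edges produce spatially correlated events while noise is isolated. A generic sketch of such a filter (the time window and structure are illustrative, not taken from this paper):

```python
def make_ba_filter(width, height, dt=2000):
    """Background-activity filter: keep an event only if one of its 8
    neighbors produced an event within the last `dt` microseconds.
    Isolated (noise) events fail the test and are dropped."""
    last = [[-10**9] * width for _ in range(height)]  # last event time per pixel

    def keep(t, x, y):
        supported = any(
            t - last[y + dy][x + dx] <= dt
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dx or dy) and 0 <= x + dx < width and 0 <= y + dy < height
        )
        last[y][x] = t  # record this event even if it is rejected
        return supported

    return keep
```

Because the filter needs only one timestamp per pixel and a 3x3 neighborhood lookup, it runs event-by-event at sensor rate, which is what makes it practical before heavier stages such as the Hough transform.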
Free as a Bird: Event-Based Dynamic Sense-and-Avoid for Ornithopter Robot Flight
Autonomous flight of flapping-wing robots is a major challenge for robot perception. Most previous sense-and-avoid works have studied the problem of obstacle avoidance for flapping-wing robots considering only static obstacles. This letter presents a fully onboard dynamic sense-and-avoid scheme for large-scale ornithopters using event cameras. These sensors trigger pixel information due to changes of illumination in the scene, such as those produced by dynamic objects. The method performs event-by-event processing on low-cost hardware such as that onboard small aerial vehicles. The proposed scheme detects obstacles and evaluates possible collisions with the robot body. The onboard controller actuates the horizontal and vertical tail deflections to execute the avoidance maneuver. The scheme is validated in both indoor and outdoor scenarios using obstacles of different shapes and sizes. To the best of the authors' knowledge, this is the first event-based method for dynamic obstacle avoidance in a flapping-wing robot. Funding: Consejo Europeo de Investigación (ERC) 788247; Comisión Europea - Proyecto AERIAL-CORE H2020-2019-871479; Ministerio de Universidades FPU19/0469
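The "evaluate possible collisions, then actuate the tail" logic can be illustrated with a toy image-plane predictor. This is a hypothetical sketch of the general idea, not the letter's algorithm: the function, the bounding-box collision test, and the prediction horizon are all our assumptions.

```python
def avoidance_command(track, robot_box, horizon=0.5):
    """Given a short track of obstacle centroids [(t, x, y), ...] in the
    image plane, linearly extrapolate `horizon` seconds ahead; if the
    predicted position falls inside the robot's projected bounding box
    (x_min, y_min, x_max, y_max), return a vertical tail command that
    steers away: +1 = pitch up, -1 = pitch down, 0 = no maneuver."""
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    xp, yp = x1 + vx * horizon, y1 + vy * horizon
    x_min, y_min, x_max, y_max = robot_box
    if x_min <= xp <= x_max and y_min <= yp <= y_max:
        center_y = (y_min + y_max) / 2
        return 1 if yp > center_y else -1  # steer opposite the incursion
    return 0
```

The point of the sketch is the division of labor the abstract describes: perception supplies obstacle tracks event-by-event, while the controller only needs a low-dimensional command for the tail surfaces.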