CED: Color Event Camera Dataset
Event cameras are novel, bio-inspired visual sensors, whose pixels output
asynchronous and independent timestamped spikes at local intensity changes,
called 'events'. Event cameras offer advantages over conventional frame-based
cameras in terms of latency, high dynamic range (HDR) and temporal resolution.
Until recently, event cameras have been limited to outputting events in the
intensity channel; however, recent advances have resulted in the development of
color event cameras, such as the Color-DAVIS346. In this work, we present and
release the first Color Event Camera Dataset (CED), containing 50 minutes of
footage with both color frames and events. CED features a wide variety of
indoor and outdoor scenes, which we hope will help drive forward event-based
vision research. We also present an extension of the event camera simulator
ESIM that enables simulation of color events. Finally, we present an evaluation
of three state-of-the-art image reconstruction methods that can be used to
convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to
visualise the event stream, and for use in downstream vision applications.
Comment: Conference on Computer Vision and Pattern Recognition Workshop
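For readers new to event data, the sketch below illustrates the kind of per-event record such cameras produce and how signed events can be naively accumulated into a log-intensity change map. The field names, contrast threshold, and layout are illustrative assumptions, not the CED file format or the reconstruction methods evaluated in the paper.

```python
# Minimal sketch of a DAVIS-style event record and a naive accumulation of
# signed contrast steps into a log-intensity change image. Field names and the
# contrast threshold value are assumptions for illustration only.
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds (microsecond resolution in practice)
    polarity: int   # +1 for a brightness increase, -1 for a decrease

def accumulate_events(events, height, width, contrast_threshold=0.2):
    """Sum signed contrast steps per pixel to approximate log-intensity change."""
    delta_log_intensity = np.zeros((height, width), dtype=np.float32)
    for e in events:
        delta_log_intensity[e.y, e.x] += e.polarity * contrast_threshold
    return delta_log_intensity
```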
DSEC: A Stereo Event Camera Dataset for Driving Scenarios
Once an academic venture, autonomous driving has received unparalleled
corporate funding in the last decade. Still, the operating conditions of
current autonomous cars are mostly restricted to ideal scenarios. This means
that driving in challenging illumination conditions such as night, sunrise, and
sunset remains an open problem. In these cases, standard cameras are being
pushed to their limits in terms of low light and high dynamic range
performance. To address these challenges, we propose, DSEC, a new dataset that
contains such demanding illumination conditions and provides a rich set of
sensory data. DSEC offers data from a wide-baseline stereo setup of two color
frame cameras and two high-resolution monochrome event cameras. In addition, we
collect lidar data and RTK GPS measurements, both hardware synchronized with
all camera data. One of the distinctive features of this dataset is the
inclusion of high-resolution event cameras. Event cameras have received
increasing attention for their high temporal resolution and high dynamic range
performance. However, due to their novelty, event camera datasets in driving
scenarios are rare. This work presents the first high-resolution, large-scale
stereo dataset with event cameras. The dataset contains 53 sequences collected
by driving in a variety of illumination conditions and provides ground truth
disparity for the development and evaluation of event-based stereo algorithms.
Comment: IEEE Robotics and Automation Letters
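Since the dataset provides ground-truth disparity from a rectified wide-baseline stereo rig, a typical first step when evaluating stereo output is converting disparity to metric depth via depth = focal length × baseline / disparity. The sketch below shows that relation; the calibration values are placeholders, not DSEC's actual parameters.

```python
# Sketch: converting a disparity map to metric depth for a rectified stereo
# pair. focal_length_px and baseline_m are illustrative placeholders, not the
# dataset's actual calibration values.
import numpy as np

def disparity_to_depth(disparity, focal_length_px=560.0, baseline_m=0.6):
    """depth = f * B / d; invalid (zero) disparities map to infinity."""
    disparity = np.asarray(disparity, dtype=np.float32)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth
```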
An Asynchronous Kalman Filter for Hybrid Event Cameras
Event cameras are ideally suited to capture HDR visual information without
blur but perform poorly on static or slowly changing scenes. Conversely,
conventional image sensors measure absolute intensity of slowly changing scenes
effectively but do poorly on high dynamic range or quickly changing scenes. In
this paper, we present an event-based video reconstruction pipeline for High
Dynamic Range (HDR) scenarios. The proposed algorithm includes a frame
augmentation pre-processing step that deblurs and temporally interpolates frame
data using events. The augmented frame and event data are then fused using a
novel asynchronous Kalman filter under a unifying uncertainty model for both
sensors. Our experimental results are evaluated on both publicly available
datasets with challenging lighting conditions and fast motions and our new
dataset with HDR reference. The proposed algorithm outperforms state-of-the-art
methods in both absolute intensity error (48% reduction) and image similarity
indexes (average 11% improvement).
Comment: 12 pages, 6 figures, published in International Conference on
Computer Vision (ICCV) 2021
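As a rough illustration of the fusion idea (not the paper's actual filter or its uncertainty model), the sketch below runs a scalar Kalman filter per pixel: events drive the prediction step by adding signed contrast steps and inflating the variance, while frame samples act as noisy measurements in the update step. All noise parameters are assumptions.

```python
# Minimal per-pixel sketch of fusing frame and event data with a scalar Kalman
# filter. Illustrative toy only; noise parameters and the constant contrast
# threshold are assumptions, not the paper's unifying uncertainty model.
class PixelKalmanFilter:
    def __init__(self, init_log_intensity=0.0, init_var=1.0,
                 event_noise=0.01, frame_noise=0.05, contrast_threshold=0.2):
        self.x = init_log_intensity   # state: log intensity at this pixel
        self.P = init_var             # state variance
        self.q = event_noise          # process noise added per event
        self.r = frame_noise          # measurement noise of a frame sample
        self.c = contrast_threshold

    def on_event(self, polarity):
        # Predict: each event shifts the state by one signed contrast step
        # and inflates the uncertainty.
        self.x += polarity * self.c
        self.P += self.q

    def on_frame(self, log_intensity):
        # Update: treat the (augmented) frame value as a noisy measurement.
        K = self.P / (self.P + self.r)        # Kalman gain
        self.x += K * (log_intensity - self.x)
        self.P *= (1.0 - K)
```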
Event Camera Calibration of Per-pixel Biased Contrast Threshold
Event cameras output asynchronous events to represent intensity changes with a high temporal resolution, even under extreme lighting conditions. Currently, most existing works use a single contrast threshold to estimate the intensity change of all pixels. However, complex circuit bias and manufacturing imperfections cause biased pixels and mismatched contrast thresholds among pixels, which may lead to undesirable outputs. In this paper, we propose a new event camera model and two calibration approaches, which cover event-only cameras and hybrid image-event cameras. When intensity images are simultaneously provided along with events, we also propose an efficient online method to calibrate event cameras that adapts to time-varying event rates. We demonstrate the advantages of our proposed methods compared to the state-of-the-art on several different event camera datasets.
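To make the role of a per-pixel contrast threshold concrete, the sketch below shows a simple least-squares baseline (not the calibration method proposed in the paper): between two frames, the net signed event count n at a pixel should satisfy c · n ≈ log(I_next) − log(I_prev), so c can be fitted per pixel over many frame pairs.

```python
# Sketch of a per-pixel contrast-threshold estimate when frames accompany
# events. Illustrative baseline only, not the paper's calibration approaches.
import numpy as np

def estimate_contrast_thresholds(frame_pairs, net_event_counts, eps=1e-3):
    """frame_pairs: list of (prev, next) intensity images;
    net_event_counts: per-pixel signed event counts between each pair.
    Returns the least-squares fit of c in  c * n = log(next) - log(prev)."""
    num = np.zeros_like(frame_pairs[0][0], dtype=np.float64)
    den = np.zeros_like(num)
    for (prev, nxt), n in zip(frame_pairs, net_event_counts):
        dlog = np.log(nxt + eps) - np.log(prev + eps)
        num += dlog * n
        den += n * n
    # Pixels that never fired events keep a threshold of zero (uncalibrated).
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```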
Asynchronous Spatial Image Convolutions for Event Cameras
Spatial convolution is arguably the most fundamental of two-dimensional image processing operations. Conventional spatial image convolution can only be applied to a conventional image, that is, an array of pixel values (or similar image representation) that are associated with a single instant in time. Event cameras have serial, asynchronous output with no natural notion of an image frame, and each event arrives with a different timestamp. In this letter, we propose a method to compute the convolution of a linear spatial kernel with the output of an event camera. The approach operates on the event stream output of the camera directly, without synthesising pseudo-image frames as is common in the literature. The key idea is the introduction of an internal state that directly encodes the convolved image information, which is updated asynchronously as each event arrives from the camera. The state can be read off as often as, and whenever, required for use in higher-level vision algorithms for real-time robotic systems. We demonstrate the application of our method to corner detection, providing an implementation of a Harris corner-response “state” that can be used in real time for feature detection and tracking on robotic systems.
This work was supported in part by the Australian Government Research Training Program Scholarship and in part by the Australian Research Council through the “Australian Centre of Excellence for Robotic Vision” under Grant CE140100016.
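The core update described above can be sketched as follows: maintain an internal convolved-image state and, for each incoming event, add the kernel (scaled by the signed contrast step) centred on the event pixel; the state can then be read off at any time. Border handling is simplified, the temporal aspects of the letter's formulation are omitted, and the constant contrast threshold and odd-sized kernel are assumptions.

```python
# Sketch of an asynchronously updated "convolved image" state. Illustrative
# only: no temporal decay, constant contrast threshold, odd-sized kernel.
import numpy as np

class AsyncConvolutionState:
    def __init__(self, height, width, kernel, contrast_threshold=0.2):
        self.state = np.zeros((height, width), dtype=np.float32)
        self.kernel = np.asarray(kernel, dtype=np.float32)
        self.c = contrast_threshold
        kh, kw = self.kernel.shape
        self.rh, self.rw = kh // 2, kw // 2   # kernel half-sizes

    def on_event(self, x, y, polarity):
        # Add the kernel, scaled by the signed contrast step, centred on the
        # event pixel; clip the footprint to the image bounds.
        h, w = self.state.shape
        y0, y1 = max(y - self.rh, 0), min(y + self.rh + 1, h)
        x0, x1 = max(x - self.rw, 0), min(x + self.rw + 1, w)
        ky0, kx0 = y0 - (y - self.rh), x0 - (x - self.rw)
        self.state[y0:y1, x0:x1] += (
            polarity * self.c
            * self.kernel[ky0:ky0 + (y1 - y0), kx0:kx0 + (x1 - x0)]
        )

    def read(self):
        # Read off the current convolved-image state whenever required.
        return self.state.copy()
```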