The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM
New vision sensors, such as the Dynamic and Active-pixel Vision sensor
(DAVIS), incorporate a conventional global-shutter camera and an event-based
sensor in the same pixel array. These sensors have great potential for
high-speed robotics and computer vision because they allow us to combine the
benefits of conventional cameras with those of event-based sensors: low
latency, high temporal resolution, and very high dynamic range. However, new
algorithms are required to exploit the sensor characteristics and cope with its
unconventional output, which consists of a stream of asynchronous brightness
changes (called "events") and synchronous grayscale frames. For this purpose,
we present and release a collection of datasets captured with a DAVIS in a
variety of synthetic and real environments, which we hope will motivate
research on new algorithms for high-speed and high-dynamic-range robotics and
computer-vision applications. In addition to global-shutter intensity images
and asynchronous events, we provide inertial measurements and ground-truth
camera poses from a motion-capture system. The latter enables quantitative
evaluation of the pose accuracy of ego-motion estimation algorithms. All the data are
released both as standard text files and binary files (i.e., rosbag). This
paper provides an overview of the available data and describes a simulator that
we release open-source to create synthetic event-camera data.
Comment: 7 pages, 4 figures, 3 tables
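As a concrete illustration of the released data, the following is a minimal sketch of loading such an event stream from a plain-text file, assuming one event per line in the form "timestamp x y polarity"; the file name events.txt and the exact column order are assumptions based on common event-camera dataset conventions, not details stated in the abstract above.

    # Minimal sketch: load a plain-text event stream (assumed layout: t x y polarity).
    import numpy as np

    def load_events(path="events.txt"):
        # Columns (assumed): t [s], x [px], y [px], polarity (0 or 1)
        data = np.loadtxt(path)
        t = data[:, 0]
        x = data[:, 1].astype(int)
        y = data[:, 2].astype(int)
        p = data[:, 3].astype(int)
        return t, x, y, p

    if __name__ == "__main__":
        t, x, y, p = load_events()
        print(f"{len(t)} events spanning {t[-1] - t[0]:.3f} s")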
Real-Time Panoramic Tracking for Event Cameras
Event cameras are a paradigm shift in camera technology. Instead of full
frames, the sensor captures a sparse set of events caused by intensity changes.
Since only the changes are transferred, those cameras are able to capture quick
movements of objects in the scene or of the camera itself. In this work we
propose a novel method to perform camera tracking of event cameras in a
panoramic setting with three degrees of freedom. We propose a direct camera
tracking formulation, similar to state-of-the-art methods in visual odometry. We show
that the minimal information needed for simultaneous tracking and mapping is
the spatial position of events, without using the appearance of the imaged
scene point. We verify the robustness to fast camera movements and dynamic
objects in the scene on a recently proposed dataset and self-recorded
sequences.
Comment: Accepted to the International Conference on Computational Photography 2017
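To make the three-degree-of-freedom panoramic setting concrete, here is a hedged sketch (not the paper's implementation) of mapping a single event onto an equirectangular panorama given a rotation-only camera pose; the intrinsics, map size, and axis conventions are illustrative assumptions.

    # Sketch: back-project an event pixel to a bearing, rotate it by the current
    # camera orientation R, and map it to equirectangular panorama coordinates.
    import numpy as np

    def event_to_panorama(x, y, R, fx=200.0, fy=200.0, cx=120.0, cy=90.0,
                          pano_w=1024, pano_h=512):
        b = np.array([(x - cx) / fx, (y - cy) / fy, 1.0])
        b /= np.linalg.norm(b)                 # unit bearing in the camera frame
        w = R @ b                              # bearing in the fixed panorama frame
        lon = np.arctan2(w[0], w[2])           # longitude in [-pi, pi]
        lat = np.arcsin(np.clip(w[1], -1, 1))  # latitude in [-pi/2, pi/2]
        u = (lon / (2 * np.pi) + 0.5) * pano_w
        v = (lat / np.pi + 0.5) * pano_h
        return u, v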
High Speed Event Camera TRacking
Event cameras are bioinspired sensors with reaction times in the order of
microseconds. This property makes them appealing for use in highly-dynamic
computer vision applications. In this work, we explore the limits of this
sensing technology and present an ultra-fast tracking algorithm able to
estimate six-degree-of-freedom motion with dynamics over 25.8 g, at a
throughput of 10 kHz, processing over a million events per second. Our method is
capable of tracking either camera motion or the motion of an object in front of
it, using an error-state Kalman filter formulated in a Lie-theoretic sense. The
method includes a robust mechanism for the matching of events with projected
line segments with very fast outlier rejection. Meticulous treatment of sparse
matrices is applied to achieve real-time performance. Different motion models
of varying complexity are considered for the sake of comparison and performance
analysis.
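As an illustration of the event-to-segment matching step mentioned above, the sketch below gates one event against a set of projected line segments using the point-to-segment distance and a fixed pixel threshold; the segment representation and the gate value are assumptions, not the paper's actual formulation.

    # Sketch: nearest projected segment for an event, with a simple distance gate.
    import numpy as np

    def match_event_to_segments(ev_xy, segments, gate_px=3.0):
        """Return the index of the nearest segment, or -1 if the event is rejected.
        segments: (N, 4) array of endpoints (x1, y1, x2, y2)."""
        p = np.asarray(ev_xy, dtype=float)
        a, b = segments[:, :2], segments[:, 2:]
        ab = b - a
        # Projection parameter of the event onto each segment, clamped to [0, 1].
        t = np.clip(np.einsum("ij,ij->i", p - a, ab) /
                    np.maximum(np.einsum("ij,ij->i", ab, ab), 1e-12), 0.0, 1.0)
        closest = a + t[:, None] * ab
        d = np.linalg.norm(p - closest, axis=1)
        best = int(np.argmin(d))
        return best if d[best] < gate_px else -1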
Event-based Motion Segmentation with Spatio-Temporal Graph Cuts
Identifying independently moving objects is an essential task for dynamic
scene understanding. However, traditional cameras used in dynamic scenes may
suffer from motion blur or exposure artifacts due to their sampling principle.
By contrast, event-based cameras are novel bio-inspired sensors that offer
advantages to overcome such limitations. They report pixelwise intensity
changes asynchronously, which enables them to acquire visual information at
exactly the same rate as the scene dynamics. We develop a method to identify
independently moving objects acquired with an event-based camera, i.e., to
solve the event-based motion segmentation problem. We cast the problem as an
energy minimization one involving the fitting of multiple motion models. We
jointly solve two subproblems, namely event cluster assignment (labeling) and
motion model fitting, in an iterative manner by exploiting the structure of the
input event data in the form of a spatio-temporal graph. Experiments on
available datasets demonstrate the versatility of the method in scenes with
different motion patterns and numbers of moving objects. The evaluation shows
state-of-the-art results without having to predetermine the number of expected
moving objects. We release the software and dataset under an open source
licence to foster research in the emerging topic of event-based motion
segmentation.
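To illustrate the iterative structure only (not the paper's actual energy, which combines event warping, contrast, and graph cuts on a spatio-temporal graph), here is a toy alternation between event labeling and motion-model fitting in which each model is simplified to a constant 2-D image velocity.

    # Toy alternation: assign events to motion models, refit each model, repeat.
    import numpy as np

    def segment_events(xy, t, K=2, iters=10, rng=np.random.default_rng(0)):
        """xy: (N, 2) event pixel coordinates, t: (N,) timestamps."""
        labels = rng.integers(0, K, size=len(t))
        V = np.zeros((K, 2))
        for _ in range(iters):
            # Model fitting: least-squares velocity so each cluster's warped events collapse.
            for k in range(K):
                m = labels == k
                if m.sum() < 2:
                    continue
                dt = t[m] - t[m].mean()
                V[k] = (xy[m] - xy[m].mean(0)).T @ dt / np.maximum(dt @ dt, 1e-12)
            # Labeling: assign each event to the model that best compensates its motion.
            cost = np.empty((K, len(t)))
            for k in range(K):
                warped = xy - t[:, None] * V[k]   # events warped to t = 0 under model k
                m = labels == k
                centroid = warped[m].mean(0) if m.any() else warped.mean(0)
                cost[k] = np.linalg.norm(warped - centroid, axis=1)
            labels = cost.argmin(0)
        return labels, V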
Event-aided Direct Sparse Odometry
We introduce EDS, a direct monocular visual odometry method that uses events and frames.
Our algorithm leverages the event generation model to track the camera motion
in the blind time between frames. The method formulates a direct probabilistic
approach of observed brightness increments. Per-pixel brightness increments are
predicted using a sparse set of selected 3D points and are compared to the
events via the brightness increment error to estimate camera motion. The method
recovers a semi-dense 3D map using photometric bundle adjustment. EDS is the
first method to perform 6-DOF VO using events and frames with a direct
approach. By design, it overcomes the problem of changing appearance in
indirect methods. We also show that, for a target error performance, EDS can
work at lower frame rates than state-of-the-art frame-based VO solutions. This
opens the door to low-power motion-tracking applications where frames are
sparingly triggered "on demand" and our method tracks the motion in between. We
release code and datasets to the public.
Comment: 16 pages, 14 figures. Page: https://rpg.ifi.uzh.ch/ed
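The event-generation-model side of such a brightness-increment comparison can be sketched as follows: over a time window, the increment at a pixel is approximately the contrast threshold times the signed event count there, and predicted and measured increment images are compared after normalization. The threshold value, image size, and normalization are illustrative assumptions, not EDS's actual parameters.

    # Sketch: event-based brightness-increment image and a normalized comparison.
    import numpy as np

    def event_brightness_increment(x, y, p, H=180, W=240, C=0.2):
        """x, y: integer pixel coordinates; p: polarities in {0, 1}."""
        dL = np.zeros((H, W))
        np.add.at(dL, (y, x), C * (2 * p.astype(float) - 1.0))  # +C or -C per event
        return dL

    def increment_error(predicted, measured, eps=1e-8):
        # Normalizing both images makes the error insensitive to the unknown scale of C.
        a = predicted / (np.linalg.norm(predicted) + eps)
        b = measured / (np.linalg.norm(measured) + eps)
        return float(np.linalg.norm(a - b))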
The Event-Driven Software Library for YARP, with Algorithms and iCub Applications
Event-driven (ED) cameras are an emerging technology that sample the visual signal based on changes in the signal magnitude, rather than at a fixed rate over time. This change in paradigm results in a camera with lower latency, lower power consumption, reduced bandwidth, and higher dynamic range. Such cameras offer many potential advantages for online, autonomous robots; however, the sensor data do not directly integrate with current "image-based" frameworks and software libraries. The iCub robot uses Yet Another Robot Platform (YARP) as middleware to provide modular processing and connectivity to sensors and actuators. This paper introduces a library that incorporates an event-based framework into the YARP architecture, allowing event cameras to be used with the iCub (and other YARP-based) robots. We describe the philosophy and methods for structuring events to facilitate processing, while maintaining low latency and real-time operation. We also describe several processing modules made available open-source, and three example demonstrations that can be run on the neuromorphic iCub.
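As a rough illustration of what structuring events for low-latency transport over middleware can look like, the sketch below packs an address event (x, y, polarity) and its timestamp into two fixed-width words; the bit layout is an assumption for illustration, not the library's actual encoding.

    # Sketch: pack/unpack an address event into two little-endian 32-bit words.
    import struct

    def pack_event(ts_us: int, x: int, y: int, polarity: int) -> bytes:
        ae = (polarity & 0x1) | ((x & 0x3FF) << 1) | ((y & 0x3FF) << 11)
        return struct.pack("<II", ts_us & 0xFFFFFFFF, ae)

    def unpack_event(buf: bytes):
        ts_us, ae = struct.unpack("<II", buf)
        return ts_us, (ae >> 1) & 0x3FF, (ae >> 11) & 0x3FF, ae & 0x1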
A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation
We present a unifying framework to solve several computer vision problems
with event cameras: motion, depth and optical flow estimation. The main idea of
our framework is to find the point trajectories on the image plane that are
best aligned with the event data by maximizing an objective function: the
contrast of an image of warped events. Our method implicitly handles data
association between the events, and therefore, does not rely on additional
appearance information about the scene. In addition to accurately recovering
the motion parameters of the problem, our framework produces motion-corrected
edge-like images with high dynamic range that can be used for further scene
analysis. The proposed method is not only simple, but more importantly, it is,
to the best of our knowledge, the first method that can be successfully applied
to such a diverse set of important vision tasks with event cameras.
Comment: 16 pages, 16 figures. Video: https://youtu.be/KFMZFhi-9A
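A minimal sketch of the contrast-maximization idea, assuming a single constant image velocity as the motion model and a plain grid search over candidates: warp the events to a reference time, accumulate an image of warped events, and score the candidate by the image's variance (its contrast). Image size and the candidate set are illustrative assumptions.

    # Sketch: score a candidate velocity by the contrast of its image of warped events.
    import numpy as np

    def contrast_of_warped_events(x, y, t, v, H=180, W=240):
        t_ref = t[0]
        xw = np.round(x - v[0] * (t - t_ref)).astype(int)
        yw = np.round(y - v[1] * (t - t_ref)).astype(int)
        ok = (xw >= 0) & (xw < W) & (yw >= 0) & (yw < H)
        iwe = np.zeros((H, W))
        np.add.at(iwe, (yw[ok], xw[ok]), 1.0)   # image of warped events
        return iwe.var()

    def best_velocity(x, y, t, candidates):
        scores = [contrast_of_warped_events(x, y, t, v) for v in candidates]
        return candidates[int(np.argmax(scores))]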
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?