A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation
We present a unifying framework to solve several computer vision problems
with event cameras: motion, depth and optical flow estimation. The main idea of
our framework is to find the point trajectories on the image plane that are
best aligned with the event data by maximizing an objective function: the
contrast of an image of warped events. Our method implicitly handles data
association between the events, and therefore, does not rely on additional
appearance information about the scene. In addition to accurately recovering
the motion parameters of the problem, our framework produces motion-corrected
edge-like images with high dynamic range that can be used for further scene
analysis. The proposed method is not only simple, but more importantly, it is,
to the best of our knowledge, the first method that can be successfully applied
to such a diverse set of important vision tasks with event cameras.
Comment: 16 pages, 16 figures. Video: https://youtu.be/KFMZFhi-9A
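A minimal sketch of the contrast-maximization idea for the optical-flow case follows, assuming events given as (x, y, t) triples, a constant image-plane velocity as the motion model, image variance as the contrast objective, and a derivative-free optimizer; the function names and these specific choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def image_of_warped_events(events, v, t_ref, shape):
    """Accumulate events warped to t_ref along constant-velocity trajectories."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    # Warp each event back to the reference time along the candidate trajectory.
    xw = x - v[0] * (t - t_ref)
    yw = y - v[1] * (t - t_ref)
    # Accumulate with nearest-pixel voting (bilinear voting is a common refinement).
    xi = np.clip(np.round(xw).astype(int), 0, shape[1] - 1)
    yi = np.clip(np.round(yw).astype(int), 0, shape[0] - 1)
    img = np.zeros(shape)
    np.add.at(img, (yi, xi), 1.0)
    return img

def negative_contrast(v, events, t_ref, shape):
    """Objective: negative variance of the image of warped events."""
    img = image_of_warped_events(events, v, t_ref, shape)
    return -np.var(img)

def estimate_velocity(events, shape, v0=(0.0, 0.0)):
    """Maximize contrast over the candidate image-plane velocity."""
    res = minimize(negative_contrast, np.asarray(v0, dtype=float),
                   args=(events, events[:, 2].min(), shape),
                   method="Nelder-Mead")
    return res.x
```

In practice one would sweep or multi-start the initial velocity, since the contrast landscape can have local maxima; the same loop generalizes to other motion models (e.g., rotation or depth) by changing the warp, which is the unifying aspect the abstract refers to.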
Event-based Motion Segmentation with Spatio-Temporal Graph Cuts
Identifying independently moving objects is an essential task for dynamic
scene understanding. However, traditional cameras used in dynamic scenes may
suffer from motion blur or exposure artifacts due to their sampling principle.
By contrast, event-based cameras are novel bio-inspired sensors that offer
advantages to overcome such limitations. They report pixelwise intensity
changes asynchronously, which enables them to acquire visual information at
exactly the same rate as the scene dynamics. We develop a method to identify
independently moving objects in the data acquired with an event-based camera, i.e., to
solve the event-based motion segmentation problem. We cast the problem as an
energy minimization one involving the fitting of multiple motion models. We
jointly solve two subproblems, namely event cluster assignment (labeling) and
motion model fitting, in an iterative manner by exploiting the structure of the
input event data in the form of a spatio-temporal graph. Experiments on
available datasets demonstrate the versatility of the method in scenes with
different motion patterns and numbers of moving objects. The evaluation shows
state-of-the-art results without having to predetermine the number of expected
moving objects. We release the software and dataset under an open source
licence to foster research in the emerging topic of event-based motion
segmentation.
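A simplified sketch of the alternating scheme described above follows; it reuses image_of_warped_events and estimate_velocity from the previous sketch, replaces the spatio-temporal graph cut with greedy per-event labeling, and assumes each motion model is a constant image-plane velocity, so it illustrates only the iteration structure, not the paper's energy or smoothness terms.

```python
import numpy as np

def fit_models(events, labels, K, shape):
    """Refit each motion model (a 2D velocity) on its currently assigned events."""
    return [estimate_velocity(events[labels == k], shape)
            if np.any(labels == k) else np.zeros(2) for k in range(K)]

def assign_labels(events, models, t_ref, shape):
    """Greedy stand-in for the graph-cut labeling: assign each event to the
    model whose image of warped events best supports it."""
    scores = []
    for v in models:
        img = image_of_warped_events(events, v, t_ref, shape)
        xw = np.clip(np.round(events[:, 0] - v[0] * (events[:, 2] - t_ref)).astype(int),
                     0, shape[1] - 1)
        yw = np.clip(np.round(events[:, 1] - v[1] * (events[:, 2] - t_ref)).astype(int),
                     0, shape[0] - 1)
        scores.append(img[yw, xw])  # event count at each event's warped location
    return np.argmax(np.stack(scores), axis=0)

def segment(events, K, shape, n_iters=10, seed=0):
    """Alternate model fitting and labeling in an EM-style loop."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, K, size=len(events))
    for _ in range(n_iters):
        models = fit_models(events, labels, K, shape)
        labels = assign_labels(events, models, events[:, 2].min(), shape)
    return labels, models
```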