LDSO: Direct Sparse Odometry with Loop Closure
In this paper we present an extension of Direct Sparse Odometry (DSO) to a
monocular visual SLAM system with loop closure detection and pose-graph
optimization (LDSO). As a direct technique, DSO can utilize any image pixel
with sufficient intensity gradient, which makes it robust even in featureless
areas. LDSO retains this robustness, while at the same time ensuring
repeatability of some of these points by favoring corner features in the
tracking frontend. This repeatability makes it possible to reliably detect loop closure
candidates with a conventional feature-based bag-of-words (BoW) approach. Loop
closure candidates are verified geometrically and Sim(3) relative pose
constraints are estimated by jointly minimizing 2D and 3D geometric error
terms. These constraints are fused with a co-visibility graph of relative poses
extracted from DSO's sliding window optimization. Our evaluation on publicly
available datasets demonstrates that the modified point selection strategy
retains the tracking accuracy and robustness, and the integrated pose-graph
optimization significantly reduces the accumulated rotation-, translation- and
scale-drift, resulting in an overall performance comparable to state-of-the-art
feature-based systems, even without global bundle adjustment.
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz), resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as low-latency sensing, high-speed motion, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, through the actual sensors
that are available, to the tasks they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
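The event stream the abstract describes, a sequence of (time, location, sign) tuples rather than frames, can be modeled minimally as follows. The field names, sensor resolution, and the fixed accumulation window are illustrative assumptions; accumulating polarities into a 2D "event frame" is just one of several event representations surveyed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float   # timestamp in seconds (microsecond resolution on real sensors)
    x: int     # pixel column
    y: int     # pixel row
    p: int     # polarity: +1 brightness increase, -1 decrease

def accumulate(events, width, height, t0, t1):
    """Sum event polarities per pixel over [t0, t1), producing a simple
    2D event frame that frame-based algorithms can consume."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        if t0 <= e.t < t1:
            frame[e.y][e.x] += e.p
    return frame

events = [Event(0.0000, 1, 0, +1),
          Event(0.0005, 1, 0, +1),
          Event(0.0007, 0, 1, -1),
          Event(0.0020, 1, 1, +1)]   # falls outside the window below
frame = accumulate(events, width=2, height=2, t0=0.0, t1=0.001)
print(frame)  # -> [[0, 2], [-1, 0]]
```

Because events arrive asynchronously, the window length trades latency against signal: a shorter window preserves the sensor's microsecond timing, while a longer one gathers more events per pixel.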