Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments
Existing simultaneous localization and mapping (SLAM) algorithms are not
robust in challenging low-texture environments because only a few salient
features are available. The resulting sparse or semi-dense map also conveys
little information for motion planning. Although some works utilize planes or
scene layout for dense map regularization, they require decent state estimation
from other sources. In this paper, we propose a real-time monocular plane SLAM
to demonstrate that scene understanding can improve both state estimation and
dense mapping, especially in low-texture environments. The plane measurements
come from a pop-up 3D plane model applied to each single image. We also combine
planes with point-based SLAM to improve robustness. On a public TUM dataset,
our algorithm generates a dense semantic 3D model with a pixel depth error of
6.2 cm while existing SLAM algorithms fail. On a 60 m long dataset with loops,
our method creates a much better 3D model with a state estimation error of 0.67%.
Comment: International Conference on Intelligent Robots and Systems (IROS) 201
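The pop-up plane idea above can be illustrated with a minimal sketch: assuming known camera intrinsics, a known camera height above the ground, and a detected ground-wall boundary in the image, each boundary pixel is back-projected onto the ground plane, and a vertical wall plane is fitted through the resulting 3D segment. The function names and the numeric intrinsics below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical pinhole intrinsics (not from the paper's dataset).
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def backproject_to_ground(u, v, K, cam_height):
    """Intersect a pixel ray with the ground plane.

    Camera frame convention: x right, y down, z forward, so the ground
    plane is y = cam_height below the camera centre.
    """
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray through the pixel
    if d[1] <= 0:
        raise ValueError("pixel is above the horizon; ray never hits the ground")
    s = cam_height / d[1]                         # scale so the ray reaches y = cam_height
    return s * d

def wall_plane(p1, p2):
    """Vertical plane through a ground-boundary segment (n, d) with n . x = d."""
    up = np.array([0.0, -1.0, 0.0])               # opposite of gravity (y points down)
    n = np.cross(p2 - p1, up)
    n /= np.linalg.norm(n)
    return n, float(n @ p1)

# Example: a boundary pixel 200 px below the principal point, camera 1 m high.
ground_pt = backproject_to_ground(319.5, 439.5, K, cam_height=1.0)
```

The key assumption, as in the abstract, is that a single image suffices: once the ground boundary is known, the wall geometry "pops up" without any depth sensor.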
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map) and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and a tutorial for users of SLAM. By looking at the
published research with a critical eye, we delineate open challenges and new
research issues that still deserve careful scientific investigation. The paper
also contains the authors' take on two questions that often animate discussions
during robotics conferences: Do robots need SLAM? And is SLAM solved?
Probabilistic Combination of Noisy Points and Planes for RGB-D Odometry
This work proposes a visual odometry method that combines point and plane
primitives extracted from a noisy depth camera. Depth measurement uncertainty
is modelled and propagated through the extraction of geometric primitives to
the frame-to-frame motion estimation, where the pose is optimized by weighting
the residuals of 3D point and plane matches according to their uncertainties.
Results on an RGB-D dataset show that the combination of points and planes,
through the proposed method, is able to perform well in poorly textured
environments, where point-based odometry is bound to fail.
Comment: Accepted to TAROS 201
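The uncertainty-weighted fusion of point and plane residuals can be sketched as a closed-form weighted least squares for the translation component alone (the actual method optimizes the full pose; this simplification and all names below are illustrative assumptions). Each point match contributes its inverse covariance as an information matrix, and each plane match constrains translation along its normal with a weight inversely proportional to its variance.

```python
import numpy as np

def fuse_translation(point_pairs, point_covs, plane_normals, plane_offsets, plane_vars):
    """Weighted least-squares translation from point and plane matches.

    point_pairs   : list of (p, q) 3D matches with model q = p + t
    point_covs    : 3x3 covariance of each point match
    plane_normals : unit normals n_j, each constraining n_j . t = d_j
    plane_offsets : scalar offsets d_j
    plane_vars    : variance of each plane offset measurement
    """
    A = np.zeros((3, 3))      # accumulated information matrix
    b = np.zeros(3)           # accumulated information vector
    for (p, q), S in zip(point_pairs, point_covs):
        W = np.linalg.inv(S)  # inverse covariance = weight of this match
        A += W
        b += W @ (q - p)
    for n, d, var in zip(plane_normals, plane_offsets, plane_vars):
        A += np.outer(n, n) / var   # rank-1 constraint along the normal
        b += n * d / var
    return np.linalg.solve(A, b)
```

A low-variance plane dominates the solution along its normal while leaving the tangential directions to the point matches, which mirrors the abstract's claim that planes carry the estimate where points are scarce.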
Depth sensors in augmented reality solutions. Literature review
The emergence of depth sensors has made it possible to track not only monocular
cues but also the actual depth values of the environment. This is especially
useful in augmented reality solutions, where the position and orientation (pose)
of the observer need to be accurately determined. This allows virtual objects to
be placed in the user's view through, for example, the screen of a tablet or
augmented reality glasses (e.g. Google Glass). Although early 3D sensors have
been physically quite large, the size of these sensors is decreasing, and a 3D
sensor could eventually be embedded, for example, into augmented reality
glasses. The wider subject area considered in this review is 3D SLAM methods,
which take advantage of the 3D information made available by modern RGB-D
sensors such as the Microsoft Kinect. A review of SLAM (Simultaneous
Localization and Mapping) and 3D tracking in augmented reality is thus a timely
subject. We also try to identify the limitations and possibilities of the
different tracking methods, and how they should be improved to allow efficient
integration into the augmented reality solutions of the future.