Scalable Estimation of Precision Maps in a MapReduce Framework
This paper presents a large-scale strip adjustment method for LiDAR mobile
mapping data, yielding highly precise maps. It uses several concepts to achieve
scalability. First, an efficient graph-based pre-segmentation is used, which
directly operates on LiDAR scan strip data, rather than on point clouds.
Second, observation equations are obtained from a dense matching, which is
formulated in terms of an estimation of a latent map. As a result of this
formulation, the number of observation equations is not quadratic, but rather
linear in the number of scan strips. Third, the dynamic Bayes network, which
results from all observation and condition equations, is partitioned into two
sub-networks. Consequently, the estimation matrices for all position and
orientation corrections are linear instead of quadratic in the number of
unknowns and can be solved very efficiently using an alternating least squares
approach. It is shown how this approach can be mapped to a standard key/value
MapReduce implementation, where each of the processing nodes operates
independently on small chunks of data, leading to essentially linear
scalability. Results are demonstrated for a dataset of one billion measured
LiDAR points and 278,000 unknowns, leading to maps with a precision of a few
millimeters.
Comment: ACM SIGSPATIAL'16, October 31-November 03, 2016, Burlingame, CA, US
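As an illustration only (not the paper's formulation or code), the Python sketch below shows the alternating least squares idea on a drastically simplified model: each observation z_ij = m_j + c_i couples a per-strip offset correction c_i with a latent map value m_j, and each half-step solves one block of unknowns independently per key, which is what makes a key/value MapReduce mapping natural.

```python
# Toy sketch of alternating least squares with MapReduce-style keying.
# The 1-D observation model and all names are illustrative assumptions,
# not the method of the paper.
from collections import defaultdict
import random

random.seed(0)
n_strips, n_cells = 5, 20
true_c = [random.uniform(-0.05, 0.05) for _ in range(n_strips)]   # strip offsets
true_m = [random.uniform(0.0, 1.0) for _ in range(n_cells)]       # latent map
# observations: (strip i, map cell j, measured value)
obs = [(i, j, true_m[j] + true_c[i] + random.gauss(0, 0.002))
       for i in range(n_strips) for j in range(n_cells)]

c = [0.0] * n_strips   # unknown block 1: per-strip corrections
m = [0.0] * n_cells    # unknown block 2: latent map values

for _ in range(20):
    # "map" phase: key residuals by strip, solve each strip correction independently
    by_strip = defaultdict(list)
    for i, j, z in obs:
        by_strip[i].append(z - m[j])
    for i, res in by_strip.items():
        c[i] = sum(res) / len(res)          # closed-form least-squares solution
    # "reduce" phase: key residuals by map cell, solve each cell independently
    by_cell = defaultdict(list)
    for i, j, z in obs:
        by_cell[j].append(z - c[i])
    for j, res in by_cell.items():
        m[j] = sum(res) / len(res)

# compare up to the common gauge (a constant can move between c and m)
print("max relative map error:",
      max(abs((m[j] - m[0]) - (true_m[j] - true_m[0])) for j in range(n_cells)))
```

Each key (a strip or a map cell) touches only a small chunk of observations, so the per-key solves can run on independent worker nodes, which is what yields the essentially linear scalability reported above.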
A Robust RGB-D SLAM System for 3D Environment with Planar Surfaces
Simultaneous localization and mapping (SLAM) is the technique used to construct a 3D map of an unknown environment. With the increasing popularity of RGB-depth (RGB-D) sensors such as the Microsoft Kinect, there has been much research on capturing and reconstructing 3D environments with a movable RGB-D sensor. The key process behind these SLAM systems is the iterative closest point (ICP) algorithm, an iterative method that estimates the rigid motion of the camera from the captured 3D point clouds. While ICP is a well-studied algorithm, it becomes problematic when scanning large planar regions such as the wall surfaces of a room: the lack of depth variation on planar surfaces makes the global alignment an ill-conditioned problem. In this thesis, we present a novel approach for registering 3D point clouds that combines both color and depth information. Instead of directly searching for point correspondences among the 3D data, the proposed method first extracts features from the RGB images and then back-projects them into 3D space to identify more reliable correspondences. These color correspondences form the initial input to the ICP procedure, which then refines the alignment. Experimental results show that the proposed approach achieves better accuracy than existing SLAM systems when reconstructing indoor environments with large planar surfaces.
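A rough sketch of the described pipeline is given below, with OpenCV and Open3D as stand-in libraries: ORB features are matched between two RGB frames, back-projected to 3D using the depth images and assumed Kinect-like intrinsics, and the resulting correspondences seed a standard ICP refinement. Variable names, intrinsics, and parameters are placeholders, not the thesis code.

```python
# Sketch only: colour-feature correspondences used to initialise ICP.
import numpy as np
import cv2
import open3d as o3d

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # assumed Kinect-like intrinsics

def backproject(kp, depth):
    """Back-project a 2D keypoint into 3D using the depth image (metres)."""
    u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
    z = float(depth[v, u])
    if z <= 0:
        return None
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def initial_transform(rgb1, depth1, rgb2, depth2):
    """Estimate an initial rigid transform from matched RGB features."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(rgb1, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(rgb2, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1, p2 = [], []
    for mt in matches:
        a = backproject(k1[mt.queryIdx], depth1)
        b = backproject(k2[mt.trainIdx], depth2)
        if a is not None and b is not None:
            p1.append(a)
            p2.append(b)
    src, dst = o3d.geometry.PointCloud(), o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(np.asarray(p1))
    dst.points = o3d.utility.Vector3dVector(np.asarray(p2))
    corres = o3d.utility.Vector2iVector(np.array([[i, i] for i in range(len(p1))]))
    est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
    return est.compute_transformation(src, dst, corres)

def refine_with_icp(source_cloud, target_cloud, init):
    """Dense ICP refinement seeded by the colour-based initial guess."""
    result = o3d.pipelines.registration.registration_icp(
        source_cloud, target_cloud, 0.05, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

Seeding ICP this way matters precisely in the planar case described above: the colour correspondences constrain the in-plane degrees of freedom that depth-only ICP cannot.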
Distributed 3D TSDF Manifold Mapping for Multi-Robot Systems
This paper presents a new method to perform collaborative real-time dense 3D mapping in a distributed way for a multi-robot system. The method associates a Truncated Signed Distance Function (TSDF) representation with a manifold structure. Each robot owns a private map composed of a collection of local TSDF sub-maps, called patches, that are locally consistent. This private map can be shared to build a public map collecting all the patches created by the robots of the fleet. In order to maintain consistency in the global map, a mechanism of patch alignment and fusion has been added. This work has been integrated in real time into a mapping stack that can be used for autonomous navigation in unknown and cluttered environments. Experimental results on a team of wheeled mobile robots are reported to demonstrate the practical interest of the proposed system, in particular for the exploration of unknown areas.
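The core data structure behind such patches can be illustrated with a minimal TSDF update; the sketch below uses a simple running-weighted-average fusion with illustrative class and parameter names, and is not the paper's implementation.

```python
# Minimal sketch of a TSDF sub-map ("patch") and its fusion; names and the
# plain weighted-average scheme are assumptions, not the paper's code.
import numpy as np

TRUNCATION = 0.1  # truncation distance in metres (assumed)

class TsdfPatch:
    """A small, locally consistent TSDF sub-map stored as dense voxel grids."""
    def __init__(self, shape=(64, 64, 64)):
        self.tsdf = np.ones(shape, dtype=np.float32)     # normalised signed distance
        self.weight = np.zeros(shape, dtype=np.float32)  # integration weight

    def integrate(self, idx, signed_dist, obs_weight=1.0):
        """Fuse one observed signed distance into voxel idx (running weighted mean)."""
        d = np.clip(signed_dist / TRUNCATION, -1.0, 1.0)
        w = self.weight[idx]
        self.tsdf[idx] = (w * self.tsdf[idx] + obs_weight * d) / (w + obs_weight)
        self.weight[idx] = w + obs_weight

def fuse_patches(a: TsdfPatch, b: TsdfPatch) -> TsdfPatch:
    """Fuse two already-aligned patches of equal shape by weighted averaging,
    as needed when a private patch is merged into the shared public map."""
    out = TsdfPatch(a.tsdf.shape)
    w = a.weight + b.weight
    nz = w > 0
    out.tsdf[nz] = (a.weight[nz] * a.tsdf[nz] + b.weight[nz] * b.tsdf[nz]) / w[nz]
    out.weight = w
    return out
```

Because each patch is small and locally consistent, robots can exchange patches rather than raw point clouds, which keeps the shared public map compact.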
Creating Simplified 3D Models with High Quality Textures
This paper presents an extension to the KinectFusion algorithm which allows
creating simplified 3D models with high quality RGB textures. This is achieved
through (i) creating model textures using images from an HD RGB camera that is
calibrated with the Kinect depth camera, (ii) using a modified scheme to update
model textures in an asymmetrical colour volume that contains a higher number
of voxels than that of the geometry volume, (iii) simplifying the dense polygon
mesh model using a quadric-based mesh decimation algorithm, and (iv) creating and
mapping 2D textures to every polygon in the output 3D model. The proposed
method is implemented in real-time by means of GPU parallel processing.
Visualization via ray casting of both geometry and colour volumes provides
users with real-time feedback on the currently scanned 3D model. Experimental
results show that the proposed method is capable of keeping the model texture
quality even for a heavily decimated model and that, when reconstructing small
objects, photorealistic RGB textures can still be reconstructed.
Comment: 2015 International Conference on Digital Image Computing: Techniques
and Applications (DICTA), Page 1 -
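For illustration, step (iii) alone can be reproduced with an off-the-shelf quadric edge-collapse simplification, for instance Open3D's implementation; the file names and triangle budget below are placeholders, and the texture-related steps (i), (ii), and (iv) are not covered by this sketch.

```python
# Sketch of quadric-based mesh decimation only (step iii); paths and the
# target triangle count are illustrative placeholders.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("dense_reconstruction.ply")  # placeholder path
mesh.compute_vertex_normals()

# Quadric edge collapse: each vertex accumulates an error quadric and edges
# are collapsed in order of increasing quadric error until the budget is met.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=10000)
simplified.compute_vertex_normals()

o3d.io.write_triangle_mesh("simplified_reconstruction.ply", simplified)
print(f"triangles: {len(mesh.triangles)} -> {len(simplified.triangles)}")
```

The point of the paper is that texture quality survives such aggressive decimation because 2D textures are created for and mapped to every polygon of the simplified model (step iv).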
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists in the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
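For readers unfamiliar with it, the de-facto standard formulation referred to in the abstract is maximum a posteriori estimation over a factor graph; a compact statement, assuming Gaussian measurement models (generic notation, not taken verbatim from the paper), is:

```latex
% MAP estimation over a factor graph: X collects robot poses and map variables,
% Z = {z_k} the measurements, h_k the measurement models, Omega_k the
% information matrices of the (assumed Gaussian) measurement noise.
\mathcal{X}^{\star}
  = \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
  = \arg\max_{\mathcal{X}} \, p(\mathcal{X}) \prod_{k} p(z_k \mid \mathcal{X}_k)
  = \arg\min_{\mathcal{X}} \, \sum_{k} \big\| h_k(\mathcal{X}_k) - z_k \big\|^{2}_{\Omega_k}
```

The nonlinear least-squares problem on the last line is what modern SLAM back-ends solve, typically with sparse Gauss-Newton or Levenberg-Marquardt iterations.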
Depth sensors in augmented reality solutions. Literature review
The emergence of depth sensors has made it possible to track not only monocular
cues but also the actual depth values of the environment. This is especially
useful in augmented reality solutions, where the position and orientation (pose)
of the observer need to be accurately determined. This allows virtual objects to
be overlaid on the user's view through, for example, the screen of a tablet or
augmented reality glasses (e.g. Google Glass). Although the early 3D sensors have
been physically quite large, the size of these sensors is decreasing, and
eventually a 3D sensor could perhaps be embedded in, for example, augmented
reality glasses. The wider subject area considered in this review is 3D SLAM
methods, which take advantage of the 3D information provided by modern RGB-D
sensors such as the Microsoft Kinect. A review of SLAM (Simultaneous Localization
and Mapping) and 3D tracking in augmented reality is therefore a timely subject.
We also try to identify the limitations and possibilities of different tracking
methods, and how they should be improved, in order to allow efficient integration
of these methods into the augmented reality solutions of the future.