Semantic 3D Occupancy Mapping through Efficient High Order CRFs
Semantic 3D mapping can be used for many applications such as robot
navigation and virtual interaction. In recent years, there has been great
progress in semantic segmentation and geometric 3D mapping. However, it is
still challenging to combine these two tasks for accurate and large-scale
semantic mapping from images. In this paper, we propose an incremental and
(near) real-time semantic mapping system. A 3D scrolling occupancy grid map is
built to represent the world, which is memory and computationally efficient and
bounded for large scale environments. We utilize the CNN segmentation as prior
prediction and further optimize 3D grid labels through a novel CRF model.
Superpixels are utilized to enforce smoothness and to form robust P^N high-order
potentials. An efficient mean field inference is developed for the graph
optimization. We evaluate our system on the KITTI dataset and improve the
segmentation accuracy by 10% over existing systems.
Comment: IROS 201
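The CRF optimization described above can be illustrated with a minimal mean-field sketch. This is a simplified pairwise Potts model only; the paper's superpixel-based robust P^N high-order term is omitted, and all names and numbers here are hypothetical.

```python
import math

def mean_field(unary, neighbors, w=1.0, iters=10):
    """Naive mean-field inference for a pairwise Potts-model CRF.

    unary[i][l]  : negative log-probability of label l at cell i (e.g. a CNN prior)
    neighbors[i] : indices of cells adjacent to cell i
    w            : smoothness weight encouraging neighbors to agree
    Returns Q, a list of per-cell label distributions.
    """
    n, L = len(unary), len(unary[0])
    # Initialize Q from the unary (CNN prior) via softmax.
    Q = []
    for i in range(n):
        z = [math.exp(-u) for u in unary[i]]
        s = sum(z)
        Q.append([v / s for v in z])
    for _ in range(iters):
        newQ = []
        for i in range(n):
            logits = []
            for l in range(L):
                # Potts pairwise term: penalize disagreement with
                # the current beliefs of neighboring cells.
                pair = sum(w * (1.0 - Q[j][l]) for j in neighbors[i])
                logits.append(-(unary[i][l] + pair))
            m = max(logits)
            z = [math.exp(v - m) for v in logits]
            s = sum(z)
            newQ.append([v / s for v in z])
        Q = newQ
    return Q

# Three cells in a line; the middle cell's CNN prior is ambiguous,
# but smoothness pulls it toward its confident neighbors' label.
unary = [[0.1, 2.0], [1.0, 1.0], [0.1, 2.0]]
neighbors = [[1], [0, 2], [1]]
Q = mean_field(unary, neighbors)
labels = [q.index(max(q)) for q in Q]
print(labels)  # all three cells settle on label 0
```

The smoothing effect is visible in the toy example: the ambiguous middle cell inherits the label its neighbors are confident about, which is exactly what the CRF contributes on top of the raw CNN prediction.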
Dense 3D Object Reconstruction from a Single Depth View
In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs
the complete 3D structure of a given object from a single arbitrary depth view
using generative adversarial networks. Unlike existing work which typically
requires multiple views of the same object or class labels to recover the full
3D geometry, the proposed 3D-RecGAN++ only takes the voxel grid representation
of a depth view of the object as input, and is able to generate the complete 3D
occupancy grid with a high resolution of 256^3 by recovering the
occluded/missing regions. The key idea is to combine the generative
capabilities of autoencoders and the conditional Generative Adversarial
Networks (GAN) framework, to infer accurate and fine-grained 3D structures of
objects in high-dimensional voxel space. Extensive experiments on large
synthetic datasets and real-world Kinect datasets show that the proposed
3D-RecGAN++ significantly outperforms the state of the art in single view 3D
object reconstruction, and is able to reconstruct unseen types of objects.
Comment: TPAMI 2018. Code and data are available at:
https://github.com/Yang7879/3D-RecGAN-extended. This article extends from
arXiv:1708.0796
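The input representation described above can be sketched in a few lines: a single depth view is voxelized into the partial occupancy grid the network consumes. This is a simplified orthographic projection with hypothetical parameters; the actual pipeline uses camera intrinsics, and the network itself (autoencoder plus conditional GAN at 256^3) is far beyond a stdlib sketch.

```python
def depth_to_voxels(depth, res, max_depth):
    """Convert a single HxW depth view into a partial occupancy voxel grid.

    Simplification: orthographic projection with unit intrinsics; each valid
    depth return marks one surface voxel. The completion network would then
    fill in the occluded/missing regions of this grid.
    """
    grid = [[[0] * res for _ in range(res)] for _ in range(res)]
    h, w = len(depth), len(depth[0])
    for v in range(h):
        for u in range(w):
            d = depth[v][u]
            if d <= 0 or d >= max_depth:
                continue  # no return, or out of range
            # Map pixel (u, v) and depth d into voxel indices.
            x = min(res - 1, u * res // w)
            y = min(res - 1, v * res // h)
            z = min(res - 1, int(d / max_depth * res))
            grid[x][y][z] = 1  # observed surface voxel
    return grid

# A 2x2 depth view with two valid returns fills two voxels in a 4^3 grid.
depth = [[1.0, 0.0], [0.0, 3.0]]
vox = depth_to_voxels(depth, res=4, max_depth=4.0)
print(sum(c for plane in vox for row in plane for c in row))  # 2
```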
SkiMap: An Efficient Mapping Framework for Robot Navigation
We present a novel mapping framework for robot navigation which features a
multi-level querying system capable of rapidly obtaining representations as
diverse as a 3D voxel grid, a 2.5D height map and a 2D occupancy grid. These
are inherently embedded into a memory and time efficient core data structure
organized as a Tree of SkipLists. Compared to the well-known Octree
representation, our approach exhibits better time efficiency, thanks to its
simple and highly parallelizable computational structure, and a similar memory
footprint when mapping large workspaces. Peculiarly within the realm of mapping
for robot navigation, our framework supports real-time erosion and
re-integration of measurements upon reception of optimized poses from the
sensor tracker, so as to continuously improve the accuracy of the map.
Comment: Accepted by International Conference on Robotics and Automation
(ICRA) 2017. This is the submitted version. The final published version may
be slightly different.
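The multi-level querying idea above can be sketched with plain dictionaries standing in for the skip lists (which are what give SkiMap ordered traversal and parallel access); the class name and API here are hypothetical, and the point is the three query levels served by one sparse structure.

```python
class SkiMapSketch:
    """Toy stand-in for SkiMap's Tree of SkipLists: a 3-level sparse map
    x -> y -> z -> hit count, answering 3D, 2.5D, and 2D queries."""

    def __init__(self):
        self.root = {}

    def integrate(self, x, y, z):
        col = self.root.setdefault(x, {}).setdefault(y, {})
        col[z] = col.get(z, 0) + 1

    def erode(self, x, y, z):
        # SkiMap can remove a measurement when the sensor tracker
        # delivers an optimized pose for it.
        col = self.root.get(x, {}).get(y, {})
        if z in col:
            col[z] -= 1
            if col[z] == 0:
                del col[z]

    def voxel_grid(self):  # 3D query: all occupied voxels
        return [(x, y, z) for x, ys in self.root.items()
                for y, zs in ys.items() for z in zs]

    def height_map(self):  # 2.5D query: max occupied z per column
        return {(x, y): max(zs) for x, ys in self.root.items()
                for y, zs in ys.items() if zs}

    def occupancy_2d(self):  # 2D query: columns with any hit
        return {(x, y) for x, ys in self.root.items()
                for y, zs in ys.items() if zs}

m = SkiMapSketch()
m.integrate(1, 2, 3)
m.integrate(1, 2, 5)
m.integrate(4, 0, 1)
m.erode(1, 2, 5)  # the pose for this measurement was re-optimized
print(m.height_map())  # {(1, 2): 3, (4, 0): 1}
```

All three representations are derived on demand from the same core structure, which is the property the framework advertises; the real implementation keeps each level sorted for efficient traversal, which dicts do not.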
Topomap: Topological Mapping and Navigation Based on Visual SLAM Maps
Visual robot navigation within large-scale, semi-structured environments
deals with various challenges such as computation-intensive path planning
algorithms or insufficient knowledge about traversable spaces. Moreover, many
state-of-the-art navigation approaches only operate locally instead of gaining
a more conceptual understanding of the planning objective. This limits the
complexity of tasks a robot can accomplish and makes it harder to deal with
uncertainties that are present in the context of real-time robotics
applications. In this work, we present Topomap, a framework which simplifies
the navigation task by providing a map to the robot which is tailored for path
planning use. This novel approach transforms a sparse feature-based map from a
visual Simultaneous Localization And Mapping (SLAM) system into a
three-dimensional topological map. This is done in two steps. First, we extract
occupancy information directly from the noisy sparse point cloud. Then, we
create a set of convex free-space clusters, which are the vertices of the
topological map. We show that this representation improves the efficiency of
global planning, and we provide a complete derivation of our algorithm.
Planning experiments on real world datasets demonstrate that we achieve similar
performance as RRT* with significantly lower computation times and storage
requirements. Finally, we test our algorithm on a mobile robotic platform to
prove its advantages.
Comment: 8 pages
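Planning on the resulting topological map reduces to graph search over a handful of convex free-space clusters rather than over the full voxel grid, which is where the speedup over RRT* comes from. A minimal sketch, with hypothetical cluster names and edge weights standing in for centroid distances:

```python
import heapq

def plan_on_topomap(edges, start, goal):
    """Dijkstra over a topological map whose vertices are convex
    free-space clusters and whose edge weights approximate the
    distance between adjacent clusters."""
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d  # cheapest path cost through the clusters
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float('inf')  # goal cluster unreachable

# Hypothetical clusters A-D extracted from a sparse SLAM point cloud.
edges = [('A', 'B', 2.0), ('B', 'C', 2.0), ('A', 'D', 5.0), ('D', 'C', 0.5)]
print(plan_on_topomap(edges, 'A', 'C'))  # 4.0, via A-B-C
```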
Using Lidar Intensity for Robot Navigation
We present Multi-Layer Intensity Map, a novel 3D object representation for
robot perception and autonomous navigation. Intensity maps consist of multiple
stacked layers of 2D grid maps each derived from reflected point cloud
intensities corresponding to a certain height interval. The different layers of
intensity maps can be used to simultaneously estimate obstacles' height,
solidity/density, and opacity. We demonstrate that intensity maps can help
accurately differentiate obstacles that are safe to navigate through (e.g.
beaded/string curtains, pliable tall grass), from ones that must be avoided
(e.g. transparent surfaces such as glass walls, bushes, trees, etc.) in indoor
and outdoor environments. Further, to handle narrow passages, and navigate
through non-solid obstacles in dense environments, we propose an approach to
adaptively inflate or enlarge the obstacles detected on intensity maps based on
their solidity, and the robot's preferred velocity direction. We demonstrate
these improved navigation capabilities in real-world narrow, dense environments
using real Turtlebot and Boston Dynamics Spot robots. We observe significant
increases in success rates to more than 50%, up to a 9.5% decrease in
normalized trajectory length, and up to a 22.6% increase in the F-score
compared to current navigation methods using other sensor modalities.
Comment: 9 pages, 7 figures
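The layered representation described above can be sketched directly: lidar returns are binned into stacked 2D grids by height interval, each cell holding the mean reflected intensity. The function name, cell size, and sample points are hypothetical; the per-layer solidity and opacity estimates built on top of these maps are omitted.

```python
def build_intensity_layers(points, cell, layer_h, n_layers):
    """Bin lidar returns (x, y, z, intensity) into stacked 2D intensity
    maps, one per height interval [k*layer_h, (k+1)*layer_h).
    Each cell stores the mean reflected intensity in that interval."""
    sums = [dict() for _ in range(n_layers)]
    counts = [dict() for _ in range(n_layers)]
    for x, y, z, i in points:
        k = int(z // layer_h)          # which height layer this return hits
        if not 0 <= k < n_layers:
            continue                    # return outside the mapped band
        key = (int(x // cell), int(y // cell))
        sums[k][key] = sums[k].get(key, 0.0) + i
        counts[k][key] = counts[k].get(key, 0) + 1
    return [{k: s[k] / counts[l][k] for k in s} for l, s in enumerate(sums)]

# Tall grass: weak returns spread over several height layers in one column.
pts = [(0.2, 0.3, 0.1, 10.0), (0.25, 0.31, 0.6, 12.0), (0.2, 0.3, 0.15, 14.0)]
layers = build_intensity_layers(pts, cell=0.5, layer_h=0.5, n_layers=3)
print(layers[0])  # {(0, 0): 12.0} -- mean of the two layer-0 returns
```

Comparing a column's occupancy across layers is what lets the representation separate, say, a glass wall (strong, consistent returns at all heights) from pliable grass (weak returns only in the lowest layers).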
Active Mapping and Robot Exploration: A Survey
Simultaneous localization and mapping addresses the problem of building a map of the environment without any prior information, based on the data obtained from one or more sensors. In most situations the robot is driven by a human operator, but some systems are capable of navigating autonomously while mapping, which is called active simultaneous localization and mapping. This strategy focuses on actively computing the trajectories to explore the environment while building a map with minimum error. In this paper, a comprehensive review of the research work developed in this field is provided, targeting the most relevant contributions in indoor mobile robotics.
This research was funded by the ELKARTEK project ELKARBOT KK-2020/00092 of the Basque Government.