Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots
Nowadays, most mobile robot applications use two-dimensional LiDAR for indoor mapping,
navigation, and low-level scene segmentation. However, maps built from a single data type are
not sufficient in a six-degree-of-freedom world. Multi-LiDAR sensor fusion increases a robot's
capability to map the surrounding environment at different levels, exploiting the benefits of
several data types while counteracting the drawbacks of each sensor. This research introduces several techniques to achieve
mapping and navigation through indoor environments. First, a scan matching algorithm based on
ICP with distance threshold association counter is used as a multi-objective-like fitness function.
Harmony Search then optimizes the results without any initial guess or odometry. A
global map is built during SLAM, reducing the accumulated error and yielding better
results than LiDAR odometry matching alone. As a novelty, both algorithms are implemented in
2D and 3D mapping, overlapping the resulting maps to fuse geometrical information at different
heights. Finally, a room segmentation procedure is proposed by analyzing this information, avoiding
occlusions that appear in 2D maps, and proving the benefits by implementing a door recognition
system. Experiments are conducted in both simulated and real scenarios, proving the performance of
the proposed algorithms.This work was supported by the funding from HEROITEA: Heterogeneous Intelligent
Multi-Robot Team for Assistance of Elderly People (RTI2018-095599-B-C21), funded by Spanish Ministerio
de Economia y Competitividad, RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation
Hub, S2018/NMT-4331, funded by "Programas de Actividades I+D en la Comunidad de Madrid"
and cofunded by Structural Funds of the EU.
We acknowledge the R&D&I project PLEC2021-007819 funded by MCIN/AEI/10.13039/501100011033
and by the European Union NextGenerationEU/PRTR, and the Comunidad de Madrid (Spain) under
the multiannual agreement with Universidad Carlos III de Madrid ("Excelencia para el
Profesorado Universitario", EPUC3M18), part of the fifth regional research plan 2016-2020.
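The scan-matching fitness described above (ICP-style association with a distance threshold, scored so that more matches and smaller residuals are better) can be sketched as a simple function. This is an illustrative reconstruction, not the paper's implementation; `match_fitness` and its parameters are hypothetical names:

```python
import math

def match_fitness(scan_a, scan_b, dist_threshold=0.5):
    """Multi-objective-like fitness for scan alignment (illustrative).

    Counts point associations within dist_threshold and averages their
    distances; more associations and smaller residuals score higher.
    """
    matches = 0
    total_dist = 0.0
    for ax, ay in scan_a:
        # brute-force nearest neighbour in scan_b
        best = min(math.hypot(ax - bx, ay - by) for bx, by in scan_b)
        if best <= dist_threshold:
            matches += 1
            total_dist += best
    if matches == 0:
        return 0.0
    mean_dist = total_dist / matches
    # reward many associations, penalize residual distance
    return matches / (1.0 + mean_dist)
```

An optimizer such as Harmony Search would evaluate this fitness over candidate (x, y, theta) transforms applied to `scan_a`, requiring no initial guess.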
Conceptual spatial representations for indoor mobile robots
We present an approach for creating conceptual representations of human-made indoor environments using mobile
robots. The concepts refer to spatial and functional properties of typical indoor environments. Following ļ¬ndings
in cognitive psychology, our model is composed of layers representing maps at diļ¬erent levels of abstraction. The
complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition.
The system also incorporates a linguistic framework that actively supports the map acquisition process, and which
is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
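The layered model described in this abstract (metric, topological, and conceptual maps at increasing abstraction) can be sketched as a small data structure. The class and field names below are hypothetical, chosen only to illustrate the idea:

```python
from dataclasses import dataclass, field

@dataclass
class ConceptualMap:
    """Layered spatial representation (illustrative sketch).

    Layers mirror increasing abstraction: a metric layer of sensed
    obstacle points, a topological layer of places and their
    connections, and a conceptual layer labelling places with
    functional categories.
    """
    metric: list = field(default_factory=list)        # (x, y) obstacle points
    topological: dict = field(default_factory=dict)   # place -> connected places
    conceptual: dict = field(default_factory=dict)    # place -> category label

    def add_place(self, name, neighbours=(), category="unknown"):
        # register the place and keep connectivity symmetric
        self.topological[name] = list(neighbours)
        for n in neighbours:
            self.topological.setdefault(n, []).append(name)
        self.conceptual[name] = category

m = ConceptualMap()
m.add_place("kitchen", category="kitchen")
m.add_place("hall", neighbours=("kitchen",), category="corridor")
```

Place and object recognition from laser and vision sensors would populate the conceptual layer; situated dialogue then queries it by category.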
Learning Topometric Semantic Maps from Occupancy Grids
Today's mobile robots are expected to operate in complex environments they
share with humans. To allow intuitive human-robot collaboration, robots require
a human-like understanding of their surroundings in terms of semantically
classified instances. In this paper, we propose a new approach for deriving
such instance-based semantic maps purely from occupancy grids. We employ a
combination of deep learning techniques to detect, segment and extract door
hypotheses from a map of arbitrary size. The extraction is followed by a
post-processing chain to further increase the accuracy of our approach, as well
as place categorization into the three classes room, door, and corridor. All
detected and classified entities are described as instances specified in a
common coordinate system, while a topological map is derived to capture their
spatial links. To train our two neural networks used for detection and map
segmentation, we contribute a simulator that automatically creates and
annotates the required training data. We further provide insight into which
features are learned to detect doorways, and how the simulated training data
can be augmented to train networks for the direct application on real-world
grid maps. We evaluate our approach on several publicly available real-world
data sets. Even though the used networks are solely trained on simulated data,
our approach demonstrates high robustness and effectiveness in various
real-world indoor environments.

Comment: Presented at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
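The grid-to-instances idea underlying this abstract can be illustrated with a toy flood fill that labels connected free-space regions of an occupancy grid. The paper uses learned detection and segmentation networks; `segment_free_space` below is only a hypothetical, minimal stand-in for the notion of extracting region instances from a grid:

```python
from collections import deque

def segment_free_space(grid):
    """Label connected free-space regions in an occupancy grid (toy sketch).

    grid: 2D list, 0 = free, 1 = occupied. Returns a same-shaped label
    map where each 4-connected free region gets a distinct id >= 1.
    """
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    next_id = 1
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] == 0 and labels[sy][sx] == 0:
                # breadth-first flood fill from an unlabelled free cell
                q = deque([(sy, sx)])
                labels[sy][sx] = next_id
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and grid[ny][nx] == 0 and labels[ny][nx] == 0:
                            labels[ny][nx] = next_id
                            q.append((ny, nx))
                next_id += 1
    return labels
```

Each labelled region would then be classified (room, door, corridor) and linked to its neighbours to form the topological layer of the semantic map.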
A review on deep learning techniques for 3D sensed data classification
Over the past decade deep learning has driven progress in 2D image
understanding. Despite these advancements, automatic understanding of 3D sensed
data, such as point clouds, remains comparatively immature. However,
with a range of important applications from indoor robotics navigation to
national scale remote sensing there is a high demand for algorithms that can
learn to automatically understand and classify 3D sensed data. In this paper we
review the current state-of-the-art deep learning architectures for processing
unstructured Euclidean data. We begin by addressing the background concepts and
traditional methodologies. We review the main current approaches, including
RGB-D, multi-view, volumetric, and fully end-to-end architecture designs.
Datasets for each category are documented and explained. Finally, we give a
detailed discussion about the future of deep learning for 3D sensed data, using
literature to justify the areas where future research would be most valuable.

Comment: 25 pages, 9 figures. Review paper.
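A core difficulty the review addresses is that point clouds are unordered, so end-to-end architectures must be permutation invariant. A common device is a shared per-point transform followed by symmetric max pooling. The sketch below is a hypothetical toy in pure Python, not any specific architecture from the review:

```python
def global_point_feature(points, weights):
    """Permutation-invariant global feature for a point set (sketch).

    Applies the same linear map to every point, then max-pools each
    feature dimension over all points, so the result is independent
    of the order in which the points are listed.
    """
    per_point = []
    for p in points:
        # shared per-point transform: one output per weight row
        per_point.append([sum(w * x for w, x in zip(row, p)) for row in weights])
    # symmetric max pooling over the point dimension
    return [max(f[i] for f in per_point) for i in range(len(weights))]
```

Because max pooling is symmetric, shuffling the input points leaves the global feature unchanged, which is the property that lets such networks consume raw, unstructured point clouds.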