Semantic grid map building
A conventional occupancy grid (OG) map, which contains occupied and unoccupied cells, can be enhanced by incorporating semantic labels of places to build a semantic grid map. A map with semantic information is more understandable to humans and can therefore be used for efficient communication, enabling effective human-robot interaction. This paper proposes a new approach that enables a robot to explore an indoor environment, build an occupancy grid map, and then perform semantic labeling to generate a semantic grid map. Geometrical information is obtained by classifying places into three semantic classes based on data collected by a 2D laser range finder. Classification is achieved by implementing logistic regression as a multi-class classifier, and the results are combined in a probabilistic framework. Labeling accuracy is further improved by topological correction on the robot position map, an intermediate product, and by an outlier removal process on the semantic grid map. Simulation on data collected in a university environment shows appealing results.
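The abstract does not give implementation details, but the classifier it describes, multi-class logistic regression over laser-scan features with probabilistic fusion of repeated observations, can be sketched roughly as follows; the feature choice and the weight values are illustrative assumptions, not the paper's:

```python
import numpy as np

# Three semantic classes, as in the paper
CLASSES = ["room", "corridor", "doorway"]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_proba(features, W, b):
    """Multi-class logistic regression: P(class | scan features)."""
    return softmax(features @ W + b)

def fuse(prob_sequence):
    """Probabilistic fusion of repeated observations of the same place:
    normalized product of per-observation class posteriors
    (a Bayes update under a uniform prior)."""
    log_p = np.log(np.asarray(prob_sequence)).sum(axis=0)
    log_p -= log_p.max()
    p = np.exp(log_p)
    return p / p.sum()

# Hypothetical 2-D features per scan (e.g. mean range, range variance);
# in practice W and b would be learned from labeled scans.
W = np.array([[ 2.0, -1.0, 0.0],
              [-1.0,  2.0, 0.5]])
b = np.zeros(3)
obs = np.array([[1.5, 0.2], [1.4, 0.3], [1.6, 0.1]])
probs = predict_proba(obs, W, b)   # one distribution per scan
fused = fuse(probs)                # one distribution per place
```

The fusion step is where the abstract's "probabilistic framework" would plug in; the paper's actual combination rule may differ.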
Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots
Nowadays, most mobile robot applications use two-dimensional LiDAR for indoor mapping, navigation, and low-level scene segmentation. However, single-data-type maps are not enough in a six-degree-of-freedom world. Multi-LiDAR sensor fusion increases the ability of robots to map the surrounding environment at different levels, exploiting the benefits of several data types while offsetting the weaknesses of each sensor. This research introduces several techniques to achieve mapping and navigation through indoor environments. First, a scan matching algorithm based on ICP with a distance-threshold association counter is used as a multi-objective-like fitness function. Then, with Harmony Search, results are optimized without any previous initial guess or odometry. A global map is then built during SLAM, reducing the accumulated error and demonstrating better results than odometry-only LiDAR matching. As a novelty, both algorithms are implemented in 2D and 3D mapping, overlapping the resulting maps to fuse geometrical information at different heights. Finally, a room segmentation procedure is proposed by analyzing this information, avoiding occlusions that appear in 2D maps; its benefits are demonstrated by implementing a door recognition system. Experiments are conducted in both simulated and real scenarios, proving the performance of the proposed algorithms.

This work was supported by funding from HEROITEA: Heterogeneous Intelligent Multi-Robot Team for Assistance of Elderly People (RTI2018-095599-B-C21), funded by the Spanish Ministerio de Economia y Competitividad, and by RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by "Programas de Actividades I+D en la Comunidad de Madrid" and cofunded by Structural Funds of the EU. We acknowledge the R&D&I project PLEC2021-007819 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, and the Comunidad de Madrid (Spain) under the multiannual agreement with Universidad Carlos III de Madrid ("Excelencia para el Profesorado Universitario", EPUC3M18), part of the fifth regional research plan 2016–2020.
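The scan matcher described above can be illustrated by its core building block: point-to-point ICP in which a distance threshold both rejects bad associations and yields the matched-pair count used, together with the residual error, as multi-objective-like fitness terms. The sketch below is a simplified stand-in (brute-force association, Harmony Search layer omitted), not the paper's implementation:

```python
import numpy as np

def icp_2d(src, dst, max_iter=30, dist_thresh=0.5):
    """Point-to-point 2D ICP; pairs farther than dist_thresh are rejected,
    and the accepted-pair count plus mean error serve as fitness terms."""
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(max_iter):
        # brute-force nearest-neighbour association
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        nn_d = d[np.arange(len(cur)), nn]
        mask = nn_d < dist_thresh            # distance-threshold rejection
        if mask.sum() < 3:
            break
        p, q = cur[mask], dst[nn[mask]]
        # closed-form rigid alignment of the accepted pairs (SVD)
        mp, mq = p.mean(axis=0), q.mean(axis=0)
        U, _, Vt = np.linalg.svd((p - mp).T @ (q - mq))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mq - R @ mp
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    # association stats from the last iteration: matched count, mean error
    return R_total, t_total, int(mask.sum()), float(nn_d[mask].mean())
```

In the paper, candidate transforms would be scored by such a fitness and searched over with Harmony Search instead of iterating from an identity guess.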
Conceptual spatial representations for indoor mobile robots
We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
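The layered model can be pictured as a small data structure with one layer per abstraction level. The three layers and their contents below are an illustrative assumption; the paper's actual model may differ in both number and content of layers:

```python
from dataclasses import dataclass, field

@dataclass
class MetricLayer:
    grid: list                      # occupancy grid rows (0 free, 1 occupied)

@dataclass
class TopologicalLayer:
    nodes: dict = field(default_factory=dict)   # node id -> (x, y) position
    edges: set = field(default_factory=set)     # traversable node pairs

@dataclass
class ConceptualLayer:
    labels: dict = field(default_factory=dict)  # node id -> concept label

@dataclass
class LayeredMap:
    metric: MetricLayer
    topology: TopologicalLayer
    concepts: ConceptualLayer

    def places_with(self, concept):
        """Query the conceptual layer, e.g. all nodes labelled 'kitchen'."""
        return [n for n, c in self.concepts.labels.items() if c == concept]

m = LayeredMap(
    MetricLayer([[0, 0], [0, 1]]),
    TopologicalLayer({"n1": (0.5, 0.5), "n2": (1.5, 0.5)}, {("n1", "n2")}),
    ConceptualLayer({"n1": "kitchen", "n2": "corridor"}),
)
```

Queries like `places_with("kitchen")` are the kind of hook a situated-dialogue layer would use to resolve spoken references against the map.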
Learning Topometric Semantic Maps from Occupancy Grids
Today's mobile robots are expected to operate in complex environments they
share with humans. To allow intuitive human-robot collaboration, robots require
a human-like understanding of their surroundings in terms of semantically
classified instances. In this paper, we propose a new approach for deriving
such instance-based semantic maps purely from occupancy grids. We employ a
combination of deep learning techniques to detect, segment and extract door
hypotheses from a map of arbitrary size. The extraction is followed by a
post-processing chain to further increase the accuracy of our approach, as well
as place categorization for the three classes room, door and corridor. All
detected and classified entities are described as instances specified in a
common coordinate system, while a topological map is derived to capture their
spatial links. To train our two neural networks used for detection and map
segmentation, we contribute a simulator that automatically creates and
annotates the required training data. We further provide insight into which
features are learned to detect doorways, and how the simulated training data
can be augmented to train networks for the direct application on real-world
grid maps. We evaluate our approach on several publicly available real-world
data sets. Even though the used networks are solely trained on simulated data,
our approach demonstrates high robustness and effectiveness in various
real-world indoor environments.

Comment: Presented at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
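The final step above, deriving a topological map that captures spatial links between classified instances, can be illustrated by linking any two places that share a door instance. The bounding-box instance format below is an assumption for illustration, not the paper's data structure:

```python
# Each instance: (id, class, axis-aligned bounding box (xmin, ymin, xmax, ymax))
# in a common map frame -- an assumed format for illustration.
instances = [
    ("r1", "room",     (0, 0, 4, 4)),
    ("c1", "corridor", (5, 0, 7, 10)),
    ("r2", "room",     (0, 5, 4, 9)),
    ("d1", "door",     (4, 1, 5, 2)),   # between r1 and c1
    ("d2", "door",     (4, 6, 5, 7)),   # between r2 and c1
]

def touches(a, b, tol=0.0):
    """True if two axis-aligned boxes overlap or abut within tol."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return (ax0 <= bx1 + tol and bx0 <= ax1 + tol and
            ay0 <= by1 + tol and by0 <= ay1 + tol)

def topological_map(instances):
    """Edge between two places whenever a door instance touches both."""
    places = [(i, box) for i, cls, box in instances if cls != "door"]
    doors = [box for i, cls, box in instances if cls == "door"]
    edges = set()
    for dbox in doors:
        linked = [pid for pid, pbox in places if touches(pbox, dbox)]
        for a in linked:
            for b in linked:
                if a < b:
                    edges.add((a, b))
    return edges

# topological_map(instances) -> {("c1", "r1"), ("c1", "r2")}
```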
Knowledge Representation for Robots through Human-Robot Interaction
The representation of the knowledge needed by a robot to perform complex tasks is restricted by the limitations of perception. One possible way of overcoming this situation and designing "knowledgeable" robots is to rely on interaction with the user. We propose a multi-modal interaction framework that makes it possible to effectively acquire knowledge about the environment where the robot operates. In particular, in this paper we present a rich representation framework that can be automatically built from the metric map annotated with the indications provided by the user. Such a representation then allows the robot to ground complex referential expressions for motion commands and to devise topological navigation plans to achieve the target locations.

Comment: Knowledge Representation and Reasoning in Robotics Workshop at ICLP 201
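The last capability, deriving a topological navigation plan over a user-annotated map, reduces to graph search once places and their connections are known. The place names and graph below are invented for illustration:

```python
from collections import deque

# Topological graph of annotated places; in the paper the labels come from
# user indications, here they are hard-coded for illustration.
edges = {
    "entrance": ["corridor"],
    "corridor": ["entrance", "kitchen", "office"],
    "kitchen":  ["corridor"],
    "office":   ["corridor"],
}

def plan(start, goal):
    """Breadth-first search for the shortest topological plan."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# plan("entrance", "kitchen") -> ["entrance", "corridor", "kitchen"]
```

Grounding a referential expression like "go to the kitchen" then amounts to mapping the phrase to a node label and handing the result to `plan`.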
Appearance-based localization for mobile robots using digital zoom and visual compass
This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image-matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple-hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or the absence of reliable sensor data. It has been implemented on a robot operating in an office scenario, and the robustness of the approach has been demonstrated experimentally.
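The hypothesis-selection idea can be sketched as a discrete Bayes filter over candidate places, in the spirit of Markov localization; the transition probabilities and appearance-match scores below are invented for illustration:

```python
import numpy as np

places = ["office", "corridor", "lab"]
T = np.array([[0.8, 0.2, 0.0],      # P(next place | current place),
              [0.1, 0.8, 0.1],      # assumed motion model
              [0.0, 0.2, 0.8]])

def update(belief, likelihood):
    """One predict/correct step: motion model, then appearance likelihood."""
    predicted = belief @ T
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.full(3, 1 / 3)                    # uniform prior over places
for z in ([0.9, 0.3, 0.1], [0.8, 0.4, 0.1]):  # appearance-match scores
    belief = update(belief, np.array(z))
```

Maintaining the full distribution (rather than a single best match) is what lets such a system survive perceptual aliasing: ambiguous observations merely flatten the belief instead of committing it to a wrong place.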