Deep Network Uncertainty Maps for Indoor Navigation
Most mobile robots for indoor use rely on 2D laser scanners for localization,
mapping and navigation. These sensors, however, cannot detect transparent
surfaces or measure the full occupancy of complex objects such as tables. Deep
Neural Networks have recently been proposed to overcome this limitation by
learning to estimate object occupancy. These estimates are nevertheless subject
to uncertainty, making the evaluation of their confidence an important issue
for these measures to be useful for autonomous navigation and mapping. In this
work we approach the problem from two sides. First we discuss uncertainty
estimation in deep models, proposing a solution based on a fully convolutional
neural network. The proposed architecture is not restricted by the assumption
that the uncertainty follows a Gaussian model, as in the case of many popular
solutions for deep model uncertainty estimation, such as Monte-Carlo Dropout.
We present results showing that uncertainty over obstacle distances is actually
better modeled with a Laplace distribution. Then, we propose a novel approach
to build maps based on Deep Neural Network uncertainty models. In particular,
we present an algorithm to build a map that includes information over obstacle
distance estimates while taking into account the level of uncertainty in each
estimate. We show how the constructed map can be used to increase global
navigation safety by planning trajectories which avoid areas of high
uncertainty, enabling higher autonomy for mobile robots in indoor settings.
Comment: Accepted for publication in "2019 IEEE-RAS International Conference on Humanoid Robots (Humanoids)".
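The paper's finding that obstacle-distance errors are better modeled by a Laplace than a Gaussian distribution can be illustrated with a toy comparison of negative log-likelihoods under maximum-likelihood scale estimates. The synthetic residuals below are invented for illustration and are not the paper's data; on heavy-tailed errors like these, the Laplace fit scores lower (better).

```python
import math

def gaussian_nll(err, sigma):
    # negative log-likelihood of one residual under N(0, sigma^2)
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + err ** 2 / (2 * sigma ** 2)

def laplace_nll(err, b):
    # negative log-likelihood of one residual under Laplace(0, b)
    return math.log(2 * b) + abs(err) / b

# synthetic heavy-tailed distance errors (metres): mostly small, a few large
errors = [0.02, -0.01, 0.03, -0.02, 0.9, -1.1]
sigma = math.sqrt(sum(e ** 2 for e in errors) / len(errors))  # Gaussian MLE scale
b = sum(abs(e) for e in errors) / len(errors)                 # Laplace MLE scale

nll_gauss = sum(gaussian_nll(e, sigma) for e in errors)
nll_lap = sum(laplace_nll(e, b) for e in errors)
print(f"Gaussian NLL {nll_gauss:.2f} vs Laplace NLL {nll_lap:.2f}")
```

A network predicting the scale `b` alongside each distance estimate would be trained by minimizing exactly this Laplace term per pixel.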
Conceptual spatial representations for indoor mobile robots
We present an approach for creating conceptual representations of human-made indoor environments using mobile
robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings
in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The
complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition.
The system also incorporates a linguistic framework that actively supports the map acquisition process, and which
is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
Recognising, Representing and Mapping Natural Features in Unstructured Environments
This thesis addresses the problem of building statistical models for multi-sensor perception in unstructured outdoor environments. The perception problem is divided into three distinct tasks: recognition, representation and association. Recognition is cast as a statistical classification problem where inputs are images or a combination of images and ranging information. Given the complexity and variability of natural environments, this thesis investigates the use of Bayesian statistics and supervised dimensionality reduction to incorporate prior information and fuse sensory data. A compact probabilistic representation of natural objects is essential for many problems in field robotics. This thesis presents techniques for combining non-linear dimensionality reduction with parametric learning through Expectation Maximisation to build general representations of natural features. Once created these models need to be rapidly processed to account for incoming information. To this end, techniques for efficient probabilistic inference are proposed. The robustness of localisation and mapping algorithms is directly related to reliable data association. Conventional algorithms employ only geometric information which can become inconsistent for large trajectories. A new data association algorithm incorporating visual and geometric information is proposed to improve the reliability of this task. The method uses a compact probabilistic representation of objects to fuse visual and geometric information for the association decision. 
The main contributions of this thesis are: 1) a stochastic representation of objects through non-linear dimensionality reduction; 2) a landmark recognition system using visual and ranging sensors; 3) a data association algorithm combining appearance and position properties; 4) a real-time algorithm for detection and segmentation of natural objects from few training images; and 5) a real-time place recognition system combining dimensionality reduction and Bayesian learning. The theoretical contributions of this thesis are demonstrated with a series of experiments in unstructured environments. In particular, the combination of recognition, representation and association algorithms is applied to the Simultaneous Localisation and Mapping problem (SLAM) to close large loops in outdoor trajectories, proving the benefits of the proposed methodology.
Collective classification for labeling of places and objects in 2D and 3D range data
In this paper, we present an algorithm to identify types of places and objects from 2D and 3D laser range data obtained in indoor environments. Our approach combines a collective classification method based on associative Markov networks with instance-based feature extraction using nearest neighbors. Additionally, we show how to select the best features needed to represent the objects and places, reducing the time needed for the learning and inference steps while maintaining high classification rates. Experimental results on real data demonstrate the effectiveness of our approach in indoor environments.
Autonomous navigation for guide following in crowded indoor environments
The requirements for assisted living are rapidly changing as the number of elderly
patients over the age of 60 continues to increase. This rise places a high level of stress on
nurse practitioners who must care for more patients than they can manage. As this trend is
expected to continue, new technology will be required to help care for patients. Mobile
robots present an opportunity to help alleviate the stress on nurse practitioners by
monitoring and performing remedial tasks for elderly patients. In order to produce
mobile robots with the ability to perform these tasks, however, many challenges must be
overcome.
The hospital environment requires a high level of safety to prevent patient injury. Any
facility that uses mobile robots, therefore, must be able to ensure that no harm will come
to patients whilst in a care environment. This requires the robot to build a high level of
understanding of the environment and the people in close proximity to the robot.
Hitherto, most mobile robots have used vision-based sensors or 2D laser range finders.
3D time-of-flight sensors have recently been introduced and provide dense 3D point
clouds of the environment at real-time frame rates. This provides mobile robots with
previously unavailable dense information in real-time. In this thesis, I investigate the use of
time-of-flight cameras for mobile robot navigation in crowded environments. A
unified framework to allow the robot to follow a guide through an indoor environment
safely and efficiently is presented. Each component of the framework is analyzed in
detail, with real-world scenarios illustrating its practical use.
Time-of-flight cameras are relatively new sensors and, therefore, have inherent problems
that must be overcome to receive consistent and accurate data. In this thesis, I propose a novel
and practical probabilistic framework to overcome many of these inherent problems.
The framework fuses multiple depth maps with color information, forming a
reliable and consistent view of the world. In order for the robot to interact with the
environment, contextual information is required. To this end, I propose a region-growing
segmentation algorithm to group points based on their surface characteristics: surface normal
and surface curvature. The segmentation process creates a distinct set of surfaces;
however, only a limited amount of contextual information is available to allow for
interaction. Therefore, a novel classifier is proposed using spherical harmonics to
differentiate people from all other objects.
The added ability to identify people allows the robot to find potential candidates to
follow. However, for safe navigation, the robot must continuously track all visible
objects to obtain positional and velocity information. A multi-object tracking system is
investigated to track visible objects reliably using multiple cues: shape and color. The
tracking system allows the robot to react to the dynamic nature of people by building an
estimate of the motion flow. This flow provides the robot with the necessary information
to determine where and at what speeds it is safe to drive. In addition, a novel search
strategy is proposed to allow the robot to recover a guide who has left the field-of-view.
To achieve this, a search map is constructed with areas of the environment ranked
according to how likely they are to reveal the guide's true location. Then, the robot can
approach the most likely search area to recover the guide. Finally, all components
presented are joined to follow a guide through an indoor environment. The results
achieved demonstrate the efficacy of the proposed components.
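A region-growing pass of the kind this thesis describes, grouping points whose surface normals agree and whose curvature is low, can be sketched as below. The thresholds, the precomputed neighbour sets, and all names are illustrative assumptions, not the thesis implementation (which operates on dense time-of-flight point clouds).

```python
import math
from collections import deque

def region_grow(normals, curvatures, neighbors,
                angle_thresh_deg=10.0, curv_thresh=0.05):
    """Group point indices into smooth surface patches.

    normals/curvatures are parallel lists; neighbors[i] is the set of
    indices adjacent to point i (a full system would get these from a
    k-d tree). A neighbour joins a region when its normal is close to
    the current point's and its own curvature is low.
    """
    cos_thresh = math.cos(math.radians(angle_thresh_deg))
    unvisited = set(range(len(normals)))
    regions = []
    while unvisited:
        # start each region from the flattest remaining point
        seed = min(unvisited, key=lambda i: curvatures[i])
        unvisited.discard(seed)
        region, queue = [seed], deque([seed])
        while queue:
            i = queue.popleft()
            for j in neighbors[i]:
                if j not in unvisited:
                    continue
                dot = abs(sum(a * b for a, b in zip(normals[i], normals[j])))
                if dot >= cos_thresh and curvatures[j] <= curv_thresh:
                    unvisited.discard(j)
                    region.append(j)
                    queue.append(j)
        regions.append(region)
    return regions

# two coplanar pairs with different orientations -> two separate regions
normals = [(0, 0, 1), (0, 0, 1), (1, 0, 0), (1, 0, 0)]
curv = [0.0, 0.0, 0.0, 0.0]
nbrs = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
regions = region_grow(normals, curv, nbrs)
print(regions)
```

Each returned patch would then be handed to the classifier (spherical harmonics in the thesis) to decide whether it belongs to a person.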
Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis
The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control to meet some key criteria for autonomous driving. In this study the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM for autonomous driving. Some quantitative methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online four to five cm accuracy localization solution can be achieved based on this pre-generated map and online Lidar scan matching with a tightly fused inertial system.
Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots
Nowadays, most mobile robot applications use two-dimensional LiDAR for indoor mapping,
navigation, and low-level scene segmentation. However, single-data-type maps are not enough
in a six-degree-of-freedom world. Multi-LiDAR sensor fusion increases the capability of robots to
map the surrounding environment at different levels. It exploits the benefits of several data types,
offsetting the weaknesses of each sensor. This research introduces several techniques to achieve
mapping and navigation through indoor environments. First, a scan matching algorithm based on
ICP with a distance-threshold association counter is used as a multi-objective-like fitness function.
Then, with Harmony Search, results are optimized without any previous initial guess or odometry. A
global map is then built during SLAM, reducing the accumulated error and demonstrating better
results than solo odometry LiDAR matching. As a novelty, both algorithms are implemented in
2D and 3D mapping, overlapping the resulting maps to fuse geometrical information at different
heights. Finally, a room segmentation procedure is proposed by analyzing this information, avoiding
occlusions that appear in 2D maps, and proving the benefits by implementing a door recognition
system. Experiments are conducted in both simulated and real scenarios, proving the performance of
the proposed algorithms.
This work was supported by funding from HEROITEA: Heterogeneous Intelligent Multi-Robot Team for Assistance of Elderly People (RTI2018-095599-B-C21), funded by the Spanish Ministerio de Economia y Competitividad, and by RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by "Programas de Actividades I+D en la Comunidad de Madrid" and cofunded by Structural Funds of the EU.
We acknowledge the R&D&I project PLEC2021-007819 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, and the Comunidad de Madrid (Spain) under the multiannual agreement with Universidad Carlos III de Madrid ("Excelencia para el Profesorado Universitario", EPUC3M18), part of the fifth regional research plan 2016-2020.
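The ICP-with-threshold fitness this abstract describes can be sketched as a scalar score over candidate rigid transforms: count nearest-neighbour pairs that fall within a distance threshold and penalise their mean residual. Harmony Search, or any derivative-free optimizer, would then minimize this score over (theta, tx, ty) without an initial odometry guess. The threshold value and the weighting between the two objectives below are assumptions, not the paper's exact fitness function.

```python
import math

def transform(points, theta, tx, ty):
    # apply a 2-D rigid transform (rotation theta, translation tx, ty)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def fitness(scan, reference, theta, tx, ty, dist_thresh=0.2):
    """Lower is better: few in-threshold associations or large
    residuals both increase the score."""
    matched, total_d = 0, 0.0
    for px, py in transform(scan, theta, tx, ty):
        d = min(math.hypot(px - rx, py - ry) for rx, ry in reference)
        if d < dist_thresh:
            matched += 1
            total_d += d
    if matched == 0:
        return float("inf")
    # mean residual of matched pairs plus a penalty per unmatched point
    return total_d / matched + (len(scan) - matched)

reference = [(float(i), 0.0) for i in range(10)]
scan = [(x - 0.5, 0.1) for x, _ in reference]   # shifted copy of the map
good = fitness(scan, reference, 0.0, 0.5, -0.1)  # undoes the shift
bad = fitness(scan, reference, 0.0, 0.0, 0.0)    # no correction applied
print(good, bad)
```

With the correct transform every point associates at zero distance, while the uncorrected scan finds no associations at all, so the optimizer has a clear gradient in fitness between the two.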
Place Categorization and Semantic Mapping on a Mobile Robot
In this paper we focus on the challenging problem of place categorization and
semantic mapping on a robot without environment-specific training. Motivated by
their ongoing success in various visual recognition tasks, we build our system
upon a state-of-the-art convolutional network. We overcome its closed-set
limitations by complementing the network with a series of one-vs-all
classifiers that can learn to recognize new semantic classes online. Prior
domain knowledge is incorporated by embedding the classification system into a
Bayesian filter framework that also ensures temporal coherence. We evaluate the
classification accuracy of the system on a robot that maps a variety of places
on our campus in real-time. We show how semantic information can boost robotic
object detection performance and how the semantic map can be used to modulate
the robot's behaviour during navigation tasks. The system is made available to
the community as a ROS module.
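The Bayesian filter framework for temporal coherence that this abstract mentions can be sketched as a discrete Bayes filter over place categories: a "sticky" transition model encodes the prior that the robot usually stays in the same category between frames, and each per-frame classifier output is folded in as a likelihood. The class names, the transition probability, and the frame scores below are illustrative assumptions.

```python
def bayes_filter_update(belief, likelihood, stay_prob=0.8):
    """One discrete Bayes filter step over place classes.

    belief: prior P(class); likelihood: per-class score from the
    per-frame classifier. stay_prob is the sticky transition prior.
    """
    n = len(belief)
    move = (1.0 - stay_prob) / (n - 1)
    # predict: the robot usually stays in the same place category
    predicted = [stay_prob * b + move * (sum(belief) - b) for b in belief]
    # update: weight by the classifier likelihood, then normalise
    posterior = [p * l for p, l in zip(predicted, likelihood)]
    z = sum(posterior)
    return [p / z for p in posterior]

classes = ["office", "corridor", "kitchen"]
belief = [1 / 3, 1 / 3, 1 / 3]
# noisy per-frame classifier outputs: mostly "corridor", one outlier frame
frames = [[0.1, 0.8, 0.1], [0.2, 0.7, 0.1], [0.6, 0.3, 0.1], [0.1, 0.8, 0.1]]
for lk in frames:
    belief = bayes_filter_update(belief, lk)
print(classes[belief.index(max(belief))])
```

The single outlier frame (third in the list) is not enough to flip the posterior away from "corridor", which is exactly the temporal smoothing effect the paper relies on.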
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists in the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
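The de-facto standard formulation the survey presents is maximum a posteriori estimation over a factor graph, whose core is a nonlinear least-squares problem over robot poses. The tiny 1-D pose graph below, with assumed unit information matrices and a plain gradient-descent solver, is only a sketch of that core; a real SLAM back end would use Gauss-Newton or Levenberg-Marquardt.

```python
def pose_graph_error(poses, edges):
    # sum of squared residuals; edges are (i, j, measured_offset) factors
    return sum((poses[j] - poses[i] - z) ** 2 for i, j, z in edges)

def optimize(poses, edges, lr=0.1, iters=500):
    """Gradient descent on the least-squares pose-graph objective."""
    poses = list(poses)
    for _ in range(iters):
        grad = [0.0] * len(poses)
        for i, j, z in edges:
            r = poses[j] - poses[i] - z
            grad[j] += 2 * r
            grad[i] -= 2 * r
        grad[0] = 0.0  # fix gauge freedom by pinning the first pose
        poses = [p - lr * g for p, g in zip(poses, grad)]
    return poses

# odometry says each step moves +1.0, but a loop closure says x2 - x0 = 1.8;
# the optimum spreads the disagreement across all three factors
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.8)]
poses = optimize([0.0, 1.0, 2.0], edges)
print([round(p, 3) for p in poses])
```

The solver settles near x1 = 14/15 and x2 = 28/15, shrinking the odometry chain slightly to reconcile it with the loop closure, which is the essence of how SLAM back ends distribute loop-closure error over a trajectory.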