Deep Network Uncertainty Maps for Indoor Navigation
Most mobile robots for indoor use rely on 2D laser scanners for localization,
mapping and navigation. These sensors, however, cannot detect transparent
surfaces or measure the full occupancy of complex objects such as tables. Deep
Neural Networks have recently been proposed to overcome this limitation by
learning to estimate object occupancy. These estimates are nevertheless subject
to uncertainty, making the evaluation of their confidence an important issue
if they are to be useful for autonomous navigation and mapping. In this
work we approach the problem from two sides. First we discuss uncertainty
estimation in deep models, proposing a solution based on a fully convolutional
neural network. The proposed architecture is not restricted by the assumption
that the uncertainty follows a Gaussian model, as in the case of many popular
solutions for deep model uncertainty estimation, such as Monte-Carlo Dropout.
We present results showing that uncertainty over obstacle distances is actually
better modeled with a Laplace distribution. Then, we propose a novel approach
to build maps based on Deep Neural Network uncertainty models. In particular,
we present an algorithm to build a map that includes information over obstacle
distance estimates while taking into account the level of uncertainty in each
estimate. We show how the constructed map can be used to increase global
navigation safety by planning trajectories which avoid areas of high
uncertainty, enabling higher autonomy for mobile robots in indoor settings.
Comment: Accepted for publication in the 2019 IEEE-RAS International Conference on Humanoid Robots (Humanoids).
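To make the Laplace claim concrete, here is a minimal sketch (our illustration, not the paper's code) of the corresponding training loss: a network head predicts a distance estimate mu and a log-scale log_b per output, and a Laplace negative log-likelihood replaces the usual Gaussian one. All names are assumptions.

```python
import torch

def laplace_nll(mu, log_b, target):
    """Laplace negative log-likelihood, up to the constant log(2).

    The Laplace NLL penalizes the absolute error |target - mu| scaled
    by b, whereas the Gaussian NLL penalizes the squared error; heavy-
    tailed distance errors favor the former.
    """
    b = torch.exp(log_b)                  # log-parameterization keeps b > 0
    return (torch.abs(target - mu) / b + log_b).mean()

# e.g. a fully convolutional head with two output channels per ray:
# mu, log_b = out[:, 0], out[:, 1]; loss = laplace_nll(mu, log_b, gt_dist)
```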
Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots
This paper shows and evaluates a novel approach to integrate a non-invasive
Brain-Computer Interface (BCI) with the Robot Operating System (ROS) to
mentally drive a telepresence robot. Controlling a mobile device by using human
brain signals might improve the quality of life of people suffering from severe
physical disabilities or elderly people who cannot move anymore. Thus, the BCI
user is able to actively interact with relatives and friends located in
different rooms thanks to a video streaming connection to the robot. To
facilitate the control of the robot via BCI, we explore new ROS-based
algorithms for navigation and obstacle avoidance, making the system safer and
more reliable. In this regard, the robot can exploit two maps of the
environment, one for localization and one for navigation, and both can be used
also by the BCI user to watch the position of the robot while it is moving. As
demonstrated by the experimental results, the user's cognitive workload is
reduced: fewer commands are needed to complete the task, and the user can
sustain attention for longer periods of time.
Comment: Accepted in the Proceedings of the 2018 IEEE International Conference on Robotics and Automation.
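As a rough illustration of the integration pattern (not the authors' implementation), a minimal rospy node could map discrete BCI commands onto velocity commands, leaving obstacle avoidance to the navigation stack; the topic names and command set below are assumptions.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Twist

# Hypothetical discrete command set: (linear m/s, angular rad/s)
CMD_MAP = {'left': (0.0, 0.5), 'right': (0.0, -0.5), 'forward': (0.2, 0.0)}

def on_bci_command(msg, pub):
    # Translate a classified mental command into a motion primitive;
    # a real system would gate this through the obstacle-avoidance layer.
    twist = Twist()
    twist.linear.x, twist.angular.z = CMD_MAP.get(msg.data, (0.0, 0.0))
    pub.publish(twist)

if __name__ == '__main__':
    rospy.init_node('bci_teleop')
    pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('bci/command', String, on_bci_command, callback_args=pub)
    rospy.spin()
```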
DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range Data
We introduce the DROW detector, a deep learning based detector for 2D range
data. Laser scanners are lighting invariant, provide accurate range data, and
typically cover a large field of view, making them interesting sensors for
robotics applications. So far, research on detection in laser range data has
been dominated by hand-crafted features and boosted classifiers, potentially
losing performance due to suboptimal design choices. We propose a Convolutional
Neural Network (CNN) based detector for this task. We show how to effectively
apply CNNs for detection in 2D range data, and propose a depth preprocessing
step and voting scheme that significantly improve CNN performance. We
demonstrate our approach on wheelchairs and walkers, obtaining state of the art
detection results. Apart from the training data, none of our design choices
limits the detector to these two classes, though. We provide a ROS node for our
detector and release our dataset containing 464k laser scans, out of which 24k
were annotated.
Comment: Lucas Beyer and Alexander Hermans contributed equally.
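The depth preprocessing can be pictured as a depth-normalized "cut-out" around each scan point; the sketch below (our reconstruction with illustrative parameters, not the released code) resamples a window whose angular width shrinks with range, so the CNN sees a roughly scale-invariant input.

```python
import numpy as np

def cutout(scan, angles, i, window_m=1.0, n_pts=48):
    """Fixed-size, depth-normalized window around scan point i."""
    r = scan[i]
    half_alpha = np.arctan2(window_m / 2.0, r)   # half window angle at depth r
    lo = np.searchsorted(angles, angles[i] - half_alpha)
    hi = max(np.searchsorted(angles, angles[i] + half_alpha), lo + 1)
    window = scan[lo:hi]
    # Resample to a fixed length and express depths relative to the center,
    # clipping outliers so near/far clutter saturates instead of dominating.
    idx = np.linspace(0, len(window) - 1, n_pts).round().astype(int)
    return np.clip(window[idx] - r, -window_m, window_m)
```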
RUR53: an Unmanned Ground Vehicle for Navigation, Recognition and Manipulation
This paper proposes RUR53: an Unmanned Ground Vehicle able to autonomously
navigate through, identify, and reach areas of interest, and there recognize,
localize, and manipulate work tools to perform complex manipulation tasks. The
proposed contribution includes a modular software architecture in which each
module solves a specific sub-task and which can easily be extended to satisfy
new requirements. The included indoor and outdoor tests demonstrate the
capability of
the proposed system to autonomously detect a target object (a panel) and
precisely dock in front of it while avoiding obstacles. They show it can
autonomously recognize and manipulate target work tools (i.e., wrenches and
valve stems) to accomplish complex tasks (i.e., use a wrench to rotate a valve
stem). A specific case study is described in which the proposed modular
architecture allows an easy switch to a semi-teleoperated mode. The paper
exhaustively describes both the hardware and software setup of RUR53, its
performance when tested at the 2017 Mohamed Bin Zayed International Robotics
Challenge, and the lessons we learned when participating in this competition,
where we ranked third in the Grand Challenge in collaboration with the Czech
Technical University in Prague, the University of Pennsylvania, and the
University of Lincoln (UK).
Comment: This article has been accepted for publication in Advanced Robotics, published by Taylor & Francis.
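The modularity claim can be read as a dispatch pattern: each sub-task lives behind a common interface, so adding a capability or switching to semi-teleoperation swaps a module rather than the whole pipeline. A hypothetical sketch (not the RUR53 codebase; the state API is assumed):

```python
from abc import ABC, abstractmethod

class Module(ABC):
    """One sub-task solver; new requirements become new modules."""
    @abstractmethod
    def step(self, state): ...

class AutonomousPlanner(Module):
    def step(self, state):
        return state.plan_next_action()        # assumed state API

class Teleoperation(Module):
    def step(self, state):
        return state.read_operator_command()   # assumed state API

class Executive:
    """Switching modes (e.g. to semi-teleoperation) swaps one module."""
    def __init__(self, modules, active):
        self.modules, self.active = modules, active
    def step(self, state):
        return self.modules[self.active].step(state)
```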
SkiMap: An Efficient Mapping Framework for Robot Navigation
We present a novel mapping framework for robot navigation which features a
multi-level querying system capable of rapidly obtaining representations as
diverse as a 3D voxel grid, a 2.5D height map, and a 2D occupancy grid. These
are inherently embedded into a memory and time efficient core data structure
organized as a Tree of SkipLists. Compared to the well-known Octree
representation, our approach exhibits better time efficiency, thanks to its
simple and highly parallelizable computational structure, and a similar memory
footprint when mapping large workspaces. Distinctively within the realm of
mapping for robot navigation, our framework supports real-time erosion and
re-integration of measurements upon reception of optimized poses from the
sensor tracker, so as to continuously improve the accuracy of the map.
Comment: Accepted by the International Conference on Robotics and Automation (ICRA) 2017. This is the submitted version; the final published version may be slightly different.
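To illustrate the data structure, here is a toy reconstruction in which each level of the x → y → z nesting is a plain dict; the paper uses a skip list per level to obtain ordered traversal and a highly parallelizable structure. The multi-level queries fall directly out of the nesting.

```python
from collections import defaultdict

class SkiMapSketch:
    """Toy nested-map stand-in for the Tree-of-SkipLists idea."""

    def __init__(self, resolution=0.05):
        self.res = resolution
        self.root = defaultdict(lambda: defaultdict(dict))  # x -> y -> z

    def integrate(self, x, y, z, weight=1):
        ix, iy, iz = (int(v // self.res) for v in (x, y, z))
        cell = self.root[ix][iy]
        cell[iz] = cell.get(iz, 0) + weight   # per-voxel hit counter

    def occupancy_2d(self):
        """2D occupancy grid: a column is occupied if any voxel is hit."""
        return {(ix, iy) for ix, col in self.root.items()
                for iy in col if col[iy]}

    def height_map(self):
        """2.5D height map: highest occupied voxel per (x, y) column."""
        return {(ix, iy): max(zs) * self.res
                for ix, col in self.root.items()
                for iy, zs in col.items() if zs}
```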
Viewfinder: final activity report
The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources.
The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation.
The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests, as well as low-level commands, through the interface to any node in the entire system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers operating there.
Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality
We address the problem of interactively controlling the workspace of a mobile
robot to ensure human-aware navigation. This is especially relevant for
non-expert users living in human-robot shared spaces, e.g. home environments,
who want to keep control of their mobile robots, such as vacuum-cleaning or
companion robots. Therefore, we introduce virtual borders that are respected by
a robot while performing its tasks. For this purpose, we employ an RGB-D Google
Tango tablet as a human-robot interface in combination with an
augmented reality application to flexibly define virtual borders. We evaluated
our system with 15 non-expert users concerning accuracy, teaching time and
correctness and compared the results with other baseline methods based on
visual markers and a laser pointer. The experimental results show that our
method features an equally high accuracy while reducing the teaching time
significantly compared to the baseline methods. This holds for different border
lengths, shapes and variations in the teaching process. Finally, we
demonstrated the correctness of the approach, i.e. the mobile robot changes its
navigational behavior according to the user-defined virtual borders.
Comment: Accepted at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); supplementary video: https://youtu.be/oQO8sQ0JBR
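One plausible way to realize such borders, sketched below under our own assumptions (not the paper's implementation), is to rasterize the user-drawn polyline into the planner's occupancy grid as lethal cost, so any standard grid-based planner routes around the restricted area without changes to the planner itself.

```python
import numpy as np

def stamp_virtual_border(grid, polyline, resolution, lethal=100):
    """Mark the cells under a border polyline (metric coords) as lethal.

    `grid` is a 2D occupancy/cost array; names and values are illustrative.
    """
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        # Sample the segment densely enough to hit every crossed cell.
        n = int(max(abs(x1 - x0), abs(y1 - y0)) / resolution) + 2
        for t in np.linspace(0.0, 1.0, n):
            i = int((y0 + t * (y1 - y0)) / resolution)
            j = int((x0 + t * (x1 - x0)) / resolution)
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                grid[i, j] = lethal
    return grid

# e.g. a 10 m x 10 m grid at 5 cm resolution with an L-shaped border:
# grid = stamp_virtual_border(np.zeros((200, 200)),
#                             [(1.0, 1.0), (3.0, 1.0), (3.0, 2.5)], 0.05)
```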