Value Iteration Networks on Multiple Levels of Abstraction
Learning-based methods are a promising way to plan robot motion without the extensive search that many non-learning approaches require. Recently, Value Iteration Networks (VINs) have received much interest since, in contrast to standard CNN-based architectures, they learn goal-directed behaviors that generalize well to unseen domains. However, VINs are restricted to small, low-dimensional domains, which limits their applicability to real-world planning problems.
To address this issue, we propose to extend VINs to representations with
multiple levels of abstraction. While the vicinity of the robot is represented
in sufficient detail, the representation gets spatially coarser with increasing
distance from the robot. The information loss caused by the decreasing resolution is compensated for by increasing the number of features representing a cell. We show that our approach solves significantly larger 2D grid-world planning tasks than the original VIN implementation. In contrast to a multiresolution coarse-to-fine VIN implementation that does not employ additional descriptive features, our approach solves tasks in challenging environments, which demonstrates that the proposed method learns to encode useful information in the additional features. As an application to real-world planning tasks, we successfully employ our method to plan omnidirectional driving for a search-and-rescue robot in cluttered terrain.
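For intuition, here is a minimal sketch of the recurrence a VIN unrolls: repeated Bellman backups over a 2D grid, which the network realizes as convolutions. The fixed 4-connected moves, the discount factor, and the toy reward map below are illustrative assumptions standing in for the learned kernels.

```python
import numpy as np

def vi_step(reward, value, moves, gamma=0.95):
    """One Bellman backup on a 2D grid: look up each neighbor's value
    per move, take the max over moves, and add the local reward."""
    H, W = value.shape
    padded = np.pad(value, 1, constant_values=-np.inf)  # walls block moves
    q = np.stack([padded[1 + dr:1 + dr + H, 1 + dc:1 + dc + W]
                  for dr, dc in moves])
    return reward + gamma * q.max(axis=0)

# toy demo: 8x8 grid world with a goal cell and a small step penalty
reward = np.full((8, 8), -0.05)
reward[7, 7] = 1.0                           # goal
value = np.zeros((8, 8))
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # 4-connected actions
for _ in range(20):                          # recurrence depth k
    value = vi_step(reward, value, moves)
```

A greedy policy read off this value map walks to the goal; the multi-level extension described above would run the same backup on coarser grids far from the robot, with extra per-cell features carrying the lost detail.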
Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory
The Pepper robot has become a widely recognised face for the perceived
potential of social robots to enter our homes and businesses. However, to date,
commercial and research applications of the Pepper have been largely restricted
to roles in which the robot is able to remain stationary. This restriction is
the result of a number of technical limitations, including limited sensing
capabilities, which have in turn reduced the number of roles in which use of
the robot can be explored. In this paper, we present our approach to solving
these problems, with the intention of opening up new research applications for
the robot. To demonstrate the applicability of our approach, we have framed
this work within the context of providing interactive tours of an open-plan
robotics laboratory.Comment: 8 pages, Submitted to IROS 2018 (2018 IEEE/RSJ International
Conference on Intelligent Robots and Systems), see
https://bitbucket.org/pepper_qut/ for access to the softwar
Autonomous Robot Navigation with Rich Information Mapping in Nuclear Storage Environments
This paper presents a method that enables an unmanned ground vehicle (UGV) to perform inspection tasks in nuclear environments using rich
information maps. To reduce inspectors' exposure to elevated radiation levels,
an autonomous navigation framework for the UGV has been developed to perform
routine inspections such as counting containers, recording their ID tags and
performing gamma measurements on some of them. In order to achieve autonomy, a
rich information map is generated which includes not only the 2D global cost
map consisting of obstacle locations for path planning, but also the location
and orientation information for the objects of interest from the inspector's
perspective. The UGV's autonomy framework utilizes this information to prioritize the locations it navigates to when performing inspections. In this paper, we
present our method of generating this rich information map, originally
developed to meet the requirements of the International Atomic Energy Agency
(IAEA) Robotics Challenge. We demonstrate the performance of our method in a
simulated testbed environment containing uranium hexafluoride (UF6) storage
container mock-ups.
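As a rough illustration of what such a rich information map could contain, the sketch below pairs a 2D global cost map with posed objects of interest. The field names and the nearest-first goal ordering are assumptions for illustration, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectOfInterest:
    # hypothetical fields; the paper does not spell out its schema
    tag_id: str
    x: float
    y: float
    yaw: float           # viewing direction for reading the ID tag
    needs_gamma: bool    # whether a gamma measurement is still due

@dataclass
class RichInformationMap:
    cost_map: List[List[float]]                   # 2D global cost map
    objects: List[ObjectOfInterest] = field(default_factory=list)

    def inspection_goals(self, robot_xy: Tuple[float, float]):
        """Return objects of interest ordered nearest-first, a simple
        stand-in for the framework's prioritization of locations."""
        rx, ry = robot_xy
        return sorted(self.objects,
                      key=lambda o: (o.x - rx) ** 2 + (o.y - ry) ** 2)
```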
Deep Forward and Inverse Perceptual Models for Tracking and Prediction
We consider the problems of learning forward models that map state to
high-dimensional images and inverse models that map high-dimensional images to
state in robotics. Specifically, we present a perceptual model for generating
video frames from state with deep networks, and provide a framework for its use
in tracking and prediction tasks. We show that our proposed model greatly
outperforms standard deconvolutional methods and GANs for image generation,
producing clear, photo-realistic images. We also develop a convolutional neural
network model for state estimation and compare it to an Extended Kalman
Filter for estimating robot trajectories. We validate all models on a real robotic
system.
Comment: 8 pages, International Conference on Robotics and Automation (ICRA) 201
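The Extended Kalman Filter baseline mentioned above follows the standard measurement update; a generic numpy sketch of that update (not the paper's implementation) looks like this:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update. x, P: state mean and covariance;
    z: measurement; h(x): predicted measurement; H: Jacobian of h;
    R: measurement noise covariance."""
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# toy usage: scalar position measured directly
x, P = np.array([0.0]), np.eye(1)
H = np.eye(1)
x, P = ekf_update(x, P, np.array([1.2]), lambda s: H @ s, H, 0.1 * np.eye(1))
```

The learned inverse model plays the role of h here, mapping images back to state; the paper's comparison is between such a CNN estimator and this classical recursion.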
Learning Models for Following Natural Language Directions in Unknown Environments
Natural language offers an intuitive and flexible means for humans to
communicate with the robots that we will increasingly work alongside in our
homes and workplaces. Recent advancements have given rise to robots that are
able to interpret natural language manipulation and navigation commands, but
these methods require a prior map of the robot's environment. In this paper, we
propose a novel learning framework that enables robots to successfully follow
natural language route directions without any previous knowledge of the
environment. The algorithm utilizes spatial and semantic information that the
human conveys through the command to learn a distribution over the metric and
semantic properties of spatially extended environments. Our method uses this
distribution in place of the latent world model and interprets the natural
language instruction as a distribution over the intended behavior. A novel
belief space planner reasons directly over the map and behavior distributions
to solve for a policy using imitation learning. We evaluate our framework on a
voice-commandable wheelchair. The results demonstrate that by learning and
performing inference over a latent environment model, the algorithm is able to
successfully follow natural language route directions within novel, extended
environments.
Comment: ICRA 201
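A heavily simplified sketch of acting under a distribution over world models: weight each hypothesized map and pick the action with the highest expected progress. The sampled-map representation and the progress function are stand-in assumptions; the paper's planner solves for a policy with imitation learning over richer belief states.

```python
def plan_over_beliefs(map_samples, weights, actions, progress):
    """Choose the action maximizing expected progress under a belief
    over hypothesized maps (an illustrative stand-in for the paper's
    learned belief-space planner)."""
    def expected(action):
        return sum(w * progress(m, action)
                   for m, w in zip(map_samples, weights))
    return max(actions, key=expected)

# toy usage: two map hypotheses, 'left' favored by the likelier one
maps = ["corridor_left", "corridor_right"]
weights = [0.7, 0.3]
progress = lambda m, a: 1.0 if a in m else 0.0
best = plan_over_beliefs(maps, weights, ["left", "right"], progress)  # 'left'
```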
Conceptual spatial representations for indoor mobile robots
We present an approach for creating conceptual representations of human-made indoor environments using mobile
robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings
in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The
complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition.
The system also incorporates a linguistic framework that actively supports the map acquisition process, and which
is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
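To make the layered idea concrete, a minimal sketch of such a model follows. The specific layer names and the label_area helper are assumptions chosen for illustration; the abstract states only that the layers sit at different levels of abstraction.

```python
from dataclasses import dataclass, field

@dataclass
class LayeredSpatialModel:
    # layer names follow a common metric-to-conceptual hierarchy;
    # treat them as an assumption, not the paper's exact terminology
    metric: object = None        # e.g. laser-built geometric map
    navigation: object = None    # free-space nodes the robot can reach
    topological: object = None   # nodes grouped into areas (rooms)
    conceptual: dict = field(default_factory=dict)  # area -> category

    def label_area(self, area_id, category):
        """Attach a spatial/functional concept (e.g. 'kitchen') to an
        area as place and object recognition results come in."""
        self.conceptual[area_id] = category
```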