Robotic manipulation of multiple objects as a POMDP
This paper investigates manipulation of multiple unknown objects in a crowded
environment. Because of incomplete knowledge due to unknown objects and
occlusions in visual observations, object observations are imperfect and action
success is uncertain, making planning challenging. We model the problem as a
partially observable Markov decision process (POMDP), which allows a general
reward based optimization objective and takes uncertainty in temporal evolution
and partial observations into account. In addition to occlusion dependent
observation and action success probabilities, our POMDP model also
automatically adapts object specific action success probabilities. To cope with
the changing system dynamics and performance constraints, we present a new
online POMDP method based on particle filtering that produces compact policies.
The approach is validated both in simulation and in physical experiments in a
scenario of moving dirty dishes into a dishwasher. The results indicate that:
1) a greedy heuristic manipulation approach is not sufficient; multi-object manipulation requires multi-step POMDP planning, and 2) online planning is beneficial since it allows the system dynamics model to be adapted based on actual experience.
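To make the moving pieces concrete, here is a minimal, self-contained sketch of two ingredients the abstract describes: a particle-filter belief over object poses with occlusion-dependent observations, and object-specific action success probabilities adapted online from outcomes (here via a Beta posterior). All function names, probabilities, and dimensions are illustrative assumptions rather than the authors' implementation, and the multi-step planner itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(poses, p_occ=0.3, noise=0.02):
    """Occlusion-dependent observation: each object is hidden with probability
    p_occ, otherwise its 2-D pose is seen with Gaussian noise (illustrative)."""
    return [None if rng.random() < p_occ else p + rng.normal(0.0, noise, 2)
            for p in poses]

def belief_update(particles, obs, p_occ=0.3, noise=0.02):
    """Particle-filter update: reweight each particle (a hypothesis over all
    object poses) by the observation likelihood, then resample."""
    w = np.ones(len(particles))
    for k, hypothesis in enumerate(particles):
        for pose, z in zip(hypothesis, obs):
            if z is None:
                w[k] *= p_occ                  # an occluded reading is itself informative
            else:
                w[k] *= np.exp(-np.sum((pose - z) ** 2) / (2.0 * noise ** 2))
    w += 1e-300                                # guard against total underflow
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [[pose.copy() for pose in particles[i]] for i in idx]

class SuccessModel:
    """Object-specific action success probabilities adapted from experience."""
    def __init__(self, n_objects):
        self.alpha = np.ones(n_objects)        # observed successes + 1
        self.beta = np.ones(n_objects)         # observed failures  + 1

    def prob(self, obj):
        return self.alpha[obj] / (self.alpha[obj] + self.beta[obj])

    def update(self, obj, success):
        if success:
            self.alpha[obj] += 1
        else:
            self.beta[obj] += 1

# Tiny demo: track three objects over a few observation steps, then record
# one grasp outcome for object 1.
true_poses = [np.array([0.2, 0.1]), np.array([0.5, 0.4]), np.array([0.8, 0.7])]
particles = [[p + rng.normal(0.0, 0.05, 2) for p in true_poses] for _ in range(200)]
model = SuccessModel(n_objects=3)
for _ in range(5):
    particles = belief_update(particles, observe(true_poses))
model.update(obj=1, success=True)
print("adapted success probability for object 1:", model.prob(1))
```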
Learning to Represent Haptic Feedback for Partially-Observable Tasks
The sense of touch, being the earliest sensory system to develop in the human body [1], plays a critical part in our daily interaction with the environment.
In order to successfully complete a task, many manipulation interactions
require incorporating haptic feedback. However, manually designing a feedback
mechanism can be extremely challenging. In this work, we consider manipulation
tasks that need to incorporate tactile sensor feedback in order to modify a
provided nominal plan. To incorporate partial observation, we present a new
framework that models the task as a partially observable Markov decision
process (POMDP) and learns an appropriate representation of haptic feedback
which can serve as the state for a POMDP model. The model, which is parametrized by deep recurrent neural networks, utilizes variational Bayes methods to
optimize the approximate posterior. Finally, we build on deep Q-learning to be
able to select the optimal action in each state without access to a simulator.
We test our model on a PR2 robot for multiple tasks of turning a knob until it clicks.
Comment: IEEE International Conference on Robotics and Automation (ICRA), 201
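A hedged sketch of how such a pipeline could be wired up, under architectural assumptions of my own (a GRU encoder, a diagonal-Gaussian latent, a small Q-network): the recurrent encoder maps the haptic observation history to a latent state regularized by a KL term, and a Q-network over that latent picks actions. The class names, dimensions, and the dummy force/torque input are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HapticStateEncoder(nn.Module):
    """Recurrent encoder: haptic observation history -> Gaussian latent state."""
    def __init__(self, obs_dim, hidden_dim=64, latent_dim=16):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, obs_seq):
        _, h = self.rnn(obs_seq)               # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return z, mu, logvar

class QNetwork(nn.Module):
    """Q-values over a discrete action set, computed from the latent state."""
    def __init__(self, latent_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, z):
        return self.net(z)

# Usage: encode a batch of (hypothetical) force/torque traces and act greedily.
encoder = HapticStateEncoder(obs_dim=6)
q_net = QNetwork(latent_dim=16, n_actions=4)
obs_seq = torch.randn(8, 50, 6)                # 8 sequences of 50 haptic readings
z, mu, logvar = encoder(obs_seq)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # variational regularizer
greedy_actions = q_net(z).argmax(dim=-1)
```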
Pick and Place Without Geometric Object Models
We propose a novel formulation of robotic pick and place as a deep
reinforcement learning (RL) problem. Whereas most deep RL approaches to robotic
manipulation frame the problem in terms of low level states and actions, we
propose a more abstract formulation. In this formulation, actions are target
reach poses for the hand and states are a history of such reaches. We show this
approach can solve a challenging class of pick-place and regrasping problems
where the exact geometry of the objects to be handled is unknown. The only
information our method requires is: 1) the sensor perception available to the
robot at test time; 2) prior knowledge of the general class of objects for
which the system was trained. We evaluate our method using objects belonging to
two different categories, mugs and bottles, both in simulation and on real
hardware. Results show a major improvement relative to a shape-primitives baseline.
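The abstraction can be illustrated with a short sketch: an action is a target reach pose for the hand, the state is simply the history of executed reaches, and a learned Q-function scores candidate poses. The data structures and the placeholder q_value function below are my own assumptions; in the paper the scoring is learned with deep RL rather than hand-written.

```python
import numpy as np
from dataclasses import dataclass

rng = np.random.default_rng(0)

@dataclass(frozen=True)
class ReachPose:
    """An action: a target pose for the hand plus the gripper command."""
    position: tuple        # (x, y, z)
    orientation: tuple     # quaternion (w, x, y, z)
    gripper_open: bool

def q_value(history, candidate):
    """Placeholder for a learned Q(state, action); the state is the history of
    executed reaches. A real system would use a trained network here."""
    return -0.01 * len(history) + rng.normal()

def select_reach(history, n_candidates=32, epsilon=0.1):
    """Epsilon-greedy selection over sampled candidate reach poses."""
    candidates = [ReachPose(position=tuple(rng.uniform(-0.5, 0.5, 3)),
                            orientation=(1.0, 0.0, 0.0, 0.0),
                            gripper_open=bool(rng.integers(2)))
                  for _ in range(n_candidates)]
    if rng.random() < epsilon:
        return candidates[rng.integers(n_candidates)]
    return max(candidates, key=lambda c: q_value(history, c))

# A short pick-regrasp-place style episode: the state is just the reach history.
history = []
for _ in range(3):
    reach = select_reach(history)
    history.append(reach)
```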
Shared Autonomy via Hindsight Optimization
In shared autonomy, user input and robot autonomy are combined to control a
robot to achieve a goal. Often, the robot does not know a priori which goal the
user wants to achieve, and must both predict the user's intended goal and assist in achieving that goal. We formulate the problem of shared autonomy as a
Partially Observable Markov Decision Process with uncertainty over the user's
goal. We utilize maximum entropy inverse optimal control to estimate a
distribution over the user's goal based on the history of inputs. Ideally, the
robot assists the user by solving for an action which minimizes the expected
cost-to-go for the (unknown) goal. As solving the POMDP to select the optimal
action is intractable, we use hindsight optimization to approximate the
solution. In a user study, we compare our method to a standard
predict-then-blend approach. We find that our method enables users to
accomplish tasks more quickly while utilizing less input. However, when asked
to rate each system, users were mixed in their assessment, citing a tradeoff
between maintaining control authority and accomplishing tasks quickly.
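For intuition, here is a compact 1-D sketch of the mechanics as I read them (not the authors' code): maintain a maximum-entropy-style belief over candidate goals from the user's inputs, then choose the robot action that minimizes expected cost-to-go under that belief, in the spirit of hindsight optimization. The goal set, the distance-based cost-to-go, and the rationality parameter beta are all illustrative assumptions.

```python
import numpy as np

goals = np.array([-1.0, 0.0, 1.0])            # candidate goal positions
belief = np.ones(len(goals)) / len(goals)     # uniform prior over goals

def cost_to_go(x, g):
    """Per-goal cost-to-go; here simply the remaining distance to the goal."""
    return abs(g - x)

def update_belief(belief, x, user_input, beta=5.0):
    """MaxEnt-IOC-style update: user inputs that make progress toward a goal
    raise that goal's probability."""
    progress = np.array([cost_to_go(x, g) - cost_to_go(x + user_input, g)
                         for g in goals])
    belief = belief * np.exp(beta * progress)
    return belief / belief.sum()

def assistive_action(belief, x, actions=(-0.1, 0.0, 0.1)):
    """Hindsight-optimization-style choice: minimize expected cost-to-go
    under the current belief over goals."""
    expected = [sum(b * cost_to_go(x + a, g) for b, g in zip(belief, goals))
                for a in actions]
    return actions[int(np.argmin(expected))]

x = 0.2
for user_input in (0.1, 0.1, 0.05):           # the user nudges toward the goal at +1
    belief = update_belief(belief, x, user_input)
    x += assistive_action(belief, x)
print("belief over goals:", np.round(belief, 3), " robot position:", round(x, 3))
```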
Differentiable Algorithm Networks for Composable Robot Learning
This paper introduces the Differentiable Algorithm Network (DAN), a
composable architecture for robot learning systems. A DAN is composed of neural
network modules, each encoding a differentiable robot algorithm and an
associated model; and it is trained end-to-end from data. DAN combines the
strengths of model-driven modular system design and data-driven end-to-end
learning. The algorithms and models act as structural assumptions to reduce the
data requirements for learning; end-to-end learning allows the modules to adapt
to one another and compensate for imperfect models and algorithms, in order to
achieve the best overall system performance. We illustrate the DAN methodology
through a case study on a simulated robot system, which learns to navigate in
complex 3-D environments with only local visual observations and an image of a
partially correct 2-D floor map.
Comment: RSS 2019 camera ready. Video is available at https://youtu.be/4jcYlTSJF4
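A minimal sketch of the compositional idea under simplifying assumptions of my own: two differentiable modules, one standing in for a state-estimation algorithm and one for a planner, are composed and trained end-to-end from a single task loss. The real DAN modules encode specific robot algorithms and their models; the generic networks, dimensions, and dummy data below are used only to show the end-to-end training pattern.

```python
import torch
import torch.nn as nn

class FilterModule(nn.Module):
    """Differentiable stand-in for a state-estimation algorithm: summarizes the
    observation history into a belief feature."""
    def __init__(self, obs_dim, belief_dim=32):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, belief_dim, batch_first=True)

    def forward(self, obs_seq):
        _, h = self.rnn(obs_seq)
        return h.squeeze(0)

class PlannerModule(nn.Module):
    """Differentiable stand-in for a planning algorithm: maps the belief
    feature to action logits."""
    def __init__(self, belief_dim, n_actions):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(belief_dim, 64), nn.ReLU(),
                                  nn.Linear(64, n_actions))

    def forward(self, belief):
        return self.head(belief)

filt = FilterModule(obs_dim=8)
plan = PlannerModule(belief_dim=32, n_actions=4)
opt = torch.optim.Adam(list(filt.parameters()) + list(plan.parameters()), lr=1e-3)

# One end-to-end training step on dummy data: the task loss back-propagates
# through the planner into the filter, so the modules adapt to one another.
obs = torch.randn(16, 20, 8)                  # 16 trajectories of 20 observations
expert_actions = torch.randint(0, 4, (16,))   # supervision for the task loss
opt.zero_grad()
loss = nn.functional.cross_entropy(plan(filt(obs)), expert_actions)
loss.backward()
opt.step()
```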