Rearrangement-Based Manipulation via Kinodynamic Planning and Dynamic Planning Horizons
Robot manipulation in cluttered environments often requires complex and
sequential rearrangement of multiple objects in order to achieve the desired
reconfiguration of the target objects. Due to the sophisticated physical
interactions involved in such scenarios, rearrangement-based manipulation is
still limited to a small range of tasks and is especially vulnerable to
physical uncertainties and perception noise. This paper presents a planning
framework that leverages the efficiency of sampling-based planning approaches,
and closes the manipulation loop by dynamically controlling the planning
horizon. Our approach interleaves planning and execution to progressively
approach the manipulation goal while correcting any errors or path deviations
along the process. Meanwhile, our framework allows the definition of
manipulation goals without requiring explicit goal configurations, enabling the
robot to flexibly interact with all objects to facilitate the manipulation of
the target ones. With extensive experiments both in simulation and on a real
robot, we evaluate our framework on three manipulation tasks in cluttered
environments: grasping, relocating, and sorting. In comparison with two
baseline approaches, we show that our framework can significantly improve
planning efficiency, robustness against physical uncertainties, and task
success rate under limited time budgets.
Comment: Accepted for publication in the Proceedings of the 2022 IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS 2022).
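The closed-loop idea above, planning over a controlled horizon, executing, observing, and replanning from the actual state, can be illustrated with a toy 1-D sketch. The environment, noise model, and horizon length here are illustrative stand-ins, not the paper's kinodynamic planner:

```python
import random

def plan(state, goal, horizon):
    """Plan up to `horizon` bounded steps toward the goal (toy stand-in
    for a sampling-based kinodynamic planner)."""
    actions = []
    s = state
    for _ in range(horizon):
        if abs(goal - s) < 1e-3:
            break
        step = max(-1.0, min(1.0, goal - s))  # bounded control
        actions.append(step)
        s += step
    return actions

def execute(state, action, noise=0.2):
    """Apply one action; physical uncertainty perturbs the outcome."""
    return state + action + random.uniform(-noise, noise)

def interleaved_control(state, goal, horizon=3, max_cycles=50):
    """Alternate short-horizon planning with execution, replanning from
    the observed state so deviations are corrected along the way."""
    for _ in range(max_cycles):
        actions = plan(state, goal, horizon)
        if not actions:
            break
        for a in actions:
            state = execute(state, a)
    return state

random.seed(0)
final = interleaved_control(0.0, 10.0)
```

Because each cycle replans from the measured state rather than the predicted one, execution noise never accumulates beyond a single short horizon, which is the core robustness argument of the framework.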
Rearrangement with Nonprehensile Manipulation Using Deep Reinforcement Learning
Rearranging objects on a tabletop surface by means of nonprehensile
manipulation is a task which requires skillful interaction with the physical
world. Usually, this is achieved by precisely modeling physical properties of
the objects, robot, and the environment for explicit planning. In contrast, as
explicitly modeling the physical environment is not always feasible and
involves various uncertainties, we learn a nonprehensile rearrangement strategy
with deep reinforcement learning based on only visual feedback. For this, we
model the task with rewards and train a deep Q-network. Our potential
field-based heuristic exploration strategy reduces the number of collisions
that lead to suboptimal outcomes, and we actively balance the training set to
avoid bias towards poor examples. Our training process leads to quicker
learning and better performance on the task as compared to uniform exploration
and standard experience replay. We demonstrate empirical evidence from
simulation that our method leads to a success rate of 85%, show that our system
can cope with sudden changes of the environment, and compare our performance
with human-level performance.
Comment: 2018 International Conference on Robotics and Automation.
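The active balancing of the training set mentioned above can be sketched as a replay buffer that keeps successful and unsuccessful transitions in separate pools and draws evenly from both, so that the far more common poor examples do not dominate learning. This is a hypothetical simplification, not the paper's exact mechanism:

```python
import random

class BalancedReplayBuffer:
    """Store success/failure transitions in separate bounded pools and
    sample half a batch from each, counteracting the bias toward poor
    examples in nonprehensile pushing rollouts."""
    def __init__(self, capacity=1000):
        self.pools = {True: [], False: []}
        self.capacity = capacity

    def add(self, transition, success):
        pool = self.pools[success]
        pool.append(transition)
        if len(pool) > self.capacity:
            pool.pop(0)  # drop the oldest transition

    def sample(self, batch_size):
        half = batch_size // 2
        batch = []
        for pool in self.pools.values():
            k = min(half, len(pool))
            batch.extend(random.sample(pool, k))
        return batch

random.seed(1)
buf = BalancedReplayBuffer()
for i in range(100):
    # only 10% of the simulated pushes succeed
    buf.add(("push", i), success=(i % 10 == 0))
batch = buf.sample(8)
n_success = sum(1 for t in batch if t[1] % 10 == 0)
```

Despite the 10% success rate in the stream, half of every sampled batch consists of successful examples, which is the balancing effect the abstract credits for faster learning.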
Persistent Homology Guided Monte-Carlo Tree Search for Effective Non-Prehensile Manipulation
Performing object retrieval tasks in messy real-world workspaces involves the
challenges of uncertainty and clutter. One option is to solve
retrieval problems via a sequence of prehensile pick-n-place operations, which
can be computationally expensive to compute in highly-cluttered scenarios and
also inefficient to execute. The proposed framework instead performs
non-prehensile actions, such as pushing, to clear a cluttered workspace
so that a robotic arm can retrieve the target object. Non-prehensile
actions allow the robot to interact with multiple objects simultaneously,
which can speed up execution. At the same time, they can significantly increase
uncertainty as it is not easy to accurately estimate the outcome of a pushing
operation in clutter. The proposed framework integrates topological tools and
Monte-Carlo tree search to achieve effective and robust pushing for object
retrieval problems. In particular, it proposes using persistent homology to
automatically identify manageable clustering of blocking objects in the
workspace without the need for manually adjusting hyper-parameters.
Furthermore, MCTS uses this information to explore feasible actions to push
groups of objects together, aiming to minimize the number of pushing actions
needed to clear the path to the target object. Real-world experiments using a
Baxter robot, which involves some noise in actuation, show that the proposed
framework achieves a higher success rate in solving retrieval tasks in dense
clutter compared to state-of-the-art alternatives. Moreover, it produces
high-quality solutions with a small number of pushing actions, improving the
overall execution time. More critically, it is robust enough to allow the
sequence of actions to be planned offline and then executed reliably online
with Baxter.
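The persistent-homology step can be sketched with 0-dimensional persistence over object positions: connected components merge as a distance threshold grows (single linkage), and the largest gap between merge scales selects the cut, so no cluster count or radius needs hand-tuning. This minimal sketch uses toy 2-D positions, not the paper's actual pipeline:

```python
import math

def zero_dim_persistence_clusters(points):
    """Cluster object positions via 0-dim persistent homology: track the
    distances at which components merge, then cut at the largest gap in
    those merge scales."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    merges, mst = [], []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:           # a component dies at scale d
            parent[ri] = rj
            merges.append(d)
            mst.append((d, i, j))
    gaps = [merges[k + 1] - merges[k] for k in range(len(merges) - 1)]
    if not gaps:
        return [set(range(n))]
    cut = merges[gaps.index(max(gaps)) + 1]
    # rebuild components using only merges below the chosen cut
    parent = list(range(n))
    for d, i, j in mst:
        if d < cut:
            parent[find(i)] = find(j)
    comps = {}
    for i in range(n):
        comps.setdefault(find(i), set()).add(i)
    return list(comps.values())

objects = [(0, 0), (0.3, 0.1), (0.2, 0.4),   # one pile of blockers
           (5, 5), (5.2, 4.8)]               # a second pile
clusters = zero_dim_persistence_clusters(objects)
```

Each recovered cluster is a candidate group of blocking objects that MCTS can try to push together, which is what lets the search minimize the number of pushing actions.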
Real-Time Online Re-Planning for Grasping Under Clutter and Uncertainty
We consider the problem of grasping in clutter. While there have been motion
planners developed to address this problem in recent years, these planners are
mostly tailored for open-loop execution. Open-loop execution in this domain,
however, is likely to fail, since it is not possible to model the dynamics of
the multi-body, multi-contact physical system with enough accuracy, nor is
it reasonable to expect robots to know objects' exact physical properties,
such as friction, inertia, and geometry. Therefore, we propose
an online re-planning approach for grasping through clutter. The main challenge
is the long planning times this domain requires, which makes fast re-planning
and fluent execution difficult to realize. In order to address this, we propose
an easily parallelizable stochastic trajectory optimization based algorithm
that generates a sequence of optimal controls. We show that by running this
optimizer only for a small number of iterations, it is possible to perform real
time re-planning cycles to achieve reactive manipulation under clutter and
uncertainty.
Comment: Published as a conference paper in IEEE Humanoids 201
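A stochastic trajectory optimizer of the kind described above can be sketched in the MPPI style: sample noisy perturbations of a nominal control sequence, roll each out (the rollouts are independent, hence easy to parallelize), and update the nominal with an exponentially weighted average. The 1-D point dynamics and cost here are illustrative stand-ins for the multi-contact physics rollout:

```python
import math
import random

def rollout_cost(x0, controls, target):
    """Toy rollout: 1-D point driven by the controls, penalizing effort
    and the final distance to the target."""
    x, cost = x0, 0.0
    for u in controls:
        x += u
        cost += 0.1 * u * u
    return cost + (x - target) ** 2

def stochastic_traj_opt(x0, target, horizon=5, samples=64, iters=3,
                        sigma=0.5, temperature=1.0):
    """MPPI-style update: Gaussian-perturb the nominal controls, weight
    each sampled sequence by exp(-cost), and average.  A few iterations
    already yield a usable plan, enabling fast re-planning cycles."""
    nominal = [0.0] * horizon
    for _ in range(iters):
        seqs, costs = [], []
        for _ in range(samples):
            seq = [u + random.gauss(0.0, sigma) for u in nominal]
            seqs.append(seq)
            costs.append(rollout_cost(x0, seq, target))
        best = min(costs)  # subtract the best cost for numerical stability
        weights = [math.exp(-(c - best) / temperature) for c in costs]
        total = sum(weights)
        nominal = [
            sum(w * seq[t] for w, seq in zip(weights, seqs)) / total
            for t in range(horizon)
        ]
    return nominal

random.seed(2)
controls = stochastic_traj_opt(0.0, 2.0)
x = sum(controls)  # resulting displacement, close to the target of 2.0
```

Running the optimizer for only a few iterations, as in the paper's argument, trades optimality for speed: the plan is good enough to execute one re-planning cycle, after which the next cycle corrects from the observed state.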