Manipulation Planning Among Movable Obstacles Using Physics-Based Adaptive Motion Primitives
Robot manipulation in cluttered scenes often requires contact-rich
interactions with objects. It can be more economical to interact via
non-prehensile actions, for example, pushing through other objects to reach the
desired grasp pose, instead of deliberate prehensile rearrangement of the
scene. For each object in a scene, depending on its properties, the robot may
or may not be allowed to make contact with, tilt, or topple it. To ensure that
these constraints are satisfied during non-prehensile interactions, a planner
can query a physics-based simulator to evaluate the complex multi-body
interactions caused by robot actions. Unfortunately, it is infeasible to query
the simulator for thousands of actions that need to be evaluated in a typical
planning problem as each simulation is time-consuming. In this work, we show
that (i) manipulation tasks (specifically pick-and-place style tasks from a
tabletop or a refrigerator) can often be solved by restricting robot-object
interactions to adaptive motion primitives in a plan, (ii) these actions can be
incorporated as subgoals within a multi-heuristic search framework, and (iii)
limiting interactions to these actions can help reduce the time spent querying
the simulator during planning by up to 40x in comparison to baseline
algorithms. Our algorithm is evaluated in simulation and in the real-world on a
PR2 robot using PyBullet as our physics-based simulator. Supplementary video:
\url{https://youtu.be/ABQc7JbeJPM}.
Comment: Under review for the IEEE Robotics and Automation Letters (RA-L)
journal with conference presentation option at the 2021 International
Conference on Robotics and Automation (ICRA). This work has been submitted to
the IEEE for possible publication. Copyright may be transferred without
notice, after which this version may no longer be accessible.
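The core efficiency claim above is that restricting robot-object interactions to a small set of adaptive motion primitives means far fewer candidate actions ever reach the expensive physics simulator. The following is a toy sketch of that effect only, not the authors' planner: `simulate`, `plan`, and the constraint check are all hypothetical stand-ins (a real implementation would query PyBullet rollouts inside a multi-heuristic search), and the 10x reduction shown is an artifact of the toy numbers, not the paper's reported up-to-40x figure.

```python
# Toy illustration of gating expensive simulator queries behind primitives.
# All names here are hypothetical stand-ins, not the paper's implementation.

sim_calls = 0  # counts how many times the "simulator" is queried

def simulate(action):
    """Stand-in for an expensive physics rollout (e.g. one PyBullet query).

    Returns True if the action violates no constraints (toy check).
    """
    global sim_calls
    sim_calls += 1
    return action % 7 != 0  # hypothetical constraint-violation test

def plan(actions, use_primitives):
    # With adaptive primitives, only a small curated subset of candidate
    # actions is ever handed to the simulator; without them, every
    # candidate action triggers a simulation.
    candidates = [a for a in actions if a % 10 == 0] if use_primitives else actions
    return [a for a in candidates if simulate(a)]

actions = list(range(1000))
plan(actions, use_primitives=False)
baseline_calls = sim_calls          # every action simulated: 1000 queries
sim_calls = 0
plan(actions, use_primitives=True)
primitive_calls = sim_calls         # only the primitive subset: 100 queries
print(baseline_calls, primitive_calls)  # prints: 1000 100
```

The point of the sketch is purely structural: the savings come from where the simulator sits in the pipeline (after the candidate filter, not before), which is the same reason the paper can treat primitive actions as subgoals inside its search.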
Driving in Dense Traffic with Model-Free Reinforcement Learning
Traditional planning and control methods can fail to find a feasible
trajectory for an autonomous vehicle to execute in dense traffic on roads.
This is because the obstacle-free volume in spacetime is very small in these
scenarios for the vehicle to drive through. However, that does not mean the
task is infeasible since human drivers are known to be able to drive amongst
dense traffic by leveraging the cooperativeness of other drivers to open a gap.
These traditional methods fail to account for the fact that the actions
taken by an agent affect the behaviour of other vehicles on the road. In this
work, we rely on the ability of deep reinforcement learning to implicitly model
such interactions and learn a continuous control policy over the action space
of an autonomous vehicle. The application we consider requires our agent to
negotiate and open a gap in the road in order to successfully merge or change
lanes. Our policy learns to repeatedly probe into the target road lane while
trying to find a safe spot to move into. We compare against two
model-predictive control-based algorithms and show that our policy outperforms
them in simulation.
Comment: Proceedings of the IEEE International Conference on Robotics and
Automation (ICRA), 2020. Updated GitHub repository link.
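The abstract's key behavioural claim is that the learned policy repeatedly probes into the target lane and relies on other drivers' cooperativeness to open a gap. The sketch below is an illustrative hand-written model of that feedback loop only, not the paper's learned policy or simulator: `merge_episode`, its gap dynamics, and the `cooperativeness` parameter are all invented for illustration.

```python
# Toy model of "probe, and cooperative traffic yields" (illustrative only;
# the actual paper learns a continuous control policy with deep RL).

def merge_episode(cooperativeness, steps=50):
    """Return True if the agent ever gets (nearly) fully merged.

    lateral: 0.0 = own lane, 1.0 = fully in the target lane.
    gap:     normalized size of the opening in the target lane.
    """
    lateral, gap, best = 0.0, 0.2, 0.0
    for _ in range(steps):
        if gap > lateral + 0.1:
            lateral = min(1.0, lateral + 0.1)   # probe: nudge into the lane
        else:
            lateral = max(0.0, lateral - 0.05)  # retreat to a safe spot
        # Cooperative drivers yield in response to the probe, widening the gap.
        gap = min(1.0, gap + cooperativeness * lateral)
        best = max(best, lateral)
    return best >= 0.9

print(merge_episode(0.05))  # cooperative traffic: True (gap opens, merge succeeds)
print(merge_episode(0.0))   # non-cooperative traffic: False (gap never opens)
```

The toy makes the abstract's point concrete: with zero cooperativeness the obstacle-free volume never grows and the agent oscillates near its own lane, whereas even weak cooperative feedback lets repeated probing bootstrap a full merge. Modelling that interaction is exactly what the model-free RL policy is credited with learning implicitly.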