Active vision for dexterous grasping of novel objects
How should a robot direct active vision so as to ensure reliable grasping? We
answer this question for the case of dexterous grasping of unfamiliar objects.
By dexterous grasping we simply mean grasping by any hand with more than two
fingers, such that the robot has some choice about where to place each finger.
Such grasps typically fail in one of two ways: either unmodeled objects in the
scene cause collisions, or the object reconstruction is insufficient to ensure
that the grasp points provide stable force closure. These problems can be solved
more easily if active sensing is guided by the anticipated actions. Our
approach has three stages. First, we take a single view and generate candidate
grasps from the resulting partial object reconstruction. Second, we drive
active vision to maximise surface reconstruction quality around the
planned contact points. During this phase, the anticipated grasp is continually
refined. Third, we direct gaze to improve the safety of the planned reach to
grasp trajectory. We show, on a dexterous manipulator with a camera on the
wrist, that our approach (80.4% success rate) outperforms a randomised
algorithm (64.3% success rate). Comment: IROS 2016. Supplementary video: https://youtu.be/uBSOO6tMzw
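The second stage, directing gaze to improve reconstruction around the anticipated contacts, can be caricatured as a next-best-view score. Everything here (the `coverage` measure, the candidate-view set) is a hypothetical stand-in for the paper's actual reconstruction-quality criterion, sketched only to make the idea concrete:

```python
import numpy as np

def next_best_view(candidate_views, contact_points, coverage):
    """Return the camera view that most improves reconstruction quality
    near the planned grasp contacts. `coverage(view, point)` is a
    hypothetical stand-in for a surface-reconstruction quality measure."""
    scores = [sum(coverage(view, p) for p in contact_points)
              for view in candidate_views]
    return candidate_views[int(np.argmax(scores))]
```

With `coverage` set, say, to negative distance from view to contact, the function simply picks the view closest to the planned contacts; a real system would score expected surface coverage from each viewpoint instead.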
Uncertainty Averse Pushing with Model Predictive Path Integral Control
Planning robust robot manipulation requires good forward models. This work
shows how to achieve this by planning push manipulations with a forward
model learned from robot data. We explore learning
methods (Gaussian Process Regression, and an Ensemble of Mixture Density
Networks) that give estimates of the uncertainty in their predictions. These
learned models are utilised by a model predictive path integral (MPPI)
controller to plan how to push a box to a goal location. The planner avoids
regions of high predictive uncertainty in the forward model. This includes both
inherent uncertainty in the dynamics and meta-uncertainty due to limited data.
Thus, pushing tasks are completed in a robust fashion with respect to estimated
uncertainty in the forward model and without the need for differentiable cost
functions. We demonstrate the method on a real robot, and show that learning
can outperform physics simulation. Using simulation, we also show the ability
to plan uncertainty-averse paths. Comment: Humanoids 2017. Supplementary video: https://youtu.be/LjYruxwxkP
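A minimal sketch of the planner's core idea, assuming a toy `dynamics(x, u)` that returns both a next state and a predictive standard deviation (standing in for the GP / mixture-density-network uncertainty estimates), and assuming the control dimension equals the state dimension for brevity:

```python
import numpy as np

def mppi_push_plan(dynamics, cost, x0, horizon=10, samples=100,
                   noise_std=0.1, lam=1.0, uncertainty_weight=1.0, seed=0):
    """Plan a control sequence with model predictive path integral (MPPI)
    control, penalising rollouts that cross high-uncertainty regions.

    `dynamics(x, u)` is assumed to return (next_state, predictive_std);
    the std term stands in for the learned forward model's uncertainty.
    This is an illustrative sketch, not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    dim = x0.shape[0]                      # assume control dim == state dim
    u_nominal = np.zeros((horizon, dim))   # start from a zero nominal plan
    noise = rng.normal(0.0, noise_std, size=(samples, horizon, dim))
    costs = np.zeros(samples)
    for k in range(samples):
        x = x0.copy()
        for t in range(horizon):
            x, std = dynamics(x, u_nominal[t] + noise[k, t])
            # State cost plus a penalty on the model's predictive uncertainty.
            costs[k] += cost(x) + uncertainty_weight * float(np.sum(std))
    # Path-integral weighting: softmax over negative trajectory costs, then
    # average the sampled perturbations with those weights.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nominal + np.einsum("k,ktd->td", w, noise)
```

The uncertainty penalty is what makes the plan averse to poorly modelled regions: rollouts through states where the model reports high predictive variance receive lower weight, regardless of their nominal cost, and no gradient of the cost is ever required.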
Task-relevant grasp selection: A joint solution to planning grasps and manipulative motion trajectories
This paper addresses the problem of jointly planning both grasps and subsequent manipulative actions. Previously, these two problems have typically been studied in isolation; however, joint reasoning is essential to enable robots to complete real manipulative tasks. In this paper, the two problems are addressed jointly and a solution that takes both into consideration is proposed. To do so, a manipulation capability index is defined, which is a function of both the task execution waypoints and the object grasping contact points. We build on recent state-of-the-art grasp-learning methods to show how this index can be combined with a likelihood function computed by a probabilistic model of grasp selection, enabling the planning of grasps which have a high likelihood of being stable, but which also maximise the robot's capability to deliver a desired post-grasp task trajectory. We also show how this paradigm can be extended, from a single arm and hand, to enable efficient grasping and manipulation with a bi-manual robot. We demonstrate the effectiveness of the approach in experiments on both a simulated and a real robot.
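The joint objective can be sketched as scoring each candidate grasp by its stability log-likelihood plus a weighted capability term. The additive form, the weight `alpha`, and the dictionary keys are assumptions for illustration, not the paper's exact formulation:

```python
def select_grasp(candidate_grasps, alpha=1.0):
    """Pick the grasp maximising stability log-likelihood under the grasp
    model plus a weighted manipulation-capability term along the planned
    task trajectory. Both keys are hypothetical names for illustration."""
    return max(candidate_grasps,
               key=lambda g: g["log_likelihood"] + alpha * g["capability"])

# Hypothetical candidates: grasp 0 is more likely stable, grasp 1 leaves
# the arm better placed to execute the post-grasp trajectory.
grasps = [
    {"id": 0, "log_likelihood": -1.0, "capability": 0.2},
    {"id": 1, "log_likelihood": -1.5, "capability": 0.9},
]
```

With `alpha > 0` the capability term can override a small stability advantage, which is the point of joint reasoning: the most stable grasp in isolation may make the subsequent manipulation infeasible.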
Prediction learning in robotic manipulation
This thesis addresses an important problem in robotic manipulation, which is the ability to predict how objects behave under manipulative actions. This ability is useful for planning object manipulations. Physics simulators can be used to do this, but they model many kinds of object interactions poorly, and unless there is a precise description of an object’s properties, their predictions may be unreliable. An alternative is to learn a model for objects by interacting with them. This thesis specifically addresses the problem of learning to predict the interactions of rigid bodies in a probabilistic framework, and demonstrates results in the domain of robotic push manipulation. During training, a robotic manipulator applies pushes to objects and learns to predict their resulting motions. The learning does not make explicit use of physics knowledge, nor is it restricted to domains with any particular physical properties. The prediction problem is posed in terms of estimating probability densities over the possible rigid body transformations of an entire object, as well as parts of an object, under a known action. Density estimation is useful in that it enables predictions with multimodal outcomes, but it also enables compromise predictions for multiple combined expert predictors in a product of experts architecture. It is shown that a product of experts architecture can be learned and that it can produce generalization with respect to novel actions and object shapes, outperforming in most cases an approach based on regression. An alternative, non-learning, method of prediction is also presented, in which a simplified physics approach uses the minimum energy principle together with a particle-based representation of the object. A probabilistic formulation enables this simplified physics predictor to be combined with learned predictors in a product of experts.
The thesis experimentally compares the performance of product of densities, regression, and simplified physics approaches. Performance is evaluated through a combination of virtual experiments in a physics simulator, and real experiments with a 5-axis arm equipped with a simple, rigid finger and a vision system used for tracking the manipulated object.
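The product-of-experts combination has a closed form in the Gaussian case: multiplying expert densities sums their precisions, so the compromise prediction is pulled toward the more confident experts. The thesis works with densities over full rigid-body transformations; the 1-D Gaussian case below is only a minimal illustration of that combination rule:

```python
import numpy as np

def product_of_gaussian_experts(means, variances):
    """Combine 1-D Gaussian expert predictions by multiplying densities.

    The product of Gaussians is again Gaussian, with precision equal to
    the sum of the expert precisions and mean equal to the precision-
    weighted average of the expert means.
    """
    precisions = 1.0 / np.asarray(variances, dtype=float)
    var = 1.0 / precisions.sum()
    mean = var * float(np.sum(precisions * np.asarray(means, dtype=float)))
    return mean, var
```

For two equally confident experts predicting 0.0 and 2.0, the product predicts 1.0 with half the variance of either expert; a confident expert paired with an uncertain one pulls the compromise toward the confident prediction, which is how a learned predictor and the simplified-physics predictor can be fused.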