Learning Transferable Push Manipulation Skills in Novel Contexts
This paper is concerned with learning transferable forward models for push
manipulation that can be applied to novel contexts, and with improving the
quality of prediction when critical information is available. We propose to
learn a parametric internal model for push interactions that, similarly to
humans, enables a robot to predict the outcome of a physical interaction even
in novel contexts. Given a desired push action, humans are capable of
identifying where to place their finger on a new object so as to produce a
predictable motion
of the object. We achieve the same behaviour by factorising the learning into
two parts. First, we learn a set of local contact models to represent the
geometrical relations between the robot pusher, the object, and the
environment. Then we learn a set of parametric local motion models to predict
how these contacts change throughout a push. The set of contact and motion
models represent our internal model. By adjusting the shapes of the
distributions over the physical parameters, we modify the internal model's
response. Uniform distributions yield coarse estimates when no information
about the novel context is available (i.e. an unbiased predictor), while a more
accurate, biased predictor can be learned for a specific environment/object
pair (e.g. low friction/high mass). The effectiveness of our approach
is shown in a simulated environment in which a Pioneer 3-DX robot needs to
predict a push outcome for a novel object, and we provide a proof of concept on
a real robot. We train on 2 objects (a cube and a cylinder) for a total of
24,000 pushes in various conditions, and test on 6 objects encompassing a
variety of shapes, sizes, and physical parameters for a total of 14,400
predicted push outcomes. Our results show that both biased and unbiased
predictors can reliably produce predictions in line with the outcomes of a
carefully tuned physics simulator.
Comment: This work has been submitted to the IEEE Transactions on Robotics
journal in July 202
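The biased/unbiased distinction above can be illustrated with a minimal toy sketch. This is not the paper's learned internal model: the simple friction-based motion model, the parameter ranges, and the sample counts are all illustrative assumptions. It only shows how widening or narrowing the distributions over physical parameters (friction, mass) changes the spread of Monte Carlo push predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

def push_displacement(push_force, friction, mass, dt=1.0):
    # Toy motion model (illustrative assumption, not the paper's model):
    # net acceleration after Coulomb friction, integrated over one push.
    g = 9.81
    accel = max(push_force / mass - friction * g, 0.0)
    return 0.5 * accel * dt**2

def predict(push_force, friction_dist, mass_dist, n_samples=1000):
    # Monte Carlo estimate of the push outcome under distributions
    # over the unknown physical parameters.
    outcomes = [push_displacement(push_force, friction_dist(), mass_dist())
                for _ in range(n_samples)]
    return np.mean(outcomes), np.std(outcomes)

# Unbiased predictor: uniform distributions, no context information.
unbiased = predict(5.0,
                   friction_dist=lambda: rng.uniform(0.1, 0.9),
                   mass_dist=lambda: rng.uniform(0.2, 2.0))

# Biased predictor: narrow distributions tuned to a known
# low-friction / high-mass environment/object pair.
biased = predict(5.0,
                 friction_dist=lambda: rng.normal(0.15, 0.02),
                 mass_dist=lambda: rng.normal(1.8, 0.05))

print("unbiased mean/std:", unbiased)
print("biased   mean/std:", biased)
```

The unbiased predictor remains usable with no context information but produces coarse (high-variance) estimates; narrowing the distributions concentrates the predictions.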
Object and Relation Centric Representations for Push Effect Prediction
Pushing is an essential non-prehensile manipulation skill, used for tasks
ranging from pre-grasp manipulation to scene rearrangement and for reasoning
about object relations in a scene; pushing actions have therefore been widely
studied in robotics. The effective use of pushing actions often requires an
understanding of the dynamics of the manipulated objects and adaptation to the
discrepancies between prediction and reality. For this reason, effect
prediction and parameter estimation with pushing actions have been heavily
investigated in the literature. However, current approaches are limited because
they either model systems with a fixed number of objects or use image-based
representations whose outputs are not very interpretable and quickly accumulate
errors. In this paper, we propose a graph neural network based framework for
effect prediction and parameter estimation of pushing actions by modeling
object relations based on contacts or articulations. Our framework is validated
both in real and simulated environments containing differently shaped
multi-part objects connected via various types of joints, as well as objects
with different masses. Our approach enables the robot to predict and adapt the effect of a
pushing action as it observes the scene. Further, we demonstrate 6D effect
prediction in the lever-up action in the context of robot-based hard-disk
disassembly.
Comment: Project Page: https://fzaero.github.io/push_learning
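The graph-based representation described above can be sketched in miniature. This is a hand-rolled toy, not the authors' trained framework: the feature sizes, random weights, and symmetric aggregation rule are assumptions chosen only to show the structure of one message-passing step over a scene graph whose nodes are objects and whose typed edges are contacts or joints.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scene graph: each node is an object (toy feature vector),
# each typed edge is a relation (contact or joint) between two objects.
node_feats = rng.normal(size=(4, 3))   # 4 objects, 3 features each
edges = [(0, 1), (1, 2), (2, 3)]       # e.g. a contact/joint chain
edge_type = [0, 1, 0]                  # 0 = contact, 1 = joint

# Randomly initialised weights standing in for learned parameters.
W_msg = rng.normal(size=(2, 3, 8))     # one message transform per edge type
W_upd = rng.normal(size=(3 + 8, 3))    # node-update transform

def message_pass(h):
    # One round of message passing: each edge sends a message computed
    # from the source node and the relation type; nodes aggregate messages
    # by summation, then update. The output stands in for a predicted
    # per-object effect of a pushing action.
    msgs = np.zeros((h.shape[0], 8))
    for (src, dst), t in zip(edges, edge_type):
        m = np.tanh(h[src] @ W_msg[t])
        msgs[dst] += m
        msgs[src] += m                 # relations act in both directions
    return np.tanh(np.concatenate([h, msgs], axis=1) @ W_upd)

effect = message_pass(node_feats)      # one 3-vector of "effect" per object
print(effect.shape)
```

Because the prediction is made per object node and per relation edge, the same weights apply to scenes with any number of objects, which is the property that fixed-size or image-based representations lack.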
Legged Robots for Object Manipulation: A Review
Legged robots can have a unique role in manipulating objects in dynamic,
human-centric, or otherwise inaccessible environments. Although most legged
robotics research to date focuses on traversing these challenging
environments, many legged platform demonstrations have also included "moving an
object" as a way of doing tangible work. Legged robots can be designed to
manipulate a particular type of object (e.g., a cardboard box, a soccer ball,
or a larger piece of furniture), by themselves or collaboratively. The
objective of this review is to collect and learn from these examples, to both
organize the work done so far in the community and highlight interesting open
avenues for future work. This review categorizes existing works into four main
manipulation methods: object interactions without grasping, manipulation with
walking legs, dedicated non-locomotive arms, and legged teams. Each method has
different design and autonomy features, which are illustrated by available
examples in the literature. Based on a few simplifying assumptions, we further
provide quantitative comparisons for the range of possible relative sizes of
the manipulated object with respect to the robot. Taken together, these
examples suggest new directions for research in legged robot manipulation, such
as multifunctional limbs, terrain modeling, or learning-based control, to
support a number of new deployments in challenging indoor/outdoor scenarios in
warehouses/construction sites, preserved natural areas, and especially for home
robotics.
Comment: Preprint of the paper submitted to Frontiers in Mechanical
Engineerin