Intelligent learning for deformable object manipulation
©1999 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Presented at the 1999 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Monterey Bay, CA, November 1999. DOI: 10.1109/CIRA.1999.809935
The majority of manipulation systems are designed with the assumption that the objects being handled are rigid and do not deform when grasped. This paper addresses the problem of robotic grasping and manipulation of 3-D deformable objects, such as rubber balls or bags filled with sand. Specifically, we have developed a generalized learning algorithm for handling 3-D deformable objects that requires no prior knowledge of object attributes and can therefore be applied to a large class of object types. Our methodology relies on two main tasks. The first task is to calculate deformation characteristics for a non-rigid object represented by a physically based model: using nonlinear partial differential equations, we model the particle motion of the deformable object. The second task is to calculate the minimum force required to successfully lift the deformable object; this minimum lifting force is learned using a technique called "iterative lifting". Once the deformation characteristics and the associated lifting force are determined, they are used to train a neural network that extracts the minimum force required for subsequent deformable-object manipulation tasks. Our algorithm is validated with two sets of experiments. The first set of results is derived from an implementation of the algorithm in a simulated environment.
The second set involves a physical implementation of the technique, whose outcome is compared with the simulation results to test the real-world validity of the developed methodology.
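The "iterative lifting" idea in the abstract can be sketched very simply: raise the applied force in small increments until a lift trial succeeds, and record the resulting minimum force alongside the object's deformation features as training data for the neural network. The toy lift model, function names, and step size below are all illustrative assumptions, not the authors' implementation.

```python
def iterative_lift(required_force, step=0.1, max_force=100.0):
    """Increase force step by step until a (simulated) lift succeeds;
    return the smallest tried force that lifts the object."""
    force = 0.0
    while force < max_force:
        force += step
        if force >= required_force:  # stand-in for a real lift trial
            return round(force, 10)  # round away float accumulation error
    raise RuntimeError("object could not be lifted within the force budget")

# Pairs of (deformation feature, learned minimum force) would then form the
# training set for a neural-network regressor; the feature values here are
# made up for illustration.
samples = [(0.2, iterative_lift(1.23)), (0.5, iterative_lift(3.07))]
```

The step size trades off trial count against the precision of the learned minimum force.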
VIRDO++: Real-World, Visuo-tactile Dynamics and Perception of Deformable Objects
Deformable object manipulation can benefit from representations that
seamlessly integrate vision and touch while handling occlusions. In this work,
we present a novel approach for, and real-world demonstration of, multimodal
visuo-tactile state-estimation and dynamics prediction for deformable objects.
Our approach, VIRDO++, builds on recent progress in multimodal neural implicit
representations for deformable object state-estimation [1] via a new
formulation for deformation dynamics and a complementary state-estimation
algorithm that (i) maintains a belief over deformations, and (ii) enables
practical real-world application by removing the need for privileged contact
information. In the context of two real-world robotic tasks, we show: (i)
high-fidelity cross-modal state-estimation and prediction of deformable objects
from partial visuo-tactile feedback, and (ii) generalization to unseen objects
and contact formations.
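The abstract's "belief over deformations" can be illustrated with a plain discrete Bayes-filter update: weight each deformation hypothesis by the likelihood of the current visuo-tactile observation and renormalize. This is a generic illustration only; VIRDO++ itself operates on neural implicit representations, which are not reproduced here.

```python
def bayes_update(belief, likelihoods):
    """Multiply a prior belief over deformation hypotheses by the
    per-hypothesis observation likelihoods, then renormalize."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Three hypothetical deformation states, updated with one noisy observation.
belief = [1 / 3, 1 / 3, 1 / 3]
belief = bayes_update(belief, [0.7, 0.2, 0.1])
```

Repeating this update as observations arrive concentrates probability mass on the hypothesis most consistent with the feedback.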
Model-free vision-based shaping of deformable plastic materials
We address the problem of shaping deformable plastic materials using
non-prehensile actions. Shaping plastic objects is challenging, since they are
difficult to model and to track visually. We study this problem, by using
kinetic sand, a plastic toy material which mimics the physical properties of
wet sand. Inspired by a pilot study where humans shape kinetic sand, we define
two types of actions: \textit{pushing} the material from the sides and
\textit{tapping} from above. The chosen actions are executed with a robotic arm
using image-based visual servoing. From the current and desired view of the
material, we define states based on visual features such as the outer contour
shape and the pixel luminosity values. These are mapped to actions, which are
repeated iteratively to reduce the image error until convergence is reached.
For pushing, we propose three methods for mapping the visual state to an
action. These include heuristic methods and a neural network, trained from
human actions. We show that it is possible to obtain simple shapes with the
kinetic sand, without explicitly modeling the material. Our approach is limited
in the types of shapes it can achieve. A richer set of action types and
multi-step reasoning is needed to achieve more sophisticated shapes.
Comment: Accepted to The International Journal of Robotics Research (IJRR).
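The iterative state-to-action loop this abstract describes can be sketched in one dimension: compute an error between the current and desired visual features, apply an action that reduces it, and repeat until convergence. The scalar "contour" state, the proportional push action, and the gain are toy assumptions, not the paper's image features or action mapping.

```python
def shape_until_converged(state, target, gain=0.5, tol=1e-3, max_iters=100):
    """Iteratively push a scalar shape feature toward its target value,
    stopping when the image error falls below `tol`."""
    for i in range(max_iters):
        error = target - state
        if abs(error) < tol:
            return state, i  # converged after i actions
        state += gain * error  # one non-prehensile "push" action
    return state, max_iters

final_state, n_actions = shape_until_converged(0.0, 1.0)
```

With a gain below 1, the error shrinks geometrically, mirroring the paper's observation that actions must be repeated iteratively until the image error converges.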
Object grasping and safe manipulation using friction-based sensing.
This project provides a solution for slippage prevention in industrial robotic grippers, enabling safe object manipulation. Slippage sensing is performed using novel friction-based sensors with customisable slippage sensitivity, complemented by an effective slippage prediction strategy. The outcome is a reliable and affordable slippage prevention technology.
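A slippage-prevention loop of the kind described here can be sketched as: flag incipient slip when the friction-sensor signal changes faster than a threshold, then raise the grip force in response. The sensor model, threshold, and gain below are illustrative assumptions, not the project's actual sensing hardware or control law.

```python
def detect_slip(readings, threshold=0.5):
    """Return the indices at which consecutive friction-sensor readings
    jump by more than `threshold` (a crude incipient-slip signal)."""
    return [i for i in range(1, len(readings))
            if abs(readings[i] - readings[i - 1]) > threshold]

def adjust_grip(force, slip_events, gain=0.2):
    """Increase grip force by a fixed amount per detected slip event."""
    return force + gain * len(slip_events)

events = detect_slip([0.0, 0.1, 0.9, 1.0])  # large jump between samples 1 and 2
new_force = adjust_grip(1.0, events)
```

Tuning the threshold corresponds to the customisable slippage sensitivity the abstract mentions: a lower threshold reacts earlier but risks false alarms from sensor noise.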