Model-free vision-based shaping of deformable plastic materials
We address the problem of shaping deformable plastic materials using
non-prehensile actions. Shaping plastic objects is challenging, since they are
difficult to model and to track visually. We study this problem using
kinetic sand, a plastic toy material that mimics the physical properties of
wet sand. Inspired by a pilot study in which humans shape kinetic sand, we define
two types of actions: pushing the material from the sides and
tapping from above. The chosen actions are executed with a robotic arm
using image-based visual servoing. From the current and desired view of the
material, we define states based on visual features such as the outer contour
shape and the pixel luminosity values. These are mapped to actions, which are
repeated iteratively to reduce the image error until convergence is reached.
For pushing, we propose three methods for mapping the visual state to an
action. These include heuristic methods and a neural network, trained from
human actions. We show that it is possible to obtain simple shapes with the
kinetic sand, without explicitly modeling the material. Our approach is limited
in the types of shapes it can achieve. A richer set of action types and
multi-step reasoning is needed to achieve more sophisticated shapes.
Comment: Accepted to The International Journal of Robotics Research (IJRR).
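The iterative scheme described above (map the visual state to an action, execute it, and repeat until the image error converges) can be sketched as a minimal servo loop. This is a toy stand-in, not the paper's implementation: the visual state is reduced to a small vector, and `select_action` replaces the paper's heuristic and neural mappings with a simple proportional rule.

```python
import numpy as np

def image_error(current, desired):
    # Error between the current and desired views (toy vector stand-in
    # for contour-shape and luminosity features).
    return desired - current

def select_action(error):
    # Hypothetical state-to-action mapping: correct a fraction of the
    # error each step, standing in for the heuristic/learned mappings.
    return 0.5 * error

def shape_material(current, desired, tol=1e-3, max_iters=100):
    # Servo loop: compute the image error, map it to an action, apply
    # the action, and repeat until the error norm falls below tol.
    for i in range(max_iters):
        e = image_error(current, desired)
        if np.linalg.norm(e) < tol:
            return current, i
        current = current + select_action(e)
    return current, max_iters

final_state, iterations = shape_material(np.zeros(4), np.ones(4))
```

With a proportional action the error halves every iteration, so the loop converges geometrically; the paper's mappings are of course far richer, but the outer structure is the same.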
Vision-based Manipulation of Deformable and Rigid Objects Using Subspace Projections of 2D Contours
This paper proposes a unified vision-based manipulation framework using image
contours of deformable/rigid objects. Instead of using human-defined cues, the
robot automatically learns the features from processed vision data. Our method
simultaneously generates, from the same data, both the visual features and the
interaction matrix that relates them to the robot control inputs. Extraction of
the feature vector and control commands is done online and adaptively, with
little data for initialization. The method allows the robot to manipulate an
object without knowing whether it is rigid or deformable. To validate our
approach, we conduct numerical simulations and experiments with both deformable
and rigid objects.
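The interaction matrix mentioned above plays the role it does in classical image-based visual servoing, where the control input is computed from the feature error via the matrix's pseudoinverse. The paper learns its features and matrix online; the sketch below only shows the standard control law they feed into, with a hypothetical `ibvs_velocity` helper and a toy identity matrix.

```python
import numpy as np

def ibvs_velocity(L, s, s_star, lam=0.5):
    # Classical IBVS law: v = -lambda * pinv(L) @ (s - s*),
    # where L is the interaction matrix mapping robot inputs
    # to feature velocities, and (s - s*) is the feature error.
    error = s - s_star
    return -lam * np.linalg.pinv(L) @ error

# Toy example: with L = I, the command is simply -lambda * error.
L = np.eye(3)
v = ibvs_velocity(L, np.array([1.0, 2.0, 3.0]), np.zeros(3))
```

Driving the robot with this velocity shrinks the feature error exponentially (at rate lambda) when L models the true feature dynamics, which is why estimating L well, adaptively and with little initialization data, is the crux of the approach.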