Elastic Context: Encoding Elasticity for Data-driven Models of Textiles
Physical interaction with textiles, such as assistive dressing, relies on
advanced dexterous capabilities. The underlying complexity in textile behavior
when pulled and stretched is due to both the yarn material properties
and the textile construction technique. Today, there are no commonly adopted
and annotated datasets on which the various interaction or property
identification methods are assessed. One important property that affects the
interaction is material elasticity that results from both the yarn material and
construction technique: these two are intertwined and, if not known a priori,
almost impossible to identify through sensing commonly available on robotic
platforms. We introduce Elastic Context (EC), a concept that integrates various
properties that affect elastic behavior, to enable a more effective physical
interaction with textiles. The definition of EC relies on stress/strain curves
commonly used in textile engineering, which we reformulated for robotic
applications. We employ EC with Graph Neural Networks (GNNs) to learn
generalized elastic behaviors of textiles. Furthermore, we explore the effect
the dimension of the EC has on accurate force modeling of non-linear real-world
elastic behaviors, highlighting the challenges of current robotic setups to
sense textile properties.
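As a minimal sketch of the idea behind an Elastic Context vector, one can sample a measured stress/strain curve at a fixed number of strain points, so that the EC dimension is the number of samples. The function name and the piecewise-linear interpolation below are illustrative assumptions, not the paper's implementation:

```python
def elastic_context(strain, stress, dim):
    """Sample `dim` stress values at evenly spaced strain points
    along a measured stress/strain curve (linear interpolation)."""
    assert len(strain) == len(stress) >= 2 and dim >= 2
    targets = [strain[0] + i * (strain[-1] - strain[0]) / (dim - 1)
               for i in range(dim)]
    ec = []
    for t in targets:
        # find the bracketing segment and interpolate linearly
        for j in range(len(strain) - 1):
            if strain[j] <= t <= strain[j + 1]:
                w = (t - strain[j]) / (strain[j + 1] - strain[j])
                ec.append(stress[j] + w * (stress[j + 1] - stress[j]))
                break
    return ec

# A non-linear (stiffening) curve measured at a few strain levels:
strain = [0.0, 0.1, 0.2, 0.3, 0.4]
stress = [0.0, 0.5, 1.4, 3.0, 5.5]
print(elastic_context(strain, stress, 3))  # stress at strain 0.0, 0.2, 0.4
```

A larger `dim` captures more of the curve's non-linearity, which connects to the abstract's point about how EC dimension affects force-modeling accuracy.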
GenORM: Generalizable One-shot Rope Manipulation with Parameter-Aware Policy
Due to the inherent uncertainty in rope deformability during motion,
previous methods in rope manipulation often require hundreds of real-world
demonstrations to train a manipulation policy for each rope, even for simple
tasks such as rope goal reaching, which hinders their application in our
ever-changing world. To address this issue, we introduce GenORM, a framework
that allows the manipulation policy to handle different deformable ropes with a
single real-world demonstration. To achieve this, we augment the policy by
conditioning it on deformable rope parameters and training it with a diverse
range of simulated deformable ropes so that the policy can adjust actions based
on different rope parameters. At inference time, given a new rope,
GenORM estimates the deformable rope parameters by minimizing the disparity
between the grid density of point clouds of real-world demonstrations and
simulations. With the help of a differentiable physics simulator, we require
only a single real-world demonstration. Empirical validations on both simulated
and real-world rope manipulation setups clearly show that our method can
manipulate different ropes with a single demonstration and significantly
outperforms the baseline in both environments (62% improvement in in-domain
ropes, and 15% improvement in out-of-distribution ropes in simulation, 26%
improvement in real-world), demonstrating the effectiveness of our approach in
one-shot rope manipulation.
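The parameter-identification step above can be illustrated with a toy sketch: choose the simulated rope parameter whose point-cloud grid density best matches the real demonstration. The 1-D stand-in "simulator", the density histogram, and the search over candidate parameters are all illustrative assumptions; the paper uses a differentiable physics simulator and minimizes the disparity directly rather than searching a candidate list:

```python
def grid_density(points, bins=10, lo=0.0, hi=1.0):
    """Normalized histogram of point positions over a fixed grid."""
    counts = [0] * bins
    for p in points:
        i = min(int((p - lo) / (hi - lo) * bins), bins - 1)
        counts[i] += 1
    return [c / len(points) for c in counts]

def simulate_rope(stiffness, n=200):
    """Hypothetical stand-in simulator: stiffer ropes spread less."""
    return [0.5 + (k / n - 0.5) / (1.0 + stiffness) for k in range(n)]

def estimate_stiffness(real_points, candidates):
    """Pick the parameter minimizing the L2 disparity between the
    real and simulated grid densities."""
    real_d = grid_density(real_points)
    def disparity(s):
        sim_d = grid_density(simulate_rope(s))
        return sum((a - b) ** 2 for a, b in zip(real_d, sim_d))
    return min(candidates, key=disparity)

# "Real" demonstration generated with an unknown stiffness of 2.0:
real = simulate_rope(2.0)
print(estimate_stiffness(real, candidates=[0.5, 1.0, 2.0, 4.0]))  # prints 2.0
```

With a differentiable simulator, the same disparity could be minimized by gradient descent on the parameter, which is what makes a single demonstration sufficient.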
EDO-Net: Learning Elastic Properties of Deformable Objects from Graph Dynamics
We study the problem of learning graph dynamics of deformable objects that
generalize to unknown physical properties. In particular, we leverage a latent
representation of elastic physical properties of cloth-like deformable objects
which we explore through a pulling interaction. We propose EDO-Net (Elastic
Deformable Object - Net), a model trained in a self-supervised fashion on a
large variety of samples with different elastic properties. EDO-Net jointly
learns an adaptation module, responsible for extracting a latent representation
of the physical properties of the object, and a forward-dynamics module, which
leverages the latent representation to predict future states of cloth-like
objects, represented as graphs. We evaluate EDO-Net both in simulation and real
world, assessing its capabilities of: 1) generalizing to unknown physical
properties of cloth-like deformable objects, 2) transferring the learned
representation to new downstream tasks.
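The two-module structure described above can be sketched in miniature, with the graph reduced to a flat list of node positions and both modules reduced to hand-written maps. The compliance-style latent and the pulling-interaction encoding are illustrative assumptions; the actual modules are learned graph neural networks:

```python
def adaptation_module(pull_observations):
    """Compress a pulling interaction into a latent elasticity code.
    Here: mean displacement per unit force, a crude compliance proxy."""
    return sum(d / f for f, d in pull_observations) / len(pull_observations)

def forward_dynamics(node_positions, applied_force, latent):
    """Predict next node positions conditioned on the latent code.
    Here: each node moves proportionally to force times compliance."""
    return [x + applied_force * latent for x in node_positions]

# Pulling interaction: (force, observed displacement) pairs
pulls = [(1.0, 0.02), (2.0, 0.04)]
z = adaptation_module(pulls)  # latent compliance, about 0.02
print(forward_dynamics([0.0, 0.1, 0.2], 1.5, z))
```

The key design point carried over from the abstract is the factorization: the adaptation module sees only the interaction, the dynamics module sees only the state plus the latent, so swapping in a new fabric only changes `z`.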
Continuous Perception for Classifying Shapes and Weights of Garments for Robotic Vision Applications
We present an approach to continuous perception for robotic laundry tasks.
Our assumption is that the visual prediction of a garment's shapes and weights
is possible via a neural network that learns the dynamic changes of garments
from video sequences. Continuous perception is leveraged during training by
inputting consecutive frames, from which the network learns how a garment
deforms. To evaluate our hypothesis, we captured a dataset of 40K RGB and 40K
depth video sequences while a garment is being manipulated. We also conducted
ablation studies to understand whether the neural network learns the physical
and dynamic properties of garments. Our findings suggest that a modified
AlexNet-LSTM architecture has the best classification performance for the
garment's shape and weights. To further provide evidence that continuous
perception facilitates the prediction of the garment's shapes and weights, we
evaluated our network on unseen video sequences and computed the 'Moving
Average' over a sequence of predictions. We found that our network has a
classification accuracy of 48% and 60% for shapes and weights of garments,
respectively.
Comment: Accepted by the 17th International Conference on Computer Vision
Theory and Applications
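The 'Moving Average' smoothing mentioned above can be sketched as averaging per-class scores over a sliding window of frame predictions before taking the argmax. The window size and the toy scores are illustrative assumptions:

```python
def moving_average_label(frame_scores, window=3):
    """Classify using class scores averaged over the last `window` frames."""
    recent = frame_scores[-window:]
    n_classes = len(recent[0])
    avg = [sum(f[c] for f in recent) / len(recent) for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])

# Per-frame class scores for e.g. three garment shapes; the single
# noisy middle frame would flip a frame-wise prediction on its own:
scores = [[0.6, 0.3, 0.1],
          [0.2, 0.7, 0.1],
          [0.7, 0.2, 0.1]]
print(moving_average_label(scores))  # prints 0: class 0 wins after averaging
```

Averaging over a sequence exploits the continuous-perception assumption: a garment's shape and weight do not change between consecutive frames, so temporal smoothing suppresses per-frame noise.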