VIRDO++: Real-World, Visuo-tactile Dynamics and Perception of Deformable Objects
Deformable object manipulation can benefit from representations that
seamlessly integrate vision and touch while handling occlusions. In this work,
we present a novel approach for, and real-world demonstration of, multimodal
visuo-tactile state-estimation and dynamics prediction for deformable objects.
Our approach, VIRDO++, builds on recent progress in multimodal neural implicit
representations for deformable object state-estimation [1] via a new
formulation for deformation dynamics and a complementary state-estimation
algorithm that (i) maintains a belief over deformations, and (ii) enables
practical real-world application by removing the need for privileged contact
information. In the context of two real-world robotic tasks, we show: (i)
high-fidelity cross-modal state-estimation and prediction of deformable objects
from partial visuo-tactile feedback, and (ii) generalization to unseen objects
and contact formations.
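To make the idea of a neural implicit deformable-object representation concrete, here is a minimal sketch in the spirit of the abstract: a coordinate MLP maps a 3D query point plus a latent deformation code to a signed distance, and state estimation becomes optimizing that code against partially observed surface points. The architecture, sizes, and `estimate_state` helper are illustrative assumptions, not the authors' published model.

```python
import torch
import torch.nn as nn

class DeformableSDF(nn.Module):
    """Hypothetical coordinate MLP: (3D point, deformation code) -> signed distance."""
    def __init__(self, latent_dim: int = 64, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # signed distance at the query point
        )

    def forward(self, xyz: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) query points; code: (latent_dim,) deformation state.
        z = code.expand(xyz.shape[0], -1)
        return self.net(torch.cat([xyz, z], dim=-1)).squeeze(-1)

def estimate_state(model, surface_pts, latent_dim=64, steps=200, lr=1e-2):
    """Auto-decoding-style state estimation (an assumption, not the paper's
    belief-update algorithm): infer a deformation code from partial surface
    observations by driving the predicted SDF to zero at observed points."""
    code = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([code], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = model(surface_pts, code).abs().mean()  # observed points lie on the surface
        loss.backward()
        opt.step()
    return code.detach()
```

In this style of representation, partial point clouds from vision and contact patches from touch can both be treated as surface observations, which is what makes the cross-modal fusion natural.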
More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch
For humans, the process of grasping an object relies heavily on rich tactile
feedback. Most recent robotic grasping work, however, has been based only on
visual input, and thus cannot easily benefit from feedback after initiating
contact. In this paper, we investigate how a robot can learn to use tactile
information to iteratively and efficiently adjust its grasp. To this end, we
propose an end-to-end action-conditional model that learns regrasping policies
from raw visuo-tactile data. This model -- a deep, multimodal convolutional
network -- predicts the outcome of a candidate grasp adjustment, and then
executes a grasp by iteratively selecting the most promising actions. Our
approach requires neither calibration of the tactile sensors, nor any
analytical modeling of contact forces, thus reducing the engineering effort
required to obtain efficient grasping policies. We train our model with data
from about 6,450 grasping trials on a two-finger gripper equipped with GelSight
high-resolution tactile sensors on each finger. Across extensive experiments,
our approach outperforms a variety of baselines at (i) estimating grasp
adjustment outcomes, (ii) selecting efficient grasp adjustments for quick
grasping, and (iii) reducing the amount of force applied at the fingers, while
maintaining competitive performance. Finally, we study the choices made by our
model and show that it has successfully acquired useful and interpretable
grasping behaviors.
Comment: 8 pages. Published in IEEE Robotics and Automation Letters (RA-L). Website: https://sites.google.com/view/more-than-a-feelin
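The core loop described in the abstract, predicting the outcome of candidate grasp adjustments and greedily executing the most promising one, can be sketched as follows. The encoder sizes, action parameterization, and module names are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

def small_cnn(out_dim: int = 128) -> nn.Module:
    """Illustrative image encoder shared in structure by camera and tactile streams."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
        nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim), nn.ReLU(),
    )

class GraspOutcomeModel(nn.Module):
    """Action-conditional outcome predictor: fuses RGB, two GelSight images,
    and a candidate adjustment into P(grasp success)."""
    def __init__(self, action_dim: int = 4):  # e.g. (dx, dy, dz, dtheta), assumed
        super().__init__()
        self.rgb_enc = small_cnn()
        self.tactile_enc = small_cnn()  # shared across both fingers
        self.head = nn.Sequential(
            nn.Linear(128 * 3 + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, rgb, tac_left, tac_right, action):
        feats = torch.cat([
            self.rgb_enc(rgb),
            self.tactile_enc(tac_left),
            self.tactile_enc(tac_right),
            action,
        ], dim=-1)
        return torch.sigmoid(self.head(feats)).squeeze(-1)  # P(grasp success)

def select_action(model, rgb, tac_l, tac_r, candidates):
    """Greedy regrasping step: score sampled adjustments, return the best."""
    n = candidates.shape[0]
    scores = model(rgb.expand(n, -1, -1, -1),
                   tac_l.expand(n, -1, -1, -1),
                   tac_r.expand(n, -1, -1, -1),
                   candidates)
    return candidates[scores.argmax()]
```

Because the model only needs raw images and a success label per trial, this setup avoids both tactile-sensor calibration and analytical contact modeling, as the abstract notes.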
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Sensors provide robotic systems with the information required to perceive changes in unstructured environments and to modify their actions accordingly. The robotic controllers that process and analyze this sensory information are usually based on three types of sensors (visual, force/torque, and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control, and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques, and applications developed by Spanish researchers to implement these controllers, both mono-sensor and multi-sensor controllers that combine several sensors.
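As a concrete reference point for the first of the surveyed strategies, here is the classical image-based visual servoing (IBVS) control law v = -lambda * pinv(L) * e, where L is the interaction matrix of point features and e the image-space feature error. This is a textbook sketch, not code from any of the surveyed systems.

```python
import numpy as np

def interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    """Interaction matrix of one normalized image point at depth Z
    (standard Chaumette-Hutchinson form)."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, targets, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) that drives image features
    toward their desired positions: v = -gain * pinv(L) @ e."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(targets)).ravel()
    return -gain * np.linalg.pinv(L) @ e

# Example: four point features slightly off their goal positions.
feats = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
goals = [(0.12, 0.1), (-0.1, 0.12), (-0.1, -0.1), (0.1, -0.1)]
v = ibvs_velocity(feats, goals, depths=[1.0] * 4)
```

Force and tactile control follow the same closed-loop pattern with wrench or contact errors in place of image-feature errors, which is why the survey treats the three strategies within a common framework.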