7 research outputs found
Active Estimation of Object Dynamics Parameters with Tactile Sensors
The estimation of parameters that affect the
dynamics of objects—such as viscosity or internal degrees of
freedom—is an important step in autonomous and dexterous
robotic manipulation of objects. However, accurate and efficient
estimation of these object parameters may be challenging due
to complex, highly nonlinear underlying physical processes. To
improve on the quality of otherwise hand-crafted solutions,
automatic generation of control strategies can be helpful.
We present a framework that uses active learning to help
with sequential gathering of data samples, using information-theoretic
criteria to find the optimal actions to perform at each
time step. We demonstrate the usefulness of our approach on a
robotic hand-arm setup, where the task involves shaking bottles
of different liquids in order to determine the liquid’s viscosity
from only tactile feedback. We optimize the shaking frequency
and the rotation angle of shaking in an online manner in order
to speed up convergence of the estimates.
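The action-selection idea in this abstract can be illustrated with a toy sketch: maintain a discrete belief over candidate parameter values (e.g. viscosities), and pick the action whose expected information gain — the expected drop in belief entropy after an observation — is largest. The forward model, noise level, and candidate set below are invented for illustration and are not the paper's implementation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def best_action(prior, actions, predict, noise_std, n_samples=200):
    """Pick the action with the largest Monte Carlo estimate of the
    expected information gain over the parameter candidates."""
    rng = np.random.default_rng(0)
    gains = []
    for a in actions:
        h_post = 0.0
        for _ in range(n_samples):
            # Sample a hypothetical true parameter and a noisy observation.
            theta = rng.choice(len(prior), p=prior)
            obs = predict(theta, a) + rng.normal(0.0, noise_std)
            # Bayes update: likelihood of obs under each candidate.
            preds = np.array([predict(t, a) for t in range(len(prior))])
            lik = np.exp(-0.5 * ((obs - preds) / noise_std) ** 2)
            post = prior * lik
            post /= post.sum()
            h_post += entropy(post)
        gains.append(entropy(prior) - h_post / n_samples)
    return int(np.argmax(gains))

# Toy forward model: tactile response grows with the viscosity index
# and the shaking frequency; both are hypothetical placeholders.
predict = lambda theta, freq: (theta + 1) * freq
prior = np.ones(4) / 4  # uniform belief over 4 viscosity candidates
best = best_action(prior, actions=[0.5, 1.0, 2.0], predict=predict, noise_std=1.0)
```

Under this toy model the highest shaking frequency separates the candidates' predicted responses the most relative to the observation noise, so it yields the largest expected information gain.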
Active End-Effector Pose Selection for Tactile Object Recognition through Monte Carlo Tree Search
This paper considers the problem of active object recognition using touch
only. The focus is on adaptively selecting a sequence of wrist poses that
achieves accurate recognition by enclosure grasps. It seeks to minimize the
number of touches and maximize recognition confidence. The actions are
formulated as wrist poses relative to each other, making the algorithm
independent of absolute workspace coordinates. The optimal sequence is
approximated by Monte Carlo tree search. We demonstrate results in a physics
engine and on a real robot. In the physics engine, most object instances were
recognized in at most 16 grasps. On a real robot, our method recognized objects
in 2--9 grasps and outperformed a greedy baseline.
Comment: Accepted to International Conference on Intelligent Robots and Systems (IROS) 201
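The search procedure this abstract describes can be sketched as a generic depth-limited Monte Carlo tree search with UCB1 selection over discrete actions (here standing in for relative wrist poses). The pose names and the toy reward — a stand-in for recognition confidence — are hypothetical, not the paper's model.

```python
import math
import random

class Node:
    def __init__(self, parent=None, action=None):
        self.parent, self.action = parent, action
        self.children, self.visits, self.value = {}, 0, 0.0

def ucb1(parent, child, c=1.4):
    # UCB1: average reward plus an exploration bonus for rarely tried children.
    return child.value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def mcts(actions, rollout_reward, n_iters=300, depth=3, rng=random):
    """Depth-limited MCTS over sequences of discrete actions;
    rollout_reward scores a completed sequence."""
    root = Node()
    for _ in range(n_iters):
        node, seq = root, []
        # Selection, with expansion of one untried child when possible.
        while len(seq) < depth:
            untried = [a for a in actions if a not in node.children]
            if untried:
                a = rng.choice(untried)
                node.children[a] = Node(node, a)
                node = node.children[a]
                seq.append(a)
                break
            node = max(node.children.values(), key=lambda ch: ucb1(node, ch))
            seq.append(node.action)
        # Rollout: complete the sequence with random actions.
        while len(seq) < depth:
            seq.append(rng.choice(actions))
        r = rollout_reward(seq)
        # Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    return max(root.children.values(), key=lambda ch: ch.visits).action

poses = ["rotate_left", "rotate_right", "top_grasp", "side_grasp"]
# Toy reward: pretend a top grasp as the first touch is the most informative.
reward = lambda seq: 1.0 if seq[0] == "top_grasp" else 0.1
first = mcts(poses, reward, n_iters=300, rng=random.Random(0))
```

As in the paper's formulation, the returned action is the most-visited child of the root, which is the standard robust choice in MCTS.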
Making Sense of Audio Vibration for Liquid Height Estimation in Robotic Pouring
In this paper, we focus on the challenging perception problem in robotic
pouring. Most existing approaches leverage either visual or haptic
information. However, these techniques may suffer from poor generalization
on opaque containers or from limited measurement precision. To tackle
these drawbacks, we propose to make use of audio vibration sensing and design a
deep neural network PouringNet to predict the liquid height from the audio
fragment during the robotic pouring task. PouringNet is trained on our
collected real-world pouring dataset with multimodal sensing data, which
contains more than 3000 recordings of audio, force feedback, video and
trajectory data of the human hand that performs the pouring task. Each record
represents a complete pouring procedure. We conduct several evaluations on
PouringNet with our dataset and robotic hardware. The results demonstrate that
our PouringNet generalizes well across different liquid containers, positions
of the audio receiver, initial liquid heights and types of liquid, and
facilitates a more robust and accurate audio-based perception for robotic
pouring.
Comment: Check out the project page for video, code and dataset:
https://lianghongzhuo.github.io/AudioPourin
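The acoustic cue behind this line of work — the resonance pitch of the air column rises as the container fills — can be shown with a much simpler baseline than PouringNet: extract the dominant spectral frequency of each audio frame and fit a least-squares line from frequency to height. The synthetic tones below are invented for illustration; this is not the paper's network or dataset.

```python
import numpy as np

def dominant_frequency(signal, sr):
    """Return the strongest frequency component of an audio frame (Hz)."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return freqs[np.argmax(spec)]

def fit_height_model(frames, heights, sr):
    """Least-squares fit: liquid height as a linear function of the
    dominant resonance frequency."""
    f = np.array([dominant_frequency(x, sr) for x in frames])
    A = np.vstack([f, np.ones_like(f)]).T
    coef, *_ = np.linalg.lstsq(A, np.array(heights), rcond=None)
    return lambda x: coef[0] * dominant_frequency(x, sr) + coef[1]

# Synthetic demo: tones whose pitch rises linearly with fill height (cm).
sr, n = 16000, 2048
t = np.arange(n) / sr
make = lambda h: np.sin(2 * np.pi * (200.0 + 40.0 * h) * t)
heights = [1.0, 3.0, 5.0, 7.0]
model = fit_height_model([make(h) for h in heights], heights, sr)
est = model(make(4.0))  # estimate the height of an unseen fill level
```

A learned model such as PouringNet replaces this single hand-picked spectral feature with features learned from real pouring audio, which is what lets it generalize across containers, receiver positions, and liquid types.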