Visual Closed-Loop Control for Pouring Liquids
Pouring a specific amount of liquid is a challenging task. In this paper we
develop methods for robots to use visual feedback to perform closed-loop
control for pouring liquids. We propose both a model-based and a model-free
method utilizing deep learning for estimating the volume of liquid in a
container. Our results show that the model-free method is better able to
estimate the volume. We combine this with a simple PID controller to pour
specific amounts of liquid, and show that the robot is able to achieve an
average deviation of 38 ml from the target amount. To our knowledge, this is the
first use of raw visual feedback to pour liquids in robotics.
Comment: To appear at ICRA 2017
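As a rough illustration of the closed-loop control described above, here is a minimal Python sketch that pairs a visual volume estimate with a simple PID controller. The estimator, robot interface and gains are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch: PID control of pouring driven by a visual volume estimate.
# estimate_volume_ml() and set_wrist_velocity() are hypothetical stand-ins
# for the perception model and robot interface; the gains are invented.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def pour_to_target(target_ml, estimate_volume_ml, set_wrist_velocity,
                   dt=0.1, tolerance_ml=5.0):
    """Tilt the container until the estimated poured volume reaches the target."""
    pid = PID(kp=0.02, ki=0.001, kd=0.005, dt=dt)
    while True:
        error = target_ml - estimate_volume_ml()   # visual feedback signal
        if error <= tolerance_ml:
            set_wrist_velocity(0.0)                # target reached: stop tilting
            return
        set_wrist_velocity(pid.step(error))        # larger error -> faster tilt
```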
Reasoning About Liquids via Closed-Loop Simulation
Simulators are powerful tools for reasoning about a robot's interactions with
its environment. However, when simulations diverge from reality, that reasoning
becomes less useful. In this paper, we show how to close the loop between
liquid simulation and real-time perception. We use observations of liquids to
correct errors when tracking the liquid's state in a simulator. Our results
show that closed-loop simulation is an effective way to prevent large
divergence between the simulated and real liquid states. As a direct
consequence of this, our method can enable reasoning about liquids that would
otherwise be infeasible due to large divergences, such as reasoning about
occluded liquid.
Comment: Robotics: Science & Systems (RSS), July 12-16, 2017, Cambridge, MA, USA
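A minimal sketch of the correction idea, assuming the liquid state is an array that perception can partially observe; the linear blending rule and the interfaces are assumptions, not the paper's actual tracker.

```python
import numpy as np

# Illustrative closed-loop simulation step: advance the fluid simulator,
# then pull the simulated state toward the observed state wherever the
# liquid is visible. Occluded regions keep evolving under the simulator
# alone, which is what enables reasoning about occluded liquid.
# step_fn / observe_fn and the blend gain are hypothetical.

def closed_loop_step(sim_state, step_fn, observe_fn, gain=0.3):
    sim_state = step_fn(sim_state)        # one tick of liquid simulation
    observed, visible = observe_fn()      # observed state + visibility mask
    sim_state = sim_state.copy()
    sim_state[visible] += gain * (observed[visible] - sim_state[visible])
    return sim_state                      # corrected state, incl. occluded parts
```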
PourIt!: Weakly-supervised Liquid Perception from a Single Image for Visual Closed-Loop Robotic Pouring
Liquid perception is critical for robotic pouring tasks. It usually requires
robust visual detection of the flowing liquid. However, while recent works have
shown promising results in liquid perception, they typically require labeled
data for model training, a process that is both time-consuming and
labor-intensive. To address this, this paper proposes PourIt!, a simple yet
effective framework for robotic pouring tasks. We design a simple data
collection pipeline that only needs image-level labels to reduce the reliance
on tedious pixel-wise annotations. A binary classification model is then
trained to generate a Class Activation Map (CAM) that focuses on the visual
difference between the two kinds of collected data, i.e., the presence or
absence of poured liquid. We also devise a feature-contrast strategy to improve
the quality of the CAM so that it covers the actual liquid regions tightly and
completely. The container pose is then used to recover a 3D point cloud of the
detected liquid region. Finally, the
liquid-to-container distance is calculated for visual closed-loop control of
the physical robot. To validate the effectiveness of the proposed method, we
also contribute a novel dataset for this task, the PourIt! dataset. Extensive
results on this dataset and on a physical Franka robot demonstrate the utility
and effectiveness of our method for robotic pouring tasks. Our dataset, code
and pre-trained models will be available on the project page.
Comment: ICCV 2023
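A minimal sketch of the CAM step described above, assuming a ResNet-18 backbone with a two-class head; the backbone, layer choice and normalization are assumptions, and the feature-contrast strategy is omitted.

```python
import torch
import torch.nn.functional as F
import torchvision

# Sketch: a binary "pouring vs. not pouring" classifier whose class
# activation map (CAM) highlights the liquid stream. The backbone and
# layer names are assumptions, not the PourIt! architecture.

model = torchvision.models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # [no liquid, liquid]

def liquid_cam(model, image):                  # image: (1, 3, H, W)
    feats = {}
    hook = model.layer4.register_forward_hook(
        lambda m, i, o: feats.update(maps=o))  # grab last conv feature maps
    _ = model(image)                           # forward pass fills feats
    hook.remove()
    w = model.fc.weight[1]                     # classifier weights, "liquid" class
    cam = torch.einsum("c,chw->hw", w, feats["maps"][0])
    cam = F.relu(cam)
    return cam / (cam.max() + 1e-8)            # normalized coarse liquid mask
```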
To Stir or Not to Stir: Online Estimation of Liquid Properties for Pouring Actions
Our brains are able to exploit coarse physical models of fluids to solve
everyday manipulation tasks. There has been considerable interest in developing
such a capability in robots so that they can autonomously manipulate fluids
adapting to different conditions. In this paper, we investigate the problem of
adaptation to liquids with different characteristics. We develop a simple
calibration task (stirring with a stick) that enables rapid inference of the
parameters of the liquid from RGB data. We perform the inference in the space
of simulation parameters rather than on physically accurate parameters. This
facilitates prediction and optimization tasks since the inferred parameters may
be fed directly to the simulator. We demonstrate that our "stirring" learner
performs better than one calibrated with pouring actions. We show
that our method is able to infer properties of three different liquids --
water, glycerin and gel -- and present experimental results by executing
stirring and pouring actions on a UR10. We believe that decoupling of the
training actions from the goal task is an important step towards simple,
autonomous learning of the behavior of different fluids in unstructured
environments.
Comment: Presented at the Modeling the Physical World: Perception, Learning, and Control Workshop (NeurIPS) 2018
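A minimal sketch of inference in simulation-parameter space, assuming the stirring observation is summarized by a feature vector; the parameter grid, feature function and simulator interface are all placeholders.

```python
import numpy as np

# Sketch: calibrate the simulator from a stirring action by searching its
# own parameter space for the setting whose simulated stir best matches
# the observed one. simulate_stir() and featurize() are hypothetical.

def infer_sim_params(observed_frames, simulate_stir, featurize,
                     viscosity_grid, cohesion_grid):
    target = featurize(observed_frames)        # e.g. flow statistics from RGB
    best, best_err = None, np.inf
    for visc in viscosity_grid:
        for coh in cohesion_grid:
            frames = simulate_stir(viscosity=visc, cohesion=coh)
            err = np.linalg.norm(featurize(frames) - target)
            if err < best_err:
                best, best_err = (visc, coh), err
    return best    # feed directly back into the simulator for pour planning
```

Because the search runs over the simulator's own parameters rather than physically accurate ones, the winning setting can be used downstream without any further conversion.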
Making Sense of Audio Vibration for Liquid Height Estimation in Robotic Pouring
In this paper, we focus on the challenging perception problem in robotic
pouring. Most existing approaches leverage either visual or haptic
information. However, these techniques may generalize poorly to opaque
containers or suffer from limited measurement precision. To tackle these
drawbacks, we propose to make use of audio vibration sensing and design a deep
neural network, PouringNet, to predict the liquid height from the audio
fragment during the robotic pouring task. PouringNet is trained on our
collected real-world pouring dataset with multimodal sensing data, which
contains more than 3000 recordings of audio, force feedback, video and
trajectory data of the human hand that performs the pouring task. Each record
represents a complete pouring procedure. We conduct several evaluations on
PouringNet with our dataset and robotic hardware. The results demonstrate that
our PouringNet generalizes well across different liquid containers, positions
of the audio receiver, initial liquid heights and types of liquid, and
facilitates a more robust and accurate audio-based perception for robotic
pouring.
Comment: Check out the project page for video, code and dataset:
https://lianghongzhuo.github.io/AudioPouring
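A minimal sketch of the audio-to-height mapping, assuming a mel-spectrogram front end and a recurrent regressor; the architecture, sample rate and feature sizes are assumptions, not PouringNet's actual design.

```python
import torch
import torch.nn as nn
import torchaudio

# Sketch: regress liquid height from a pouring-audio fragment. The mel
# front end and GRU regressor are illustrative stand-ins for PouringNet.

class HeightRegressor(nn.Module):
    def __init__(self, n_mels=64, hidden=128):
        super().__init__()
        self.mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=16000, n_mels=n_mels)       # waveform -> spectrogram
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)            # scalar liquid height

    def forward(self, waveform):                    # (B, num_samples)
        spec = self.mel(waveform)                   # (B, n_mels, T)
        _, h = self.rnn(spec.transpose(1, 2))       # sequence over time
        return self.head(h[-1]).squeeze(-1)         # (B,) predicted height
```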