Qualitative robot planning of object moving by pushing
The thesis belongs to the fields of artificial intelligence, robotics and qualitative reasoning. The purpose of the work is to use a qualitative simulator to plan the qualitative actions of a robot. Our modification of the well-known QSIM algorithm generates a state space, which we search with the heuristic search algorithm A*. Implementations of all algorithms are written in the programming language Prolog. Some machine learning algorithms induce qualitative models using QDE constraints that are not defined in the original QSIM algorithm. One of these QDE constraints is monotonicity in multiple variables; this constraint was implemented and tested on an artificial domain. The generated robot plans were tested on an object-pushing simulator based on the Box2D engine. For this purpose, an algorithm for plan execution was developed. This plan-execution algorithm communicates through an interface, also developed as part of the thesis, which converts numerical data into qualitative states and executes qualitative actions on the simulator. Plans developed by the proposed algorithm were tested in two object-pushing domains: pushing a vertical cylinder and pushing a block. For this purpose, a qualitative model was built by hand for each domain. The thesis concludes with an examination of the achieved objectives, a review of potential challenges in the implementation of the algorithms, and a review of ideas for further research.
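The planning step described above — A* over a qualitative state space — can be sketched as follows. The thesis implements this in Prolog over QSIM-generated states; the Python sketch below uses a hypothetical hand-picked state space and heuristic purely for illustration.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search over a discrete (qualitative) state space.
    `neighbors(s)` yields (successor, cost) pairs; `h` is an admissible heuristic."""
    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, state, path)
    visited = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt, cost in neighbors(state):
            if nxt not in visited:
                heapq.heappush(frontier,
                               (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# Hypothetical qualitative states of a pushing task with unit transition
# costs; a QSIM-style simulator would generate such a graph automatically.
transitions = {
    "far":      [("approach", 1)],
    "approach": [("contact", 1), ("far", 1)],
    "contact":  [("push", 1)],
    "push":     [("goal", 1), ("contact", 1)],
}
h = {"far": 4, "approach": 3, "contact": 2, "push": 1, "goal": 0}

plan = a_star("far", "goal", lambda s: transitions.get(s, []), h.get)
```

The returned path is a qualitative plan; a separate execution layer (the thesis's interface) would then ground each qualitative action in the numeric simulator.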
Deep Visual Foresight for Planning Robot Motion
A key challenge in scaling up robot learning to many skills and environments
is removing the need for human supervision, so that robots can collect their
own data and improve their own performance without being limited by the cost of
requesting human feedback. Model-based reinforcement learning holds the promise
of enabling an agent to learn to predict the effects of its actions, which
could provide flexible predictive models for a wide range of tasks and
environments, without detailed human supervision. We develop a method for
combining deep action-conditioned video prediction models with model-predictive
control that uses entirely unlabeled training data. Our approach does not
require a calibrated camera, an instrumented training set-up, nor precise
sensing and actuation. Our results show that our method enables a real robot to
perform nonprehensile manipulation -- pushing objects -- and can handle novel
objects not seen during training.
Comment: ICRA 2017. Supplementary video:
https://sites.google.com/site/robotforesight
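The control scheme this abstract describes — model-predictive control on top of a learned predictive model — is commonly realized by sampling-based planning. Below is a minimal sketch of random-shooting MPC; the 1-D `predict` and `cost` functions are toy stand-ins (the actual work uses an action-conditioned video-prediction network and pixel-space objectives).

```python
import random

def random_shooting_mpc(state, predict, cost, horizon=5, n_samples=200,
                        action_range=(-1.0, 1.0)):
    """Sampling-based MPC: sample candidate action sequences, roll each one
    out through the predictive model, and return the first action of the
    lowest-cost sequence."""
    best_cost, best_action = float("inf"), 0.0
    for _ in range(n_samples):
        seq = [random.uniform(*action_range) for _ in range(horizon)]
        s, total = state, 0.0
        for a in seq:
            s = predict(s, a)       # stand-in for the learned video predictor
            total += cost(s)
        if total < best_cost:
            best_cost, best_action = total, seq[0]
    return best_action

# Toy stand-ins: a 1-D "object position" that moves by half the push
# magnitude, and a cost equal to the distance from a goal position.
goal = 3.0
predict = lambda s, a: s + 0.5 * a
cost = lambda s: abs(s - goal)

random.seed(0)
s = 0.0
for _ in range(20):                 # replan at every step, as in MPC
    s = predict(s, random_shooting_mpc(s, predict, cost))
```

The key property, which carries over from this toy to the video-prediction setting, is that only the model's forward rollouts are needed: no labels, calibration, or analytic dynamics.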
Combining Physical Simulators and Object-Based Networks for Control
Physics engines play an important role in robot planning and control;
however, many real-world control problems involve complex contact dynamics that
cannot be characterized analytically. Most physics engines therefore employ
approximations that lead to a loss in precision. In this paper, we propose a
hybrid dynamics model, simulator-augmented interaction networks (SAIN),
combining a physics engine with an object-based neural network for dynamics
modeling. Compared with existing models that are purely analytical or purely
data-driven, our hybrid model captures the dynamics of interacting objects in a
more accurate and data-efficient manner. Experiments both in simulation and on a
real robot suggest that it also leads to better performance when used in
complex control tasks. Finally, we show that our model generalizes to novel
environments with varying object shapes and materials.
Comment: ICRA 2019; Project page: http://sain.csail.mit.ed
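The hybrid-model idea above — an analytic physics step corrected by a learned residual — can be shown in a minimal 1-D sliding example. All dynamics below are invented for illustration, and the "learned" part is a one-parameter least-squares fit rather than the paper's object-based neural network; the structure (prediction = engine + residual) is the point.

```python
MU_G, K, DT = 2.0, 0.4, 0.05

def true_step(v):
    """Ground-truth slider dynamics (unknown to the analytic model):
    Coulomb friction plus a velocity-dependent drag term."""
    return max(0.0, v - (MU_G + K * v) * DT)

def analytic_step(v):
    """Physics-engine approximation: Coulomb friction only."""
    return max(0.0, v - MU_G * DT)

# "Train" the data-driven correction: a one-parameter linear residual
# r(v) = c * v, fitted by least squares to the observed prediction errors.
data = [0.5 + 0.1 * i for i in range(20)]        # observed slider speeds
errs = [true_step(v) - analytic_step(v) for v in data]
c = sum(e * v for e, v in zip(errs, data)) / sum(v * v for v in data)

def hybrid_step(v):
    """SAIN-style prediction: analytic engine output plus learned residual."""
    return analytic_step(v) + c * v
```

Because the residual only has to capture what the engine misses, far less data is needed than for learning the full dynamics from scratch — the data-efficiency argument made in the abstract.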
Learning to Singulate Objects using a Push Proposal Network
Learning to act in unstructured environments, such as cluttered piles of
objects, poses a substantial challenge for manipulation robots. We present a
novel neural network-based approach that separates unknown objects in clutter
by selecting favourable push actions. Our network is trained from data
collected through autonomous interaction of a PR2 robot with randomly organized
tabletop scenes. The model is designed to propose meaningful push actions based
on over-segmented RGB-D images. We evaluate our approach by singulating up to 8
unknown objects in clutter. We demonstrate that our method enables the robot to
perform the task with a high success rate and a low number of required push
actions. Our results based on real-world experiments show that our network is
able to generalize to novel objects of various sizes and shapes, as well as to
arbitrary object configurations. Videos of our experiments can be viewed at
http://robotpush.cs.uni-freiburg.de
Comment: International Symposium on Robotics Research (ISRR) 2017, videos:
http://robotpush.cs.uni-freiburg.d
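The core loop this abstract describes is proposing candidate pushes and ranking them by how well they singulate objects. The sketch below substitutes a simple geometric score (gain in minimum pairwise separation after a straight-line push) for the trained network's learned score; object centroids and push directions are hypothetical.

```python
import itertools
import math

def min_separation(objs):
    """Smallest pairwise distance among object centroids."""
    return min(math.dist(a, b) for a, b in itertools.combinations(objs, 2))

def score_push(objs, idx, direction, length=1.0):
    """Score a candidate push: how much does translating object `idx`
    along `direction` increase the minimum pairwise separation?"""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    moved = list(objs)
    x, y = objs[idx]
    moved[idx] = (x + length * dx / norm, y + length * dy / norm)
    return min_separation(moved) - min_separation(objs)

def best_push(objs, directions=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """Rank every (object, direction) proposal and return the best one."""
    return max(((i, d) for i in range(len(objs)) for d in directions),
               key=lambda p: score_push(objs, p[0], p[1]))

# Two nearly touching objects and a third one further along the same axis:
# the best proposal pushes the leftmost object away from both others.
clutter = [(0.0, 0.0), (0.3, 0.0), (2.0, 0.0)]
```

A learned proposal network replaces this hand-written score with one trained from the robot's own interaction data, which lets it account for effects (toppling, object shape, contact) that a purely geometric heuristic cannot.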
Realtime State Estimation with Tactile and Visual Sensing. Application to Planar Manipulation
Accurate and robust object state estimation enables successful object
manipulation. Visual sensing is widely used to estimate object poses. However,
in a cluttered scene or in a tight workspace, the robot's end-effector often
occludes the object from the visual sensor. The robot then loses visual
feedback and must fall back on open-loop execution.
In this paper, we integrate both tactile and visual input using a framework
for solving the SLAM problem, incremental smoothing and mapping (iSAM), to
provide a fast and flexible solution. Visual sensing provides global pose
information but is noisy in general, whereas contact sensing is local, but its
measurements are more accurate relative to the end-effector. By combining them,
we aim to exploit their advantages and overcome their limitations. We explore
the technique in the context of a pusher-slider system. We adapt iSAM's
measurement cost and motion cost to the pushing scenario, and use an
instrumented setup to evaluate the estimation quality with different object
shapes, on different surface materials, and under different contact modes.
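The complementarity argued above — global but noisy visual poses versus local but accurate tactile measurements — is easiest to see in a scalar special case. The sketch below is a precision-weighted fusion of two measurements, i.e. the single-variable, single-timestep analogue of the least-squares smoothing that iSAM performs over a whole factor graph; the variances are made-up numbers.

```python
def fuse(z_vis, var_vis, z_tac, var_tac):
    """Precision-weighted fusion of two noisy estimates of the same
    quantity: a 1-D stand-in for least-squares smoothing (iSAM)."""
    w_vis, w_tac = 1.0 / var_vis, 1.0 / var_tac
    est = (w_vis * z_vis + w_tac * z_tac) / (w_vis + w_tac)
    var = 1.0 / (w_vis + w_tac)     # fused estimate is more certain than either
    return est, var

# Hypothetical readings: vision says 1.00 m with high variance, the
# contact-derived estimate says 0.90 m with low variance.
est, var = fuse(z_vis=1.00, var_vis=0.09, z_tac=0.90, var_tac=0.01)
```

The fused estimate lands close to the accurate tactile reading while still using the visual one, and its variance is smaller than either input's — the same qualitative behavior the paper obtains by adapting iSAM's measurement and motion costs to the pusher-slider system.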
Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning
Skilled robotic manipulation benefits from complex synergies between
non-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing
can help rearrange cluttered objects to make space for arms and fingers;
likewise, grasping can help displace objects to make pushing movements more
precise and collision-free. In this work, we demonstrate that it is possible to
discover and learn these synergies from scratch through model-free deep
reinforcement learning. Our method involves training two fully convolutional
networks that map from visual observations to actions: one infers the utility
of pushes for a dense pixel-wise sampling of end effector orientations and
locations, while the other does the same for grasping. Both networks are
trained jointly in a Q-learning framework and are entirely self-supervised by
trial and error, where rewards are provided from successful grasps. In this
way, our policy learns pushing motions that enable future grasps, while
learning grasps that can leverage past pushes. During picking experiments in
both simulation and real-world scenarios, we find that our system quickly
learns complex behaviors amid challenging cases of clutter, and achieves better
grasping success rates and picking efficiencies than baseline alternatives
after only a few hours of training. We further demonstrate that our method is
capable of generalizing to novel objects. Qualitative results (videos), code,
pre-trained models, and simulation environments are available at
http://vpg.cs.princeton.edu
Comment: To appear at the International Conference on Intelligent Robots and
Systems (IROS) 2018. Project webpage: http://vpg.cs.princeton.edu Summary
video: https://youtu.be/-OkyX7Zlhi
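The action-selection step this abstract describes — two fully convolutional networks producing dense pixel-wise Q-maps, with the executed primitive chosen greedily across both — can be sketched as follows. The 2x2 Q-maps below are invented placeholders for the networks' per-pixel, per-orientation outputs.

```python
def argmax2d(q):
    """Return (value, (row, col)) of the highest-scoring pixel in a Q-map."""
    return max((v, (r, c)) for r, row in enumerate(q) for c, v in enumerate(row))

def best_primitive(q_push, q_grasp):
    """Greedy selection across two dense Q-maps: compare the best pixel of
    each map and return the winning primitive, its pixel, and its Q-value."""
    vp, pp = argmax2d(q_push)
    vg, pg = argmax2d(q_grasp)
    return ("grasp", pg, vg) if vg >= vp else ("push", pp, vp)

# Toy 2x2 maps standing in for the networks' dense outputs: here the most
# promising action is a grasp at pixel (1, 0).
action = best_primitive([[0.1, 0.7], [0.2, 0.3]],
                        [[0.4, 0.2], [0.9, 0.1]])
```

Because both maps share one Q-value scale and are trained jointly (with reward only for successful grasps), a push is selected exactly when its expected downstream value exceeds that of the best immediate grasp — which is how push-grasp synergies emerge without explicit supervision.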