Active model learning and diverse action sampling for task and motion planning
The objective of this work is to augment the basic abilities of a robot by
learning to use new sensorimotor primitives to enable the solution of complex
long-horizon problems. Solving long-horizon problems in complex domains
requires flexible generative planning that can combine primitive abilities in
novel combinations to solve problems as they arise in the world. In order to
plan to combine primitive actions, we must have models of the preconditions and
effects of those actions: under what circumstances will executing this
primitive achieve some particular effect in the world?
We use, and develop novel improvements on, state-of-the-art methods for
active learning and sampling. We use Gaussian process methods for learning the
conditions of operator effectiveness from small numbers of expensive training
examples collected by experimentation on a robot. We develop adaptive sampling
methods for generating diverse elements of continuous sets (such as robot
configurations and object poses) during planning for solving a new task, so
that planning is as efficient as possible. We demonstrate these methods in an
integrated system, combining newly learned models with an efficient
continuous-space robot task and motion planner to learn to solve long-horizon
problems more efficiently than was previously possible.
Comment: Proceedings of the 2018 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), Madrid, Spain.
https://www.youtube.com/playlist?list=PLoWhBFPMfSzDbc8CYelsbHZa1d3uz-W_
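The core loop sketched in this abstract — fit a Gaussian process to a small number of expensive robot trials, then query the condition where the model is least certain — can be illustrated in a few lines. This is a minimal sketch, not the authors' system: the 1-D pose parameter, the `success` ground-truth function, and the RBF length scale are all hypothetical assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.15):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_query, noise=1e-4):
    """GP regression posterior mean and variance at X_query."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_query)
    Kss = rbf_kernel(X_query, X_query)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha                      # posterior mean
    v = np.linalg.solve(K, Ks)
    var = np.diag(Kss - Ks.T @ v)            # posterior variance
    return mean, var

# Hypothetical primitive: a pour succeeds only for pose parameters near 0.5.
rng = np.random.default_rng(0)
def success(x):
    return float(abs(x[0] - 0.5) < 0.2)

X = rng.uniform(0, 1, (3, 1))                # a few expensive robot trials
y = np.array([success(x) for x in X])
candidates = np.linspace(0, 1, 101)[:, None]

# Active learning: repeatedly run the trial whose outcome is most uncertain.
for _ in range(10):
    _, var = gp_posterior(X, y, candidates)
    x_next = candidates[np.argmax(var)]
    X = np.vstack([X, x_next])
    y = np.append(y, success(x_next))

mean, _ = gp_posterior(X, y, candidates)
```

After the loop, the posterior mean approximates the region of operator effectiveness from only 13 trials, far fewer than a uniform grid of experiments would need.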
Learning for a robot: deep reinforcement learning, imitation learning, transfer learning
Dexterous manipulation is an important part of realizing robot intelligence, yet most manipulators can only perform simple tasks such as sorting and packing in structured environments. In view of this problem, this paper presents a state-of-the-art survey of intelligent robots capable of autonomous decision-making and learning. The paper first reviews the main achievements of robotics research, which were largely driven by breakthroughs in automatic control and mechanical hardware. With the evolution of artificial intelligence, many studies have made further progress in adaptive and robust control. The survey reveals that the latest research in deep learning and reinforcement learning has paved the way for robots to perform highly complex tasks. Furthermore, deep reinforcement learning, imitation learning, and transfer learning in robot control are discussed in detail. Finally, major achievements based on these methods are summarized and analyzed thoroughly, and future research challenges are proposed.
Model-free vision-based shaping of deformable plastic materials
We address the problem of shaping deformable plastic materials using
non-prehensile actions. Shaping plastic objects is challenging, since they are
difficult to model and to track visually. We study this problem, by using
kinetic sand, a plastic toy material which mimics the physical properties of
wet sand. Inspired by a pilot study where humans shape kinetic sand, we define
two types of actions: \textit{pushing} the material from the sides and
\textit{tapping} from above. The chosen actions are executed with a robotic arm
using image-based visual servoing. From the current and desired view of the
material, we define states based on visual features such as the outer contour
shape and the pixel luminosity values. These are mapped to actions, which are
repeated iteratively to reduce the image error until convergence is reached.
For pushing, we propose three methods for mapping the visual state to an
action. These include heuristic methods and a neural network, trained from
human actions. We show that it is possible to obtain simple shapes with the
kinetic sand, without explicitly modeling the material. Our approach is limited
in the types of shapes it can achieve. A richer set of action types and
multi-step reasoning is needed to achieve more sophisticated shapes.
Comment: Accepted to The International Journal of Robotics Research (IJRR).
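The iterative loop described in this abstract — compare the current and desired views, map the visual state to an action, and repeat until the image error converges — can be sketched on a binary occupancy grid. This is a toy illustration under assumed primitives, not the paper's visual-servoing pipeline: `push_from_side` and the exhaustive greedy search over pushes are hypothetical stand-ins for the heuristic and learned state-to-action mappings.

```python
import numpy as np

def image_error(current, desired):
    """Pixel-wise error between binary material masks."""
    return int(np.logical_xor(current, desired).sum())

def push_from_side(mask, row, side):
    """Hypothetical push primitive: clear the outermost filled cell
    of `row` from the given side ('left' or 'right')."""
    cols = np.flatnonzero(mask[row])
    if cols.size:
        mask = mask.copy()
        mask[row, cols[0] if side == 'left' else cols[-1]] = False
    return mask

def greedy_shape(current, desired, max_steps=200):
    """Map visual state to action: try every push, keep the one that
    most reduces the image error, and stop when no push improves it."""
    for _ in range(max_steps):
        best, best_err = None, image_error(current, desired)
        for row in range(current.shape[0]):
            for side in ('left', 'right'):
                cand = push_from_side(current, row, side)
                err = image_error(cand, desired)
                if err < best_err:
                    best, best_err = cand, err
        if best is None:            # converged: no action reduces the error
            return current
        current = best
    return current

# Toy example: push a full 6x6 block down to a narrower 6x4 target shape.
current = np.ones((6, 6), dtype=bool)
desired = np.zeros((6, 6), dtype=bool)
desired[:, 1:5] = True
result = greedy_shape(current, desired)
```

Because pushes can only remove material from the contour, this sketch also exhibits the limitation the abstract notes: reaching shapes that require adding or redistributing material would need a richer action set and multi-step reasoning.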