Semantic Robot Programming for Goal-Directed Manipulation in Cluttered Scenes
We present the Semantic Robot Programming (SRP) paradigm as a convergence of
robot programming by demonstration and semantic mapping. In SRP, a user can
directly program a robot manipulator by demonstrating a snapshot of their
intended goal scene in workspace. The robot then parses this goal as a scene
graph comprised of object poses and inter-object relations, assuming known
object geometries. Task and motion planning is then used to realize the user's
goal from an arbitrary initial scene configuration. Even when faced with
different initial scene configurations, SRP enables the robot to seamlessly
adapt to reach the user's demonstrated goal. For scene perception, we propose
the Discriminatively-Informed Generative Estimation of Scenes and Transforms
(DIGEST) method to infer the initial and goal states of the world from RGBD
images. The efficacy of SRP with DIGEST perception is demonstrated for the task
of tray-setting with a Michigan Progress Fetch robot. Scene perception and task
execution are evaluated with a public household occlusion dataset and our
cluttered scene dataset.
Comment: published in ICRA 201
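The goal representation described above, a scene graph of object poses and inter-object relations, can be sketched as a small data structure. This is a minimal illustration only; the class and relation names here are hypothetical, not the authors' DIGEST or SRP implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectNode:
    # Pose of a known-geometry object in the workspace frame:
    # (x, y, z, qx, qy, qz, qw).
    name: str
    pose: tuple

@dataclass
class SceneGraph:
    objects: dict = field(default_factory=dict)    # name -> ObjectNode
    relations: list = field(default_factory=list)  # (subject, predicate, object)

    def add_object(self, name, pose):
        self.objects[name] = ObjectNode(name, pose)

    def relate(self, subj, predicate, obj):
        self.relations.append((subj, predicate, obj))

# A demonstrated tray-setting goal: a mug placed on a tray.
goal = SceneGraph()
goal.add_object("tray", (0.50, 0.00, 0.02, 0, 0, 0, 1))
goal.add_object("mug", (0.55, 0.05, 0.06, 0, 0, 0, 1))
goal.relate("mug", "on", "tray")
```

A task-and-motion planner would then search for a sequence of pick-and-place actions that transforms the estimated initial scene graph into this goal graph.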
Experiments with hierarchical reinforcement learning of multiple grasping policies
Robotic grasping has attracted considerable interest, but it
remains a challenging task. The data-driven approach is a promising
solution to the robotic grasping problem; this approach leverages a
grasp dataset and generalizes grasps to various objects. However, these
methods often depend on the given datasets, which are not
trivial to obtain at sufficient quality. Although reinforcement learning
approaches have been recently used to achieve autonomous collection
of grasp datasets, the existing algorithms are often limited to specific
grasp types. In this paper, we present a framework for hierarchical reinforcement
learning of grasping policies. In our framework, the lower-level
hierarchy learns multiple grasp types, and the upper-level hierarchy
learns a policy to select from the learned grasp types according to a point
cloud of a new object. Through experiments, we validate that our approach
learns grasping by constructing the grasp dataset autonomously.
The experimental results show that our approach learns multiple grasping
policies and generalizes the learned grasps by using local point cloud
information.
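The two-level structure described in this abstract, where lower-level policies each handle one grasp type and an upper level selects among them from point-cloud input, can be sketched as follows. This is a toy illustration under assumed names and a deliberately simple linear scoring rule; the paper's actual policies are learned, not hand-weighted.

```python
import numpy as np

class GraspPolicy:
    """A lower-level policy for one grasp type (e.g. pinch, power)."""
    def __init__(self, name, weights):
        self.name = name
        self.weights = np.asarray(weights, dtype=float)

    def score(self, features):
        # Suitability of this grasp type for the object's local geometry.
        return float(self.weights @ features)

class UpperLevelSelector:
    """Upper-level policy: picks the grasp type best suited to a point cloud."""
    def __init__(self, policies):
        self.policies = policies

    def select(self, point_cloud):
        # Toy feature vector: object extent along each axis,
        # standing in for learned point-cloud features.
        features = point_cloud.max(axis=0) - point_cloud.min(axis=0)
        return max(self.policies, key=lambda p: p.score(features))

# Hypothetical hand-set weights: "pinch" favors small footprints,
# "power" favors tall objects.
policies = [GraspPolicy("pinch", [1.0, 1.0, -0.5]),
            GraspPolicy("power", [0.2, 0.2, 1.5])]
selector = UpperLevelSelector(policies)

# A thin, tall object (two extreme points of its point cloud).
tall_object = np.array([[0.0, 0.0, 0.0], [0.02, 0.02, 0.30]])
print(selector.select(tall_object).name)  # → power
```

In the learned setting, both the per-type grasp policies and the selector would be trained with reinforcement learning, with the autonomously collected grasp outcomes serving as the dataset.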