627,068 research outputs found
Learning Articulated Motions From Visual Demonstration
Many functional elements of human homes and workplaces consist of rigid
components which are connected through one or more sliding or rotating
linkages. Examples include doors and drawers of cabinets and appliances;
laptops; and swivel office chairs. A robotic mobile manipulator would benefit
from the ability to acquire kinematic models of such objects from observation.
This paper describes a method by which a robot can acquire an object model by
capturing depth imagery of the object as a human moves it through its range of
motion. We envision that, in the future, a machine newly introduced to an
environment could be shown by its human user the articulated objects particular
to that environment, inferring from these "visual demonstrations" enough
information to actuate each object independently of the user.
Our method employs sparse (markerless) feature tracking, motion segmentation,
component pose estimation, and articulation learning; it does not require prior
object models. Using the method, a robot can observe an object being exercised,
infer a kinematic model incorporating rigid, prismatic and revolute joints,
then use the model to predict the object's motion from a novel vantage point.
We evaluate the method's performance, and compare it to that of a previously
published technique, for a variety of household objects.
Comment: Published in Robotics: Science and Systems X, Berkeley, CA. ISBN: 978-0-9923747-0-
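
The abstract describes its pipeline (feature tracking, motion segmentation, pose estimation, articulation learning) only at a high level. As a rough illustration of the final articulation-learning step, the sketch below guesses whether the joint linking two rigid components is rigid, prismatic, or revolute from their estimated relative poses. The function name, thresholds, and input format are assumptions made for illustration, not the authors' method.

```python
import numpy as np

def classify_joint(rel_positions, rel_angles_deg, trans_tol=0.01, rot_tol=5.0):
    """Guess the joint type linking two rigid parts from their relative motion.

    rel_positions  : (N, 3) positions of the moving part in the reference frame.
    rel_angles_deg : (N,)  relative rotation angles in degrees.
    The thresholds are illustrative placeholders, not values from the paper.
    """
    trans_range = np.ptp(rel_positions, axis=0).max()  # largest positional sweep
    rot_range = np.ptp(rel_angles_deg)                 # total angular sweep

    if trans_range < trans_tol and rot_range < rot_tol:
        return "rigid"
    if rot_range < rot_tol:
        # Nearly pure translation: confirm the positions lie close to a line.
        centered = rel_positions - rel_positions.mean(axis=0)
        singular_values = np.linalg.svd(centered, compute_uv=False)
        if singular_values[1] < 0.1 * singular_values[0]:
            return "prismatic"
    return "revolute"
```

A full system would additionally estimate the joint parameters (axis, origin, limits) once the type is known; this sketch only covers the classification step.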
Gaussian-Process-based Robot Learning from Demonstration
Endowed with higher levels of autonomy, robots are required to perform
increasingly complex manipulation tasks. Learning from demonstration is emerging
as a promising paradigm for transferring skills to robots. It allows a robot to
implicitly learn task constraints by observing the motion executed by a human
teacher, which can enable adaptive behavior. We present a novel
Gaussian-Process-based learning from demonstration approach. This probabilistic
representation makes it possible to generalize over multiple demonstrations and to encode
variability along the different phases of the task. In this paper, we address
how Gaussian Processes can be used to effectively learn a policy from
trajectories in task space. We also present a method to efficiently adapt the
policy to fulfill new requirements, and to modulate the robot behavior as a
function of task variability. This approach is illustrated through a real-world
application using the TIAGo robot.
Comment: 8 pages, 10 figures
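
As an illustration of the general idea rather than the authors' implementation, the sketch below fits a Gaussian Process to several phase-indexed task-space demonstrations using scikit-learn: the predictive mean acts as the reproduced trajectory and the predictive standard deviation encodes how much the demonstrations vary along each phase of the task. The synthetic data and kernel choices are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Stack several demonstrations: each is sampled at a normalized phase s in [0, 1]
# and gives a 3-D Cartesian position. The data below is synthetic.
phases, positions = [], []
for _ in range(5):                       # five hypothetical demonstrations
    s = np.linspace(0.0, 1.0, 50)
    traj = np.column_stack([np.sin(np.pi * s),
                            np.cos(np.pi * s),
                            s]) + 0.01 * np.random.randn(50, 3)
    phases.append(s)
    positions.append(traj)

X = np.concatenate(phases).reshape(-1, 1)   # phase is the GP input
Y = np.concatenate(positions)               # task-space position is the output

# A smooth RBF kernel plus a white-noise term that absorbs
# demonstration-to-demonstration variability.
kernel = RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y)

# The predictive mean is the reproduced motion; the standard deviation shows
# at which phases the demonstrations agree (low std) or differ (high std).
s_query = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
mean_traj, std_traj = gp.predict(s_query, return_std=True)
```

The per-phase standard deviation is what such an approach could use to modulate robot behavior, for example by tracking stiffly where demonstrations agree and compliantly where they vary.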
Learning Task Constraints from Demonstration for Hybrid Force/Position Control
We present a novel method for learning hybrid force/position control from
demonstration. We learn a dynamic constraint frame aligned to the direction of
desired force using Cartesian Dynamic Movement Primitives. In contrast to
approaches that utilize a fixed constraint frame, our approach easily
accommodates tasks with rapidly changing task constraints over time. We
activate only one degree of freedom for force control at any given time,
ensuring motion is always possible orthogonal to the direction of desired
force. Since we utilize demonstrated forces to learn the constraint frame, we
are able to compensate for forces not detected by methods that learn only from
the demonstrated kinematic motion, such as frictional forces between the
end-effector and the contact surface. We additionally propose novel extensions
to the Dynamic Movement Primitive (DMP) framework that encourage robust
transition from free-space motion to in-contact motion in spite of environment
uncertainty. We incorporate force feedback and a dynamically shifting goal to
reduce forces applied to the environment and retain stable contact while
enabling force control. Our methods exhibit low impact forces on contact and
low steady-state tracking error.
Comment: Under review
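
As a hedged illustration of one ingredient of this idea, and not the paper's controller, the sketch below rolls out a minimal one-dimensional discrete DMP whose goal is shifted in proportion to a force-tracking error, the kind of mechanism that could keep the applied contact force near a desired value. All gains, the forcing term, the sign convention, and the contact model are hypothetical placeholders.

```python
import numpy as np

def dmp_rollout(y0, goal, forcing, sensed_force,
                alpha_z=25.0, beta_z=6.25, alpha_x=3.0,
                tau=1.0, dt=0.001, k_goal=5e-4, f_desired=5.0):
    """Minimal 1-D discrete DMP with a force-driven shifting goal (illustrative).

    forcing(x)      : learned forcing term as a function of the phase variable x.
    sensed_force(y) : stand-in for the normal force measured at position y.
    k_goal          : gain nudging the goal so the contact force tracks
                      f_desired; a hypothetical rule, not the paper's law.
    """
    y, z, x, g = y0, 0.0, 1.0, goal
    trajectory = []
    for _ in range(int(1.0 / dt)):
        # Shift the goal toward the surface when the sensed force is below the
        # desired value (convention: pressing in -y produces positive force).
        g -= k_goal * (f_desired - sensed_force(y)) * dt

        # Transformation system: tau * dz = alpha_z * (beta_z * (g - y) - z) + f(x).
        dz = (alpha_z * (beta_z * (g - y) - z) + forcing(x)) / tau
        z += dz * dt
        y += (z / tau) * dt

        # Canonical system: tau * dx = -alpha_x * x drives the forcing term to zero.
        x += (-alpha_x * x / tau) * dt
        trajectory.append(y)
    return np.asarray(trajectory)

# Example with a zero forcing term and a spring-like contact at y = 0:
# pressing below the surface produces a proportional reaction force.
traj = dmp_rollout(y0=0.05, goal=-0.01,
                   forcing=lambda x: 0.0,
                   sensed_force=lambda y: max(0.0, -2000.0 * y))
```

In this toy setup the goal settles where the simulated contact force matches f_desired, which is the intuition behind pairing force feedback with a dynamically shifting goal.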
- …
