3,545 research outputs found
Learning Feedback Terms for Reactive Planning and Control
With the advancement of robotics, machine learning, and machine perception,
more and more robots will enter human environments to assist with daily
tasks. However, dynamically changing human environments require reactive
motion plans. Reactivity can be accomplished through replanning, e.g.
model-predictive control, or through a reactive feedback policy that modifies
on-going behavior in response to sensory events. In this paper, we investigate
how to use machine learning to add reactivity to a previously learned nominal
skilled behavior. We approach this by learning a reactive modification term for
movement plans represented by nonlinear differential equations. In particular,
we use dynamic movement primitives (DMPs) to represent a skill and a neural
network to learn a reactive policy from human demonstrations. We use the
well-explored domain of obstacle avoidance for robot manipulation as a test
bed. Our
approach demonstrates how a neural network can be combined with physical
insights to ensure robust behavior across different obstacle settings and
movement durations. Evaluations on an anthropomorphic robotic system
demonstrate the effectiveness of our work.
Comment: 8 pages, accepted to be published at the ICRA 2017 conference.
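To make the idea concrete, here is a minimal sketch of a one-dimensional DMP whose transformation system is augmented with a learned coupling term. It is an illustration under stated assumptions, not the paper's implementation: `feedback_net`, its input features, and all gains are hypothetical stand-ins for the learned reactive policy.

```python
import numpy as np

def dmp_rollout(y0, g, w, feedback_net, obstacle, tau=1.0, dt=0.01,
                alpha=25.0, beta=6.25, alpha_x=8.0):
    """Integrate a 1-D DMP whose acceleration is modified by a learned
    feedback term (the hypothetical `feedback_net`, e.g. a small MLP)."""
    x, y, yd = 1.0, y0, 0.0                       # phase, position, velocity
    n_basis = len(w)
    centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
    widths = 1.0 / np.diff(centers) ** 2
    widths = np.append(widths, widths[-1])
    traj = [y]
    while x > 1e-3:
        psi = np.exp(-widths * (x - centers) ** 2)
        f = x * (g - y0) * psi.dot(w) / (psi.sum() + 1e-8)   # nominal forcing term
        # Learned reactive modification, driven by state and obstacle features.
        coupling = feedback_net(np.array([y, yd, x, obstacle - y]))
        ydd = (alpha * (beta * (g - y) - tau * yd) + f + coupling) / tau ** 2
        yd += ydd * dt
        y += yd * dt
        x += -alpha_x * x * dt / tau              # canonical (phase) system
        traj.append(y)
    return np.asarray(traj)

# A zero-valued network recovers the nominal DMP rollout.
nominal = dmp_rollout(0.0, 1.0, np.zeros(10), lambda s: 0.0, obstacle=0.5)
```

Note that with a zero coupling term the nominal skill is recovered unchanged, which mirrors the paper's framing of reactivity as a modification of, rather than a replacement for, the previously learned behavior.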
Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks
In order to robustly execute a task under environmental uncertainty, a robot
needs to be able to reactively adapt to changes arising in its environment. The
environment changes are usually reflected in deviations from expected sensory
traces. These deviations in sensory traces can be used to drive the motion
adaptation, and for this purpose, a feedback model is required. The feedback
model maps the deviations in sensory traces to the motion plan adaptation. In
this paper, we develop a general data-driven framework for learning a feedback
model from demonstrations. We utilize a variant of a radial basis function
network structure --with movement phases as kernel centers-- which can
be applied generally to represent any feedback model for movement primitives.
To demonstrate the effectiveness of our framework, we test it on the task of
scraping on a tilt board. In this task, we learn a reactive policy in
the form of orientation adaptation, based on deviations of tactile sensor
traces. As a proof of concept of our method, we provide evaluations on an
anthropomorphic robot. A video demonstrating our approach and its results can
be seen at https://youtu.be/7Dx5imy1Kcw
Comment: 8 pages, accepted to be published at the International Conference on
Robotics and Automation (ICRA) 2018.
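As a rough illustration of a feedback model gated by movement phase, the sketch below places Gaussian kernels on a normalized phase variable and lets each kernel blend its own linear map from sensor-trace deviations to a motion-plan adaptation. The kernel count, dimensions, and per-kernel linear maps are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

class PhaseRBFFeedback:
    """Feedback model gated by movement phase: Gaussian kernels over a
    normalized phase variable blend per-kernel linear maps from sensor-trace
    deviations to a motion-plan adaptation."""

    def __init__(self, n_kernels, dim_sensor, dim_out, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(0.0, 1.0, n_kernels)  # phase in [0, 1]
        self.precision = float(n_kernels) ** 2           # kernel sharpness
        # One linear map (sensor deviation -> adaptation) per kernel; in the
        # paper, the model parameters are learned from demonstrations.
        self.W = rng.normal(0.0, 0.01, (n_kernels, dim_out, dim_sensor))

    def __call__(self, phase, delta_s):
        psi = np.exp(-self.precision * (phase - self.centers) ** 2)
        psi /= psi.sum() + 1e-8                          # normalized phase gating
        return np.einsum('k,kos,s->o', psi, self.W, delta_s)

model = PhaseRBFFeedback(n_kernels=25, dim_sensor=6, dim_out=3)
adaptation = model(phase=0.4, delta_s=np.zeros(6))  # zero deviation -> no adaptation
```

Anchoring the kernels to phase rather than time is what makes such a model robust to changes in movement duration: the same kernel is active at the same fraction of the movement regardless of how fast it is executed.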
Realtime State Estimation with Tactile and Visual Sensing. Application to Planar Manipulation
Accurate and robust object state estimation enables successful object
manipulation. Visual sensing is widely used to estimate object poses. However,
in a cluttered scene or in a tight workspace, the robot's end-effector often
occludes the object from the visual sensor. The robot then loses visual
feedback and must fall back on open-loop execution.
In this paper, we integrate both tactile and visual input using a framework
for solving the SLAM problem, incremental smoothing and mapping (iSAM), to
provide a fast and flexible solution. Visual sensing provides global pose
information but is generally noisy, whereas contact sensing is local but
accurate relative to the end-effector. By combining them,
we aim to exploit their advantages and overcome their limitations. We explore
the technique in the context of a pusher-slider system. We adapt iSAM's
measurement cost and motion cost to the pushing scenario, and use an
instrumented setup to evaluate the estimation quality with different object
shapes, on different surface materials, and under different contact modes.
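To illustrate the fusion pattern, here is a minimal sketch using GTSAM's Python bindings (GTSAM provides the iSAM2 incremental smoother): noisy global pose priors stand in for visual measurements, and accurate relative-motion factors stand in for contact information. The synthetic data, noise magnitudes, and factor choices are assumptions for illustration only; the paper adapts iSAM's measurement and motion costs specifically to pushing, which this sketch does not reproduce.

```python
import numpy as np
import gtsam

# Synthetic pusher-slider data (illustrative): the slider moves along x.
rng = np.random.default_rng(0)
measurements = []
for t in range(20):
    vis = gtsam.Pose2(0.02 * t + rng.normal(0, 0.05), rng.normal(0, 0.05), 0.0)
    rel = gtsam.Pose2(0.02, 0.0, 0.0)             # accurate per-step motion
    measurements.append((vis, rel))

# Global (camera-like) measurements: informative but noisy.
visual_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.10]))
# Relative (contact-like) measurements: local but accurate.
contact_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.005, 0.005, 0.01]))

isam = gtsam.ISAM2()
graph, values = gtsam.NonlinearFactorGraph(), gtsam.Values()
prev_key = None
for t, (vis_pose, rel_motion) in enumerate(measurements):
    key = gtsam.symbol('x', t)
    values.insert(key, vis_pose)                  # initial guess from vision
    graph.add(gtsam.PriorFactorPose2(key, vis_pose, visual_noise))
    if prev_key is not None:
        graph.add(gtsam.BetweenFactorPose2(prev_key, key, rel_motion,
                                           contact_noise))
    isam.update(graph, values)                    # incremental smoothing step
    graph, values = gtsam.NonlinearFactorGraph(), gtsam.Values()
    prev_key = key

estimate = isam.calculateEstimate()               # smoothed object trajectory
```

Because iSAM2 updates the solution incrementally at each step, the estimate remains available in real time even as the factor graph grows over the execution.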
Robots as Powerful Allies for the Study of Embodied Cognition from the Bottom Up
A large body of compelling evidence demonstrates that embodiment – the agent's physical setup, including its shape, materials, sensors, and actuators – is constitutive for any form of cognition; as a consequence, models of cognition need to be embodied. In contrast to the methods of the empirical sciences for studying cognition, robots can be freely manipulated, and virtually all key variables of their embodiment and control programs can be systematically varied. As such, they provide an extremely powerful tool of investigation. We present a robotic bottom-up, or developmental, approach, focusing on three stages: (a) low-level behaviors like walking and reflexes, (b) learning regularities in sensorimotor spaces, and (c) human-like cognition. We also show that robotics-based research is not only a productive path to deepening our understanding of cognition, but that robots can strongly benefit from human-like cognition in order to become more autonomous, robust, resilient, and safe.
Learning Latent Space Dynamics for Tactile Servoing
To achieve dexterous robotic manipulation, we need to endow our robot with
tactile feedback capability, i.e. the ability to drive action based on tactile
sensing. In this paper, we specifically address the challenge of tactile
servoing: given the current tactile sensing and a target/goal tactile
sensing --memorized from a successful task execution in the past-- find the
action that will bring the current tactile sensing closer to the target
tactile sensing at the next time step. We develop a data-driven approach
to acquire a dynamics model for tactile servoing by learning from
demonstration. Moreover, our method represents the tactile sensing information
as lying on a surface --or a 2D manifold-- and performs manifold learning,
making it applicable to any tactile skin geometry. We evaluate our method on a
contact point tracking task using a robot equipped with a tactile finger. A
video demonstrating our approach can be seen at https://youtu.be/0QK0-Vx7WkI
Comment: Accepted to be published at the International Conference on Robotics
and Automation (ICRA) 2019. The final version for publication at ICRA 2019 is
7 pages (i.e., 6 pages of technical content, including text, figures, tables,
acknowledgements, etc., and 1 page of bibliography/references), while this
arXiv version is 8 pages (with an added Appendix and some extra details).
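As a minimal sketch of servoing in a learned latent space: encode the current and target tactile readings as latent states, predict the effect of candidate actions with a learned dynamics model, and pick the action whose predicted next latent state is closest to the target. The linear dynamics `A`, `B` and the candidate-action set below are hypothetical placeholders for the models the paper learns from demonstration.

```python
import numpy as np

def servo_action(z_now, z_goal, dynamics, candidate_actions):
    """Greedy tactile servoing in latent space: choose the action whose
    predicted next latent state is closest to the target latent state."""
    preds = np.array([dynamics(z_now, a) for a in candidate_actions])
    errs = np.linalg.norm(preds - z_goal, axis=1)
    return candidate_actions[np.argmin(errs)]

# Hypothetical learned pieces: a locally linear latent dynamics model
# z' = A z + B a standing in for the model fit from demonstrations.
A = np.eye(2) * 0.95
B = np.array([[0.1, 0.0], [0.0, 0.1]])
dynamics = lambda z, a: A @ z + B @ a

z_now, z_goal = np.array([0.5, -0.2]), np.zeros(2)
actions = [np.array([dx, dy]) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
print(servo_action(z_now, z_goal, dynamics, actions))
```

Working in a 2D latent space mirrors the paper's manifold view of the tactile skin: because the servoing loop operates on latent coordinates rather than raw sensor layouts, the same scheme applies to any skin geometry.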