Learning Contact-Rich Manipulation Skills with Guided Policy Search
Autonomous learning of object manipulation skills can enable robots to
acquire rich behavioral repertoires that scale to the variety of objects found
in the real world. However, current motion skill learning methods typically
restrict the behavior to a compact, low-dimensional representation, limiting
its expressiveness and generality. In this paper, we extend a recently
developed policy search method \cite{la-lnnpg-14} and use it to learn a range
of dynamic manipulation behaviors with highly general policy representations,
without using known models or example demonstrations. Our approach learns a set
of trajectories for the desired motion skill by using iteratively refitted
time-varying linear models, and then unifies these trajectories into a single
control policy that can generalize to new situations. To enable this method to
run on a real robot, we introduce several improvements that reduce the sample
count and automate parameter selection. We show that our method can acquire
fast, fluent behaviors after only minutes of interaction time, and can learn
robust controllers for complex tasks, including putting together a toy
airplane, stacking tight-fitting lego blocks, placing wooden rings onto
tight-fitting pegs, inserting a shoe tree into a shoe, and screwing bottle caps
onto bottles.
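The trajectory-learning step described above rests on fitting time-varying linear dynamics models to sampled rollouts. A minimal numpy sketch of that fitting step, assuming noiseless samples (function and variable names are illustrative; the actual method refits these models inside an iterative trajectory-optimization loop):

```python
import numpy as np

def fit_time_varying_linear_models(states, actions):
    """Fit x_{t+1} ~= A_t x_t + B_t u_t per timestep via least squares.

    states:  (N, T+1, dx) array of N sampled state trajectories
    actions: (N, T, du)   array of the corresponding action sequences
    """
    N, T1, dx = states.shape
    T = T1 - 1
    du = actions.shape[2]
    A = np.zeros((T, dx, dx))
    B = np.zeros((T, dx, du))
    for t in range(T):
        # Regress next state on [state, action] across the N samples.
        X = np.hstack([states[:, t], actions[:, t]])   # (N, dx + du)
        Y = states[:, t + 1]                            # (N, dx)
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)       # (dx + du, dx)
        A[t], B[t] = W[:dx].T, W[dx:].T
    return A, B
```

The fitted (A_t, B_t) sequence stands in for the unknown dynamics along the sampled trajectories; the paper's method then optimizes local controllers against these models and distills the result into a single policy by supervised learning.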
Iterative Machine Learning for Precision Trajectory Tracking with Series Elastic Actuators
When robots operate in unknown environments, small errors in position can
lead to large variations in contact forces, especially with typical
high-impedance designs. This can potentially damage the surroundings and/or the
robot. Series elastic actuators (SEAs) are a popular way to reduce the output
impedance of a robotic arm to improve control authority over the force exerted
on the environment. However, this increased control over forces with lower
impedance comes at the cost of lower positioning precision and bandwidth. This
article examines the use of an iteratively-learned feedforward command to
improve position tracking when using SEAs. Over each iteration, the output
responses of the system to the quantized inputs are used to estimate
linearized local system models. These models are obtained using a
complex-valued Gaussian Process Regression (cGPR) technique and then used to
generate a new feedforward input command based on the previous iteration's
error. This article illustrates this iterative machine learning (IML) technique
for a two degree of freedom (2-DOF) robotic arm, and demonstrates successful
convergence of the IML approach to reduce the tracking error.
Comment: 9 pages, 16 figures. Submitted to AMC Workshop
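The iteration loop described above can be illustrated with a deliberately simplified sketch. It assumes a static plant gain in place of the article's cGPR-estimated local models; what it demonstrates is the core update rule, a feedforward command corrected each iteration by the previous iteration's error passed through a model inverse:

```python
import numpy as np

def ilc_update(u, model_inverse_error, learning_gain):
    """One iterative-learning step: nudge the feedforward by scaled error."""
    return u + learning_gain * model_inverse_error

def run_ilc(plant_gain, reference, iterations=20, learning_gain=0.5):
    """Iteratively learn a feedforward input so the plant output tracks
    the reference. The plant here is a static gain, a toy stand-in for
    the SEA dynamics identified in the article."""
    u = np.zeros_like(reference)
    for _ in range(iterations):
        error = reference - plant_gain * u          # tracking error this pass
        u = ilc_update(u, error / plant_gain, learning_gain)
    return u, reference - plant_gain * u
```

With a learning gain of 0.5 the error contracts by half each iteration, so the tracking error shrinks geometrically; the article's contribution is estimating the local model inverse from data (via cGPR) rather than assuming it is known.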
Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network
It is crucial to ask how agents can achieve goals by generating action plans
using only partial models of the world acquired through habituated
sensory-motor experiences. Although many existing robotics studies use a
forward model framework, there are generalization issues with high degrees of
freedom. The current study shows that the predictive coding (PC) and active
inference (AIF) frameworks, which employ a generative model, can develop better
generalization by learning a prior distribution in a low dimensional latent
state space representing probabilistic structures extracted from well
habituated sensory-motor trajectories. In our proposed model, learning is
carried out by inferring optimal latent variables as well as synaptic weights
for maximizing the evidence lower bound, while goal-directed planning is
accomplished by inferring latent variables for maximizing the estimated lower
bound. Our proposed model was evaluated with both simple and complex robotic
tasks in simulation, which demonstrated sufficient generalization in learning
with limited training data by setting an intermediate value for a
regularization coefficient. Furthermore, comparative simulation results show
that the proposed model outperforms a conventional forward model in
goal-directed planning, due to the learned prior confining the search of motor
plans within the range of habituated trajectories.
Comment: 30 pages, 19 figures
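The planning-as-inference idea above, inferring latent variables that maximize an estimated lower bound, can be sketched in a toy one-dimensional form. Everything here is illustrative and far simpler than the paper's variational RNN: the decoder is a known function, and the bound reduces to goal log-likelihood minus a quadratic prior term:

```python
def plan_latent(goal, decode, d_decode, sigma=0.1, reg=1.0,
                lr=0.002, steps=300):
    """Infer a latent plan z by gradient ascent on a lower bound:
    log-likelihood of reaching the goal under the generative model,
    minus a prior-regularization term that keeps z near habituated
    (high prior probability) values."""
    z = 0.0
    for _ in range(steps):
        pred = decode(z)
        # Gradient of -(goal - pred)^2 / (2 sigma^2) - reg * z^2 / 2
        grad = (goal - pred) * d_decode(z) / sigma**2 - reg * z
        z += lr * grad
    return z
```

The regularization coefficient `reg` plays the role of the paper's intermediate-valued coefficient: larger values confine plans closer to the prior over habituated trajectories, smaller values let the goal term dominate.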
Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search
In principle, reinforcement learning and policy search methods can enable
robots to learn highly complex and general skills that may allow them to
function amid the complexity and diversity of the real world. However, training
a policy that generalizes well across a wide range of real-world conditions
requires far greater quantity and diversity of experience than is practical to
collect with a single robot. Fortunately, it is possible for multiple robots to
share their experience with one another and thereby learn a policy
collectively. In this work, we explore distributed and asynchronous policy
learning as a means to achieve generalization and improved training times on
challenging, real-world manipulation tasks. We propose a distributed and
asynchronous version of Guided Policy Search and use it to demonstrate
collective policy learning on a vision-based door opening task using four
robots. We show that it achieves better generalization, utilization, and
training times than the single robot alternative.
Comment: Submitted to the IEEE International Conference on Robotics and
Automation 201
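The distributed, asynchronous collection scheme described above can be sketched with threads standing in for robots: each worker gathers experience at its own pace and pushes it to a shared queue, from which a learner would draw training data. All names are illustrative, and real trajectory data replaces the placeholder tuples:

```python
import threading
import queue

def worker(robot_id, experience_queue, episodes=5):
    """Each robot collects local experience and pushes it asynchronously;
    no robot waits for the others."""
    for episode in range(episodes):
        # Stand-in for a collected trajectory (observations, actions, costs).
        experience_queue.put((robot_id, episode))

def collect_async(num_robots=4, episodes=5):
    """Launch one collection thread per robot and gather all experience."""
    experience_queue = queue.Queue()
    threads = [
        threading.Thread(target=worker, args=(i, experience_queue, episodes))
        for i in range(num_robots)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    data = []
    while not experience_queue.empty():
        data.append(experience_queue.get())
    return data
```

In the actual system the learner consumes the queue concurrently and updates a shared policy, so fast robots are never throttled by slow ones; this is where the utilization gains over a single robot come from.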
Computational neural learning formalisms for manipulator inverse kinematics
An efficient, adaptive neural learning paradigm for addressing the inverse kinematics of redundant manipulators is presented. The proposed methodology exploits the infinite local stability of terminal attractors - a new class of mathematical constructs which provide unique information processing capabilities to artificial neural systems. For robotic applications, synaptic elements of such networks can rapidly acquire the kinematic invariances embedded within the presented samples. Subsequently, joint-space configurations, required to follow arbitrary end-effector trajectories, can readily be computed. In a significant departure from prior neuromorphic learning algorithms, this methodology provides mechanisms for incorporating an in-training skew to handle kinematic and environmental constraints.
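To make the inverse-kinematics problem above concrete, here is a classical iterative solver for a planar two-link arm. This is a Jacobian-transpose stand-in, not the terminal-attractor network the abstract describes; it shows only the underlying task (mapping end-effector targets to joint angles) that the neural scheme learns to solve:

```python
import numpy as np

def forward_kinematics(theta, l1=1.0, l2=1.0):
    """End-effector position of a planar 2-link arm with link lengths l1, l2."""
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def jacobian(theta, l1=1.0, l2=1.0):
    """Partial derivatives of the end-effector position w.r.t. joint angles."""
    s1, c1 = np.sin(theta[0]), np.cos(theta[0])
    s12, c12 = np.sin(theta[0] + theta[1]), np.cos(theta[0] + theta[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def solve_ik(target, theta0, steps=1000, lr=0.1):
    """Iterative Jacobian-transpose inverse kinematics: gradient descent
    on the squared end-effector error."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        error = target - forward_kinematics(theta)
        theta += lr * jacobian(theta).T @ error
    return theta
```

A neural approach replaces this per-query iteration with a learned mapping; the abstract's terminal-attractor construction additionally guarantees convergence of the network dynamics in finite time.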
A Review of Compliant Movement Primitives
Dynamical models of robots performing tasks in contact with objects or the environment are difficult to obtain. Therefore, different methods of learning the dynamics of tasks have been proposed. In this chapter, we present a method that provides the joint torques needed to execute a task in a compliant and at the same time accurate manner. The presented method of compliant movement primitives (CMPs), which consists of the task kinematic and dynamic trajectories, goes beyond mere reproduction of previously learned motions. Using statistical generalization, the method allows the generation of new, previously untrained trajectories. Furthermore, the use of transition graphs allows us to combine parts of previously learned motions and thus generate new ones. In the chapter, we provide a brief overview of this research topic in the literature, followed by an in-depth explanation of the compliant movement primitives framework, with details on both statistical generalization and transition graphs. An extensive experimental evaluation demonstrates the applicability and the usefulness of the approach.
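The statistical generalization mentioned above can be illustrated with a simple kernel-weighted scheme: given demonstrated trajectories indexed by a task parameter (say, object weight or goal position), a new trajectory for an unseen parameter value is synthesized as a weighted combination of the demonstrations. This is a minimal stand-in for the generalization machinery the chapter details, with illustrative names throughout:

```python
import numpy as np

def generalize_trajectory(query, params, trajectories, bandwidth=0.2):
    """Synthesize a trajectory for an untrained task parameter `query`
    as a Gaussian-kernel weighted combination of demonstrated trajectories.

    params:       task parameter of each demonstration, shape (M,)
    trajectories: demonstrated trajectories, shape (M, ...) with matching
                  time/DOF layout
    """
    params = np.asarray(params, dtype=float)
    weights = np.exp(-0.5 * ((params - query) / bandwidth) ** 2)
    weights /= weights.sum()
    # Weighted sum over the demonstration axis.
    return np.tensordot(weights, np.asarray(trajectories), axes=1)
```

Applied to both the kinematic and the torque trajectories of a CMP, such generalization yields compliant execution for task instances that were never demonstrated, as long as they lie within the span of the training set.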