Beyond Gazing, Pointing, and Reaching: A Survey of Developmental Robotics
Developmental robotics is an emerging field at the intersection of
developmental psychology and robotics that has recently attracted
considerable attention. This paper surveys a variety of research projects
dealing with or inspired by developmental issues, and outlines possible
future directions.
Transformer-based deep imitation learning for dual-arm robot manipulation
Deep imitation learning is promising for solving dexterous manipulation tasks
because it does not require an environment model and pre-programmed robot
behavior. However, its application to dual-arm manipulation tasks remains
challenging. In a dual-arm manipulation setup, the increased number of state
dimensions caused by the additional robot manipulators causes distractions and
results in poor performance of the neural networks. We address this issue using
a self-attention mechanism that computes dependencies between elements in a
sequential input and focuses on important elements. A Transformer, a variant of
the self-attention architecture, is applied to deep imitation learning to solve
dual-arm manipulation tasks in the real world. The proposed method has been
tested on dual-arm manipulation tasks using a real robot. The experimental
results demonstrated that the Transformer-based deep imitation learning
architecture can attend to the important features among the sensory inputs,
therefore reducing distractions and improving manipulation performance when
compared with the baseline architecture without self-attention mechanisms.
Comment: 8 pages. Accepted at the 2021 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS).
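The abstract's core mechanism, scaled dot-product self-attention over elements of a sensory input, can be sketched in a few lines of numpy. This is a generic illustration with identity (i.e. omitted) learned projections, not the paper's actual network:

```python
import numpy as np

def self_attention(tokens, d_k=None):
    """Scaled dot-product self-attention with identity projections.

    tokens: (n, d) array, one row per state element (e.g. left/right arm
    joint readings, gripper states, image features).
    Returns the attended tokens and the (n, n) attention weights, whose
    rows show which elements each element depends on.
    """
    n, d = tokens.shape
    d_k = d_k or d
    q, k, v = tokens, tokens, tokens           # learned Q/K/V projections omitted
    scores = q @ k.T / np.sqrt(d_k)            # pairwise dependencies
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v, weights

# toy "dual-arm" state: 4 elements of dimension 3; elements 0 and 1 are
# similar, so they attend strongly to each other
state = np.array([[1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
out, w = self_attention(state)
```

In the paper's setting the distraction problem arises because the extra arm inflates the state dimension; the attention weights let the network down-weight the irrelevant elements instead of processing them uniformly.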
An Energy-based Approach to Ensure the Stability of Learned Dynamical Systems
Non-linear dynamical systems represent a compact, flexible, and robust tool
for reactive motion generation. The effectiveness of dynamical systems relies
on their ability to accurately represent stable motions. Several approaches
have been proposed to learn stable and accurate motions from demonstration.
Some approaches work by separating accuracy and stability into two learning
problems, which increases the number of open parameters and the overall
training time. Alternative solutions exploit single-step learning but restrict
the applicability to one regression technique. This paper presents a
single-step approach to learn stable and accurate motions that work with any
regression technique. The approach makes energy considerations on the learned
dynamics to stabilize the system at run-time while introducing small deviations
from the demonstrated motion. Since the initial value of the energy injected
into the system affects the reproduction accuracy, it is estimated from
training data using an efficient procedure. Experiments on a real robot and a
comparison on a public benchmark show the effectiveness of the proposed
approach.
Comment: Accepted at the International Conference on Robotics and Automation
202
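The energy-based run-time stabilization described above can be loosely illustrated with a toy energy-budget rollout. Everything below (the quadratic energy, the fallback contraction field, the budget accounting, the unstable "learned" field) is an illustrative assumption, not the paper's method:

```python
import numpy as np

def stabilized_rollout(f_learned, x0, energy0, dt=0.01, steps=2000):
    """Roll out learned dynamics x_dot = f_learned(x) toward a goal at the
    origin, monitoring the energy V(x) = 0.5 * ||x||^2. Steps that would
    increase V are paid for from a finite energy budget; once the budget
    is spent, a stable contraction -x replaces the learned velocity,
    deviating from the demonstration but guaranteeing convergence."""
    x, budget = np.array(x0, float), energy0
    for _ in range(steps):
        v = f_learned(x)
        dV = x @ v                       # V_dot along the learned field
        if dV > 0.0:                     # destabilizing step
            if budget >= dV * dt:
                budget -= dV * dt        # tolerate it, draw from the budget
            else:
                v = -x                   # budget spent: stable fallback
        x = x + dt * v
    return x, budget

# toy "learned" field that is purely unstable
f = lambda x: 1.0 * x
x_final, _ = stabilized_rollout(f, [1.0, 0.0], energy0=0.1)
```

The initial budget plays the role of the injected energy the abstract mentions: a larger budget tolerates more of the demonstrated (possibly destabilizing) motion before the stabilizing deviation kicks in.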
Merging Position and Orientation Motion Primitives
In this paper, we focus on generating complex robotic trajectories by merging
sequential motion primitives. A robotic trajectory is a time series of
positions and orientations ending at a desired target. Hence, we first discuss
the generation of converging pose trajectories via dynamical systems, providing
a rigorous stability analysis. Then, we present approaches to merge motion
primitives which represent both the position and the orientation part of the
motion. The developed approaches preserve the shape of each learned movement and
allow for continuous transitions among succeeding motion primitives. The
presented methodologies are theoretically described and experimentally
evaluated, showing that it is possible to generate a smooth pose trajectory out
of multiple motion primitives.
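A minimal sketch of merging two converging primitives: each primitive is a first-order system x_dot = g - x toward its goal, and a smooth activation shifts authority from the first to the second. The sigmoid blend, goals, and timing below are illustrative assumptions; the paper's dynamical systems and stability analysis are more involved (and cover orientations as well):

```python
import numpy as np

def merged_rollout(g1, g2, x0, dt=0.01, T=10.0, t_switch=5.0, blend=1.0):
    """Merge two converging primitives x_dot = g - x by smoothly shifting
    the activation from goal g1 to goal g2 around t_switch, so the
    resulting pose trajectory has no velocity jump at the transition."""
    x = np.array(x0, float)
    for t in np.arange(0.0, T, dt):
        a = 1.0 / (1.0 + np.exp(-(t - t_switch) / (blend / 4)))  # 0 -> 1
        v = (1.0 - a) * (g1 - x) + a * (g2 - x)   # blended velocity
        x = x + dt * v
    return x

g1, g2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
x_end = merged_rollout(g1, g2, [0.0, 0.0])
```

Because the blend weight varies continuously, each primitive's shape is preserved away from the transition window, which is the property the abstract highlights.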
Interactive Imitation Learning of Bimanual Movement Primitives
Performing bimanual tasks with dual-arm robotic setups can drastically increase
their impact on industrial and daily-life applications. However, performing a
bimanual task brings many challenges, such as the synchronization and
coordination of the single-arm policies. This article proposes the Safe, Interactive Movement
Primitives Learning (SIMPLe) algorithm, to teach and correct single or dual arm
impedance policies directly from human kinesthetic demonstrations. Moreover, it
proposes a novel graph encoding of the policy based on Gaussian Process
Regression (GPR) where the single-arm motion is guaranteed to converge close to
the trajectory and then towards the demonstrated goal. Regulation of the robot
stiffness according to the epistemic uncertainty of the policy allows for
easily reshaping the motion with human feedback and/or adapting to external
perturbations. We tested the SIMPLe algorithm on a real dual-arm setup where
the teacher gave separate single-arm demonstrations and then successfully
synchronized them only using kinesthetic feedback or where the original
bimanual demonstration was locally reshaped to pick a box at a different
height.
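The stiffness-regulation idea can be sketched with a textbook GPR posterior: where the demonstrations leave the policy uncertain, the predictive variance is high and the impedance stiffness is lowered so a human can reshape the motion by hand. The RBF kernel, length scale, and stiffness mapping below are illustrative assumptions, not the SIMPLe implementation:

```python
import numpy as np

def gpr_predict(X, y, Xq, ls=0.5, sigma_n=1e-3):
    """Textbook GPR with an RBF kernel (unit prior variance): returns the
    posterior mean and variance at the query points Xq."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls**2)
    K = k(X, X) + sigma_n * np.eye(len(X))
    Ks = k(Xq, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

def stiffness(var, k_max=1000.0, k_min=50.0):
    """Map epistemic uncertainty to impedance stiffness: stiff where the
    policy is certain, compliant where it is not (values are arbitrary)."""
    return k_min + (k_max - k_min) * (1.0 - np.clip(var, 0.0, 1.0))

# 1-D toy: demonstrations cover x in [0, 1]; query inside and far outside
X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
_, var = gpr_predict(X, y, np.array([[0.5], [3.0]]))
k = stiffness(var)
```

Far from the demonstrations the variance approaches the prior, the stiffness drops, and external perturbations or kinesthetic corrections dominate, which is the adaptation behavior the abstract describes.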
Flexible Task Execution and Cognitive Control in Human-Robot Interaction
A robotic system that interacts with humans is expected to flexibly execute structured cooperative tasks while reacting to unexpected events and behaviors.
In this thesis, these issues are addressed by presenting a framework that integrates cognitive control, executive attention, structured task execution, and learning.
In the proposed approach, the execution of structured tasks is guided by top-down (task-oriented) and bottom-up (stimuli-driven) attentional processes that affect behavior selection and activation while resolving conflicts and decisional impasses. Specifically, attention is deployed here to stimulate the activation of multiple hierarchical behaviors, orienting them towards the execution of goal-directed and interactive activities. On the other hand, the framework allows a human to indirectly and smoothly influence the robotic task execution by exploiting attention manipulation.
We provide an overview of the overall system architecture and discuss the framework at work in different applicative contexts. In particular, we show that multiple concurrent tasks/plans can be effectively orchestrated and interleaved in a flexible manner; moreover, in a human-robot interaction setting, we test and assess the effectiveness of attention manipulation and learning processes.
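A toy sketch of the attentional behavior-selection step: top-down (task-oriented) and bottom-up (stimuli-driven) contributions are combined into one activation per behavior, and conflicts are resolved by releasing the most activated behavior above a threshold. The additive combination, threshold, and behavior names are illustrative assumptions, not the thesis's actual architecture:

```python
def select_behavior(top_down, bottom_up, threshold=0.5):
    """Combine task-oriented (top-down) and stimulus-driven (bottom-up)
    attention into a single activation per behavior; behaviors above the
    threshold compete, and the most activated one is selected."""
    activation = {b: top_down.get(b, 0.0) + bottom_up.get(b, 0.0)
                  for b in set(top_down) | set(bottom_up)}
    candidates = {b: a for b, a in activation.items() if a >= threshold}
    winner = max(candidates, key=candidates.get) if candidates else None
    return winner, activation

winner, act = select_behavior(
    top_down={"pick": 0.4, "place": 0.1},    # current task emphasis
    bottom_up={"pick": 0.3, "avoid": 0.2})   # salience of sensed stimuli
```

Manipulating either contribution (e.g. a human cueing a stimulus) shifts the activations and hence, indirectly, the selected behavior, which is how attention manipulation influences task execution in the framework.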
Imitation Learning-based Visual Servoing for Tracking Moving Objects
In everyday collaborative tasks between human operators and robots, the former
need simple ways to program new skills, while the latter must show adaptive
capabilities to cope with environmental changes. The joint use of
visual servoing and imitation learning allows us to pursue the objective of
realizing friendly robotic interfaces that (i) are able to adapt to the
environment thanks to the use of visual perception and (ii) avoid explicit
programming thanks to the emulation of previous demonstrations. This work aims
to exploit imitation learning for the visual servoing paradigm to address the
specific problem of tracking moving objects. In particular, we show that it is
possible to infer from data the compensation term required for realizing the
tracking controller, avoiding the explicit implementation of estimators or
observers. The effectiveness of the proposed method has been validated through
simulations with a robotic manipulator.
Comment: International Workshop on Human-Friendly Robotics (HFR), 202
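The compensation idea can be illustrated on a 1-D toy with an identity interaction matrix, where the feature error under a classical servo law evolves as e_dot = -lam * e + c - v_target. Here the "learned" feed-forward term c is simply set to the true target velocity, standing in for a term inferred from demonstration data (an illustrative assumption, not the paper's learned model):

```python
def track(compensate, lam=2.0, dt=0.01, T=5.0, target_vel=0.3):
    """1-D visual-tracking toy: without feed-forward compensation the
    classical law -lam * e leaves a steady-state lag of target_vel / lam
    behind a moving target; with it, the error converges to zero."""
    e = 1.0                                   # initial feature error
    c = target_vel if compensate else 0.0     # feed-forward compensation
    for _ in range(int(T / dt)):
        e += dt * (-lam * e + c - target_vel)  # forward-Euler error dynamics
    return e

err_plain = track(False)   # settles at the steady-state lag
err_comp = track(True)     # converges to zero
```

This is exactly the role of the estimator/observer term in classical tracking visual servoing; the paper's point is that it can be inferred from data instead of implemented explicitly.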
Symbolic Task Compression in Structured Task Learning
Learning everyday tasks from human demonstrations requires unsupervised segmentation of seamless demonstrations, which may result in highly fragmented and widely
spread symbolic representations. Since the time needed to plan
the task depends on the amount of possible behaviors, it is
preferable to keep the number of behaviors as low as possible.
In this work, we present an approach to simplify the symbolic
representation of a learned task which leads to a reduction of the
number of possible behaviors. The simplification is achieved by
merging sequential behaviors, i.e. behaviors which are logically
sequential and act on the same object. Assuming that the task
at hand is encoded in a rooted tree, the approach traverses the
tree searching for sequential nodes (behaviors) to merge. Using
simple rules to assign pre- and post-conditions to each node,
our approach significantly reduces the number of nodes while
keeping the task flexibility unaltered and avoiding perceptual
aliasing. Experiments on automatically generated and learned
tasks show a significant reduction of the planning time.
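The merging step can be sketched on a toy behavior tree: under a sequential node, adjacent leaf behaviors acting on the same object are collapsed into one. The node structure, merge rule, and behavior names below are illustrative assumptions, not the paper's task representation (where pre- and post-conditions, rather than concatenated names, define the merged node):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    obj: str = ""                        # object the behavior acts on
    children: list = field(default_factory=list)
    sequential: bool = False             # children must run in order

def compress(node):
    """Recursively merge runs of adjacent leaf behaviors that act on the
    same object under a sequential node, reducing the number of nodes the
    planner must consider while leaving the execution order unchanged."""
    for c in node.children:
        compress(c)
    if not node.sequential:
        return node
    merged = []
    for c in node.children:
        if (merged and not c.children and not merged[-1].children
                and merged[-1].obj == c.obj):
            # logically sequential, same object: collapse into one behavior
            merged[-1] = Node(merged[-1].name + "+" + c.name, c.obj)
        else:
            merged.append(c)
    node.children = merged
    return node

task = Node("pick_place", sequential=True, children=[
    Node("reach", "cup"), Node("grasp", "cup"),
    Node("move", "cup"), Node("release", "cup"),
    Node("reach", "lid"), Node("grasp", "lid"),
])
compress(task)
```

Since planning time grows with the number of possible behaviors, collapsing six leaves into two directly illustrates the reduction the experiments measure.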