379 research outputs found
Compliant polymeric actuators as robot drive units
A co-polymer made from Polyvinyl Alcohol and Polyacrylic Acid (PVA-PAA) has been synthesized to form new robotic actuation systems which use the contractile and variable-compliance properties of this material. The stimulation of these fibres is studied (particularly chemical activation using acetone and water), as are the factors which influence the response, especially those relating to its performance as an artificial muscle. Mathematical models and simulations of the dynamics of the polymeric strips have been developed, permitting a thorough analysis of the performance-determining parameters. Using these models, a control strategy has been designed and implemented, with experimental results obtained for a gripper powered by a flexor/extensor pair formed from these polymeric actuators. An investigation of a second property of the polymer, its variable compliance, is also included. Use of this feature has led to the design, construction and testing of a multi-degree-of-freedom dextrous hand which, despite having only a single actuator, can exercise independent control over each joint.
Learning Task Priorities from Demonstrations
Bimanual operations in humanoids offer the possibility to carry out more than
one manipulation task at the same time, which in turn introduces the problem of
task prioritization. We address this problem from a learning from demonstration
perspective, by extending the Task-Parameterized Gaussian Mixture Model
(TP-GMM) to Jacobian and null space structures. The proposed approach is tested
on bimanual skills but can be applied in any scenario where the prioritization
between potentially conflicting tasks needs to be learned. We evaluate the
proposed framework in two different tasks with humanoids requiring the
learning of priorities, and in a loco-manipulation scenario, showing that the
approach can be exploited to learn the prioritization of multiple tasks in
parallel.
Comment: Accepted for publication in the IEEE Transactions on Robotics
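The Jacobian and null-space structures mentioned above build on the classical strict-priority resolution scheme, in which a secondary task acts only in the null space of the primary task's Jacobian. A minimal NumPy sketch of that underlying machinery (not the paper's learned TP-GMM; the toy 4-DoF Jacobians are purely illustrative):

```python
import numpy as np

def prioritized_velocities(J1, J2, dx1, dx2):
    """Strict-priority resolution of two tasks: the secondary task
    velocity is projected into the null space of the primary
    Jacobian J1, so it cannot disturb the primary task."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1  # null-space projector of task 1
    return J1_pinv @ dx1 + N1 @ np.linalg.pinv(J2) @ dx2

# toy 4-DoF arm: task 1 controls a 2D position, task 2 a scalar quantity
rng = np.random.default_rng(0)
J1 = rng.standard_normal((2, 4))
J2 = rng.standard_normal((1, 4))
dx1 = np.array([0.1, -0.2])
dq = prioritized_velocities(J1, J2, dx1, np.array([0.05]))
# the primary task is met exactly: J1 @ dq equals dx1 (up to precision)
print(J1 @ dq)
```

Because J1 @ N1 = 0, the secondary contribution vanishes in the primary task space, which is precisely the property a learned prioritization must trade off when tasks conflict.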
Geometry-aware Manipulability Learning, Tracking and Transfer
Body posture influences human and robot performance in manipulation tasks,
as appropriate poses facilitate motion or force exertion along different axes.
In robotics, manipulability ellipsoids arise as a powerful descriptor to
analyze, control and design the robot dexterity as a function of the
articulatory joint configuration. This descriptor can be designed according to
different task requirements, such as tracking a desired position or applying a
specific force. In this context, this paper presents a novel
\emph{manipulability transfer} framework, a method that allows robots to learn
and reproduce manipulability ellipsoids from expert demonstrations. The
proposed learning scheme is built on a tensor-based formulation of a Gaussian
mixture model that takes into account that manipulability ellipsoids lie on the
manifold of symmetric positive definite matrices. Learning is coupled with a
geometry-aware tracking controller allowing robots to follow a desired profile
of manipulability ellipsoids. Extensive evaluations in simulation with
redundant manipulators, a robotic hand and humanoid agents, as well as an
experiment with two real dual-arm systems validate the feasibility of the
approach.
Comment: Accepted for publication in the Intl. Journal of Robotics Research
(IJRR). Website: https://sites.google.com/view/manipulability. Code:
https://github.com/NoemieJaquier/Manipulability. 24 pages, 20 figures, 3
tables, 4 appendices
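The velocity manipulability ellipsoid is conventionally computed from the Jacobian as M = J Jᵀ; the sketch below (using a textbook planar two-link arm as an assumed example, not the paper's setup) shows why such ellipsoids live on the manifold of symmetric positive definite matrices:

```python
import numpy as np

def manipulability_ellipsoid(J):
    """Velocity manipulability ellipsoid M = J J^T: a symmetric
    positive (semi-)definite matrix whose eigenvectors and eigenvalues
    give the directions and ease of end-effector motion."""
    M = J @ J.T
    eigvals, eigvecs = np.linalg.eigh(M)
    return M, eigvals, eigvecs

# planar 2-link arm with unit link lengths at an illustrative configuration
q1, q2 = 0.3, 1.1
J = np.array([
    [-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
    [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)],
])
M, vals, vecs = manipulability_ellipsoid(J)
# M is symmetric with positive eigenvalues: a point on the SPD manifold
print(vals)
```

Away from singular configurations the eigenvalues stay strictly positive, which is why learning and tracking these descriptors calls for geometry-aware (SPD-manifold) tools rather than plain Euclidean averaging.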
Real-Time 6DOF Pose Relocalization for Event Cameras with Stacked Spatial LSTM Networks
We present a new method to relocalize the 6DOF pose of an event camera solely
based on the event stream. Our method first creates the event image from a list
of events that occur in a very short time interval; a Stacked Spatial
LSTM Network (SP-LSTM) is then used to learn the camera pose. Our SP-LSTM is
composed of a CNN to learn deep features from the event images and a stack of
LSTMs to learn spatial dependencies in the image feature space. We show that the
spatial dependency plays an important role in the relocalization task and the
SP-LSTM can effectively learn this information. The experimental results on a
publicly available dataset show that our approach generalizes well and
outperforms recent methods by a substantial margin. Overall, our proposed
method reduces the position error by approximately a factor of 6 and the
orientation error by a factor of 3 compared to the current state of the art.
The source code and trained models will be released.
Comment: 7 pages, 5 figures
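The event-image formation step described above can be illustrated with a simple per-pixel event count over a short time window. This is only a sketch of one common accumulation scheme (the paper's exact formation may differ), using synthetic (x, y, t, polarity) events:

```python
import numpy as np

def events_to_image(events, height, width):
    """Accumulate a short window of (x, y, t, polarity) events into a
    2D event image by counting events per pixel, then normalize so the
    image is suitable as CNN input. A simple stand-in for the
    event-image formation step; real pipelines may weight by polarity."""
    img = np.zeros((height, width), dtype=np.float32)
    for x, y, _t, _p in events:
        img[y, x] += 1.0
    if img.max() > 0:
        img /= img.max()  # scale counts to [0, 1]
    return img

# a handful of synthetic events on a 4x4 sensor
events = [(0, 0, 0.001, 1), (1, 2, 0.002, -1), (1, 2, 0.003, 1)]
img = events_to_image(events, 4, 4)
print(img[2, 1])  # pixel hit twice -> 1.0 after normalization
```

The resulting frame-like image is what lets a standard CNN feature extractor be reused on asynchronous event-camera data.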
Translating Videos to Commands for Robotic Manipulation with Deep Recurrent Neural Networks
We present a new method to translate videos to commands for robotic
manipulation using Deep Recurrent Neural Networks (RNN). Our framework first
extracts deep features from the input video frames with a deep Convolutional
Neural Network (CNN). Two RNN layers with an encoder-decoder architecture are
then used to encode the visual features and sequentially generate the output
words as the command. We demonstrate that the translation accuracy can be
improved by allowing a smooth transition between the two RNN layers and using a
state-of-the-art feature extractor. The experimental results on our new
challenging dataset show that our approach outperforms recent methods by a fair
margin. Furthermore, we combine the proposed translation module with the vision
and planning system to let a robot perform various manipulation tasks. Finally,
we demonstrate the effectiveness of our framework on the full-size humanoid
robot WALK-MAN.
- …