Robot Manipulation Task Learning by Leveraging SE(3) Group Invariance and Equivariance
This paper presents a differential geometric control approach that leverages
SE(3) group invariance and equivariance to increase transferability in learning
robot manipulation tasks that involve interaction with the environment.
Specifically, we employ a control law and a learning representation framework
that remain invariant under arbitrary SE(3) transformations of the manipulation
task definition. Furthermore, the control law and learning representation
framework are shown to be SE(3) equivariant when represented relative to the
spatial frame. The proposed approach is based on utilizing a recently presented
geometric impedance control (GIC) combined with a learning variable impedance
control framework, where the gain scheduling policy is trained in a supervised
learning fashion from expert demonstrations. A geometrically consistent error
vector (GCEV) is fed to a neural network to achieve a gain scheduling policy
that remains invariant to arbitrary translations and rotations. A comparison of
our proposed control and learning framework with a well-known Cartesian space
learning impedance control, equipped with a Cartesian error vector-based gain
scheduling policy, confirms the significantly superior learning transferability
of our proposed approach. A hardware implementation on a peg-in-hole task is
conducted to validate the learning transferability and feasibility of the
proposed approach.
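The invariance claim above can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): a body-frame pose error built from the relative transform g_d^{-1} g is unchanged when the same SE(3) element is applied to both the desired and the current pose, which is exactly the property that lets a gain-scheduling policy trained on one task placement transfer to another.

```python
import numpy as np

def se3(R, p):
    """Assemble a 4x4 homogeneous transform from rotation R and position p."""
    g = np.eye(4)
    g[:3, :3] = R
    g[:3, 3] = p
    return g

def rot_z(theta):
    """Rotation about the z-axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def body_frame_error(g_d, g):
    """Left-invariant pose error derived from the relative transform g_d^{-1} g.

    Position part: position difference expressed in the current body frame.
    Orientation part: vee of the skew-symmetric part of R_d^T R (a common
    first-order approximation of the rotation log map).
    """
    R_d, p_d = g_d[:3, :3], g_d[:3, 3]
    R, p = g[:3, :3], g[:3, 3]
    e_p = R.T @ (p - p_d)
    S = R_d.T @ R - R.T @ R_d
    e_R = 0.5 * np.array([S[2, 1], S[0, 2], S[1, 0]])
    return np.concatenate([e_p, e_R])
```

Because the error depends only on the relative transform, left-multiplying both poses by any transform T leaves it unchanged, so a policy fed this error never sees the absolute task placement.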
A Learning-based Adaptive Compliance Method for Symmetric Bi-manual Manipulation
Symmetric bi-manual manipulation is essential for various on-orbit operations
due to its potent load capacity. As a result, there exists an emerging research
interest in the problem of achieving high operation accuracy while enhancing
adaptability and compliance. However, previous works relied on an inefficient
algorithm framework that separates motion planning from compliant control.
Additionally, the compliant controller lacks robustness due to manually
adjusted parameters. This paper proposes a novel Learning-based Adaptive
Compliance algorithm (LAC) that improves the efficiency and robustness of
symmetric bi-manual manipulation. First, the algorithm framework
combines desired trajectory generation with impedance-parameter adjustment to
improve efficiency and robustness. Second, we introduce a centralized
Actor-Critic framework with LSTM networks, enhancing the synchronization of
bi-manual manipulation. LSTM networks pre-process the force states obtained by
the agents, further improving the performance of compliant operations. When
evaluated in the dual-arm cooperative handling and peg-in-hole assembly
experiments, our method outperforms baseline algorithms in terms of optimality
and robustness. Comment: 12 pages, 10 figures
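As background on the variable impedance control that both the baseline and LAC build on, here is a minimal 1-DoF sketch (illustrative only: the stiffness K and damping D below are hand-picked constants, whereas a learning-based scheme such as LAC would output them from a trained policy at each step).

```python
def variable_impedance_step(x, v, x_d, f_ext, K, D, dt=0.001, m=1.0):
    """One semi-implicit Euler step of a 1-DoF impedance-controlled mass.

    The commanded force pulls position x toward the desired x_d with
    stiffness K and damping D; f_ext is the measured interaction force.
    In an adaptive scheme, K and D vary over time instead of being fixed.
    """
    u = K * (x_d - x) + D * (0.0 - v)   # impedance control law
    a = (u + f_ext) / m                  # Newton's second law
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new
```

With K = 100 and D = 20 on a unit mass the closed loop is critically damped, so the position converges to x_d without overshoot; a fixed (K, D) pair like this is exactly the manually adjusted parameterization whose lack of robustness the abstract criticizes.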
Computational neurorehabilitation: modeling plasticity and learning to predict recovery
Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling: regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
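As a concrete example of the kind of trial-by-trial building block such models draw on, the sketch below implements a standard linear state-space model of error-based motor adaptation (a common model class in the motor-learning literature, not a model taken from this article): the learned state is retained with factor A and updated by a fraction B of each trial's error.

```python
def simulate_adaptation(perturbation, A=0.95, B=0.1, x0=0.0):
    """Linear state-space model of trial-by-trial motor adaptation:

        x[n+1] = A * x[n] + B * e[n],   e[n] = p[n] - x[n]

    where p[n] is the perturbation on trial n, A is a retention factor,
    and B is a learning rate. Returns the adaptation state before each trial.
    """
    x = x0
    states = []
    for p in perturbation:
        states.append(x)
        e = p - x          # trial error: perturbation minus current adaptation
        x = A * x + B * e  # retain part of the state, learn from the error
    return states
```

Under a constant perturbation p, the model settles at the fixed point x* = B·p / (1 - A + B), i.e. incomplete compensation, which is one reason such simple models already generate testable predictions about recovery curves.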
Learning to Avoid Obstacles With Minimal Intervention Control
Programming by demonstration has received much attention as it offers a general framework which allows robots to efficiently acquire novel motor skills from a human teacher. While traditional imitation learning that only focuses on either Cartesian or joint space might become inappropriate in situations where both spaces are equally important (e.g., writing or striking tasks), hybrid imitation learning of skills in both Cartesian and joint spaces simultaneously has been studied recently. However, an important issue which often arises in dynamical or unstructured environments is overlooked, namely how can a robot avoid obstacles? In this paper, we aim to address the problem of avoiding obstacles in the context of hybrid imitation learning. Specifically, we propose to tackle three subproblems: (i) designing a proper potential field so as to bypass obstacles, (ii) guaranteeing joint limits are respected when adjusting trajectories in the process of avoiding obstacles, and (iii) determining proper control commands for robots such that potential human-robot interaction is safe. By solving the aforementioned subproblems, the robot is capable of generalizing observed skills to new situations featuring obstacles in a feasible and safe manner. The effectiveness of the proposed method is validated through a toy example as well as a real transportation experiment on the iCub humanoid robot.
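A classic choice for subproblem (i) is a Khatib-style repulsive potential that is active only within an influence distance of the obstacle; the sketch below (a generic potential field, not the paper's specific design) returns its negative gradient as a repulsive force on the end-effector.

```python
import numpy as np

def repulsive_force(x, obstacle, rho0=0.5, eta=1.0):
    """Negative gradient of the classic repulsive potential

        U(x) = 0.5 * eta * (1/rho - 1/rho0)^2   for rho < rho0, else 0,

    where rho is the distance from x to the obstacle, rho0 the influence
    distance, and eta a gain. The force points away from the obstacle and
    grows without bound as rho -> 0.
    """
    d_vec = x - obstacle
    rho = np.linalg.norm(d_vec)
    if rho >= rho0 or rho == 0.0:
        return np.zeros_like(x)  # outside influence region (or degenerate)
    return eta * (1.0 / rho - 1.0 / rho0) / rho**2 * (d_vec / rho)
```

In a hybrid imitation-learning setting this force would be added as a perturbation to the demonstrated trajectory dynamics, which is precisely why subproblems (ii) and (iii) arise: the adjusted motion must still respect joint limits and remain safe under contact.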
Geometric Reinforcement Learning For Robotic Manipulation
Reinforcement learning (RL) is a popular technique that allows an agent to
learn by trial and error while interacting with a dynamic environment. The
traditional RL approach has been successful in
learning and predicting Euclidean robotic manipulation skills such as
positions, velocities, and forces. However, in robotics, it is common to
encounter non-Euclidean data such as orientation or stiffness, and failing to
account for their geometric nature can negatively impact learning accuracy and
performance. In this paper, to address this challenge, we propose a novel
framework for RL that leverages Riemannian geometry, which we call Geometric
Reinforcement Learning (G-RL), to enable agents to learn robotic manipulation
skills with non-Euclidean data. Specifically, G-RL utilizes the tangent space
in two ways: a tangent space for parameterization and a local tangent space for
mapping to a non-Euclidean manifold. The policy is learned in the
parameterization tangent space, which remains constant throughout the training.
The policy is then transferred to the local tangent space via parallel
transport and projected onto the non-Euclidean manifold. The local tangent
space changes over time to remain within the neighborhood of the current
manifold point, reducing the approximation error. Therefore, by introducing a
geometrically grounded pre- and post-processing step into the traditional RL
pipeline, our G-RL framework enables several model-free algorithms designed for
Euclidean space to learn from non-Euclidean data without modifications.
Experimental results, obtained both in simulation and on a real robot, support
our hypothesis that G-RL is more accurate and converges to a better solution
than approaches that approximate non-Euclidean data as Euclidean. Comment: 14 pages, 14 figures, journal