Riemannian geometry as a unifying theory for robot motion learning and control
Riemannian geometry is a mathematical field which has been the cornerstone of
revolutionary scientific discoveries such as the theory of general relativity.
Despite early uses in robot design and recent applications for exploiting data
with specific geometries, it remains largely overlooked in robotics. With this
blue sky paper, we argue that Riemannian geometry provides the most suitable
tools to analyze and generate well-coordinated, energy-efficient motions of
robots with many degrees of freedom. Via preliminary solutions and novel
research directions, we discuss how Riemannian geometry may be leveraged to
design and combine physically-meaningful synergies for robotics, and how this
theory also opens the door to coupling motion synergies with perceptual inputs.
Comment: Published as a blue sky paper at ISRR'22. 8 pages, 2 figures. Video at https://youtu.be/XblzcKRRIT
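To make the geometric claim concrete: a standard construction in this line of work (not spelled out in the abstract itself) equips the configuration space with the kinetic-energy metric induced by the robot's mass matrix M(q), so that geodesics are exactly the energy-efficient, well-coordinated motions the paper refers to:

```latex
\[
  G(q) = M(q), \qquad
  E[q] = \int \dot{q}^\top M(q)\,\dot{q}\,\mathrm{d}t,
\]
\[
  \ddot{q}^{\,k} + \Gamma^k_{ij}(q)\,\dot{q}^i\dot{q}^j = 0, \qquad
  \Gamma^k_{ij} = \tfrac{1}{2}\,G^{kl}\bigl(\partial_i G_{lj} + \partial_j G_{li} - \partial_l G_{ij}\bigr).
\]
```

Here a geodesic minimizes the energy functional E[q], and the second line is the standard geodesic equation with Christoffel symbols derived from the metric.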
Geometry-aware Manipulability Learning, Tracking and Transfer
Body posture influences human and robot performance in manipulation tasks,
as appropriate poses facilitate motion or force exertion along different axes.
In robotics, manipulability ellipsoids arise as a powerful descriptor to
analyze, control, and design robot dexterity as a function of the
articulatory joint configuration. This descriptor can be designed according to
different task requirements, such as tracking a desired position or applying a
specific force. In this context, this paper presents a novel
\emph{manipulability transfer} framework, a method that allows robots to learn
and reproduce manipulability ellipsoids from expert demonstrations. The
proposed learning scheme is built on a tensor-based formulation of a Gaussian
mixture model that takes into account that manipulability ellipsoids lie on the
manifold of symmetric positive definite matrices. Learning is coupled with a
geometry-aware tracking controller allowing robots to follow a desired profile
of manipulability ellipsoids. Extensive evaluations in simulation with
redundant manipulators, a robotic hand, and humanoid agents, as well as an
experiment with two real dual-arm systems validate the feasibility of the
approach.
Comment: Accepted for publication in the Intl. Journal of Robotics Research (IJRR). Website: https://sites.google.com/view/manipulability. Code: https://github.com/NoemieJaquier/Manipulability. 24 pages, 20 figures, 3 tables, 4 appendices.
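As a minimal illustration of the geometric objects involved (a sketch, not the released code linked above): the velocity manipulability ellipsoid is M(q) = J(q) J(q)^T, a symmetric positive definite (SPD) matrix, and geometry-aware learning and tracking compare such matrices with an SPD-manifold metric, for instance the affine-invariant distance:

```python
import numpy as np

def manipulability_ellipsoid(J):
    """Velocity manipulability ellipsoid M(q) = J(q) J(q)^T: an SPD matrix
    describing how easily the end-effector moves along each task-space
    direction at the current joint configuration."""
    return J @ J.T

def spd_distance(A, B):
    """Affine-invariant Riemannian distance on the SPD manifold:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F."""
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T   # A^{-1/2} via eigendecomposition
    lam = np.linalg.eigvalsh(A_inv_sqrt @ B @ A_inv_sqrt)
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Toy usage with a random 3x7 Jacobian (7-DoF arm, 3-D task space)
J = np.random.default_rng(0).standard_normal((3, 7))
M = manipulability_ellipsoid(J)
print(spd_distance(M, np.eye(3)))               # distance to the unit sphere
```

Because this distance respects the curvature of the SPD manifold, it avoids the artifacts (e.g. the swelling effect) that arise when ellipsoids are averaged or interpolated as flat Euclidean matrices.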
Analysis and Transfer of Human Movement Manipulability in Industry-like Activities
Humans exhibit outstanding learning, planning and adaptation capabilities
while performing different types of industrial tasks. Given some knowledge
about the task requirements, humans are able to plan their limbs motion in
anticipation of the execution of specific skills. For example, when an operator
needs to drill a hole into a surface, the posture of her limbs varies to
guarantee a stable configuration that is compatible with the drilling task
specifications, e.g., exerting a force orthogonal to the surface. Therefore, we
are interested in analyzing human arm motion patterns in industrial
activities. To do so, we build our analysis on the so-called manipulability
ellipsoid, which captures a posture-dependent ability to perform motion and
exert forces along different task directions. Through a thorough analysis of
human movement manipulability, we found that the ellipsoid shape is
task-dependent and often provides more information about human motion than
classical manipulability indices. Moreover, we show how manipulability patterns
can be transferred to robots by learning a probabilistic model and employing a
manipulability tracking controller that acts on the task planning and execution
according to predefined control hierarchies.
Comment: Accepted for publication in IROS'20. Website: https://sites.google.com/view/manipulability/home. Video: https://youtu.be/q0GZwvwW9A
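The claim that the ellipsoid shape carries more information than classical indices can be made concrete with a small sketch (illustrative only): Yoshikawa's scalar manipulability index compresses the ellipsoid to its volume, while an eigendecomposition retains the directional information that distinguishes task-relevant postures.

```python
import numpy as np

def yoshikawa_index(J):
    """Classical scalar manipulability index w(q) = sqrt(det(J J^T)):
    proportional to the ellipsoid's volume, blind to its orientation."""
    return np.sqrt(np.linalg.det(J @ J.T))

def ellipsoid_shape(J):
    """Principal axis lengths and directions of the manipulability
    ellipsoid: sqrt-eigenvalues and eigenvectors of J J^T."""
    eigvals, eigvecs = np.linalg.eigh(J @ J.T)
    return np.sqrt(eigvals), eigvecs
```

Two postures whose ellipsoids have equal volume but different orientations are indistinguishable to the scalar index, yet may differ sharply in their ability to exert force orthogonal to a drilling surface.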
On the Design of Region-Avoiding Metrics for Collision-Safe Motion Generation on Riemannian Manifolds
The generation of energy-efficient and dynamic-aware robot motions that
satisfy constraints such as joint limits, self-collisions, and collisions with
the environment remains a challenge. In this context, Riemannian geometry
offers promising solutions by identifying robot motions with geodesics on the
so-called configuration space manifold. While this manifold naturally accounts
for the robot's intrinsic dynamics, the aforementioned constraints remain
overlooked. In this
paper, we propose a modification of the Riemannian metric of the configuration
space manifold allowing for the generation of robot motions as geodesics that
efficiently avoid given regions. We introduce a class of Riemannian metrics
based on barrier functions that guarantee strict region avoidance by
systematically generating accelerations away from no-go regions in joint and
task space. We evaluate the proposed Riemannian metric to generate
energy-efficient, dynamic-aware, and collision-free motions of a humanoid robot
as geodesics and sequences thereof.
Comment: Accepted for publication in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS) 2023. 8 pages, 7 figures, accompanying video at https://youtu.be/qT43XgYOlU
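The abstract does not give the metric's closed form; as a hypothetical minimal instance of the idea, one can conformally scale a base metric by a barrier term that diverges near a no-go region, so that curves crossing the region accumulate unbounded length and minimizing geodesics bend around it:

```python
import numpy as np

def barrier_scaled_metric(G, q, dist_to_region, lam=1.0, eps=1e-6):
    """Hypothetical barrier-scaled Riemannian metric (illustrative, not the
    paper's actual metric class): G_tilde(q) = (1 + lam / d(q)) * G(q),
    where d(q) is the distance from configuration q to the region to avoid.
    The scaling blows up as d -> 0, so geodesics of G_tilde are pushed away
    from the region while reducing to geodesics of G far from it."""
    d = max(dist_to_region(q), eps)   # clamp to keep the scaling finite
    return (1.0 + lam / d) * G
```

The `dist_to_region` callable and the gain `lam` are assumptions for this sketch; the paper's metrics are instead built from barrier functions with strict avoidance guarantees in both joint and task space.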
A Riemannian Take on Human Motion Analysis and Retargeting
Dynamic motions of humans and robots are widely driven by posture-dependent
nonlinear interactions between their degrees of freedom. However, these
dynamical effects remain mostly overlooked when studying the mechanisms of
human movement generation. Inspired by recent works, we hypothesize that human
motions are planned as sequences of geodesic synergies, and thus correspond to
coordinated joint movements achieved with piecewise minimum energy. The
underlying computational model is built on Riemannian geometry to account for
the inertial characteristics of the body. Through the analysis of various human
arm motions, we find that our model segments motions into geodesic synergies,
and successfully predicts observed arm postures, hand trajectories, as well as
their respective velocity profiles. Moreover, we show that our analysis can
further be exploited to transfer arm motions to robots by reproducing
individual human synergies as geodesic paths in the robot configuration space.
Comment: Accepted for publication in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 202
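For intuition about what reproducing a synergy as a geodesic path entails computationally, here is a generic sketch (the `christoffel` callable, returning the Christoffel symbols of the inertia metric, is an assumed input) that shoots a geodesic from an initial posture and joint velocity:

```python
import numpy as np

def shoot_geodesic(q0, dq0, christoffel, dt=1e-3, steps=1000):
    """Integrate the geodesic ODE  ddq^k = -Gamma^k_{ij}(q) dq^i dq^j  with
    semi-implicit Euler. `christoffel(q)` must return the (k, i, j)-indexed
    array of Christoffel symbols of the configuration-space metric, e.g.
    derived from the robot's mass matrix M(q)."""
    q, dq = np.asarray(q0, float).copy(), np.asarray(dq0, float).copy()
    path = [q.copy()]
    for _ in range(steps):
        ddq = -np.einsum('kij,i,j->k', christoffel(q), dq, dq)
        dq += dt * ddq          # update velocity first (semi-implicit Euler)
        q += dt * dq
        path.append(q.copy())
    return np.array(path)
```

A piecewise-geodesic hypothesis then amounts to concatenating several such segments, each traversed with minimum kinetic energy between its endpoints.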
Active Improvement of Control Policies with Bayesian Gaussian Mixture Model
Learning from demonstration (LfD) is an intuitive framework allowing
non-expert users to easily (re-)program robots. However, the quality and
quantity of demonstrations strongly influence the generalization
performance of LfD approaches. In this paper, we introduce a novel
learning framework in order to improve the generalization capabilities of
control policies. The proposed approach is based on the epistemic uncertainties
of Bayesian Gaussian mixture models (BGMMs). We determine the new query point
location by optimizing a closed-form information-density cost based on the
quadratic R\'enyi entropy. Furthermore, to better represent uncertain regions
and to avoid local optima, we propose to approximate the active
learning cost with a Gaussian mixture model (GMM). We demonstrate our active
learning framework in the context of a reaching task in a cluttered environment
with an illustrative toy example and a real experiment with a Panda robot.
Comment: Accepted for publication in IROS'2
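The quadratic Rényi entropy of a GMM is indeed available in closed form via the Gaussian product identity ∫ N(x; m_i, S_i) N(x; m_j, S_j) dx = N(m_i; m_j, S_i + S_j). A minimal sketch of that entropy term follows (the paper's full information-density cost combines it with further terms):

```python
import numpy as np
from scipy.stats import multivariate_normal

def renyi2_entropy_gmm(weights, means, covs):
    """Closed-form quadratic Renyi entropy H_2(p) = -log int p(x)^2 dx of a
    Gaussian mixture p(x) = sum_i w_i N(x; m_i, S_i), using the identity
    int N(x; m_i, S_i) N(x; m_j, S_j) dx = N(m_i; m_j, S_i + S_j)."""
    overlap = 0.0
    for w_i, m_i, S_i in zip(weights, means, covs):
        for w_j, m_j, S_j in zip(weights, means, covs):
            overlap += w_i * w_j * multivariate_normal.pdf(m_i, mean=m_j, cov=S_i + S_j)
    return -np.log(overlap)

# Toy usage: a two-component mixture in 2-D
w = [0.5, 0.5]
m = [np.zeros(2), np.ones(2)]
S = [np.eye(2), 0.5 * np.eye(2)]
print(renyi2_entropy_gmm(w, m, S))
```

The double sum is O(K^2) in the number of components K, which stays cheap for the mixture sizes typical of LfD models and keeps the query-point optimization differentiable.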
K-VIL: Keypoints-based Visual Imitation Learning
Visual imitation learning provides efficient and intuitive solutions for
robotic systems to acquire novel manipulation skills. However, simultaneously
learning geometric task constraints and control policies from visual inputs
alone remains a challenging problem. In this paper, we propose an approach for
keypoint-based visual imitation (K-VIL) that automatically extracts sparse,
object-centric, and embodiment-independent task representations from a small
number of human demonstration videos. The task representation is composed of
keypoint-based geometric constraints on principal manifolds, their associated
local frames, and the movement primitives needed for task execution. Our
approach is capable of extracting such task representations from
a single demonstration video, and of incrementally updating them when new
demonstrations become available. To reproduce manipulation skills using the
learned set of prioritized geometric constraints in novel scenes, we introduce
a novel keypoint-based admittance controller. We evaluate our approach in
several real-world applications, showcasing its ability to deal with cluttered
scenes, new instances of categorical objects, and large object pose and shape
variations, as well as its efficiency and robustness in both one-shot and
few-shot imitation learning settings. Videos and source code are available at
https://sites.google.com/view/k-vil
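The abstract does not detail the keypoint-based admittance controller; as a point of reference, a generic (hypothetical) admittance law drives a keypoint compliantly toward its constraint target under external forces:

```python
def admittance_step(x, dx, x_des, f_ext, M=1.0, D=20.0, K=100.0, dt=0.01):
    """One integration step of the generic admittance law
        M * ddx + D * dx + K * (x - x_des) = f_ext.
    Gains M, D, K are illustrative placeholders; K-VIL's actual controller
    additionally handles sets of prioritized geometric constraints on
    keypoints in their local frames."""
    ddx = (f_ext - D * dx - K * (x - x_des)) / M   # solve for acceleration
    dx = dx + dt * ddx                             # integrate velocity
    x = x + dt * dx                                # integrate position
    return x, dx
```

Compliance of this kind is what lets the reproduced skill tolerate the pose and shape variations of novel object instances instead of tracking a rigid reference trajectory.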