Towards Orientation Learning and Adaptation in Cartesian Space
As a promising branch of robotics, imitation learning emerges as an important way to transfer human skills to robots, where human demonstrations represented in Cartesian or joint spaces are utilized to estimate task/skill models that can subsequently be generalized to new situations. While learning Cartesian positions suffices for many applications, the end-effector orientation is required in many others. Despite recent advances in learning orientations from demonstrations, several crucial issues have not yet been adequately addressed. For instance, how can demonstrated orientations be adapted to pass through arbitrary desired points that comprise orientations and angular velocities? In this article, we propose an approach that is capable of learning multiple orientation trajectories and adapting learned orientation skills to new situations (e.g., via-points and end-points), where both orientation and angular velocity are considered. Specifically, we introduce a kernelized treatment to alleviate explicit basis functions when learning orientations, which allows for learning orientation trajectories associated with high-dimensional inputs. In addition, we extend our approach to the learning of quaternions with angular acceleration or jerk constraints, which allows for generating smoother orientation profiles for robots. Several examples, including experiments with real 7-DoF robot arms, are provided to verify the effectiveness of our method.
Comment: Accepted for publication at IEEE Transactions on Robotics.