Modification of Gesture-Determined-Dynamic Function with Consideration of Margins for Motion Planning of Humanoid Robots
The gesture-determined-dynamic function (GDDF) offers an effective way to
handle the control problems of humanoid robots. Specifically, GDDF is utilized
to constrain the movements of dual arms of humanoid robots and steer specific
gestures to conduct demanding tasks under certain conditions. However, there is
still a deficiency in this scheme. Through experiments, we found that the
joints of the dual arms, which can be regarded as redundant manipulators,
can slightly exceed their limits at the joint-angle level. The performance
depends directly on the parameters designed beforehand for the GDDF, which
limits the method's adaptability in practical applications. In
this paper, a modified scheme of GDDF with consideration of margins (MGDDF) is
proposed. The MGDDF scheme is based on a quadratic programming (QP) framework,
which is widely applied to solving redundancy resolution problems for robot
arms. Moreover, three margins are introduced in the proposed MGDDF scheme to
avoid joint limits. With these margins in place, the joints of the
manipulators of the humanoid robots will not exceed their limits, so the
potential damage that might be caused by exceeding the limits is completely
avoided. Computer simulations conducted in MATLAB further verify the
feasibility and superiority of the proposed MGDDF scheme.
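The joint-limit-margin idea above can be illustrated at the velocity level: shrink each joint's position limits by a margin, convert the shrunk limits into velocity bounds, and solve the resulting box-constrained QP, which for a pure least-squares objective reduces to clipping. This is a minimal sketch of the general technique, not the paper's exact MGDDF formulation; the function name, the single-margin parameter, and the one-step bound conversion are all assumptions.

```python
import numpy as np

def limit_avoiding_velocity(q, q_min, q_max, dq_des, margin, dq_max, dt=0.01):
    """Clamp desired joint velocities so joints stay inside the shrunk
    range [q_min + margin, q_max - margin] (illustrative scheme only;
    the paper's MGDDF uses three margins inside a full QP solver)."""
    # Velocity bounds implied by the margin-shrunk position limits over
    # one control step of length dt, intersected with the speed limits.
    lower = np.maximum(-dq_max, (q_min + margin - q) / dt)
    upper = np.minimum(dq_max, (q_max - margin - q) / dt)
    # A box-constrained QP minimizing ||dq - dq_des||^2 is solved by clipping.
    return np.clip(dq_des, lower, upper)

# A joint near its upper limit gets its commanded velocity reduced so the
# margin is never crossed; a joint far from its limits is unaffected.
q = np.array([0.885])
dq = limit_avoiding_velocity(q, np.array([-1.0]), np.array([1.0]),
                             np.array([10.0]), margin=0.1,
                             dq_max=np.array([2.0]), dt=0.01)
```

With `q = 0.885`, the shrunk upper limit is `0.9`, so only `0.015 rad` of travel remains in one `0.01 s` step and the large desired velocity is clipped accordingly.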
Neural Learning of Stable Dynamical Systems based on Data-Driven Lyapunov Candidates
Neumann K, Lemme A, Steil JJ. Neural Learning of Stable Dynamical Systems based on Data-Driven Lyapunov Candidates. Presented at the Int. Conference on Intelligent Robots and Systems (IROS), Tokyo
Neural Learning of Vector Fields for Encoding Stable Dynamical Systems
Lemme A, Reinhart F, Neumann K, Steil JJ. Neural Learning of Vector Fields for Encoding Stable Dynamical Systems. Neurocomputing. 2014;141:3-14
Learning to Avoid Obstacles With Minimal Intervention Control
Programming by demonstration has received much attention, as it offers a general framework which allows robots to efficiently acquire novel motor skills from a human teacher. While traditional imitation learning that focuses on either Cartesian or joint space alone may become inappropriate in situations where both spaces are equally important (e.g., writing or striking tasks), hybrid imitation learning of skills in both Cartesian and joint spaces simultaneously has been studied recently. However, an important issue which often arises in dynamic or unstructured environments is overlooked, namely: how can a robot avoid obstacles? In this paper, we aim to address the problem of avoiding obstacles in the context of hybrid imitation learning. Specifically, we propose to tackle three subproblems: (i) designing a proper potential field so as to bypass obstacles, (ii) guaranteeing that joint limits are respected when adjusting trajectories in the process of avoiding obstacles, and (iii) determining proper control commands for robots such that potential human-robot interaction is safe. By solving the aforementioned subproblems, the robot is capable of generalizing observed skills to new situations featuring obstacles in a feasible and safe manner. The effectiveness of the proposed method is validated through a toy example as well as a real transportation experiment on the iCub humanoid robot.
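Subproblem (i) above, designing a potential field to bypass obstacles, is commonly handled with a repulsive term that activates inside an influence radius around each obstacle. The sketch below shows the classic Khatib-style repulsive gradient as one concrete example; it is an assumption for illustration, not the specific field design proposed in the paper, and the function name and parameters are hypothetical.

```python
import numpy as np

def repulsive_velocity(x, obstacle, radius, gain):
    """Repulsive potential-field term (classic Khatib-style form, used
    here only as an illustration of subproblem (i)). Returns a velocity
    pushing the point x away from the obstacle when it lies inside the
    obstacle's influence radius, and zero otherwise."""
    d_vec = x - obstacle
    d = np.linalg.norm(d_vec)
    if d >= radius or d == 0.0:
        return np.zeros_like(x)  # outside the influence region (or degenerate)
    # Negative gradient of U(d) = 0.5 * gain * (1/d - 1/radius)^2,
    # directed along the unit vector away from the obstacle.
    return gain * (1.0 / d - 1.0 / radius) * (1.0 / d**2) * (d_vec / d)

# A point near the obstacle is pushed away; a distant point is untouched.
v_near = repulsive_velocity(np.array([0.5, 0.0]), np.zeros(2), radius=1.0, gain=1.0)
v_far = repulsive_velocity(np.array([2.0, 0.0]), np.zeros(2), radius=1.0, gain=1.0)
```

In a hybrid scheme this velocity would be superimposed on the demonstrated motion and then passed through a joint-limit-respecting step, matching subproblem (ii).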
Neural probabilistic motor primitives for humanoid control
We focus on the problem of learning a single motor module that can flexibly
express a range of behaviors for the control of high-dimensional physically
simulated humanoids. To do this, we propose a motor architecture that has the
general structure of an inverse model with a latent-variable bottleneck. We
show that it is possible to train this model entirely offline to compress
thousands of expert policies and learn a motor primitive embedding space. The
trained neural probabilistic motor primitive system can perform one-shot
imitation of whole-body humanoid behaviors, robustly mimicking unseen
trajectories. Additionally, we demonstrate that it is also straightforward to
train controllers to reuse the learned motor primitive space to solve tasks,
and the resulting movements are relatively naturalistic. To support the
training of our model, we compare two approaches for offline policy cloning,
including an experience efficient method which we call linear feedback policy
cloning. We encourage readers to view a supplementary video
(https://youtu.be/CaDEf-QcKwA) summarizing our results.
Comment: Accepted as a conference paper at ICLR 201
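The architecture described above, an inverse model with a latent-variable bottleneck, can be sketched at the shape level: an encoder compresses the state and a reference snippet into a low-dimensional latent motor intention, and a decoder maps that latent plus the proprioceptive state to an action. This is a minimal single-layer sketch under assumed dimensions, with hypothetical names and random untrained weights, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(state, target, W_enc):
    """Bottleneck encoder: (current state, reference snippet) -> latent
    motor intention z. Single tanh layer, purely for illustration."""
    return np.tanh(W_enc @ np.concatenate([state, target]))

def decode(state, z, W_dec):
    """Low-level policy: latent z plus proprioceptive state -> action."""
    return np.tanh(W_dec @ np.concatenate([state, z]))

# Assumed dimensions: the latent space is much smaller than the inputs,
# which is what forces the compression of the expert policies.
state_dim, target_dim, latent_dim, action_dim = 8, 8, 3, 4
W_enc = rng.normal(size=(latent_dim, state_dim + target_dim))
W_dec = rng.normal(size=(action_dim, state_dim + latent_dim))

state = rng.normal(size=state_dim)
target = rng.normal(size=target_dim)
z = encode(state, target, W_enc)      # compressed motor intention, shape (3,)
action = decode(state, z, W_dec)      # motor command, shape (4,)
```

One-shot imitation then amounts to encoding an unseen reference trajectory into a sequence of latents and decoding them against the simulated humanoid's state at each step.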