Humanoid Robots
For many years, humans have sought to recreate the complex mechanisms that make up the human body. The task is extremely complicated, and the results are not yet fully satisfactory. However, with advancing technology grounded in theoretical and experimental research, we have managed, to some extent, to copy or imitate certain systems of the human body. This research aims not only to create humanoid robots, a large part of them autonomous systems, but also to deepen our knowledge of the systems that form the human body, with a view to applications in rehabilitation technology, drawing together studies from Robotics, Biomechanics, Biomimetics, Cybernetics, and other areas. This book presents a series of studies inspired by this ideal, carried out by researchers worldwide, that analyze and discuss diverse subjects related to humanoid robots. The contributions explore robotic hands, learning, language, vision, and locomotion.
An implementation of vision based deep reinforcement learning for humanoid robot locomotion
Deep reinforcement learning (DRL) is a promising approach for controlling humanoid robot locomotion. However, readings from sensors such as an IMU, gyroscope, and GPS alone are not sufficient for robots to learn locomotion skills. In this article, we aim to show the success of vision-based DRL. We propose, for the first time, a new vision-based deep reinforcement learning algorithm for the locomotion of the Robotis-OP2 humanoid robot. In the experimental setup, we construct the locomotion of the humanoid robot in a specific environment in the Webots software. We use Double Dueling Q-Networks (D3QN) and Deep Q-Networks (DQN), two kinds of reinforcement learning algorithms, and we present the performance of the vision-based DRL algorithms in a locomotion experiment. The experimental results show that D3QN outperforms DQN in both stability of locomotion and speed of training, and that vision-based DRL algorithms can be successfully applied to other complex environments and applications.
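The two ingredients that distinguish D3QN from plain DQN can be sketched compactly: a dueling head that decomposes Q-values into a state value and action advantages, and a double-DQN target that uses the online network to select the next action and the target network to evaluate it. The sketch below is a minimal NumPy illustration of those two formulas, not the paper's implementation; function names and shapes are assumptions.

```python
import numpy as np

def dueling_q(value, advantages):
    # Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    # Subtracting the mean advantage makes the decomposition identifiable.
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    # Double DQN: the online network picks the greedy next action,
    # the target network supplies its value, reducing overestimation bias.
    best = np.argmax(q_online_next, axis=-1)
    bootstrap = q_target_next[np.arange(len(best)), best]
    return reward + gamma * (1.0 - done) * bootstrap
```

In D3QN both ideas are combined: the dueling head shapes the network output, and the double-DQN rule shapes the regression target during training.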
From Knowing to Doing: Learning Diverse Motor Skills through Instruction Learning
Recent years have witnessed many successful trials in the robot learning field. For contact-rich robotic tasks, it is challenging to learn coordinated motor skills by reinforcement learning. Imitation learning addresses this problem by using a mimic reward that encourages the robot to track a given reference trajectory. However, imitation learning is inefficient and may constrain the learned motion. In this paper, we propose instruction learning, which is inspired by the human learning process and is highly efficient, flexible, and versatile for robot motion learning. Instead of using a reference signal in the reward, instruction learning applies the reference signal directly as a feedforward action, which is combined with a feedback action learned by reinforcement learning to control the robot. In addition, we propose an action-bounding technique and remove the mimic reward, which proves crucial for efficient and flexible learning. We compare the performance of instruction learning with imitation learning, showing that instruction learning can greatly speed up the training process and guarantee that the desired motion is learned correctly. The effectiveness of instruction learning is validated through a range of motion-learning examples for a biped robot and a quadruped robot, where skills can typically be learned within several million steps. We also conduct sim-to-real transfer and online learning experiments on a real quadruped robot. Instruction learning shows great merits and potential, making it a promising alternative to imitation learning.
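The core control structure described above, a reference signal applied as a feedforward action plus a learned feedback correction limited by action bounding, can be illustrated in a few lines. This is a hypothetical sketch of that composition, not the authors' code; the function name, the symmetric clipping bound, and the additive combination are assumptions drawn from the abstract.

```python
import numpy as np

def instruction_action(reference_action, policy_feedback, bound):
    # Feedforward term: the reference trajectory's action at this step.
    # Feedback term: the RL policy's correction, clipped to a bound
    # (the "action bounding" idea) so it can only perturb, not override,
    # the reference motion.
    correction = np.clip(policy_feedback, -bound, bound)
    return reference_action + correction
```

Because the reference enters as an action rather than as a reward term, the policy only has to learn a bounded correction, which is consistent with the reported speed-up over mimic-reward imitation learning.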
Learning natural locomotion behaviors for humanoid robots using human bias
This paper presents a new learning framework that leverages knowledge from imitation learning, deep reinforcement learning, and control theory to achieve human-style locomotion that is natural, dynamic, and robust for humanoids. We propose novel approaches to introduce human bias, namely motion capture data and a special Multi-Expert network structure. We use the Multi-Expert network structure to smoothly blend behavioral features, and an augmented reward design for the task and imitation rewards. Our reward design is composable, tunable, and explainable using fundamental concepts from conventional humanoid control. We rigorously validate and benchmark the learning framework, which consistently produces robust locomotion behaviors in various test scenarios. Further, we demonstrate the capability of learning robust and versatile policies in the presence of disturbances, such as terrain irregularities and external pushes.
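The two design elements named in the abstract, smooth blending of expert outputs and a composable weighted reward, admit a short generic sketch. This is a minimal illustration under assumed interfaces (softmax gating over expert outputs, a weighted sum of named reward terms), not the paper's architecture.

```python
import numpy as np

def blend_experts(gate_logits, expert_outputs):
    # Softmax gating: normalized weights smoothly interpolate between
    # the experts' outputs, one row per expert.
    w = np.exp(gate_logits - gate_logits.max())
    w /= w.sum()
    return (w[:, None] * expert_outputs).sum(axis=0)

def composed_reward(terms, weights):
    # Composable reward: a weighted sum of interpretable terms
    # (e.g. task progress, imitation tracking), each tunable on its own.
    return sum(weights[k] * v for k, v in terms.items())
```

Keeping each reward term separate with its own weight is what makes the design tunable and explainable: any term can be inspected, reweighted, or removed without touching the others.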