Adaptive, fast walking in a biped robot under neuronal control and learning
Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher-level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested sensorimotor loops in which the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot, which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (> 3.0 leg-lengths/s), self-adapting to minor disturbances and reacting robustly to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks.
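The abstract does not specify its "online learning mechanisms based on simulated synaptic plasticity"; as a hedged illustration only, a correlation-based rule of the ICO-learning family (weight change proportional to the product of a predictive input and the temporal derivative of a reflex signal) can be sketched as follows. All signal names and the toy timing below are assumptions, not details from the paper:

```python
def ico_step(w, x_pred, reflex, reflex_prev, mu=0.05):
    """Weight change proportional to the correlation between a predictive
    input and the temporal derivative of the reflex (error) signal."""
    return w + mu * x_pred * (reflex - reflex_prev)

w = 0.0
reflex_prev = 0.0
for t in range(200):
    x_pred = 1.0 if 20 <= t < 40 else 0.0   # early-warning sensor fires first
    reflex = 1.0 if 30 <= t < 40 else 0.0   # reflex event follows later
    w = ico_step(w, x_pred, reflex, reflex_prev)
    reflex_prev = reflex
# the weight grows only at the onset of the reflex, while the predictive
# input is still active, so the predictive pathway is strengthened
```

Because the update depends on the derivative of the reflex signal, learning stops once the predictive pathway suppresses the reflex, which is what makes such rules suitable for the fast online adaptation the abstract describes.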
Humanoid Robots
For many years, humans have tried in many ways to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not yet fully satisfactory. However, through technological advances grounded in theoretical and experimental research, we have managed, to some extent, to copy or imitate some systems of the human body. This research aims not only to create humanoid robots, a great part of them autonomous systems, but also to deepen our knowledge of the systems that form the human body, with possible applications in rehabilitation technology, bringing together studies related not only to Robotics but also to Biomechanics, Biomimetics, and Cybernetics, among other areas. This book presents a series of studies inspired by this ideal, carried out by researchers worldwide, that analyze and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision, and locomotion.
Hierarchical Control for Bipedal Locomotion using Central Pattern Generators and Neural Networks
The complexity of bipedal locomotion may be attributed to the difficulty in
synchronizing joint movements while at the same time achieving high-level
objectives such as walking in a particular direction. Artificial central
pattern generators (CPGs) can produce synchronized joint movements and have
been used in the past for bipedal locomotion. However, most existing CPG-based
approaches do not address the problem of high-level control explicitly. We
propose a novel hierarchical control mechanism for bipedal locomotion where an
optimized CPG network is used for joint control and a neural network acts as a
high-level controller for modulating the CPG network. By separating motion
generation from motion modulation, the high-level controller does not need to
control individual joints directly but instead can develop to achieve a higher
goal using a low-dimensional control signal. The feasibility of the
hierarchical controller is demonstrated through simulation experiments using
the Neuro-Inspired Companion (NICO) robot. Experimental results demonstrate the
controller's ability to function even without the availability of an exact
robot model.
Comment: In: Proceedings of the Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob), Oslo, Norway, Aug. 19-22, 201
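The separation this abstract describes, a CPG producing synchronized joint movements while a high-level controller modulates it through a low-dimensional command, can be illustrated with a minimal sketch. The phase-oscillator model and all names below are assumptions for illustration, not details taken from the paper:

```python
import math

def cpg_step(phases, dt, freq, coupling=2.0):
    """Advance two phase oscillators whose coupling locks them in
    antiphase (left/right legs half a cycle apart)."""
    p0, p1 = phases
    dp0 = 2 * math.pi * freq + coupling * math.sin(p1 - p0 - math.pi)
    dp1 = 2 * math.pi * freq + coupling * math.sin(p0 - p1 - math.pi)
    return (p0 + dp0 * dt, p1 + dp1 * dt)

def joint_targets(phases, amplitude):
    """Map oscillator phases to joint angle setpoints."""
    return [amplitude * math.sin(p) for p in phases]

# The high-level controller emits one scalar command; it modulates the
# CPG's frequency rather than commanding individual joints.
speed_cmd = 0.5
phases = (0.0, 0.1)                  # slightly perturbed start
for _ in range(2000):                # 2 s of simulated time
    phases = cpg_step(phases, dt=0.001, freq=1.0 + speed_cmd)
targets = joint_targets(phases, amplitude=0.4)
```

The point of the hierarchy is visible here: the coupling term pulls the oscillators back into antiphase after the perturbed start, so the high-level signal never needs to specify joint trajectories, only how the pattern should be modulated.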
Robust and versatile humanoid locomotion based on analytical control and residual physics
Humanoid robots are made to resemble humans, but their locomotion
abilities are far from ours in terms of agility and versatility. When humans
walk on complex terrains or face external disturbances, they
combine a set of strategies, unconsciously and efficiently, to regain
stability. This thesis tackles the problem of developing a robust omnidirectional
walking framework, which is able to generate versatile
and agile locomotion on complex terrains. We designed and developed
model-based and model-free walk engines and formulated the
controllers using different approaches including classical and optimal
control schemes and validated their performance through simulations
and experiments. These frameworks have hierarchical structures
composed of several layers; each layer comprises several interconnected
modules, which reduces the complexity and increases the flexibility
of the proposed frameworks. Additionally, they
can be easily and quickly deployed on different platforms.
Moreover, we believe that using machine learning on top of analytical
approaches is key to enabling humanoid robots to step out of the laboratory.
We proposed a tight coupling between analytical control and
deep reinforcement learning. We augmented our analytical controller
with reinforcement learning modules to learn how to regulate the walk
engine parameters (planners and controllers) adaptively and generate
residuals to adjust the robot’s target joint positions (residual physics).
The effectiveness of the proposed frameworks was demonstrated and
evaluated across a set of challenging simulation scenarios. The robot
was able to generalize what it learned in one scenario, by displaying
human-like locomotion skills in unforeseen circumstances, even in the
presence of noise and external pushes.
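The residual-physics coupling described in this abstract, where learning augments rather than replaces an analytical controller, can be sketched as follows. This is a minimal illustration under assumed names; the thesis's actual walk engine and policy interfaces are not reproduced here:

```python
import math

def analytic_walk_engine(t, step_freq=1.0):
    """Placeholder analytical engine: antiphase hip targets (radians)."""
    phase = 2 * math.pi * step_freq * t
    return [0.3 * math.sin(phase), -0.3 * math.sin(phase)]

def apply_residuals(targets, residuals, limit=0.05):
    """Add clipped learned corrections so the policy can only nudge,
    never override, the analytical solution."""
    return [q + max(-limit, min(limit, r)) for q, r in zip(targets, residuals)]

# In the thesis the residuals come from a trained RL policy; here they
# are fixed stand-in values to show the coupling.
targets = analytic_walk_engine(t=0.25)
commanded = apply_residuals(targets, residuals=[0.02, -0.10])
```

Clipping the residuals is one plausible way to keep the analytical controller as a safe fallback while the policy learns: even an untrained policy can only perturb the gait, not destroy it.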
Fast biped walking with a neuronal controller and physical computation
Biped walking remains a difficult problem and robot models can
greatly facilitate our understanding of the underlying
biomechanical principles as well as their neuronal control. The
goal of this study is to specifically demonstrate that stable
biped walking can be achieved by combining the physical properties
of the walking robot with a small, reflex-based neuronal network,
which is governed mainly by local sensor signals. This study shows
that human-like gaits emerge without specific position or
trajectory control and that the walker is able to compensate small
disturbances through its own dynamical properties. The reflexive
controller used here has the following characteristics, which are
different from earlier approaches: (1) Control is mainly local.
Hence, it uses only two signals (AEA=Anterior Extreme Angle and
GC=Ground Contact) which operate at the inter-joint level. All
other signals operate only at single joints. (2) Neither position
control nor trajectory tracking control is used. Instead, the
approximate nature of the local reflexes on each joint allows the
robot mechanics itself (e.g., its passive dynamics) to contribute
substantially to the overall gait trajectory computation. (3) The
motor control scheme used in the local reflexes of our robot is
more straightforward and has more biological plausibility than
that of other robots, because the outputs of the motorneurons in
our reflexive controller are directly driving the motors of the
joints, rather than working as references for position or velocity
control. As a consequence, the neural controller and the robot
mechanics are closely coupled as a neuro-mechanical system and
this study emphasises that dynamically stable biped walking gaits
emerge from the coupling between neural computation and physical
computation. This is demonstrated by different walking
experiments using two real robots as well as by a Poincaré map
analysis applied on a model of the robot in order to assess its
stability. In addition, this neuronal control structure allows the
use of a policy gradient reinforcement learning algorithm to tune
the parameters of the neurons in real-time, during walking. This
way the robot can reach a record-breaking walking speed of 3.5
leg-lengths per second after only a few minutes of online
learning, which is even comparable to the fastest relative speed
of human walking.
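The abstract's real-time policy-gradient tuning of neuron parameters can be sketched in a hedged form: perturb the parameters, use the measured walking speed as the reward, and ascend the estimated gradient. The reward function below is a toy stand-in for a rollout on the robot, and all parameter names are illustrative:

```python
import random

def walking_speed(params):
    """Toy stand-in for a walking rollout: reward peaks at gain=2.0, thr=0.5."""
    gain, thr = params
    return 3.5 - (gain - 2.0) ** 2 - (thr - 0.5) ** 2

def policy_gradient_step(params, sigma=0.1, lr=0.05, n=8):
    """Estimate the reward gradient from Gaussian perturbations
    (score-function estimator with the unperturbed reward as baseline)
    and take one ascent step."""
    base = walking_speed(params)
    grad = [0.0] * len(params)
    for _ in range(n):
        eps = [random.gauss(0.0, sigma) for _ in params]
        r = walking_speed([p + e for p, e in zip(params, eps)])
        for i, e in enumerate(eps):
            grad[i] += (r - base) * e / (sigma ** 2 * n)
    return [p + lr * g for p, g in zip(params, grad)]

random.seed(0)
params = [1.0, 0.0]          # initial neuron parameters (illustrative)
for _ in range(300):
    params = policy_gradient_step(params)
```

Because each step needs only a few perturbed rollouts and no robot model, this kind of estimator is compatible with the online, during-walking learning the abstract reports.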
Chaotic exploration and learning of locomotion behaviours
We present a general and fully dynamic neural system, which exploits intrinsic chaotic dynamics, for the real-time goal-directed exploration and learning of the possible locomotion patterns of an articulated robot of arbitrary morphology in an unknown environment. The controller is modeled as a network of neural oscillators that are initially coupled only through physical embodiment, and goal-directed exploration of coordinated motor patterns is achieved by chaotic search using adaptive bifurcation. The phase space of the indirectly coupled neural-body-environment system contains multiple transient or permanent self-organized dynamics, each of which is a candidate for a locomotion behavior. The adaptive bifurcation enables the system orbit to wander through various phase-coordinated states, using its intrinsic chaotic dynamics as a driving force, and to stabilize onto one of the states matching the given goal criteria. In order to improve the sustainability of useful transient patterns, sensory homeostasis has been introduced, which results in an increased diversity of motor outputs, thus achieving multiscale exploration. A rhythmic pattern discovered by this process is memorized and sustained by changing the wiring between initially disconnected oscillators using an adaptive synchronization method. Our results show that the novel neurorobotic system is able to create and learn multiple locomotion behaviors for a wide range of body configurations and physical environments and can readapt in real time after sustaining damage.
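The core idea of "chaotic search using adaptive bifurcation" can be illustrated with a one-dimensional toy: a logistic map explores chaotically at a high bifurcation parameter, and once its output meets a goal criterion the parameter is annealed into a regime with a stable fixed point, freezing the discovered behavior. The goal criterion and all constants here are stand-ins, not the paper's actual system:

```python
def explore_and_stabilize(x=0.3, r=3.9, r_stable=2.8, steps=4000):
    """Chaotic exploration with a logistic map; once a (toy) goal
    criterion is met, the bifurcation parameter r is annealed toward
    a regime with a stable fixed point."""
    locked = False
    for _ in range(steps):
        x = r * x * (1 - x)                  # chaotic for r near 3.9
        if not locked and 0.63 <= x <= 0.65:  # stand-in goal criterion
            locked = True                     # pattern found: stop exploring
        if locked:
            r += 0.01 * (r_stable - r)        # adaptive bifurcation: lower r
    return x, r

x_final, r_final = explore_and_stabilize()
```

The chaotic regime serves as the driving force that visits many candidate states; lowering the bifurcation parameter then plays the role of stabilizing onto the state that matched the goal, as the abstract describes for the full neural-body-environment system.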