Robust and Versatile Humanoid Locomotion Based on Analytical Control and Residual Physics
Humanoid robots are made to resemble humans but their locomotion
abilities are far from ours in terms of agility and versatility. When humans
walk on complex terrains or face external disturbances, they
combine a set of strategies, unconsciously and efficiently, to regain
stability. This thesis tackles the problem of developing a robust omnidirectional
walking framework, which is able to generate versatile
and agile locomotion on complex terrains. We designed and developed
model-based and model-free walk engines and formulated the
controllers using different approaches including classical and optimal
control schemes and validated their performance through simulations
and experiments. These frameworks have hierarchical structures composed of several
layers, each built from interconnected modules that reduce complexity and
increase the flexibility of the proposed frameworks. Additionally, they
can be easily and quickly deployed on different platforms.
We also believe that using machine learning on top of analytical approaches
is key to enabling humanoid robots to step out of the laboratory.
We proposed a tight coupling between analytical control and
deep reinforcement learning. We augmented our analytical controller
with reinforcement learning modules to learn how to regulate the walk
engine parameters (planners and controllers) adaptively and generate
residuals to adjust the robot’s target joint positions (residual physics).
The effectiveness of the proposed frameworks was demonstrated and
evaluated across a set of challenging simulation scenarios. The robot
was able to generalize what it learned in one scenario by displaying
human-like locomotion skills in unforeseen circumstances, even in the
presence of noise and external pushes.
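The residual-physics coupling described above can be sketched as follows: an analytical walk engine produces nominal joint targets, and a learned policy contributes bounded residual corrections on top of them. All names, dimensions, gains, and the zero-output placeholder policy below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def analytic_walk_engine(phase, n_joints=6):
    """Toy stand-in for an analytical gait generator: nominal joint
    targets as a function of the gait phase (purely illustrative)."""
    return 0.3 * np.sin(2 * np.pi * phase + np.linspace(0, np.pi, n_joints))

def residual_policy(observation, n_joints=6):
    """Placeholder for a trained deep RL policy; here it returns zeros.
    In the thesis a learned network outputs these residuals."""
    return np.zeros(n_joints)

def joint_targets(phase, observation, residual_scale=0.05):
    """Residual physics: final targets = analytical targets + small,
    clipped learned residuals."""
    nominal = analytic_walk_engine(phase)
    residual = np.clip(residual_policy(observation), -1.0, 1.0) * residual_scale
    return nominal + residual

targets = joint_targets(phase=0.25, observation=np.zeros(10))
```

Clipping and scaling the residuals keeps the learned correction a small perturbation of the analytical solution, which is what makes this kind of coupling stable to train.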
A Modular Framework to Generate Robust Biped Locomotion: From Planning to Control
Biped robots are inherently unstable because of their complex kinematics and
dynamics. Despite many research efforts, the performance of biped locomotion
still falls far short of expectations. This paper proposes a model-based
framework to generate stable biped locomotion. The core of this framework is an
abstract dynamics model composed of three masses, which captures the dynamics
of the stance leg, the torso, and the swing leg to minimize tracking errors.
Based on this model, we propose a modular planner for the walking reference
trajectories that takes obstacles into account. Moreover, the same dynamics
model is used to formulate the controller as a Model Predictive Control (MPC)
scheme that can handle constraints on the system's states, inputs, outputs, and
mixed input-output terms. The performance and robustness of the proposed
framework are validated through several numerical simulations in MATLAB.
The framework is also deployed on a simulated torque-controlled humanoid to
verify its performance and robustness. The simulation results show that the
proposed framework is capable of generating biped locomotion robustly.
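As a rough illustration of this kind of MPC formulation, here is a condensed, unconstrained receding-horizon ZMP-tracking controller. It uses the classic single-mass cart-table model rather than the paper's three-mass model, and omits the constraints the paper handles; all numeric values are assumptions.

```python
import numpy as np

# Cart-table model: state x = [c, dc, ddc] (CoM position, velocity,
# acceleration), input u = CoM jerk, output z = ZMP.
dt, h, g, N = 0.05, 0.8, 9.81, 30   # step, CoM height, gravity, horizon
A = np.array([[1.0, dt, dt**2 / 2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
B = np.array([[dt**3 / 6], [dt**2 / 2], [dt]])
C = np.array([[1.0, 0.0, -h / g]])  # ZMP as a function of the state

# Condensed prediction over the horizon: Z = Px @ x0 + Pu @ U.
Px = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
Pu = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        Pu[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

def mpc_step(x0, z_ref, w_jerk=1e-6):
    """One unconstrained MPC step: minimize ||Z - z_ref||^2 + w_jerk*||U||^2
    and apply only the first jerk (receding horizon)."""
    H = Pu.T @ Pu + w_jerk * np.eye(N)
    U = np.linalg.solve(H, Pu.T @ (z_ref - Px @ x0))
    return U[0]

x = np.zeros(3)
z_ref = 0.1 * np.ones(N)            # constant ZMP reference at 0.1 m
for _ in range(100):
    x = A @ x + (B * mpc_step(x, z_ref)).ravel()
```

With state, input, or output constraints added, the same quadratic cost becomes a QP, which is how MPC schemes of this kind are usually solved in practice.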
Learning hybrid locomotion skills—Learn to exploit residual actions and modulate model-based gait control
This work develops a hybrid framework that combines machine learning and control approaches for legged robots, achieving new capabilities for balancing against external perturbations. The framework embeds a kernel: a model-based, fully parametric, closed-loop analytical controller that serves as the gait pattern generator. On top of that, a neural network with symmetric partial data augmentation learns to automatically adjust the parameters of the gait kernel and to generate compensatory actions for all joints, significantly augmenting stability under unexpected perturbations. Seven neural network policies with different configurations were optimized to validate the effectiveness of combining modulation of the kernel parameters with compensation of the arms and legs through residual actions. The results show that modulating the kernel parameters alongside the residual actions significantly improves stability. The performance of the proposed framework was evaluated across a set of challenging simulated scenarios and demonstrated considerable improvements over the baseline in recovering from large external forces (up to 118%). Its robustness to measurement noise and model inaccuracies was also assessed in simulation. Finally, the trained policies were validated across a set of unseen scenarios and generalized to dynamic walking.
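The symmetric data augmentation mentioned above can be illustrated with a toy sketch: each (observation, action) sample is mirrored across the robot's sagittal plane and appended to the training batch. The state and action layouts below are hypothetical, chosen only to make the mirroring concrete.

```python
import numpy as np

# Hypothetical layouts:
#   observation = [roll, pitch, l_hip, l_knee, r_hip, r_knee]
#   action      = [l_hip, l_knee, r_hip, r_knee] residuals
OBS_SIGN = np.array([-1.0, 1.0, 1.0, 1.0, 1.0, 1.0])  # roll flips under mirroring
OBS_SWAP = [0, 1, 4, 5, 2, 3]                          # swap left/right joints
ACT_SWAP = [2, 3, 0, 1]

def mirror_obs(obs):
    """Reflect an observation across the sagittal plane: flip sign-odd
    terms, then swap left and right joint slots."""
    return (np.asarray(obs) * OBS_SIGN)[OBS_SWAP]

def mirror_act(act):
    return np.asarray(act)[ACT_SWAP]

def augment(batch_obs, batch_act):
    """Symmetric data augmentation: append the mirrored copy of every
    sample, doubling the dataset while enforcing gait symmetry."""
    obs_m = np.array([mirror_obs(o) for o in batch_obs])
    act_m = np.array([mirror_act(a) for a in batch_act])
    return np.vstack([batch_obs, obs_m]), np.vstack([batch_act, act_m])
```

Mirroring is an involution (applying it twice recovers the original sample), which is a cheap sanity check when defining the sign and swap tables for a real robot.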
Disturbance rejection for legged robots through a hybrid observer
A legged robot needs to move through unstructured environments while continuously subject to disturbances. Existing disturbance observers are insufficient when significant forces act on both the center of mass and the robot's legs, and they usually rely on indirect measures of the floating base's velocity. This paper presents a solution combining a momentum-based observer for the angular term with an acceleration-based observer for the translational one, employing directly measurable values from the sensors. Because of this combination, we call the observer "hybrid"; it can detect disturbances acting on both the legged robot's center of mass and its legs. The estimate is fed into a whole-body controller. The framework is tested in simulation on a quadruped robot subject to significant disturbances and is compared with existing observer-based techniques.
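A stripped-down sketch of the hybrid idea, reduced to a single rigid body: a momentum-residual observer estimates the external torque, while the external force is read off directly from measured (gravity-compensated) base acceleration and summed contact forces. Class name, gains, and signal names are assumptions, not the paper's implementation.

```python
import numpy as np

class HybridObserver:
    """Momentum-based angular estimate + acceleration-based translational
    estimate, on a single-rigid-body approximation (illustrative only)."""

    def __init__(self, mass, inertia, gain, dt):
        self.m, self.I, self.K, self.dt = mass, inertia, gain, dt
        self.integral = np.zeros(3)   # integral of applied + estimated torques
        self.tau_hat = np.zeros(3)    # estimated external torque

    def update(self, omega, tau_applied, accel_meas, grf_sum):
        # Angular part: residual observer on angular momentum L = I @ omega.
        # tau_hat converges to the external torque with bandwidth ~ gain.
        L = self.I @ omega
        self.integral += (tau_applied + self.tau_hat) * self.dt
        self.tau_hat = self.K * (L - self.integral)
        # Translational part: directly measurable quantities, as the paper
        # advocates: gravity-compensated base acceleration and summed
        # ground reaction forces.
        f_hat = self.m * accel_meas - grf_sum
        return self.tau_hat, f_hat
```

The angular estimate behaves as a first-order filter of the true external torque, so the gain trades estimation speed against noise amplification.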
Whole-body control with disturbance rejection through a momentum-based observer for quadruped robots
This paper presents a momentum-based estimator of external disturbances for legged robots. The estimator, together with a motion planner for the trajectory of the robot's center of mass and an optimization problem based on modulating the ground reaction forces, forms a whole-body controller for the robot. The designed solution is tested on a quadruped robot in a dynamic simulation environment, where the quadruped is stressed by external disturbances acting on stance and swing legs alike. The proposed approach is also evaluated through a comparison with two state-of-the-art solutions.
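The ground-reaction-force modulation step can be caricatured as a regularized least-squares problem: find stance-foot forces whose net wrench about the CoM matches a desired one. The paper's actual optimization (with, e.g., friction-cone and unilaterality constraints) is richer; this is only a sketch with made-up numbers.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ f == np.cross(v, f)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def distribute_grf(foot_positions, f_des, tau_des, reg=1e-3):
    """Regularized least squares: stack force balance (sum f_i = f_des)
    and moment balance (sum p_i x f_i = tau_des), with p_i the foot
    positions relative to the CoM, then solve for per-foot forces."""
    n = len(foot_positions)
    A = np.zeros((6, 3 * n))
    for i, p in enumerate(foot_positions):
        A[0:3, 3 * i:3 * i + 3] = np.eye(3)   # force balance rows
        A[3:6, 3 * i:3 * i + 3] = skew(p)     # moment balance rows
    b = np.concatenate([f_des, tau_des])
    F = np.linalg.solve(A.T @ A + reg * np.eye(3 * n), A.T @ b)
    return F.reshape(n, 3)
```

For a symmetric four-leg stance and a purely vertical desired force, the minimizer spreads the load evenly across the feet, which matches physical intuition and is a useful sanity check before adding inequality constraints.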
Learning-based methods for planning and control of humanoid robots
Humans and robots are increasingly likely to coexist. The anthropomorphic nature of humanoid robots facilitates physical human-robot interaction and makes social human-robot interaction more natural. It also makes humanoids ideal candidates for many applications involving tasks and environments designed for humans.
Whatever the application, a ubiquitous requirement for a humanoid is proper locomotion skills. Despite long-standing research, humanoid locomotion is still far from a trivial task. A common approach decomposes its complexity by means of a model-based hierarchical control architecture; to cope with computational constraints, simplified models of the humanoid are employed in some of the architectural layers. At the same time, the redundancy of the humanoid with respect to the locomotion task, as well as the closeness of that task to human locomotion, suggests a data-driven approach that learns it directly from experience.
This thesis investigates the application of learning-based techniques to planning and control of humanoid locomotion. In particular, both deep reinforcement learning and deep supervised learning are considered to address humanoid locomotion tasks in a crescendo of complexity.
First, we employ deep reinforcement learning to study the spontaneous emergence of balancing and push recovery strategies for the humanoid, which represent essential prerequisites for more complex locomotion tasks.
Then, by making use of motion capture data collected from human subjects, we employ deep supervised learning to shape the robot walking trajectories towards an improved human-likeness.
The proposed approaches are validated on real and simulated humanoid robots, specifically on two versions of the iCub humanoid: iCub v2.7 and iCub v3.
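The supervised-learning step described above, shaping walking trajectories toward human-likeness from motion capture data, can be illustrated minimally: fit a phase-indexed trajectory model to (here synthetic) mocap joint angles. The thesis uses real human data and a deep network; this least-squares Fourier fit is only a sketch, and the knee profile below is fabricated for illustration.

```python
import numpy as np

# Toy stand-in for mocap data: a fake "human" knee-angle profile
# indexed by gait phase in [0, 1).
phase = np.linspace(0.0, 1.0, 200)
mocap_knee = 0.6 * np.sin(2 * np.pi * phase) + 0.1

def features(p):
    """Truncated Fourier basis in the gait phase."""
    p = np.atleast_1d(p)
    return np.column_stack([np.ones_like(p),
                            np.sin(2 * np.pi * p), np.cos(2 * np.pi * p),
                            np.sin(4 * np.pi * p), np.cos(4 * np.pi * p)])

# Least-squares fit of the basis weights to the mocap profile.
w, *_ = np.linalg.lstsq(features(phase), mocap_knee, rcond=None)

def humanlike_knee(p):
    """Predicted human-like knee trajectory at gait phase p."""
    return features(p) @ w
```

A deep network replaces the fixed Fourier basis when the mapping must also condition on walking speed, direction, or robot state, but the supervised objective, regressing human reference trajectories, is the same.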