New control strategies for neuroprosthetic systems
The availability of techniques to artificially excite paralyzed muscles opens enormous potential for restoring both upper- and lower-extremity movements with neuroprostheses. Neuroprostheses must stimulate muscle and must control and regulate the artificial movements produced. Control methods to accomplish these tasks include feedforward (open-loop), feedback, and adaptive control. Feedforward control requires a great deal of information about the biomechanical behavior of the limb. For the upper extremity, an artificial motor program was developed to provide such movement-program input to a neuroprosthesis. In lower-extremity control, one group achieved its best results by attempting to meet naturally perceived gait objectives rather than to follow an exact joint-angle trajectory. Adaptive feedforward control, as implemented in the cycle-to-cycle controller, compensated well for the gradual decrease in performance observed with open-loop control. A neural network controller was able to customize stimulation parameters to generate a desired output trajectory in a given individual and to maintain tracking performance in the presence of muscle fatigue. The authors believe that practical FNS control systems must exhibit many of these features of neurophysiological systems.
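The cycle-to-cycle idea above can be illustrated with a minimal sketch: a scalar stimulation parameter is corrected once per movement cycle from the previous cycle's tracking error, compensating a slowly fatiguing muscle. The plant model, gain, and fatigue rate here are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of adaptive cycle-to-cycle control, assuming a scalar
# stimulation parameter u (e.g. pulse width) updated once per movement cycle.
# The fatiguing-muscle model and all numbers are illustrative.

class FatiguingMuscle:
    """Toy plant whose output per unit stimulation decays each cycle."""
    def __init__(self, effectiveness=1.0, fatigue=0.995):
        self.effectiveness = effectiveness
        self.fatigue = fatigue

    def cycle(self, u):
        y = self.effectiveness * u          # output of this movement cycle
        self.effectiveness *= self.fatigue  # muscle fatigues slightly
        return y

def cycle_to_cycle(y_ref, muscle, u0=0.0, gain=0.5, n_cycles=200):
    """Adjust next cycle's stimulation from the previous cycle's error."""
    u, y = u0, 0.0
    for _ in range(n_cycles):
        y = muscle.cycle(u)
        u = u + gain * (y_ref - y)  # cycle-to-cycle correction
    return u, y
```

Despite the per-cycle loss of effectiveness, the correction keeps the output near the reference, which is the behavior the abstract attributes to the cycle-to-cycle controller.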
Robust and versatile humanoid locomotion based on analytical control and residual physics
Humanoid robots are made to resemble humans but their locomotion
abilities are far from ours in terms of agility and versatility. When humans
walk on complex terrains or face external disturbances, they
combine a set of strategies, unconsciously and efficiently, to regain
stability. This thesis tackles the problem of developing a robust omnidirectional
walking framework, which is able to generate versatile
and agile locomotion on complex terrains. We designed and developed
model-based and model-free walk engines and formulated the
controllers using different approaches including classical and optimal
control schemes and validated their performance through simulations
and experiments. These frameworks have hierarchical structures composed of several
layers; each layer consists of modules connected together to reduce complexity
and increase the flexibility of the proposed frameworks. Additionally, they
can be easily and quickly deployed on different platforms.
We believe that using machine learning on top of analytical approaches
is key to enabling humanoid robots to step out of the laboratory.
We proposed a tight coupling between analytical control and
deep reinforcement learning. We augmented our analytical controller
with reinforcement learning modules to learn how to regulate the walk
engine parameters (planners and controllers) adaptively and generate
residuals to adjust the robot’s target joint positions (residual physics).
The effectiveness of the proposed frameworks was demonstrated and
evaluated across a set of challenging simulation scenarios. The robot
was able to generalize what it learned in one scenario, by displaying
human-like locomotion skills in unforeseen circumstances, even in the
presence of noise and external pushes.
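The residual-physics coupling can be pictured with a minimal sketch: the learned policy's output is scaled down and added as a correction to the analytical walk engine's joint targets. The names and the scaling factor are illustrative, not the thesis implementation.

```python
import numpy as np

def control_step(state, walk_engine, policy, residual_scale=0.05):
    """Combine analytical joint targets with a bounded learned residual.

    walk_engine: analytical controller mapping state -> joint targets
    policy: learned module mapping state -> residual (assumed roughly in [-1, 1])
    residual_scale: keeps the correction small, so the analytical gait dominates
    """
    q_analytic = np.asarray(walk_engine(state), dtype=float)
    residual = residual_scale * np.asarray(policy(state), dtype=float)
    return q_analytic + residual
```

Keeping the residual small is what lets the analytical controller provide a safe baseline gait while the learned module only nudges the joint targets.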
A Reactive and Efficient Walking Pattern Generator for Robust Bipedal Locomotion
The main options for preventing a biped robot from falling in the
presence of severe disturbances are Center of Pressure (CoP) modulation,
step location and timing adjustment, and angular momentum regulation. In this
paper, we aim to design a walking pattern generator that employs an optimal
combination of these tools to generate robust gaits. In this approach, first,
the next step location and timing are decided consistent with the commanded
walking velocity and based on the Divergent Component of Motion (DCM)
measurement. This stage, solved by a very small Quadratic Program
(QP), uses the Linear Inverted Pendulum Model (LIPM) dynamics to adapt the
switching contact location and time. Then, consistent with the first stage, the
LIPM with flywheel dynamics is used to regenerate the DCM and angular momentum
trajectories at each control cycle. This is done by modulating the CoP and
Centroidal Momentum Pivot (CMP) to realize a desired DCM at the end of the current
step. Simulation results show the merit of this reactive approach in generating
robust and dynamically consistent walking patterns.
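The first stage can be sketched in one dimension: under the LIPM, the DCM diverges from the CoP p as xi(t) = p + (xi0 - p) * exp(omega * t), and the next footstep trades off staying near a nominal footstep against achieving a desired DCM offset at touchdown. With a single decision variable the QP has a closed-form solution; the weights and nominal values below are illustrative, and the paper's QP also adapts step timing, which this sketch fixes.

```python
import numpy as np

# 1-D LIPM/DCM step adjustment. Under the LIPM the DCM evolves as
#   xi(t) = p + (xi0 - p) * exp(omega * t),  omega = sqrt(g / z_com),
# so the DCM at the nominal switching time is known in closed form.

G, Z_COM = 9.81, 0.8
OMEGA = np.sqrt(G / Z_COM)

def adapt_step(xi0, p, u_nom, T_nom, b_nom, w_u=1.0, w_b=10.0):
    """Choose the next footstep u minimizing
        w_u * (u - u_nom)**2 + w_b * (b - b_nom)**2,
    where b = xi(T_nom) - u is the DCM offset at touchdown.
    Weights and nominal values are illustrative, not from the paper."""
    xi_T = p + (xi0 - p) * np.exp(OMEGA * T_nom)  # DCM at nominal switch time
    # The cost is quadratic in u; setting its derivative to zero gives:
    u = (w_u * u_nom + w_b * (xi_T - b_nom)) / (w_u + w_b)
    return u, xi_T - u  # footstep and resulting DCM offset
```

Raising w_b pulls the footstep toward the position that exactly realizes the desired DCM offset (robustness), while raising w_u keeps the step close to the planned location (gait regularity).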
Implementation and Integration of Fuzzy Algorithms for Descending Stair of KMEI Humanoid Robot
The locomotion of a humanoid robot depends on its mechanical characteristics. An integrated control system for walking down stairs is proposed for a humanoid robot. The stair-descent trajectory is computed from the step-length constraints of the stairs using a fuzzy algorithm. Dynamic balance based on the zero-moment point is maintained by treating the gait as consisting of a single-support phase and a double-support phase; a smooth walking cycle requires the transition from single support to double support. To accomplish this, motion and control are divided into two parts: motion planned offline, and the controller generating the walking gait online. Defects arising during locomotion are corrected directly by the fuzzy logic controller. This paper verifies stair descent by the KMEI humanoid robot in both simulation and experiment.
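The online fuzzy correction can be illustrated minimally: a single-input rule base with triangular membership functions maps a balance error (e.g. torso pitch during descent) to a small correction, defuzzified as a weighted average of rule outputs. The membership breakpoints and output values below are illustrative assumptions, not the KMEI controller's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Illustrative single-input rule base: (membership over pitch error, output).
RULES = [
    (lambda e: tri(e, -0.4, -0.2, 0.0), -0.1),  # leaning one way: correct opposite
    (lambda e: tri(e, -0.2,  0.0, 0.2),  0.0),  # balanced: no correction
    (lambda e: tri(e,  0.0,  0.2, 0.4),  0.1),  # leaning other way: correct back
]

def fuzzy_pitch_correction(pitch_error):
    """Sugeno-style defuzzification: weighted average of rule outputs."""
    weights = [mu(pitch_error) for mu, _ in RULES]
    total = sum(weights)
    if total == 0.0:
        return 0.0  # error outside the rule base's support
    return sum(w * out for w, (_, out) in zip(weights, RULES)) / total
```

Overlapping triangular memberships make the correction vary smoothly between rules, which is what lets a fuzzy controller correct gait defects online without a precise dynamic model.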
FC Portugal 3D Simulation Team: Team Description Paper 2020
The FC Portugal 3D team is developed upon the structure of our previous
Simulation league 2D/3D teams and our standard platform league team. Our
research concerning the robot low-level skills is focused on developing
behaviors that may be applied on real robots with minimal adaptation using
model-based approaches. Our research on high-level soccer coordination
methodologies and team playing is mainly focused on the adaptation of
previously developed methodologies from our 2D soccer teams to the 3D humanoid
environment and on creating new coordination methodologies based on the
previously developed ones. The research-oriented development of our team has
been pushing it to be one of the most competitive over the years (World
champion in 2000 and Coach Champion in 2002, European champion in 2000 and
2001, Coach 2nd place in 2003 and 2004, European champion in Rescue Simulation
and Simulation 3D in 2006, World Champion in Simulation 3D in Bremen 2006 and
European champion in 2007, 2012, 2013, 2014 and 2015). This paper describes
some of the main innovations of our 3D simulation league team during the last
years. A new generic framework for reinforcement learning tasks has also been
developed. The current research is focused on improving the above-mentioned
framework by developing new learning algorithms to optimize low-level skills,
such as running and sprinting. We are also working to increase student
engagement by providing reinforcement learning assignments to be completed using
our new framework, which exposes a simple interface without revealing low-level
implementation details.
- …