
    Feedback Control of an Exoskeleton for Paraplegics: Toward Robustly Stable Hands-free Dynamic Walking

    This manuscript presents the control of a high-DOF, fully actuated lower-limb exoskeleton for paraplegic individuals. The key novelty is the ability for the user to walk without crutches or other external means of stabilization. We harness the power of modern optimization techniques and supervised machine learning to develop a smooth feedback control policy that provides robust velocity regulation and perturbation rejection. A preliminary evaluation of the stability and robustness of the proposed approach is demonstrated in the Gazebo simulation environment. In addition, preliminary experimental results with (complete) paraplegic individuals are included for the previous version of the controller.
    Comment: Submitted to IEEE Control Systems Magazine. This version addresses reviewers' concerns about the robustness of the algorithm and the motivation for using such an exoskeleton.
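    The combination described above (offline trajectory optimization followed by supervised learning of a smooth feedback policy) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the state/torque data, dimensions, and the linear ridge-regression policy are placeholders, not the controller from the paper.

```python
# Hypothetical sketch: fit a smooth feedback policy u = pi(x) by supervised
# regression on state/torque pairs sampled from offline-optimized gaits.
# All names, dimensions, and data here are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "gait library": states x (e.g. joint angles/velocities, velocity
# error) and corresponding torques u from trajectory optimization.
n_samples, n_states, n_actuators = 500, 12, 6
X = rng.standard_normal((n_samples, n_states))          # stand-in for optimized states
U = X @ rng.standard_normal((n_states, n_actuators))    # stand-in for optimal torques

# Ridge-regularized least squares gives a smooth (linear) feedback gain K,
# so the online policy is simply u = x @ K.
lam = 1e-3
K = np.linalg.solve(X.T @ X + lam * np.eye(n_states), X.T @ U)

def policy(x):
    """Smooth feedback policy evaluated at the current state x."""
    return x @ K

# Online use: query the learned policy at each control step.
u_cmd = policy(rng.standard_normal(n_states))
print(u_cmd.shape)  # (6,) torque command for the placeholder actuators
```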

    Grasping Strategy and Control Algorithm of Two Robotic Fingers Equipped with Optical Three-Axis Tactile Sensors

    This paper presents a grasping strategy for robot fingers based on tactile information acquired by an optical three-axis tactile sensor. We developed a novel optical three-axis tactile sensor system, based on an optical waveguide transduction method, capable of acquiring normal and shearing forces. The sensors are mounted on the fingertips of two robotic fingers. To enhance the ability to recognize and manipulate objects, we designed a robot control system architecture comprising a connection module, thinking routines, and a hand/finger control module. We proposed a tactile-sensing-based control algorithm in the robot finger control system that controls fingertip movements by defining an optimum grasp pressure and performing a re-push movement when slippage is detected. Verification experiments revealed that the finger system managed to recognize the stiffness of unknown objects and complied with sudden changes in an object's weight during manipulation tasks.
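    The slip-triggered re-push behaviour can be sketched as a simple force regulator. This is a hypothetical illustration, not the authors' algorithm: the TactileReading interface, friction estimate, grip setpoint, and gains are all assumed values.

```python
# Hypothetical sketch of the slip-triggered "re-push" idea: monitor the
# shear/normal force ratio from a three-axis tactile reading and increase
# the commanded grip force when incipient slip is detected. Thresholds,
# gains, and the sensor interface are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TactileReading:
    normal: float   # normal force [N]
    shear: float    # resultant shear force [N]

MU_ESTIMATE = 0.6        # assumed friction coefficient
OPTIMUM_GRIP = 2.0       # assumed "optimum grasp pressure" setpoint [N]
REPUSH_STEP = 0.5        # grip increment on detected slip [N]

def grip_command(reading: TactileReading, current_cmd: float) -> float:
    """Return the next grip-force command for one fingertip."""
    if reading.normal > 1e-6 and reading.shear / reading.normal > MU_ESTIMATE:
        # Incipient slip: re-push by raising the grip force.
        return current_cmd + REPUSH_STEP
    # No slip: relax toward the optimum grasp pressure.
    return current_cmd + 0.1 * (OPTIMUM_GRIP - current_cmd)

# Example: a sudden weight change raises the shear force, triggering a re-push.
cmd = OPTIMUM_GRIP
cmd = grip_command(TactileReading(normal=2.0, shear=1.6), cmd)
print(cmd)  # 2.5 -> grip increased after slip was detected
```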

    Advances in Bio-Inspired Robots

    This book covers three major topics: Biomimetic Robot Design, Mechanical System Design from Bio-Inspiration, and Bio-Inspired Analysis on a Mechanical System. The Biomimetic Robot Design part introduces research on flexible jumping robots, snake robots, and small flying robots, while the Mechanical System Design from Bio-Inspiration part introduces a bio-inspired divide-and-conquer design methodology, a modular cable-driven human-like robotic arm, and a wall-climbing robot. Finally, the Bio-Inspired Analysis on a Mechanical System part introduces research on the control strategy of a surgical assistant robot, the modeling of an underwater thruster, and the optimization of a humanoid robot.

    Robust and Versatile Humanoid Locomotion Based on Analytical Control and Residual Physics

    Humanoid robots are made to resemble humans, but their locomotion abilities are far from ours in terms of agility and versatility. When humans walk on complex terrains or face external disturbances, they combine a set of strategies, unconsciously and efficiently, to regain stability. This thesis tackles the problem of developing a robust omnidirectional walking framework that is able to generate versatile and agile locomotion on complex terrains. We designed and developed model-based and model-free walk engines, formulated the controllers using different approaches, including classical and optimal control schemes, and validated their performance through simulations and experiments. These frameworks have hierarchical structures composed of several layers, each made up of modules that are connected together to reduce the complexity and increase the flexibility of the proposed frameworks. Additionally, they can be easily and quickly deployed on different platforms. We believe that using machine learning on top of analytical approaches is a key to opening the door for humanoid robots to step out of laboratories. We proposed a tight coupling between analytical control and deep reinforcement learning: we augmented our analytical controller with reinforcement learning modules that learn to regulate the walk engine parameters (planners and controllers) adaptively and to generate residuals that adjust the robot's target joint positions (residual physics). The effectiveness of the proposed frameworks was demonstrated and evaluated across a set of challenging simulation scenarios. The robot was able to generalize what it learned in one scenario, displaying human-like locomotion skills in unforeseen circumstances, even in the presence of noise and external pushes.
    Programa Doutoral em Informática
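    The residual-physics coupling described above can be sketched as an analytical walk engine whose parameters and joint targets are corrected by a learned policy. The engine model, policy, and dimensions below are illustrative assumptions rather than the thesis implementation.

```python
# Hypothetical sketch of the residual-physics coupling: an analytical walk
# engine proposes joint targets, and a learned policy both retunes engine
# parameters and adds small joint-space residuals. The engine, policy, and
# dimensions are illustrative placeholders.
import numpy as np

N_JOINTS = 12

def analytical_walk_engine(t: float, step_length: float, step_time: float) -> np.ndarray:
    """Placeholder model-based engine returning target joint positions."""
    phase = 2.0 * np.pi * t / step_time
    return step_length * np.sin(phase + np.linspace(0.0, np.pi, N_JOINTS))

def learned_policy(observation: np.ndarray):
    """Stand-in for the RL policy: parameter adjustments + joint residuals."""
    param_adjust = 0.01 * np.tanh(observation[:2])       # tweak step length/time
    residuals = 0.02 * np.tanh(observation[:N_JOINTS])   # small joint corrections
    return param_adjust, residuals

def control_step(t, observation, step_length=0.04, step_time=0.5):
    d_params, residuals = learned_policy(observation)
    step_length += d_params[0]
    step_time += d_params[1]
    targets = analytical_walk_engine(t, step_length, step_time)
    return targets + residuals   # residual physics: analytic targets + learned correction

obs = np.zeros(N_JOINTS)
print(control_step(0.1, obs).shape)  # (12,) adjusted joint targets
```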

    Muecas: a multi-sensor robotic head for affective human robot interaction and imitation

    This paper presents a multi-sensor humanoid robotic head for human-robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language, and facial expressions. The robotic head has 12 degrees of freedom in a human-like configuration, including eyes, eyebrows, mouth, and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third-party platforms and encourages the development of imitation and goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify, and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions.
    Funding: Ministerio de Ciencia e Innovación, project TIN2012-38079-C03-1; Gobierno de Extremadura, project GR10144.
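    As a rough sketch of FACS-driven actuation, the snippet below maps Action Unit intensities to joint targets through a fixed gain table. The joint names and AU-to-joint gains are invented for illustration and are not Muecas' actual kinematic mapping.

```python
# Hypothetical sketch of driving a FACS-controllable head: map Action Unit
# (AU) intensities to joint targets through a fixed gain table. The mapping
# and joint names are illustrative assumptions.
import numpy as np

JOINTS = ["brow_left", "brow_right", "mouth_open", "mouth_corner_l", "mouth_corner_r"]

# Rows: joints; columns: AU1 (inner brow raiser), AU12 (lip corner puller),
# AU26 (jaw drop). Values are assumed gains in [rad per AU intensity unit].
AU_TO_JOINT = np.array([
    [0.15, 0.00, 0.00],
    [0.15, 0.00, 0.00],
    [0.00, 0.00, 0.30],
    [0.00, 0.20, 0.00],
    [0.00, 0.20, 0.00],
])

def facs_to_joint_targets(au_intensities: np.ndarray) -> dict:
    """Convert AU intensities (0..5 scale) to joint position targets."""
    targets = AU_TO_JOINT @ au_intensities
    return dict(zip(JOINTS, targets))

# Example: a smile-like expression (AU12 at intensity 3, AU26 at intensity 1).
print(facs_to_joint_targets(np.array([0.0, 3.0, 1.0])))
```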

    Human Motion Transfer to a Humanoid Robot

    The aim of this thesis is to transfer human motion to a humanoid robot online. In the first part of this work, human motion recorded by a motion capture system is analyzed to extract salient features that are to be transferred to the humanoid robot. We introduce the humanoid normalized model as the set of motion properties. In the second part, the robot motion that includes these human motion features is computed using inverse kinematics with priority. In order to transfer the motion properties, a stack of tasks is predefined: each motion property in the humanoid normalized model corresponds to one target in the stack of tasks. We propose a framework to transfer human motion online, as close as possible to the human performance, for the upper body. Finally, we study the problem of transferring feet motion. Here the feet motion is analyzed to extract Euclidean trajectories adapted to the robot. Moreover, the trajectory of the center of mass that ensures the robot does not fall is computed from the feet positions and the robot's inverted pendulum model. Using this result, it is possible to achieve complete imitation, including both upper-body and feet motion.
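    Inverse kinematics with priority is commonly implemented by projecting lower-priority tasks into the nullspace of higher-priority ones; the sketch below shows one velocity-level step of that standard scheme with placeholder Jacobians and task errors, not the thesis' actual stack of tasks.

```python
# Hypothetical sketch of two-level prioritized inverse kinematics (a "stack
# of tasks"): the secondary task is projected into the nullspace of the
# primary task so it cannot disturb it. Jacobians and task errors here are
# random placeholders standing in for, e.g., hand and gaze tasks.
import numpy as np

def damped_pinv(J: np.ndarray, damping: float = 1e-4) -> np.ndarray:
    """Damped least-squares pseudoinverse for numerical robustness."""
    return J.T @ np.linalg.inv(J @ J.T + damping * np.eye(J.shape[0]))

def prioritized_ik_step(J1, e1, J2, e2):
    """One velocity-level IK step with task 1 strictly prioritized over task 2."""
    J1_pinv = damped_pinv(J1)
    dq1 = J1_pinv @ e1                                   # solve the primary task
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1              # nullspace projector of task 1
    dq2 = damped_pinv(J2 @ N1) @ (e2 - J2 @ dq1)         # secondary task in that nullspace
    return dq1 + N1 @ dq2

rng = np.random.default_rng(1)
n_dof = 20
J_hand, e_hand = rng.standard_normal((3, n_dof)), rng.standard_normal(3)
J_gaze, e_gaze = rng.standard_normal((2, n_dof)), rng.standard_normal(2)
dq = prioritized_ik_step(J_hand, e_hand, J_gaze, e_gaze)
print(np.allclose(J_hand @ dq, e_hand, atol=1e-2))  # primary task is (approximately) met
```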