6 research outputs found

    Human-In-The-Loop Control and Task Learning for Pneumatically Actuated Muscle Based Robots

    Pneumatically actuated muscles (PAMs) provide a low-cost, lightweight, high power-to-weight-ratio solution for many robotic applications. In addition, their antagonist-pair configuration in robotic arms makes them amenable to biologically inspired control approaches. Despite these advantages, they have not been widely adopted in human-in-the-loop control and learning applications. In this study, we propose a biologically inspired multimodal human-in-the-loop control system for driving a one-degree-of-freedom robot, and realize the task of hammering a nail into a wood block under human control. We analyze human sensorimotor learning in this system through a set of experiments, and show that an effective autonomous hammering skill can be readily obtained through the developed human-robot interface. The results indicate that a human-in-the-loop learning setup with an anthropomorphically valid multimodal human-robot interface leads to fast learning, and can thus be used to effectively derive autonomous robot skills for ballistic motor tasks that require modulation of impedance.
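    The abstract does not give the controller's equations. The following minimal sketch illustrates one common simplified view of an antagonist PAM pair, in which the pressure difference sets joint torque and the pressure sum sets co-contraction stiffness; all names, gains, and limits below are illustrative assumptions, not the paper's implementation.

    # Minimal sketch: commanding an antagonist PAM pair.
    # Simplified linearized model (an assumption, not the paper's model):
    #   torque    ~ k_t * (p_flexor - p_extensor)
    #   stiffness ~ k_s * (p_flexor + p_extensor)

    def pam_pressures(torque_des, stiffness_des, k_t=0.05, k_s=0.02,
                      p_min=0.0, p_max=600.0):
        """Map desired joint torque and stiffness to flexor/extensor
        pressures (kPa). Gains k_t, k_s are illustrative placeholders."""
        diff = torque_des / k_t          # pressure difference sets torque
        total = stiffness_des / k_s      # pressure sum sets co-contraction
        p_flex = 0.5 * (total + diff)
        p_ext = 0.5 * (total - diff)
        # Clamp to actuator limits; a real controller would handle saturation.
        clamp = lambda p: max(p_min, min(p_max, p))
        return clamp(p_flex), clamp(p_ext)

    # Example: a stiff, forceful "hammer strike" command vs. a compliant hold.
    print(pam_pressures(torque_des=5.0, stiffness_des=8.0))   # strike
    print(pam_pressures(torque_des=0.5, stiffness_des=2.0))   # compliant hold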

    Dyadic behavior in co-manipulation: from humans to robots

    To both decrease the physical toll on a human worker and increase a robot's perception of its environment, a human-robot dyad may be used to co-manipulate a shared object. Starting from the premise that humans work efficiently together, this work investigates human-human dyads co-manipulating an object. The co-manipulation is evaluated from motion-capture data, surface electromyography (EMG) signals, and custom contact sensors for qualitative performance analysis. A human-human dyadic co-manipulation experiment is designed in which each participant is instructed to behave as a leader, as a follower, or simply to act as naturally as possible. Analysis of the experimental data revealed that humans modulate their arm's mechanical impedance depending on their assigned role during the co-manipulation. To emulate this human behavior in a co-manipulation task, an admittance controller with varying stiffness is presented. The desired stiffness is varied continuously based on a smooth scalar function that assigns a degree of leadership to the robot. The controller is analyzed through simulations, and its stability is established via Lyapunov analysis. The resulting object trajectories closely resemble the patterns seen in the human-human dyad experiment.
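    The abstract names a variable-stiffness admittance controller but not its equations. A plausible minimal sketch is shown below, assuming the standard admittance form M*x_dd + D*x_d + K(alpha)*(x - x_des) = F_human, with stiffness interpolated by a leadership scalar alpha in [0, 1]; all symbols, gains, and the interpolation itself are illustrative assumptions, not the thesis's controller.

    import numpy as np

    # Minimal sketch: 1-D admittance controller whose stiffness is scheduled
    # by a leadership degree alpha in [0, 1] (0 = follower, 1 = leader).

    M, D = 2.0, 15.0                     # virtual mass [kg] and damping [Ns/m]
    K_FOLLOWER, K_LEADER = 50.0, 800.0   # illustrative stiffness bounds [N/m]

    def stiffness(alpha):
        """Smoothly interpolate stiffness from follower to leader behavior."""
        return K_FOLLOWER + (K_LEADER - K_FOLLOWER) * alpha

    def admittance_step(x, x_d, x_des, f_human, alpha, dt=1e-3):
        """One Euler step of the admittance dynamics."""
        k = stiffness(alpha)
        x_dd = (f_human - D * x_d - k * (x - x_des)) / M
        x_d += x_dd * dt
        x += x_d * dt
        return x, x_d

    # Example: the robot yields to the human force when alpha is low ...
    x, x_d = 0.0, 0.0
    for _ in range(1000):
        x, x_d = admittance_step(x, x_d, x_des=0.0, f_human=10.0, alpha=0.1)
    print(f"follower displacement: {x:.3f} m")
    # ... and holds its reference more firmly when alpha is high.
    x, x_d = 0.0, 0.0
    for _ in range(1000):
        x, x_d = admittance_step(x, x_d, x_des=0.0, f_human=10.0, alpha=0.9)
    print(f"leader displacement:   {x:.3f} m")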

    Development of a methodology for the human-robot interaction based on vision systems for collaborative robotics

    The abstract is in the attachment.

    Towards Skill Transfer via Learning-Based Guidance in Human-Robot Interaction

    This thesis presents learning-based guidance (LbG) approaches that aim to transfer skills from human to robot. The approaches capture the temporal and spatial information of human motions and teach the robot to assist humans in collaborative tasks. In such physical human-robot interaction (pHRI) settings, learning from demonstrations (LfD) enables this skill transfer. Demonstrations can be provided through kinesthetic teaching and/or teleoperation. In kinesthetic teaching, humans directly guide the robot's body to perform a task, while in teleoperation, demonstrations can be given through motion/vision-based systems or haptic devices. In this work, the LbG approaches are developed through kinesthetic teaching and teleoperation in both virtual and physical environments. First, the thesis compares and analyzes the capability of two types of statistical models, generative and discriminative, to generate haptic guidance (HG) forces as well as to segment and recognize gestures for pHRI, applicable to virtual minimally invasive surgery (MIS) training. In this learning-based approach, the knowledge and experience of experts are modeled to improve the unpredictable motions of novice trainees. Two statistical models, the hidden Markov model (HMM) and hidden conditional random fields (HCRF), are used to learn gestures from demonstrations in a virtual MIS-related task. The models are developed to automatically recognize and segment gestures as well as to generate guidance forces. In the practice phase, the guidance forces are adaptively computed in real time according to the similarity between the user's motion and the gesture models. Both statistical models can successfully capture the user's gestures and provide adaptive HG; however, the results show the superiority of HCRF, a discriminative method, over HMM, a generative method, in terms of user performance. In addition, LbG approaches are developed for kinesthetic HRI simulations that aim to transfer the skills of expert surgeons to resident trainees. The discriminative nature of HCRF is incorporated into the approach to produce LbG forces and to discriminate the skill levels of users. To experimentally evaluate this kinesthetic approach, a femur-bone drilling simulation is developed in which residents receive haptic feedback based on real computed tomography (CT) data, enabling them to feel the variable stiffness of bone layers. Orthopaedic surgeons must adjust the drilling force because bone layers differ in stiffness. In the learning phase, an expert HCRF model is trained from expert surgeons' demonstrations in the simulation to learn the stiffness variations across bone layers. A novice HCRF model is also built from novice residents' demonstrations to discriminate the skill level of a new trainee. During the practice phase, the learning-based approach, which encodes the stiffness variations, guides trainees to perform the training task with motions similar to the experts'. Finally, in contrast to the other parts of the thesis, an LbG approach is developed through teleoperation in a physical environment. The approach assists operators in navigating a teleoperated robot through a haptic steering wheel and a haptic gas pedal. A set of expert operator demonstrations is used to build a maneuvering-skill model, whose temporal and spatial variations are learned with an HMM. A modified Gaussian mixture regression (GMR), combined with the HMM, is also developed to robustly reproduce the motion. The GMR computes output motions from a joint probability density function of the data rather than directly modeling the regression function. In addition, the distance between the robot and obstacles is incorporated into the impedance control to generate guidance forces that also help operators avoid collisions. Using different forms of variable impedance control, guidance forces are computed in real time according to the similarity between the user's maneuvers and the skill model, encouraging users to navigate the robot like the expert operators. The results show that user performance improves in terms of number of collisions, task completion time, and average distance to obstacles.
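    The abstract describes GMR only at a high level. Below is a minimal sketch of standard Gaussian mixture regression for reference, computing E[y | x] from a joint Gaussian mixture over (x, y); it is a plain textbook version with made-up parameters, not the thesis's modified HMM-coupled variant.

    import numpy as np

    # Minimal sketch of standard Gaussian mixture regression (GMR): given a
    # GMM over joint data z = (x, y), estimate E[y | x] instead of fitting a
    # regression function directly.

    def gmr(x, priors, means, covs, dx):
        """Condition a joint GMM on input x (first dx dims) to predict y.

        priors: (K,) component weights
        means:  (K, dx+dy) component means
        covs:   (K, dx+dy, dx+dy) component covariances
        """
        K = len(priors)
        resp = np.zeros(K)
        cond_means = []
        for k in range(K):
            mu_x, mu_y = means[k][:dx], means[k][dx:]
            S_xx = covs[k][:dx, :dx]
            S_yx = covs[k][dx:, :dx]
            # Responsibility of component k for input x (Gaussian density).
            d = x - mu_x
            norm = np.sqrt((2 * np.pi) ** dx * np.linalg.det(S_xx))
            resp[k] = priors[k] * np.exp(-0.5 * d @ np.linalg.solve(S_xx, d)) / norm
            # Conditional mean of y given x under component k.
            cond_means.append(mu_y + S_yx @ np.linalg.solve(S_xx, d))
        resp /= resp.sum()
        return sum(r * m for r, m in zip(resp, cond_means))

    # Example with two 1-D-input / 1-D-output components (made-up numbers):
    priors = np.array([0.5, 0.5])
    means = np.array([[0.0, 0.0], [2.0, 1.0]])
    covs = np.array([np.eye(2) * 0.5, np.eye(2) * 0.5])
    print(gmr(np.array([1.0]), priors, means, covs, dx=1))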

    Whole-Body Teleoperation of Humanoid Robots

    This thesis aims to investigate systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial for sending and controlling robots in environments that are dangerous or inaccessible for humans (e.g., disaster-response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamic balance while trying to follow the human references. In addition, the human operator needs feedback about the state of the robot and its work site through remote sensors, in order to comprehend the situation or feel physically present at the site and thus produce effective robot behaviors. Complications arise when the communication network is non-ideal: the commands from human to robot, together with the feedback from robot to human, can be delayed. These delays can be very disturbing for the human operator, who then cannot teleoperate the robot avatar effectively.

    Another crucial point when setting up a teleoperation system is the large number of parameters that have to be tuned to control the teleoperated robot effectively. Machine learning approaches and stochastic optimizers can be used to automate the learning of some of these parameters.

    In this thesis, we propose a teleoperation system that has been tested on the humanoid robot iCub. We use an inertial-technology-based motion-capture suit as the input device to control the humanoid, and a virtual-reality headset connected to the robot's cameras for visual feedback. We first translate the human movements into equivalent robot movements by developing a motion-retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. We then implement a whole-body controller to enable the robot to track the retargeted human motion. The controller is subsequently optimized in simulation to achieve good tracking of the whole-body reference movements, using a multi-objective stochastic optimizer, which allows us to find robust solutions that work on the real robot in a few trials.

    To teleoperate walking motions, we implement a higher-level teleoperation mode in which the user sends reference commands to the robot with a joystick. We integrate this setting into the teleoperation system, allowing the user to switch between the two modes.

    A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduce a system in which the humanoid robot executes commands before it actually receives them, so that the visual feedback appears synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands.
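    The delay-compensation scheme is described only qualitatively in the abstract. The sketch below illustrates the general idea under strong simplifying assumptions: commands arrive d steps late, and a learned one-step predictor is rolled forward d times to estimate the present command. The linear autoregressive "model" is a stand-in for the thesis's learned model, and all names and constants are hypothetical.

    from collections import deque
    import numpy as np

    DELAY = 5       # communication delay in control steps (assumed)
    HISTORY = 3     # AR order of the stand-in predictor

    def predict_next(history, coeffs):
        """One-step prediction from the last HISTORY received commands."""
        return float(np.dot(coeffs, history))

    def compensate(received, coeffs, delay=DELAY):
        """Roll the predictor forward `delay` steps past the newest
        received (stale) command to estimate the present command."""
        window = deque(received[-HISTORY:], maxlen=HISTORY)
        estimate = window[-1]
        for _ in range(delay):
            estimate = predict_next(np.array(window), coeffs)
            window.append(estimate)
        return estimate

    # Example: commands follow a ramp; an AR model that extrapolates
    # linearly (2*y[t-1] - y[t-2]) recovers the undelayed value.
    coeffs = np.array([0.0, -1.0, 2.0])      # weights for y[t-3..t-1]
    stale = [0.1 * t for t in range(20)]     # received (delayed) ramp
    print(compensate(stale, coeffs))         # ~ 0.1 * 24 = 2.4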