
    Enabling Human-Robot Collaboration via Holistic Human Perception and Partner-Aware Control

    As robotic technology advances, the barriers to the coexistence of humans and robots are slowly coming down. Application domains like elderly care, collaborative manufacturing, and collaborative manipulation are considered the need of the hour, and progress in robotics holds the potential to address many societal challenges. Future socio-technical systems will constitute a blended workforce, with a symbiotic relationship between human and robot partners working collaboratively. This thesis attempts to address some of the research challenges in enabling human-robot collaboration. In particular, holistic perception of a human partner, continuously communicating their intentions and needs to a robot partner in real time, is crucial for the successful realization of a collaborative task. Towards that end, we present a holistic human perception framework for real-time monitoring of whole-body human motion and dynamics. Conversely, leveraging assistance from a human partner can lead to improved human-robot collaboration. In this direction, we methodically define what constitutes assistance from a human partner and propose partner-aware robot control strategies to endow robots with the capacity to meaningfully engage in a collaborative task.

    Human-Inspired Balancing and Recovery Stepping for Humanoid Robots

    Robustly maintaining balance on two legs is an important challenge for humanoid robots, and the work presented in this book contributes to this area. It investigates efficient methods for deciding, from internal sensors, whether and where to step; presents several improvements to efficient whole-body postural balancing methods; and proposes and evaluates a novel method for efficient recovery step generation, leveraging human examples and simulation-based reinforcement learning.

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMIs) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators can function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments, due to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capturing human intent and commands. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g., arm motion) and finer grasping (e.g., hand movement).
As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g., inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which causes signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm which are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6%, respectively, were achieved for 12-gesture classification. In real-time tests, accuracies of 95.6% were achieved in 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis also established that arm pose changes the measured signal, and introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed.
These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
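As a concrete illustration of the kind of pipeline described above, the sketch below classifies gestures from windowed muscle-activity features. It is a deliberately minimal stand-in: a nearest-centroid classifier replaces the LDA/SVM models from the thesis, and the two-channel signals, RMS features, and gesture names are hypothetical.

```python
# Minimal sketch of windowed-feature gesture classification. All signals,
# feature choices, and class names are illustrative, not the thesis's data.
import math

def rms_features(window):
    """Root-mean-square per MMG channel, a common muscle-activity feature."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

class NearestCentroid:
    """Toy stand-in for LDA/SVM: classify by distance to per-gesture mean."""
    def fit(self, X, y):
        sums, counts = {}, {}
        for feats, label in zip(X, y):
            acc = sums.setdefault(label, [0.0] * len(feats))
            for i, f in enumerate(feats):
                acc[i] += f
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {
            lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()
        }
        return self

    def predict(self, feats):
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(feats, c))
        return min(self.centroids, key=lambda lbl: dist(self.centroids[lbl]))

# Two hypothetical gestures, each a window of 2 MMG channels x 3 samples:
# "fist" activates channel 0 strongly, "point" activates channel 1.
fist = [[0.9, -0.8, 1.0], [0.1, 0.0, -0.1]]
point = [[0.1, 0.0, -0.1], [0.9, -0.8, 1.0]]
clf = NearestCentroid().fit([rms_features(fist), rms_features(point)],
                            ["fist", "point"])
print(clf.predict(rms_features(fist)))   # -> fist
```

In a real system the features would be computed over sliding windows of the streaming MMG signal, and the IMU-derived arm pose would be appended to the feature vector to gain the pose robustness discussed above.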

    Methods to improve the coping capacities of whole-body controllers for humanoid robots

    Current applications for humanoid robotics require autonomy in an environment specifically adapted to humans, and safe coexistence with people. Whole-body control is promising in this sense, having been shown to successfully achieve locomotion and manipulation tasks. However, robustness remains an issue: whole-body controllers can still hardly cope with unexpected disturbances, with changes in working conditions, or with performing a variety of tasks, without human intervention. In this thesis, we explore how whole-body control approaches can be designed to address these issues. Based on whole-body control, contributions have been developed along three main axes: joint limit avoidance, automatic parameter tuning, and generalization of whole-body motions achieved by a controller. We first establish a whole-body torque controller for the iCub, based on the stack-of-tasks approach and feedback control laws in SE(3). From there, we develop a novel, theoretically guaranteed joint limit avoidance technique for torque control, through a parametrization of the feasible joint space. This technique allows the robot to remain compliant while resisting external perturbations that push joints closer to their limits, as demonstrated with experiments in simulation and on the real robot. Then, we focus on automatically tuning the parameters of the controller, in order to improve its behavior across different situations. We show that our approach for learning task priorities, combining domain randomization and carefully selected fitness functions, allows the successful transfer of results between platforms subjected to different working conditions. Following these results, we then propose a controller which allows for generic, complex whole-body motions through real-time teleoperation.
This approach is verified on the robot, which follows generic movements of the teleoperator while in double support, as well as the teleoperator's upper-body movements while walking, with footsteps adapted from the teleoperator's footsteps. The approaches proposed in this thesis therefore improve the capability of whole-body controllers to cope with external disturbances, different working conditions, and generic whole-body motions.
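The joint limit avoidance idea above rests on a parametrization of the feasible joint space: an unconstrained variable is mapped smoothly into the open interval between the joint limits, so any control action keeps the joint strictly inside them. The tanh map below is one common such parametrization, assumed here for illustration and not necessarily the exact one used in the thesis.

```python
# Minimal sketch of joint-limit avoidance via a smooth parametrization of the
# feasible joint space. The tanh mapping is an illustrative assumption.
import math

def to_joint(xi, q_min, q_max):
    """Map unconstrained xi in (-inf, inf) to a joint angle in (q_min, q_max)."""
    return q_min + (q_max - q_min) * (math.tanh(xi) + 1.0) / 2.0

def to_param(q, q_min, q_max):
    """Inverse map: recover xi from a joint angle strictly inside the limits."""
    u = 2.0 * (q - q_min) / (q_max - q_min) - 1.0   # u in (-1, 1)
    return math.atanh(u)

q_min, q_max = -1.0, 2.0
# Even when a disturbance pushes xi far from zero, the joint stays inside
# its limits, which is the compliance-preserving property described above.
for xi in (-5.0, 0.0, 5.0):
    q = to_joint(xi, q_min, q_max)
    assert q_min < q < q_max
print(to_joint(0.0, q_min, q_max))  # midpoint of the range -> 0.5
```

A torque controller acting on xi instead of q inherits the limit guarantee by construction, which is why such parametrizations pair naturally with compliant torque control.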

    Humanoid Robot Cooperative Motion Control Based on Optimal Parameterization

    The implementation of low-energy cooperative movements is one of the key technologies for the complex motion control of humanoid robots. A control method based on optimal parameters is adopted to minimize the energy consumption of the cooperative movements of two humanoid robots. A dynamic model that satisfies the cooperative movements is established, and the motion trajectory of the two humanoid robots in the process of cooperatively manipulating objects is planned. Using the control method with optimal parameters, the parameters of the energy consumption index function are optimized while satisfying the stability criterion for the robots during the movement. Finally, the effectiveness of the method is verified by simulations and experiments.
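The parameter-optimization step described above can be sketched as follows. This is a toy illustration, not the paper's method: the single trajectory parameter h, the energy index, and the stability check are all invented for the example.

```python
# Minimal sketch of parameter-optimal motion: a hypothetical trajectory
# parameter (via-point height h) is chosen by coarse search to minimize an
# energy-consumption index subject to a simple stability-style constraint.

def energy_index(h, n=50):
    """Proxy energy cost: integral of squared acceleration of a parabolic
    lift trajectory z(t) = 4*h*t*(1-t) over t in [0, 1]."""
    # z''(t) = -8h, so the exact index is (8h)^2; computed numerically here.
    dt = 1.0 / n
    return sum((-8.0 * h) ** 2 * dt for _ in range(n))

def stable(h):
    """Hypothetical feasibility check: the lift must clear a 0.1 m obstacle."""
    return h >= 0.1

# Grid search over candidate parameters, keeping only feasible ones.
candidates = [i / 100 for i in range(5, 51)]           # 0.05 .. 0.50 m
best = min((h for h in candidates if stable(h)), key=energy_index)
print(best)  # the smallest feasible height minimizes energy -> 0.1
```

A real implementation would optimize several trajectory parameters jointly, with the energy index computed from the robots' dynamic model and the constraint given by a stability criterion such as the ZMP, but the structure of the search is the same.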

    Robotics 2010

    Without a doubt, robotics has made incredible progress over the last decades. The vision of developing, designing, and creating technical systems that help humans achieve hard and complex tasks has led to an incredible variety of solutions. Barely any technical field exhibits more interdisciplinary interconnections than robotics. This is a consequence of the highly complex challenges posed by robotic systems, especially the requirement of intelligent and autonomous operation. This book tries to give an insight into the evolutionary process that takes place in robotics. It provides articles covering a wide range of this exciting area. The progress of technical challenges and concepts may illuminate the relationship between developments that seem completely different at first sight. Robotics remains an exciting scientific and engineering field. The community looks ahead optimistically, and looks forward to the future challenges and new developments.

    Whole-Body Teleoperation of Humanoid Robots

    This thesis aims to investigate systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial for sending and controlling robots in environments that are dangerous or inaccessible for humans (e.g., disaster response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamical balance while trying to follow the human references. In addition, the human operator needs some feedback about the state of the robot and its work site through remote sensors, in order to comprehend the situation or feel physically present at the site, producing effective robot behaviors. Complications arise when the communication network is non-ideal. In this case, the commands from human to robot, together with the feedback from robot to human, can be delayed. These delays can be very disturbing for the human operator, who cannot teleoperate their robot avatar in an effective way. Another crucial point to consider when setting up a teleoperation system is the large number of parameters that have to be tuned to effectively control the teleoperated robots. Machine learning approaches and stochastic optimizers can be used to automate the learning of some of the parameters. In this thesis, we propose a teleoperation system that has been tested on the humanoid robot iCub. We used an inertial-technology-based motion capture suit as the input device to control the humanoid, and a virtual reality headset connected to the robot's cameras to get some visual feedback. We first translated the human movements into equivalent robot ones by developing a motion retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. We then implemented a whole-body controller to enable the robot to track the retargeted human motion. The controller was later optimized in simulation to achieve good tracking of the whole-body reference movements, using a multi-objective stochastic optimizer, which allowed us to find robust solutions working on the real robot in a few trials. To teleoperate walking motions, we implemented a higher-level teleoperation mode in which the user can use a joystick to send reference commands to the robot. We integrated this mode into the teleoperation system, which allows the user to switch between the two different modes. A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduced a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands.
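The predictive delay-compensation scheme in the last paragraph can be sketched as follows, with a constant-velocity extrapolator standing in for the learned model; the one-dimensional command signal and the three-step delay are illustrative assumptions.

```python
# Minimal sketch of delay compensation by command prediction: the robot
# executes a *predicted* future command so that, despite the network delay,
# the visual feedback appears synchronized to the operator. A constant-
# velocity extrapolator stands in for the learned model described above.
from collections import deque

class CommandPredictor:
    def __init__(self, delay_steps):
        self.delay = delay_steps
        self.history = deque(maxlen=2)   # last two received commands

    def receive(self, command):
        self.history.append(command)

    def predict(self):
        """Extrapolate the command `delay` steps beyond the last one received."""
        if len(self.history) < 2:
            return self.history[-1] if self.history else 0.0
        prev, last = self.history
        velocity = last - prev           # per-step change of the command
        return last + velocity * self.delay

# Operator sends a steadily increasing command; the network delays it by
# 3 steps, so the robot acts on the prediction instead of the stale value.
predictor = CommandPredictor(delay_steps=3)
for t in range(5):                       # commands 0.0, 0.1, ..., 0.4 arrive
    predictor.receive(0.1 * t)
print(predictor.predict())               # the command expected at t = 7
```

The thesis's approach replaces the extrapolator with a machine learning model trained on past trajectories, but the interface is the same: condition on the last received commands, emit the command the operator is expected to be issuing now.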

    Learning Control Policies for Fall Prevention and Safety in Bipedal Locomotion

    The ability to recover from an unexpected external perturbation is a fundamental motor skill in bipedal locomotion. An effective response includes the ability not just to recover balance and maintain stability, but also to fall in a safe manner when balance recovery is physically infeasible. For robots associated with bipedal locomotion, such as humanoid robots and assistive robotic devices that aid humans in walking, designing controllers which provide this stability and safety can prevent damage to robots or prevent injury-related medical costs. This is a challenging task because it involves generating highly dynamic motion for a high-dimensional, non-linear, and under-actuated system with contacts. Despite prior advancements in model-based and optimization methods, challenges such as the requirement of extensive domain knowledge, relatively long computation times, and limited robustness to changes in dynamics still make this an open problem. In this thesis, to address these issues, we develop learning-based algorithms capable of synthesizing push-recovery control policies for two different kinds of robots: humanoid robots and assistive robotic devices that assist in bipedal locomotion. Our work can be branched into two closely related directions: 1) learning safe falling and fall prevention strategies for humanoid robots, and 2) learning fall prevention strategies for humans using robotic assistive devices. To achieve this, we introduce a set of Deep Reinforcement Learning (DRL) algorithms to learn control policies that improve safety while using these robots. To enable efficient learning, we present techniques to incorporate abstract dynamical models, curriculum learning, and a novel method of building a graph of policies into the learning framework.
We also propose an approach to create virtual human walking agents which exhibit gait characteristics similar to those of real-world human subjects; using these agents, we learn an assistive device controller that helps the virtual human return to steady-state walking after an external push is applied. Finally, we extend our work on assistive devices and address the challenge of transferring a push-recovery policy to different individuals. As walking and recovery characteristics differ significantly between individuals, exoskeleton policies have to be fine-tuned for each person, which is a tedious, time-consuming, and potentially unsafe process. We propose to solve this by posing it as a transfer learning problem, where a policy trained for one individual can adapt to another without fine-tuning.
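The curriculum-learning ingredient mentioned above can be sketched as follows: training perturbations grow only once the current policy recovers reliably. The toy success model and update rule below are invented for illustration and are not the thesis's DRL setup.

```python
# Minimal sketch of a push-magnitude curriculum: the perturbation grows only
# after the policy recovers reliably at the current level. The "policy" here
# is a scalar skill value with a toy update, purely for illustration.
import random

def recovery_success(push, skill, rng):
    """Toy model: recovery succeeds more often when skill exceeds the push."""
    return rng.random() < min(1.0, skill / (push + 1e-9))

def train_with_curriculum(episodes=2000, threshold=0.8, seed=0):
    rng = random.Random(seed)
    push, skill = 0.1, 0.2            # start with small pushes, modest skill
    successes, trials = 0, 0
    for _ in range(episodes):
        ok = recovery_success(push, skill, rng)
        skill += 0.001 if not ok else 0.0005   # toy "learning" update
        successes += ok
        trials += 1
        if trials >= 100:                      # evaluate every 100 episodes
            if successes / trials >= threshold:
                push *= 1.5                    # reliable -> harder pushes
            successes, trials = 0, 0
    return push

print(train_with_curriculum())   # push magnitude reached by the curriculum
```

In the actual DRL setting the skill update would be a policy-gradient step and the success criterion a rollout evaluation, but the curriculum logic (advance the difficulty only on sustained success) is the same.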

    Human-In-The-Loop Control and Task Learning for Pneumatically Actuated Muscle Based Robots

    Pneumatically actuated muscles (PAMs) provide a low-cost, lightweight, high power-to-weight-ratio solution for many robotic applications. In addition, the antagonist-pair configuration of robotic arms makes them amenable to biologically inspired control approaches. In spite of these advantages, PAMs have not been widely adopted in human-in-the-loop control and learning applications. In this study, we propose a biologically inspired multimodal human-in-the-loop control system for driving a one-degree-of-freedom robot, and realize the task of hammering a nail into a wood block under human control. We analyze human sensorimotor learning in this system through a set of experiments, and show that an effective autonomous hammering skill can be readily obtained through the developed human-robot interface. The results indicate that a human-in-the-loop learning setup with an anthropomorphically valid multimodal human-robot interface leads to fast learning, and thus can be used to effectively derive autonomous robot skills for ballistic motor tasks that require modulation of impedance.
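The antagonist-pair arrangement mentioned above can be illustrated with a toy model: two pneumatic muscles pulling on opposite sides of a joint pulley produce a net torque from their force difference, much like a biceps/triceps pair. The linear force model and all numbers below are assumptions for illustration, not the study's actuator model.

```python
# Toy model of an antagonist PAM pair on a 1-DOF joint. Real PAM force
# characteristics are nonlinear; the linear falloff here is an assumption.

def pam_force(pressure, contraction, f_max=100.0):
    """Toy PAM model: force falls off linearly with contraction ratio."""
    return max(0.0, pressure * f_max * (1.0 - contraction))

def joint_torque(p_flexor, p_extensor, angle, radius=0.02, max_angle=1.5):
    """Net torque from an antagonist pair acting on a pulley of the given
    radius. Flexing shortens the flexor muscle and lengthens the extensor."""
    c_flex = 0.5 * (1.0 + angle / max_angle)     # flexor contraction in [0, 1]
    c_ext = 0.5 * (1.0 - angle / max_angle)      # extensor contraction
    return radius * (pam_force(p_flexor, c_flex) - pam_force(p_extensor, c_ext))

# Equal pressures at the neutral angle give zero net torque; raising the
# flexor pressure drives the joint toward flexion.
print(joint_torque(0.5, 0.5, angle=0.0))         # -> 0.0
print(joint_torque(0.8, 0.2, angle=0.0) > 0.0)   # -> True
```

Co-contraction (raising both pressures together) stiffens the joint without changing the net torque, which is what makes such pairs attractive for impedance-modulated ballistic tasks like hammering.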

    The Future of Humanoid Robots

    This book provides state-of-the-art scientific and engineering research findings and developments in the field of humanoid robotics and its applications. It is expected that humanoids will change the way we interact with machines, and will have the ability to blend perfectly into an environment already designed for humans. The book contains chapters that aim to discover the future abilities of humanoid robots by presenting a variety of integrated research in various scientific and engineering fields, such as locomotion, perception, adaptive behavior, human-robot interaction, neuroscience, and machine learning. The book is designed to be accessible and practical, with an emphasis on useful information for those working in the fields of robotics, cognitive science, artificial intelligence, computational methods, and other fields of science directly or indirectly related to the development and usage of future humanoid robots. The editor of the book has extensive R&D experience, patents, and publications in the area of humanoid robotics, and this experience is reflected in the editing of the book's content.