22 research outputs found

    Physical Human-Robot Interaction Control of an Upper Limb Exoskeleton with a Decentralized Neuro-Adaptive Control Scheme

    Within the concept of physical human-robot interaction (pHRI), the most important criterion is the safety of the human operator interacting with a high degree-of-freedom (DoF) robot. A robust control scheme is therefore in high demand to establish safe pHRI and stabilize nonlinear, high-DoF systems. In this paper, an adaptive decentralized control strategy is designed to accomplish these objectives. To do so, a human upper-limb model and an exoskeleton model are decentralized and augmented at the subsystem level to enable the design of a decentralized control action. Moreover, the human exogenous force (HEF) that can resist exoskeleton motion is estimated using radial basis function neural networks (RBFNNs). Estimating both human upper-limb and robot rigid-body parameters, along with the HEF, makes the controller adaptable to different operators, ensuring their physical safety. A barrier Lyapunov function (BLF) is employed to guarantee that the robot operates in a safe workspace while ensuring stability by adjusting the control law. Unknown actuator uncertainty and constraints are also considered to ensure a smooth and safe pHRI. The asymptotic stability of the whole system is then established by means of the virtual stability concept and virtual power flows (VPFs) under the proposed robust controller. Experimental results are presented and compared to proportional-derivative (PD) and proportional-integral-derivative (PID) controllers. To show the robustness and performance of the designed controller, experiments are performed at different velocities, with different human users, and in the presence of unknown disturbances. The proposed controller showed excellent performance in controlling the robot, whereas the PD and PID controllers could not even ensure stable motion in the wrist joints of the robot.
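As a rough illustration of the RBFNN idea behind the HEF estimator, the following sketch approximates an unknown scalar function from streaming samples with a gradient-style weight update; all basis parameters, gains, and the toy target function are hypothetical and not taken from the paper:

```python
import numpy as np

# Minimal radial-basis-function network approximating an unknown scalar
# function from streaming samples. All parameters below are hypothetical;
# the paper's actual adaptive law and gains are not given in the abstract.
centers = np.linspace(-2.0, 2.0, 9)  # Gaussian centers over the input range
width = 0.5                          # shared Gaussian width
weights = np.zeros_like(centers)     # adaptive output weights
gamma = 0.5                          # adaptation gain

def phi(x):
    """Gaussian basis vector evaluated at input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def estimate(x):
    """Current network output, w^T phi(x)."""
    return weights @ phi(x)

def adapt(x, error):
    """Gradient-style weight update driven by the estimation error."""
    global weights
    weights = weights + gamma * error * phi(x)

# Toy usage: learn f(x) = sin(x), standing in for the unknown human force.
rng = np.random.default_rng(0)
for _ in range(5000):
    x = rng.uniform(-2.0, 2.0)
    adapt(x, np.sin(x) - estimate(x))
```

In the paper the error driving the update would come from the tracking error of the augmented subsystem rather than from direct samples of the unknown force.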

    Resolving conflicts during human-robot co-manipulation

    This work is partially funded by UKRI and CHIST-ERA (HEAP: EP/S033718/2; Horizon: EP/T022493/1; TAS Hub: EP/V00784X). This paper proposes a machine learning (ML) approach to detect and resolve motion conflicts that occur between a human and a proactive robot during the execution of a physically collaborative task. We train a random forest classifier to distinguish between harmonious and conflicting human-robot interaction behaviors during object co-manipulation. Kinesthetic information generated through the teamwork is used to describe the interactive quality of the collaboration. We demonstrate that features derived from haptic (force/torque) data are sufficient to classify whether the human and the robot are manipulating the object harmoniously or are facing a conflict. A conflict-resolution strategy is implemented that lets the robotic partner proactively contribute to the task via online trajectory planning whenever the interactive motion patterns are harmonious, and follow the human lead when a conflict is detected. An admittance controller regulates the physical interaction between the human and the robot during the task, enabling the robot to follow the human passively when there is a conflict. An artificial potential field is used to proactively control the robot motion when the partners work in harmony. An experimental study is designed to create scenarios involving harmonious and conflicting interactions during collaborative manipulation of an object, and to build a dataset to train and test the random forest classifier. The results show that ML can successfully detect conflicts and that the proposed conflict-resolution mechanism significantly reduces human force and effort compared to a passive robot that always follows the human partner and a proactive robot that cannot resolve conflicts.
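As a sketch of the classification step, the following uses scikit-learn's RandomForestClassifier on synthetic stand-in features; the real features would be derived from the force/torque signals recorded in the experiments, and the feature definitions and class separations below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the paper's haptic dataset: each sample is a
# feature vector derived from force/torque signals (e.g. mean interaction
# force, force variance, exchanged power). The real features and labels
# would come from the co-manipulation experiments described above.
rng = np.random.default_rng(42)
n = 400
harmonious = rng.normal(loc=[2.0, 0.5, 1.0], scale=0.5, size=(n, 3))
conflict = rng.normal(loc=[8.0, 2.0, -1.0], scale=0.5, size=(n, 3))
X = np.vstack([harmonious, conflict])
y = np.array([0] * n + [1] * n)  # 0 = harmony, 1 = conflict

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

def robot_mode(features):
    """Gate the robot's behavior on the detected label: follow the human
    on conflict, plan proactively when the interaction is harmonious."""
    return "follow" if clf.predict([features])[0] == 1 else "proactive"
```

The gating function mirrors the resolution strategy in the abstract: admittance-based following under conflict, potential-field-driven proactive motion in harmony.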

    Human-robot co-carrying using visual and force sensing

    In this paper, we propose a hybrid framework using visual and force sensing for human-robot co-carrying tasks. Visual sensing is used to obtain the human's motion, and an observer is designed to estimate the human's control input, which generates the robot's desired motion toward the human's intended motion. An adaptive impedance-based control strategy is proposed for trajectory tracking, with neural networks (NNs) used to compensate for uncertainties in the robot's dynamics. Motion synchronization is achieved, and the approach yields stable and efficient interaction between human and robot, decreases the human's control effort, and avoids interfering with the human during the interaction. The proposed framework is validated on a co-carrying task in simulations and experiments.
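The impedance model underlying such a tracking controller can be illustrated with a one-DoF sketch; the paper's observer and NN compensation are not reproduced here, and the gains are hypothetical:

```python
# One-DoF impedance-model sketch: the end-effector tracks a desired
# trajectory (x_d, v_d) while rendering mass-damper-spring behavior to
# external (human) force. M, D, K are hypothetical gains.
M, D, K = 2.0, 20.0, 50.0
dt = 0.001

def impedance_step(x, v, x_d, v_d, f_ext):
    """Advance (x, v) one step under M*a = f_ext - D*(v - v_d) - K*(x - x_d)."""
    a = (f_ext - D * (v - v_d) - K * (x - x_d)) / M
    v_next = v + a * dt
    x_next = x + v_next * dt
    return x_next, v_next

# With no external force the model converges to the desired set-point;
# a human force f_ext would deflect it compliantly.
x, v = 0.0, 0.0
for _ in range(5000):
    x, v = impedance_step(x, v, x_d=0.1, v_d=0.0, f_ext=0.0)
```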

    Interface Design for Physical Human-Robot Interaction using sensorless control

    The rapid increase in the use of robots has made interaction between a human and a robot a crucial field of research. Physical human-robot interaction constitutes a relevant and growing research area: nowadays robots are used in almost all areas of life, such as in households, in education, and in medicine. Therefore, many studies address ergonomic human-robot interfaces that enable people to communicate with, collaborate with, and teach a robot through physical interaction. This thesis focuses on developing a physical human-robot interface by means of which the user is able to control a walking humanoid by exerting force. Through physical contact with the robot's arm, a human can influence the direction and velocity of the robot's walk; in other words, the user leads the humanoid by the hand, and the robot compensates for this external force by following the user. The developed interface offers a method of sensorless force control: instead of the traditional approach using force/torque measurement, it exploits the fact that a DC motor's torque is proportional to its armature current. Two different control algorithms were implemented and compared. Finally, a usability test was conducted on the different interfaces to find the most ergonomic one.
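The current-based force estimation described above can be sketched as follows; the torque constant, gear ratio, lever arm, friction term, and velocity mapping are all hypothetical values chosen for illustration, not taken from the thesis:

```python
import math

# Sensorless force estimation: a DC motor's torque is proportional to its
# armature current (tau = K_t * i), so the force a user applies at the arm
# can be inferred without a force/torque sensor. All constants hypothetical.
K_T = 0.05            # motor torque constant [Nm/A]
GEAR = 100.0          # gear ratio from motor to joint
ARM = 0.3             # lever arm from joint to contact point [m]
TAU_FRICTION = 0.002  # simple static-friction torque at the motor [Nm]

def estimate_contact_force(current_a):
    """Estimate the contact force [N] from measured armature current [A]."""
    tau_motor = K_T * current_a
    # Subtract a crude friction term; treat torques below it as zero.
    tau_net = max(abs(tau_motor) - TAU_FRICTION, 0.0)
    force = (tau_net * GEAR) / ARM
    return force if tau_motor >= 0 else -force

def walk_command(force_n, deadband=5.0, gain=0.01):
    """Map the estimated push force to a walking-velocity command [m/s]."""
    if abs(force_n) < deadband:
        return 0.0  # ignore small forces so the robot stands still
    return gain * (force_n - math.copysign(deadband, force_n))
```

The deadband keeps sensor noise and friction-model error from producing spurious walking commands, which is one reason current-based sensing needs more filtering than a dedicated force/torque sensor.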

    Energy-based control approaches in human-robot collaborative disassembly


    Safety Awareness for Rigid and Elastic Joint Robots: An Impact Dynamics and Control Framework

    This thesis aims at making robots with rigid and elastic joints aware of human collision safety. A framework is proposed that captures human injury occurrence and the robots' inherent safety properties in a unified manner. It allows the safety characteristics of different robot designs to be quantitatively compared and optimized, and is applied to stationary and mobile manipulators. On the same basis, novel motion control schemes are developed and experimentally validated.

    Computer Simulation of Human-Robot Collaboration in the Context of Industry Revolution 4.0

    This chapter presents the essential role of robot simulation for industrial robots, in particular collaborative robots. We begin by discussing robot utilization in industry, including mobile robots, arm robots, and humanoid robots, and emphasize the application of collaborative robots with regard to Industry 4.0. We then present how collaborative robots can be deployed in industry through computer simulation, by means of virtual robots in simulated environments. The robot simulation presented here is based on the Open Dynamics Engine (ODE) using anyKode Marilou. We survey the use of dynamic simulations for collaborative robots toward Industry 4.0. Given the challenging problems related to humanoid robots as collaborative robots and to behavior in human-robot collaboration, robot simulation may open opportunities for collaborative-robotics research in the context of Industry 4.0. Since developing a real collaborative robot is still expensive and time-consuming, and access to commercial collaborative robots is relatively limited, robot simulation can be an option for collaborative-robotics research and education.

    Temporal models of motions and forces for Human-Robot Interactive manipulation

    Interest in robotics truly emerged in the 1970s, barely half a century ago, and since then robots have been replacing humans in industry. This robot-oriented solution does not come without drawbacks, as full automation requires time-consuming programming as well as rigid, perfectly controlled environments. With the increased need for adaptability and reusability of assembly systems, robotics is undergoing major changes and is seeing the emergence of a new type of collaboration between humans and robots. Human-robot collaboration gets the best of both worlds by combining the respective strengths of humans and robots. But to include the human as an active agent in these new collaborative workspaces, safe, intuitive, and easily reprogrammable robots are required. It is in this context that we can appreciate the crucial role of motion generation in tomorrow's robotics. For human-robot cooperation to emerge, robots have to generate motions ensuring the safety of humans, both physical and psychological; for this reason, motion generation has been a restricting factor to the growth of robotics in the past. Trajectories are excellent candidates for building motions suited to collaborative robots, because they simply and precisely describe how a motion evolves. Smooth trajectories are well known to provide safe motions with good ergonomic properties. In this thesis we propose an online trajectory generation algorithm based on sequences of segments of third-degree polynomial functions to build smooth trajectories. These trajectories are built from arbitrary initial and final conditions, a requirement for robots to react instantaneously to unforeseen events. Our approach, built on a constrained-jerk model, offers performance-oriented solutions: the trajectories are time-optimal under safety constraints. These safety constraints are kinematic constraints that are task- and context-dependent and must be specified. To guide the choice of these constraints, we investigated the role of kinematics in defining the ergonomic properties of motions. We also extended our algorithm to cope with non-admissible initial configurations, opening the way to trajectory generation under non-constant motion constraints. This feature is essential in the context of physical human-robot interaction, as the robot must adapt its behavior in real time to preserve both the physical and psychological safety of humans. However, considering the trajectory generation problem alone is not enough: the control of these trajectories must also be addressed. Switching from one trajectory to another is a difficult problem for most robotic systems in real application contexts. For this purpose we propose a strategy for the reactive control of these trajectories, as well as an architecture built around the use of trajectories.
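The building block of such a trajectory generator, a third-degree polynomial segment joining arbitrary boundary conditions, can be sketched as follows; the thesis's time-optimality and constraint-handling logic are not reproduced here:

```python
# One cubic (third-degree) polynomial segment connecting arbitrary boundary
# positions and velocities over a chosen duration T. Jerk is constant on
# each segment, which is why sequences of such segments yield smooth,
# jerk-bounded motions.
def cubic_segment(x0, v0, xf, vf, T):
    """Return coefficients (a0, a1, a2, a3) of x(t) = a0 + a1*t + a2*t^2 + a3*t^3."""
    a0, a1 = x0, v0
    # Solve the 2x2 linear system given by x(T) = xf and x'(T) = vf.
    a2 = (3 * (xf - x0) - (2 * v0 + vf) * T) / T**2
    a3 = (2 * (x0 - xf) + (v0 + vf) * T) / T**3
    return a0, a1, a2, a3

def evaluate(coeffs, t):
    """Position, velocity, and (constant) jerk of the segment at time t."""
    a0, a1, a2, a3 = coeffs
    pos = a0 + a1 * t + a2 * t**2 + a3 * t**3
    vel = a1 + 2 * a2 * t + 3 * a3 * t**2
    jerk = 6 * a3
    return pos, vel, jerk

# Example: move from x=0 at 0.2 m/s to x=1 at rest in 2 s.
c = cubic_segment(x0=0.0, v0=0.2, xf=1.0, vf=0.0, T=2.0)
```

A time-optimal generator would instead choose segment durations so that velocity, acceleration, and jerk bounds are saturated, rather than fixing T in advance.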

    Dyadic behavior in co-manipulation: from humans to robots

    To both decrease the physical toll on a human worker and increase a robot's perception of the environment, a human-robot dyad may be used to co-manipulate a shared object. Starting from the premise that humans are efficient at working together, this work investigates human-human dyads co-manipulating an object. The co-manipulation is evaluated from motion capture data, surface electromyography (EMG) sensors, and custom contact sensors for qualitative performance analysis. A human-human dyadic co-manipulation experiment is designed in which each human is instructed to behave as a leader, as a follower, or neither, acting as naturally as possible. The analysis of the experimental data revealed that humans modulate their arm's mechanical impedance depending on their role during the co-manipulation. To emulate this human behavior during a co-manipulation task, an admittance controller with varying stiffness is presented. The desired stiffness is continuously varied based on a smooth scalar function that assigns a degree of leadership to the robot. The controller is analyzed through simulations, and its stability is analyzed via Lyapunov theory. The resulting object trajectories closely resemble the patterns seen in the human-human dyad experiment.
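A minimal sketch of an admittance controller whose stiffness is scheduled by a leadership degree, assuming a simple linear interpolation between follower and leader gains; all numerical values are hypothetical, and the thesis's actual leadership function is not given in the abstract:

```python
# Admittance model M*a + D*v + K(alpha)*(x - x_ref) = f_human, where the
# stiffness K is scheduled by a smooth leadership degree alpha in [0, 1]
# (0 = follower, 1 = leader). All gains are hypothetical.
M, D = 5.0, 40.0
K_MIN, K_MAX = 10.0, 500.0  # follower vs leader stiffness [N/m]
dt = 0.001

def stiffness(alpha):
    """Interpolate stiffness from the degree of leadership."""
    return K_MIN + (K_MAX - K_MIN) * alpha

def admittance_step(x, v, x_ref, f_human, alpha):
    """Advance the admittance model one Euler step."""
    a = (f_human - D * v - stiffness(alpha) * (x - x_ref)) / M
    v = v + a * dt
    x = x + v * dt
    return x, v

# As a follower (alpha ~ 0), a constant human force deflects the robot far
# from its reference; as a leader (alpha ~ 1), the robot holds its reference.
xf, vf = 0.0, 0.0
xl, vl = 0.0, 0.0
for _ in range(20000):  # 20 s of simulated time
    xf, vf = admittance_step(xf, vf, x_ref=0.0, f_human=10.0, alpha=0.0)
    xl, vl = admittance_step(xl, vl, x_ref=0.0, f_human=10.0, alpha=1.0)
```

The steady-state deflection is f/K(alpha), so the same 10 N push moves the follower roughly 1 m but the leader only a couple of centimeters, which is the stiffness-modulation behavior the human-human experiments revealed.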

    Evaluation of Presence in Virtual Environments: Haptic Vest and User's Haptic Skills

    This paper presents the integration of a haptic vest with a multimodal virtual environment consisting of video, audio, and haptic feedback, with the main objective of determining how users interacting with the virtual environment benefit from the tactile and thermal stimuli provided by the vest. Experiments are performed using a game application set in a train station after an explosion. Participants have to move inside the environment while receiving several stimuli, to check whether the vest produces any improvement in presence or realism. This is done by comparing the experimental results with those from similar scenarios without haptic feedback. The experiments are carried out by three groups of participants, classified on the basis of their experience with haptics and virtual-reality devices. Some differences among the groups were found, which can be related to the levels of realism and synchronization of all the elements in the multimodal environment needed to fulfill expectations and reach maximum satisfaction. According to the participants, the system should define two different levels of requirements to comply with the expectations of professional and conventional users.