82 research outputs found

    Advances in Human-Robot Handshaking

    Full text link
    The use of social, anthropomorphic robots to support humans in various industries has been on the rise. During Human-Robot Interaction (HRI), physically interactive non-verbal behaviour is key to more natural interactions. Handshaking is one such natural interaction, used commonly in many social contexts. It is one of the first non-verbal interactions to take place and should therefore be part of the repertoire of a social robot. In this paper, we explore the existing state of Human-Robot Handshaking and discuss possible ways forward for such physically interactive behaviours. Comment: Accepted at the 12th International Conference on Social Robotics (ICSR 2020). 12 pages, 1 figure.

    Human-Robot Handshaking: A Review

    Full text link
    For some years now, the use of social, anthropomorphic robots in various situations has been on the rise. These are robots developed to interact with humans and equipped with corresponding extremities. They already support human users in various industries, such as retail, gastronomy, hotels, education and healthcare. In such Human-Robot Interaction (HRI) scenarios, physical touch plays a central role in the various applications of social robots, as interactive non-verbal behaviour is a key factor in making the interaction more natural. Shaking hands is a simple, natural interaction used commonly in many social contexts and is seen as a symbol of greeting, farewell and congratulations. In this paper, we take a look at the existing state of Human-Robot Handshaking research, categorise the works based on their focus areas, and draw out the major findings of these areas while analysing their pitfalls. We mainly see that some form of synchronisation exists during the different phases of the interaction. In addition, we find that factors like gaze, voice, facial expressions, etc. can affect the perception of a robotic handshake, and that internal factors like personality and mood can affect the way in which humans execute handshaking behaviours. Based on these findings and insights, we finally discuss possible ways forward for research on such physically interactive behaviours. Comment: Pre-print version. Accepted for publication in the International Journal of Social Robotics.

    Physical Analysis of Handshaking Between Humans: Mutual Synchronisation and Social Context

    Get PDF
    One very popular form of interpersonal interaction used in various situations is the handshake (HS), an act that is both physical and social. This article aims to demonstrate that the paradigm of synchrony, which refers to the psychology of individuals' temporal movement coordination, can also be applied to handshaking. For this purpose, the physical features of the human HS are investigated in two different social situations: greeting and consolation. The duration and frequency of the HS and the force of the grip were measured and compared using a prototype wearable system equipped with several sensors. The results show that an HS can be decomposed into four phases, and that after a short physical contact, a synchrony emerges between the two persons shaking hands. A statistical analysis conducted on 31 persons showed that the duration of the HS differs significantly between the two contexts, but the frequency of motion and the time needed to synchronize were not affected by the context of the interaction.
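    A minimal sketch of how such measurements could be analysed, assuming two wrist-worn accelerometer traces: the 100 Hz sampling rate, the 0.5 rad phase-locking tolerance, and the Hilbert-phase approach below are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch (not the paper's pipeline): estimating the oscillation
# frequency of a handshake and the time two partners take to synchronise,
# from two 1-D wrist-acceleration traces.
import numpy as np
from scipy.signal import hilbert

FS = 100.0  # assumed sampling rate (Hz)

def dominant_frequency(accel, fs=FS):
    """Strongest oscillation frequency (Hz) of a 1-D acceleration trace."""
    accel = accel - accel.mean()
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def time_to_synchrony(a, b, fs=FS, tol=0.5):
    """First time (s) from which the partners' instantaneous phases remain
    within `tol` radians, taken here as the onset of mutual synchrony."""
    pa = np.angle(hilbert(a - a.mean()))
    pb = np.angle(hilbert(b - b.mean()))
    diff = np.abs((pa - pb + np.pi) % (2 * np.pi) - np.pi)  # wrap to [0, pi]
    locked = diff < tol
    for i in range(len(locked)):
        if locked[i:].all():
            return i / fs
    return None  # the traces never phase-lock

# Toy traces: partner B locks onto A's 4 Hz shake after roughly 0.5 s.
rng = np.random.default_rng(0)
t = np.arange(0, 3, 1 / FS)
a = np.sin(2 * np.pi * 4 * t)
b = np.where(t < 0.5, 0.3 * rng.standard_normal(t.shape),
             np.sin(2 * np.pi * 4 * t))
print(dominant_frequency(a), time_to_synchrony(a, b))
```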

    INTELLIGENT CONTROL AND LEARNING OF ROBOTS INTERACTING WITH ENVIRONMENTS

    Get PDF
    Ph.D. (Doctor of Philosophy) thesis.

    Development of a Variable-Viscoelasticity Handshake Manipulator Based on Analysis of Elbow-Joint Viscoelastic Properties

    Get PDF
    [Degree requirements] Chuo University Degree Regulations, Article 4, Paragraph 1. [Thesis committee chair] Taro Nakamura (Professor, Faculty of Science and Engineering, Chuo University). [Committee members] Hiroyuki Hiraoka (Professor, Faculty of Science and Engineering, Chuo University), Mihoko Niitsuma (Associate Professor, Faculty of Science and Engineering, Chuo University), Shunji Moromugi (Associate Professor, Faculty of Science and Engineering, Chuo University), Weiwei Wan (Associate Professor, Osaka University). Doctor of Engineering, Chuo University.

    Experiments in Nonlinear Adaptive Control of Multi-Manipulator, Free-Flying Space Robots

    Get PDF
    Sophisticated robots can greatly enhance the role of humans in space by relieving astronauts of low-level, tedious assembly and maintenance chores and allowing them to concentrate on higher-level tasks. Robots and astronauts can work together efficiently, as a team, but the robot must be capable of accomplishing complex operations and yet be easy to use. Multiple cooperating manipulators are essential to dexterity and can greatly broaden the types of activities the robot can achieve; adding adaptive control can greatly ease robot usage by allowing the robot to change its own controller actions, without human intervention, in response to changes in its environment. Previous work in the Aerospace Robotics Laboratory (ARL) has shown the usefulness of a space robot with cooperating manipulators. The research presented in this dissertation extends that work by adding adaptive control.

    To help achieve this high level of robot sophistication, this research made several advances in the field of nonlinear adaptive control of robotic systems. A nonlinear adaptive control algorithm developed originally for the control of robots, but requiring joint positions as inputs, was extended here to handle the much more general case of manipulator endpoint-position commands. A new system-modelling technique, called system concatenation, was developed to simplify the generation of a system model for complicated systems, such as a free-flying multiple-manipulator robot system. Finally, the task-space concept was introduced, wherein the operator's inputs specify only the robot's task; the robot's subsequent autonomous performance of each task still involves, of course, endpoint positions and joint configurations as subsets.

    The combination of these developments resulted in a new adaptive control framework that is capable of continuously providing full adaptation capability to the complex space-robot system in all modes of operation. The new adaptive control algorithm easily handles free-flying systems with multiple, interacting manipulators, and extends naturally to even larger systems. The new adaptive controller was experimentally demonstrated on an ideal testbed in the ARL: a first-ever experimental model of a multi-manipulator, free-flying space robot capable of capturing and manipulating free-floating objects without requiring human assistance. A graphical user interface enhanced the robot's usability: it enabled an operator at a remote location to issue high-level task-description commands to the robot, and to monitor robot activities as it carried out each assignment autonomously.
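    As a hedged illustration of the kind of nonlinear adaptive control this line of work builds on, the sketch below simulates a generic Slotine-Li style adaptive law on a single gravity-free joint with unknown inertia and friction. The plant, gains, and two-parameter regressor are textbook assumptions for the example, not the dissertation's multi-manipulator, endpoint-space algorithm.

```python
# Generic Slotine-Li style adaptive tracking control of a 1-DOF plant
# m*qdd + b*qd = tau, with m and b unknown to the controller.
import numpy as np

dt, T = 0.001, 5.0
lam, kd = 5.0, 10.0
gamma = np.diag([2.0, 2.0])     # adaptation gains

m_true, b_true = 1.5, 0.4       # true plant parameters (unknown to controller)
a_hat = np.array([0.5, 0.0])    # online estimates of [m, b]

q, qd = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    # sinusoidal reference trajectory and its derivatives
    qr, qrd, qrdd = np.sin(t), np.cos(t), -np.sin(t)

    e, ed = q - qr, qd - qrd
    s = ed + lam * e            # composite tracking error
    qd_ref = qrd - lam * e      # reference velocity
    qdd_ref = qrdd - lam * ed   # reference acceleration

    Y = np.array([qdd_ref, qd_ref])   # regressor: dynamics linear in [m, b]
    tau = Y @ a_hat - kd * s          # certainty-equivalence control law
    a_hat = a_hat - (gamma @ Y) * s * dt  # gradient adaptation law

    # integrate the true plant with forward Euler
    qdd = (tau - b_true * qd) / m_true
    qd += qdd * dt
    q += qd * dt

print(f"final tracking error: {q - np.sin(T):+.4f}")
print(f"parameter estimates [m, b]: {a_hat}")
```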

    Proceedings of the NASA Conference on Space Telerobotics, volume 4

    Get PDF
    Papers presented at the NASA Conference on Space Telerobotics are compiled. The theme of the conference was man-machine collaboration in space. The conference provided a forum for researchers and engineers to exchange ideas on the research and development required to apply telerobotic technology to the space systems planned for the 1990s and beyond. Volume 4 contains papers related to the following subject areas: manipulator control; telemanipulation; flight experiments (systems and simulators); sensor-based planning; robot kinematics, dynamics, and control; robot task planning and assembly; and research activities at the NASA Langley Research Center.

    Continuous goal-directed actions: advances in robot learning

    Get PDF
    Robot Programming by Demonstration (PbD) has several limitations. This thesis proposes a solution to the shortcomings of PbD, inspired by goal-directed imitation applied to robots. A framework for goal imitation, called Continuous Goal-Directed Actions (CGDA), has been designed and developed. This framework provides a mechanism to encode actions as changes in the environment: CGDA learns the objective of the action, beyond the movements made to perform it. With CGDA, an action such as “painting a wall” can be learned as “the wall changed its color by 50% from blue to red”, whereas traditional robot-imitation paradigms such as PbD would learn the same action as “move joint i 30 degrees, then joint j 43 degrees...”. The main contribution of this thesis is a framework able to measure and generalize the effects of actions, together with metrics to compare and reproduce goal-directed actions. Reproducing actions encoded in terms of goals makes task reproduction independent of the robot's configuration, which circumvents the correspondence problem (adapting kinematic parameters from humans to robots). CGDA can thus complement current kinematics-focused paradigms, such as PbD, in robot imitation.

    CGDA action encoding is centered on the changes an action produces in the features of the objects it alters. The features can be any measurable characteristic of the objects, such as color, area or shape. By tracking object features during human action demonstrations, a high-dimensional feature trajectory is created. This trajectory represents a fine-grained sequence of object temporal states during the action, and it is the main resource for the generalization, recognition and execution of actions in CGDA.

    Around this framework, several components have been added to facilitate and improve the imitation. Naïve implementations of robot-learning frameworks usually assume that all the data from the user demonstrations has been correctly sensed and is relevant to the task; this assumption proves wrong in most human-demonstrated learning scenarios. This thesis presents an automatic demonstration- and feature-selection process to solve this issue: a machine-learning pipeline called Dissimilarity Mapping Filtering (DMF), which can filter out both irrelevant demonstrations and irrelevant features. Once an action has been generalized from a series of correct human demonstrations, the robot must be provided with a method to reproduce it. Robot joint trajectories are computed in simulation using evolutionary computation, through several proposed strategies. This computation can be improved through human-robot interaction: specifically, a system has been developed for robot discovery of motor primitives from random human-guided movements, and these Guided Motor Primitives (GMP) are combined to reproduce goal-directed actions. To test all these developments, experiments have been performed using a humanoid robot in a simulated environment and the real full-sized humanoid robot TEO. A brief analysis of the cybersecurity of current robots is additionally presented in the final appendices of this thesis.
    Official Doctoral Programme in Electrical, Electronic and Automation Engineering. Committee: Chair, Vicente Matellán Olivera; Secretary, María Dolores Blanco Rojas; Examiner, Antonio Barrientos Cruz.
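    A rough sketch of the CGDA/DMF idea under simplifying assumptions: each demonstration is encoded as a trajectory of object features (here a single hypothetical "fraction of wall repainted" feature), demonstrations are filtered by mutual dissimilarity, and the survivors are averaged into a generalized goal trajectory. The linear resampling and plain Euclidean metric are illustrative stand-ins for the thesis's actual metrics.

```python
# Hedged sketch of goal encoding and demonstration filtering in the CGDA style.
import numpy as np

def resample(traj, n=50):
    """Linearly resample a (T, F) feature trajectory to n time steps."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack(
        [np.interp(t_new, t_old, traj[:, f]) for f in range(traj.shape[1])]
    )

def dissimilarity_filter(demos, keep_sigma=1.0):
    """DMF-like step: drop demonstrations whose mean distance to the others
    is an outlier (more than keep_sigma standard deviations above the mean)."""
    resampled = [resample(np.asarray(d)) for d in demos]
    D = np.array([[np.linalg.norm(a - b) for b in resampled] for a in resampled])
    score = D.mean(axis=1)  # mean dissimilarity per demonstration
    keep = score <= score.mean() + keep_sigma * score.std()
    return [r for r, k in zip(resampled, keep) if k]

def generalize(demos):
    """Average the retained feature trajectories into one goal trajectory."""
    return np.mean(dissimilarity_filter(demos), axis=0)

# Toy demos: four runs where the wall goes from 0% to ~50% repainted,
# plus one outlier run with the wrong goal (100% repainted).
rng = np.random.default_rng(0)
good = [np.linspace(0, 0.5, 60)[:, None] + 0.01 * rng.standard_normal((60, 1))
        for _ in range(4)]
bad = [np.linspace(0, 1.0, 40)[:, None]]
goal = generalize(good + bad)
print(goal[-1])  # ~0.5: the learned goal is "wall 50% repainted"
```

    Because the learned goal lives in object-feature space rather than joint space, any robot that can drive the features along this trajectory can reproduce the task, which is how this encoding sidesteps the correspondence problem.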