
    Nonverbal Communication During Human-Robot Object Handover. Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction

    Meyer zu Borgsen S. Nonverbal Communication During Human-Robot Object Handover. Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction. Bielefeld: Universität Bielefeld; 2020. This doctoral thesis investigates the influence of nonverbal communication on human-robot object handover. Handing objects to one another is an everyday activity in which two individuals cooperate closely. Such close interactions incorporate a great deal of nonverbal communication in order to create alignment in space and time. Understanding these communication cues and transferring them to robots is becoming increasingly important as, for example, service robots are expected to interact closely with humans in the near future. Their tasks often include delivering and taking objects, so handover scenarios play an important role in human-robot interaction. Much of the work in this field focuses on the speed, accuracy, and predictability of the robot's movement during object handover. Yet robots need to be able to interact closely with naive users, not only experts. In this work I present how nonverbal communication can be implemented in robots to facilitate smooth handovers. I conducted a study in which people with different levels of experience exchanged objects with a humanoid robot. It became clear that users with little prior experience of robots in particular rely heavily on the communication cues they know from former interactions with humans. I added different gestures with the second arm, not directly involved in the transfer, to analyze their influence on synchronization, predictability, and human acceptance. Handing over an object follows a distinctive movement trajectory whose purpose is not only to bring the object or hand to the exchange position but also to socially signal the intention to exchange an object. Another common type of nonverbal communication is gaze. It allows the focus of attention of an interaction partner to be estimated and thus helps to predict the next action. In order to evaluate handover interaction performance between human and robot, I applied the developed concepts to the humanoid robot Meka M1. By adding the humanoid robot head named Floka Head to the system, I created the Floka humanoid and implemented gaze strategies that aim to increase predictability and user comfort. This thesis contributes to the field of human-robot object handover by presenting study outcomes and concepts along with an implementation of improved software modules, resulting in a fully functional object-handing humanoid robot, from perception and prediction capabilities to behaviors enhanced by features of nonverbal communication.
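    The abstract does not spell out the gaze controller itself, so the following Python sketch only illustrates the general idea of phase-dependent gaze during a handover: the robot looks where the interaction will happen next so that its focus of attention signals its upcoming action. The phase names, gaze targets, and policy table are assumptions for illustration, not the Floka implementation.

        # Minimal sketch (not the thesis implementation): mapping handover phases to
        # gaze targets so the robot's focus of attention signals its next action.
        # Phase names and targets are illustrative assumptions.

        from enum import Enum, auto

        class HandoverPhase(Enum):
            APPROACH = auto()      # robot carries the object toward the exchange point
            REACH = auto()         # arm extends to the agreed transfer location
            TRANSFER = auto()      # object is released or grasped
            RETREAT = auto()       # arm withdraws

        # A simple gaze policy: look at the upcoming interaction locus, then
        # re-establish eye contact to confirm completion.
        GAZE_POLICY = {
            HandoverPhase.APPROACH: "object",
            HandoverPhase.REACH: "exchange_location",
            HandoverPhase.TRANSFER: "partner_face",
            HandoverPhase.RETREAT: "partner_face",
        }

        def gaze_target(phase: HandoverPhase) -> str:
            """Return the symbolic gaze target for the current handover phase."""
            return GAZE_POLICY[phase]

        if __name__ == "__main__":
            for phase in HandoverPhase:
                print(phase.name, "->", gaze_target(phase))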

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own will, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care due to sub-millimeter robot control, real-time use of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also offer new capabilities in interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Technology for an intelligent, free-flying robot for crew and equipment retrieval in space

    Crew rescue and equipment retrieval is a Space Station Freedom requirement. During Freedom's lifetime, there is a high probability that a number of objects will accidentally become separated; crew members, replacement units, and key tools are examples. Retrieval of these objects within a short time is essential. Systems engineering studies were conducted to identify system requirements and candidate approaches. One such approach, based on a voice-supervised, intelligent, free-flying robot, was selected for further analysis. A ground-based technology demonstration, now in its second phase, was designed to provide an integrated robotic hardware and software testbed supporting the design of a space-borne system. The ground system, known as the EVA Retriever, is examining the problem of autonomously planning and executing a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles. The current prototype is an anthropomorphic manipulator unit with dexterous arms and hands attached to a robot body and latched into a manned maneuvering unit. A precision air-bearing floor is used to simulate space. Sensor data include two vision systems and force/proximity/tactile sensors on the hands and arms. Planning for a shuttle flight experiment is underway. A set of scenarios and strawman requirements was defined to support conceptual development. Initial design activities are expected to begin in late 1989, with the flight occurring in 1994. The flight hardware and software will be based on lessons learned from both the ground prototype and computer simulations.
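    The abstract describes the retrieval task only at a high level. As an illustration of the plan-and-execute loop it mentions (rendezvous, grapple, and return to base while avoiding obstacles), the toy 2D Python sketch below uses attractive and repulsive velocity fields; the potential-field method, the gains, and the geometry are assumptions and are not drawn from the EVA Retriever software.

        # Toy 2D sketch (not the EVA Retriever software): a retrieve-and-return loop
        # using attractive/repulsive velocity fields to rendezvous with a separated
        # object while avoiding an obstacle, then return to base. Gains are assumed.

        import numpy as np

        def velocity_command(pos, goal, obstacles, k_att=1.0, k_rep=0.5, influence=2.0):
            """Attractive pull toward the goal plus repulsive push away from nearby obstacles."""
            v = k_att * (goal - pos)
            for obs in obstacles:
                d = np.linalg.norm(pos - obs)
                if 1e-6 < d < influence:
                    v += k_rep * (1.0 / d - 1.0 / influence) * (pos - obs) / d**3
            return v

        def go_to(pos, goal, obstacles, dt=0.05, tol=0.1, max_steps=2000):
            """Integrate the velocity field until the goal is reached (or steps run out)."""
            for _ in range(max_steps):
                if np.linalg.norm(goal - pos) < tol:
                    break
                pos = pos + dt * velocity_command(pos, goal, obstacles)
            return pos

        if __name__ == "__main__":
            base = np.array([0.0, 0.0])
            target = np.array([8.0, 5.0])          # separated object to retrieve
            obstacles = [np.array([4.0, 2.0])]     # e.g. a stationary structure
            pos = go_to(base, target, obstacles)   # rendezvous (grapple not modeled)
            pos = go_to(pos, base, obstacles)      # return to base with the object
            print("final position:", pos)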

    Model-based myoelectric control of robots for assistance and rehabilitation

    The first anthropomorphic robots and exoskeletons were developed with the idea of combining man and machine into an intimate symbiotic unit that performs as one joint system. A human-robot interface consists of processes of two different natures: (1) the physical interaction (pHRI) between the device and its user and (2) the exchange of cognitive information (cHRI) between the human and the robot. To achieve symbiosis between the two actors, both need to be optimized. The evolution of mechanical design and the introduction of new materials have pushed pHRI to new frontiers in ergonomics and assistance performance. cHRI, however, still lags behind because it is more complicated: it requires communication from the cognitive processes occurring in the human agent to the robot, e.g. intention detection, but also from the robot to the human agent, e.g. feedback modalities such as haptic cues. A possible innovation is the inclusion of the electromyographic signal, the command signal from the brain to the musculoskeletal system for movement, in the robot control loop. The aim of this thesis was to develop a real-time control framework for an assistive device that can generate the same force produced by the muscles. To do this, I incorporated into the robot control loop a detailed musculoskeletal model that estimates the net torque at the joint level, taking electromyography signals and kinematic data as inputs. This module is called the myoprocessor. Here I present two applications of this control approach: the first was implemented on a soft wearable arm exosuit in order to evaluate the adaptation of the controller to different motions and loads; the second was the generation of a myoprocessor-driven force field on a planar robot manipulandum in order to study modularity changes of the musculoskeletal system. Both applications showed that the device controlled by the myoprocessor works symbiotically with the user, reducing muscular activity while preserving motor performance. The ability to seamlessly combine musculoskeletal force estimators with assistive devices opens new avenues for assisting human movement in both healthy and impaired individuals.
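    As a rough illustration of the myoprocessor concept, the Python sketch below maps EMG envelopes and a joint angle to an estimated joint torque and commands a fraction of it as assistance. The filter, the two-muscle torque model, the maximum forces, moment arms, and support ratio are all simplified assumptions; the thesis uses a detailed musculoskeletal model rather than this schematic mapping.

        # Schematic sketch of the "myoprocessor" idea (not the detailed musculoskeletal
        # model used in the thesis): EMG envelopes and joint kinematics are mapped to an
        # estimated joint torque, a fraction of which is commanded as assistance.
        # Gains, moment arms and maximum forces below are illustrative assumptions.

        import numpy as np

        def emg_envelope(raw_emg, alpha=0.1, state=0.0):
            """Rectify and low-pass filter one raw EMG sample into an activation in [0, 1]."""
            state = (1 - alpha) * state + alpha * abs(raw_emg)
            return float(np.clip(state, 0.0, 1.0)), state

        def estimate_joint_torque(activations, joint_angle):
            """Very simplified muscle-to-torque mapping: tau = sum_i a_i * F_max_i * r_i(theta)."""
            f_max = np.array([800.0, 600.0])                            # N, flexor/extensor (assumed)
            moment_arm = np.array([0.04, -0.03]) * np.cos(joint_angle)  # m, angle-dependent (assumed)
            return float(np.dot(activations, f_max * moment_arm))

        def assistance_torque(raw_emg_samples, joint_angle, support_ratio=0.5, states=None):
            """Return the torque the assistive device should add: a fraction of the human estimate."""
            states = states if states is not None else [0.0] * len(raw_emg_samples)
            activations = []
            for i, sample in enumerate(raw_emg_samples):
                a, states[i] = emg_envelope(sample, state=states[i])
                activations.append(a)
            tau_human = estimate_joint_torque(np.array(activations), joint_angle)
            return support_ratio * tau_human

        if __name__ == "__main__":
            # one control tick with two raw EMG samples (flexor, extensor) at 0.3 rad
            print(assistance_torque([0.4, 0.1], joint_angle=0.3))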

    Toward simple control for complex, autonomous robotic applications: combining discrete and rhythmic motor primitives

    Vertebrates are able to quickly adapt to new environments in a very robust, seemingly effortless way. To explain both this adaptivity and this robustness, a very promising perspective in neuroscience is the modular approach to movement generation: movements result from combinations of a finite set of stable motor primitives organized at the spinal level. In this article we apply this concept of modular movement generation to the control of robots with a high number of degrees of freedom, an issue that is challenging notably because planning complex, multidimensional trajectories in time-varying environments is a laborious and costly process. We therefore propose to decrease the complexity of the planning phase through the use of a combination of discrete and rhythmic motor primitives, leading to the decoupling of the planning phase (i.e. the choice of behavior) from the actual trajectory generation. Such an implementation eases the control of, and the switching between, different behaviors by reducing the dimensionality of the high-level commands. Moreover, since the motor primitives are generated by dynamical systems, the trajectories can be smoothly modulated, either by high-level commands to change the current behavior or by sensory feedback to adapt to environmental constraints. In order to show the generality of our approach, we apply the framework to interactive drumming and infant crawling in a humanoid robot. These experiments illustrate the simplicity of the control architecture in terms of planning, the integration of different types of feedback (vision and contact), and the capacity to autonomously switch between different behaviors (crawling and simple reaching).
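    To make the decoupling concrete, here is a minimal Python sketch in which a discrete primitive (a critically damped point attractor toward a goal g) and a rhythmic primitive (a Hopf oscillator with amplitude parameter mu and frequency omega) are superimposed. The specific equations and parameter values are assumptions, not the authors' exact formulation, but they illustrate how switching behavior reduces to changing a few high-level parameters.

        # Minimal sketch (assumed forms, not the authors' exact equations): a discrete
        # primitive sets the offset/goal and a rhythmic primitive superimposes an
        # oscillation; high-level commands only change g, mu and omega.

        def step(state, g, mu, omega, dt=0.001, b=20.0, gamma=5.0):
            """Integrate one Euler step of the combined discrete + rhythmic dynamics."""
            y, dy, x, z = state
            # discrete primitive: critically damped second-order system toward goal g
            ddy = b * (b / 4.0 * (g - y) - dy)
            # rhythmic primitive: Hopf oscillator, limit-cycle radius sqrt(mu), frequency omega
            r2 = x**2 + z**2
            dx = gamma * (mu - r2) * x - omega * z
            dz = gamma * (mu - r2) * z + omega * x
            y, dy, x, z = y + dt * dy, dy + dt * ddy, x + dt * dx, z + dt * dz
            return (y, dy, x, z), y + x   # joint command = discrete offset + oscillation

        if __name__ == "__main__":
            state, cmd = (0.0, 0.0, 0.01, 0.0), 0.0
            for k in range(5000):
                # switch behavior mid-run simply by changing the high-level parameters:
                # first an oscillation around g = 1.0, then a pure discrete move to g = 0.5
                g, mu, omega = (1.0, 0.04, 6.0) if k < 2500 else (0.5, 0.0, 6.0)
                state, cmd = step(state, g, mu, omega)
            print("final command:", cmd)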

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics

    Emerging trends in neuroscience are providing converging evidence that cortical networks in predominantly motor areas are activated in several contexts related to ‘action’ that do not cause any overt movement. Indeed, for any complex body, human or embodied robot, inhabiting unstructured environments, the dual processes of shaping motor output during action execution and providing the self with information related to the feasibility, consequences and understanding of potential actions (of oneself or others) must seamlessly alternate during goal-oriented behaviors and social interactions. While prominent approaches like Optimal Control and Active Inference converge on the role of forward models, they diverge on the underlying computational basis. In this context, revisiting older ideas from motor control such as the Equilibrium Point Hypothesis and synergy formation, this article offers an alternative perspective emphasizing the functional role of a ‘plastic, configurable’ internal representation of the body (body schema) as a critical link enabling the seamless continuum between motor control and motor imagery. With the central proposition that both “real and imagined” actions are consequences of an internal simulation process achieved through passive goal-oriented animation of the body schema, the computational/neural basis of muscleless motor synergies (and the ensuing simulated actions without movements) is explored. The rationale behind this perspective is articulated in the context of several interdisciplinary studies in motor neuroscience (for example, intracranial depth recordings from the parietal cortex and fMRI studies highlighting a shared cortical basis for action ‘execution, imagination and understanding’), animal cognition (in particular, tool-use and neuro-rehabilitation experiments revealing how coordinated tools are incorporated as an extension of the body schema) and the pertinent challenges in building cognitive robots that can seamlessly “act, interact, anticipate and understand” in unstructured natural living spaces.
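    The notion of passive, goal-oriented animation of the body schema can be illustrated with a small kinematic sketch in the spirit of the Passive Motion Paradigm (the exact formulation in the article may differ): a virtual force field toward the goal relaxes an internal two-link arm model without issuing any motor command, producing a simulated, ‘muscleless’ action. The link lengths, stiffness, and admittance below are assumptions, not values from the article.

        # Minimal sketch of passive, goal-directed animation of an internal body schema.
        # No motor command is ever issued: the internal model is simply relaxed toward
        # the goal, yielding a simulated (imagined) action and its predicted outcome.

        import numpy as np

        L1, L2 = 0.3, 0.25   # assumed link lengths of the internal arm model (m)

        def forward_kinematics(q):
            """End-effector position of the internal two-link arm model."""
            x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
            y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
            return np.array([x, y])

        def jacobian(q):
            """Geometric Jacobian of the internal arm model."""
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                             [ L1 * c1 + L2 * c12,  L2 * c12]])

        def simulate_action(q, goal, stiffness=20.0, admittance=2.0, dt=0.01, steps=3000):
            """Animate the body schema toward the goal; purely internal simulation."""
            for _ in range(steps):
                force = stiffness * (goal - forward_kinematics(q))   # virtual force field
                torque = jacobian(q).T @ force                       # projected to the joints
                q = q + dt * admittance * torque                     # passive relaxation
            return q, forward_kinematics(q)

        if __name__ == "__main__":
            q0 = np.array([0.3, 0.8])
            q_final, reached = simulate_action(q0, goal=np.array([0.35, 0.25]))
            print("imagined final posture:", q_final, "reaches", reached)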

    Generating whole-body movements for dynamic anthropomorphic systems under constraints

    This thesis studies the question of whole-body motion generation for anthropomorphic systems. Within this work, the problem of modeling and control is considered by addressing the difficult issue of generating human-like motion. First, a dynamic model of the humanoid robot HRP-2 is elaborated based on the recursive Newton-Euler algorithm for spatial vectors. A new dynamic control scheme is then developed that uses a cascade of quadratic programs (QP) to optimize cost functions and compute the torque commands while satisfying equality and inequality constraints. The cascade of quadratic programs is defined by a stack of tasks associated with a priority order. Next, we propose a unified formulation of planar contact constraints and demonstrate that the proposed method allows multiple non-coplanar contacts to be taken into account and generalizes the common ZMP constraint when only the feet are in contact with the ground. We then link motion generation algorithms from robotics to human motion capture tools by developing an original motion generation method aimed at imitating human motion. This method is based on reshaping the captured data and editing the motion using the hierarchical solver introduced above together with the definition of dynamic tasks and constraints. It allows a captured human motion to be adjusted so that it is reproduced faithfully on a humanoid while respecting the humanoid's own dynamics. Finally, in order to simulate movements resembling those of humans, we develop an anthropomorphic model with a higher number of degrees of freedom than HRP-2. The generic solver is used to simulate motion on this new model. A sequence of tasks is defined to describe a scenario played by a human. Through a simple qualitative analysis of the motion, we demonstrate that taking the dynamics into account provides a natural way to generate human-like movements.
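    The thesis solves a cascade of quadratic programs handling both equality and inequality constraints; the Python sketch below is a reduced, equality-only stand-in that shows the priority mechanism of a stack of tasks through recursive null-space projection. The task Jacobians and targets are random placeholders, and the labels "CoM" and "hand pose" are only examples of what the priority levels could represent.

        # Simplified sketch of a prioritized "stack of tasks" solver. Each priority
        # level is solved in the null space of the levels above it, so lower-priority
        # tasks cannot disturb higher-priority ones. Equality tasks only.

        import numpy as np

        def solve_stack(tasks, n_dof, damping=1e-6):
            """tasks: list of (J, e) pairs ordered by priority; returns joint command dq."""
            dq = np.zeros(n_dof)
            N = np.eye(n_dof)                      # null-space projector of solved tasks
            for J, e in tasks:
                Jp = J @ N                         # restrict the task to the remaining freedom
                # damped least squares for the residual of this priority level
                JJt = Jp @ Jp.T + damping * np.eye(J.shape[0])
                dq = dq + N @ Jp.T @ np.linalg.solve(JJt, e - J @ dq)
                # shrink the available null space for lower-priority tasks
                N = N @ (np.eye(n_dof) - np.linalg.pinv(Jp) @ Jp)
            return dq

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n = 10                                  # placeholder number of joints
            task_high = (rng.standard_normal((3, n)), rng.standard_normal(3))  # e.g. CoM
            task_low = (rng.standard_normal((6, n)), rng.standard_normal(6))   # e.g. hand pose
            dq = solve_stack([task_high, task_low], n)
            print("high-priority residual:", np.linalg.norm(task_high[0] @ dq - task_high[1]))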