864 research outputs found

    Emergent coordination between humans and robots

    Emergent coordination, or movement synchronization, is an often observed phenomenon in human behavior. Humans synchronize their gait when walking next to each other, they synchronize their postural sway when standing close together, and they synchronize their movement behavior in many other situations of daily life. Why humans do this is an important question of ongoing research in many disciplines: movement synchronization apparently plays a role in children’s development and learning; it is related to our social and emotional behavior in interaction with others; it is an underlying principle in the organization of communication by means of language and gesture; and finally, models explaining movement synchronization between two individuals can also be extended to group behavior. Overall, movement synchronization is an important principle of human interaction behavior. Besides interacting with other humans, in recent years humans have increasingly interacted with technology. This interaction first concerned machines in industrial settings, then moved on to human-computer interaction, and now faces a new challenge: the interaction with active and autonomous machines, that is, with robots. If the vision of today’s robot developers comes true, in the near future robots will be fully integrated not only into our workplaces but also into our private lives. They are supposed to support humans in activities of daily living and even care for them. These circumstances, however, require the development of interaction principles that a robot can apply in direct interaction with humans. This dissertation outlines the problem of robots entering human society and emphasizes the need to explore human interaction principles that are transferable to human-robot interaction. 
Furthermore, an overview of human movement synchronization, a very important phenomenon in human interaction, is given, ranging from neural correlates to social behavior. The argument of this dissertation is that human movement synchronization is a simple but striking human interaction principle that can be applied in human-robot interaction to support human activities of daily living, demonstrated using the example of pick-and-place tasks. This argument is based on five publications. In the first publication, human movement synchronization is explored in goal-directed tasks that bear similar requirements to pick-and-place tasks in activities of daily living. To explore whether a merely repetitive action of the robot is sufficient to encourage human movement synchronization, the second publication reports a human-robot interaction study in which a human interacts with a non-adaptive robot. Here, however, movement synchronization between human and robot does not emerge, which underlines the need for adaptive mechanisms. Therefore, in the third publication, human adaptive behavior in goal-directed movement synchronization is explored. To make the findings from the previous studies applicable to human-robot interaction, the fourth publication outlines the development of an interaction model based on dynamical systems theory that is ready for implementation on a robotic platform. Following this, a brief overview of a first human-robot interaction study based on the developed interaction model is provided. The last publication describes an extension of the previous approach that also accounts for the human tendency to adapt movements to events. A first human-robot interaction study is also reported here, which confirms the applicability of the model. 
The dissertation concludes with a discussion of the presented findings in the light of human-robot interaction and psychological aspects of joint action research, as well as the problem of mutual adaptation.
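Interaction models based on dynamical systems theory, as used in the fourth publication above, are commonly built from coupled phase oscillators. The following sketch is an illustrative textbook-style model, not the dissertation's actual implementation: two mutually coupled oscillators with different natural frequencies phase-lock once the coupling exceeds half their frequency detuning.

```python
import math

def simulate(w1, w2, k, steps=20000, dt=0.001):
    """Euler-integrate two mutually coupled phase oscillators:

        dtheta1/dt = w1 + k * sin(theta2 - theta1)
        dtheta2/dt = w2 + k * sin(theta1 - theta2)

    Returns the final phase difference wrapped to [-pi, pi].
    """
    th1, th2 = 0.0, 1.0  # arbitrary initial phases
    for _ in range(steps):
        d = th2 - th1
        th1 += (w1 + k * math.sin(d)) * dt
        th2 += (w2 - k * math.sin(d)) * dt
    return (th2 - th1 + math.pi) % (2 * math.pi) - math.pi

locked = simulate(w1=1.0, w2=1.2, k=0.5)    # strong coupling: phase-locks
unlocked = simulate(w1=1.0, w2=1.2, k=0.0)  # no coupling: phase drifts
```

With `k = 0` the phase difference drifts freely; with `k = 0.5` it settles near `asin((w2 - w1) / (2 * k))`, the classic phase-locking condition for this family of models.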

    A²ML: A general human-inspired motion language for anthropomorphic arms based on movement primitives

    Recent increasing demands for accomplishing complicated manipulation tasks necessitate the development of effective task-motion planning techniques. When such tasks are performed in the vicinity of humans by robot arms that resemble an anthropomorphic arrangement, a dedicated and unified anthropomorphism-aware task-motion planning framework is needed so that nearby humans can understand the robot's movement intention and are not made uneasy or uncomfortable, toward safe human–robot interaction. A general human-inspired four-level Anthropomorphic Arm Motion Language (A²ML) is therefore proposed for the first time to serve as this framework. First, six hypotheses/rules of human arm motion are extracted from the neurophysiological literature, which form the basis and guidelines for the design of A²ML. Inspired by these rules, a library of movement primitives and a related motion grammar are designed to build the complete motion language. The movement primitives in the library are designed from two different but associated representation spaces of arm configuration: the Cartesian-posture-swivel-angle space and the human arm triangle space. Since these two spaces can always be identified for any anthropomorphic arm, the designed movement primitives and the resulting motion language possess favorable generality. Decomposition techniques described by the A²ML grammar are proposed to decompose complicated tasks into movement primitives. Furthermore, a quadratic-programming-based method and a sampling-based method serve as powerful interfaces for transforming decomposed tasks expressed in A²ML into the specific joint trajectories of different arms. Finally, the generality and advantages of the proposed motion language are validated by extensive simulations and experiments on two different anthropomorphic arms
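The decomposition of complicated tasks into movement primitives via a motion grammar can be pictured as a rewriting system. The sketch below is a deliberately toy illustration: the primitive names and rewrite rules are invented here and are not the actual A²ML library or grammar.

```python
# Illustrative primitive library and rewriting grammar (hypothetical names,
# not the real A²ML specification).
PRIMITIVES = {"reach", "orient", "grasp", "transfer", "release", "retract"}

# Grammar: each composite symbol rewrites to a sequence of sub-symbols.
GRAMMAR = {
    "pick": ["reach", "orient", "grasp"],
    "place": ["transfer", "release", "retract"],
    "pick_and_place": ["pick", "place"],
}

def decompose(task):
    """Recursively rewrite a task until only primitives remain."""
    if task in PRIMITIVES:
        return [task]
    seq = []
    for sub in GRAMMAR[task]:
        seq.extend(decompose(sub))
    return seq

plan = decompose("pick_and_place")
```

A planner interface of the kind the abstract describes would then map each symbol in `plan` to joint trajectories for the specific arm, e.g. via quadratic programming or sampling.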

    All Hands on Deck: Choosing Virtual End Effector Representations to Improve Near Field Object Manipulation Interactions in Extended Reality

    Extended reality, or XR, is the widely adopted umbrella term that collectively describes Virtual reality (VR), Augmented reality (AR), and Mixed reality (MR) technologies. Together, these technologies extend the reality we experience, either by creating a fully immersive experience as in VR or by blending the virtual and real worlds as in AR and MR. The sustained success of XR in the workplace largely hinges on its ability to facilitate efficient user interactions. As when interacting with objects in the real world, users in XR typically interact with virtual elements such as objects, menus, windows, and information that combine to form the overall experience. Most of these interactions involve near-field object manipulation, for which users are generally provided with visual representations of themselves, also called self-avatars. Representations that involve only the distal entity are called end-effector representations, and they shape how users perceive XR experiences. Through a series of investigations, this dissertation evaluates the effects of virtual end-effector representations on near-field object retrieval interactions in XR settings. Through studies conducted in virtual, augmented, and mixed reality, implications for the virtual representation of end effectors are discussed, and inferences are drawn for the future of near-field interaction in XR. This body of research aids technologists and designers by providing details that help in tailoring the right end-effector representation to improve near-field interactions, thereby establishing knowledge that shapes the future of interactions in XR

    Generating whole body movements for dynamics anthropomorphic systems under constraints

    This thesis studies the question of whole-body motion generation for anthropomorphic systems. Within this work, the problem of modeling and control is considered by addressing the difficult issue of generating human-like motion. First, a dynamic model of the humanoid robot HRP-2 is elaborated based on the recursive Newton-Euler algorithm for spatial vectors. A new dynamic control scheme is then developed, adopting a cascade of quadratic programs (QP) that optimize cost functions and compute the torque control while satisfying equality and inequality constraints. The cascade of quadratic programs is defined by a stack of tasks associated with a priority order. Next, we propose a unified formulation of planar contact constraints and demonstrate that the proposed method allows taking into account multiple non-coplanar contacts and generalizes the common ZMP constraint when only the feet are in contact with the ground. Then, we link motion generation algorithms from robotics to human motion capture tools by developing an original method of motion generation aimed at imitating human motion. This method is based on reshaping the captured data and editing the motion using the previously introduced hierarchical solver together with the definition of dynamic tasks and constraints. It allows adjusting a captured human motion so as to reliably reproduce it on a humanoid while respecting the humanoid's own dynamics. Finally, in order to simulate movements resembling those of humans, we develop an anthropomorphic model with a higher number of degrees of freedom than HRP-2. The generic solver is used to simulate motion on this new model. 
A sequence of tasks is defined to describe a scenario played by a human. By a simple qualitative analysis of motion, we demonstrate that taking into account the dynamics provides a natural way to generate human-like movements
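The stack-of-tasks idea behind the cascade of QPs can be illustrated with its simplest kinematic analogue: a strict two-level hierarchy in which a secondary posture task is projected into the nullspace of a primary reaching task. The planar 2-DoF arm below is a hand-rolled sketch under that assumption, not the thesis's torque-level solver.

```python
import math

L1 = L2 = 1.0  # link lengths of an illustrative planar 2-DoF arm

def fk_x(q1, q2):
    """End-effector x coordinate (forward kinematics)."""
    return L1 * math.cos(q1) + L2 * math.cos(q1 + q2)

def step(q1, q2, x_des, q_pref, gain=0.5, k0=0.2):
    """One prioritized update: primary task = reach x_des,
    secondary task = pull joints toward q_pref inside the nullspace."""
    # 1x2 Jacobian of the primary task
    j1 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j2 = -L2 * math.sin(q1 + q2)
    jj = j1 * j1 + j2 * j2 + 1e-9           # J J^T (scalar, damped)
    e = gain * (x_des - fk_x(q1, q2))       # primary task correction
    p1, p2 = j1 * e / jj, j2 * e / jj       # J^+ * e (pseudoinverse step)
    # Nullspace projector N = I - J^+ J applied to the posture gradient
    g1, g2 = k0 * (q_pref[0] - q1), k0 * (q_pref[1] - q2)
    n1 = g1 - (j1 * j1 * g1 + j1 * j2 * g2) / jj
    n2 = g2 - (j2 * j1 * g1 + j2 * j2 * g2) / jj
    return q1 + p1 + n1, q2 + p2 + n2

q1, q2 = 0.3, 0.8
for _ in range(200):
    q1, q2 = step(q1, q2, x_des=1.2, q_pref=(0.2, 1.0))
```

The posture objective moves the joints only through N = I - J⁺J, so it cannot disturb the primary task to first order; the cascade of QPs in the thesis generalizes this strict priority to inequality constraints and torque control.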

    The State of Lifelong Learning in Service Robots: Current Bottlenecks in Object Perception and Manipulation

    Service robots are appearing more and more in our daily lives. The development of service robots combines multiple fields of research, from object perception to object manipulation. The state of the art continues to improve toward a proper coupling between object perception and manipulation. This coupling is necessary for service robots not only to perform various tasks in a reasonable amount of time but also to continually adapt to new environments and safely interact with non-expert human users. Nowadays, robots are able to recognize various objects and quickly plan a collision-free trajectory to grasp a target object in predefined settings. Moreover, in most cases these approaches rely on large amounts of training data. As a result, the knowledge of such robots is fixed after the training phase, and any change in the environment requires complicated, time-consuming, and expensive robot re-programming by human experts. These approaches are therefore still too rigid for real-life applications in unstructured environments, where a significant portion of the environment is unknown and cannot be directly sensed or controlled. In such environments, no matter how extensive the training data used for batch learning, a robot will always face new objects. Therefore, apart from batch learning, the robot should be able to continually learn about new object categories and grasp affordances from very few training examples on-site. Moreover, apart from robot self-learning, non-expert users could interactively guide the process of experience acquisition by teaching new concepts or by correcting insufficient or erroneous concepts. In this way, the robot will constantly learn how to help humans in everyday tasks, gaining more and more experience without the need for re-programming
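The open-ended learning loop described above, learning new categories from a few on-site examples with a user able to teach or correct concepts, can be sketched with the simplest possible instance-based learner. This is a minimal illustration (a nearest-centroid classifier over hand-made feature vectors), not the perception pipeline of any actual service robot.

```python
class IncrementalCentroidLearner:
    """Minimal sketch of open-ended category learning: each category is a
    running mean of its feature vectors, updated one example at a time."""

    def __init__(self):
        self.centroids = {}  # label -> (mean feature vector, example count)

    def teach(self, label, features):
        """Add one labeled example; new labels create new categories."""
        if label not in self.centroids:
            self.centroids[label] = (list(features), 1)
            return
        mean, n = self.centroids[label]
        mean = [(m * n + f) / (n + 1) for m, f in zip(mean, features)]
        self.centroids[label] = (mean, n + 1)

    def predict(self, features):
        """Return the label of the nearest centroid (squared distance)."""
        def dist2(mean):
            return sum((m - f) ** 2 for m, f in zip(mean, features))
        return min(self.centroids, key=lambda lb: dist2(self.centroids[lb][0]))

robot = IncrementalCentroidLearner()
robot.teach("mug", [0.9, 0.1])
robot.teach("mug", [0.8, 0.2])
robot.teach("plate", [0.1, 0.9])
# A new category can be added later without any retraining phase:
robot.teach("bottle", [0.5, 0.9])
```

Teaching a new label simply adds a centroid, and corrections fold into the running mean with no retraining phase, which is the core contrast with the fixed batch-trained models criticized above.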

    Annotated Bibliography: Anticipation


    Toward Robots with Peripersonal Space Representation for Adaptive Behaviors

    The abilities to adapt and act autonomously in an unstructured, human-oriented environment are vital for the next generation of robots, which aim to cooperate safely with humans. While this adaptability is natural and feasible for humans, it is still very complex and challenging for robots. Observations and findings from psychology and neuroscience regarding the development of the human sensorimotor system can inform the development of novel approaches to adaptive robotics. Among these is the formation of the representation of the space closely surrounding the body, the Peripersonal Space (PPS), from multisensory sources such as vision, hearing, touch and proprioception, which helps to facilitate human activities within their surroundings. Taking inspiration from the virtual safety margin formed by the PPS representation in humans, this thesis first constructs an equivalent model of the safety zone for each body part of the iCub humanoid robot. This PPS layer serves as a distributed collision predictor, which translates visually detected objects approaching a robot's body parts (e.g., arm, hand) into probabilities of a collision between those objects and body parts. This leads to adaptive avoidance behaviors in the robot via an optimization-based reactive controller. Notably, this visual reactive control pipeline can also seamlessly incorporate tactile input to guarantee safety in both pre- and post-collision phases of physical Human-Robot Interaction (pHRI). Concurrently, the controller is also able to take into account multiple targets (of manipulation reaching tasks) generated by a multiple Cartesian point planner. 
All components, namely the PPS, the multi-target motion planner (for manipulation reaching tasks), the reaching-with-avoidance controller and the human-centred visual perception, are combined harmoniously to form a hybrid control framework designed to provide safety for robots' interactions in a cluttered environment shared with human partners. Later, motivated by the development of manipulation skills in infants, in which multisensory integration is thought to play an important role, a learning framework is proposed to allow a robot to learn the processes of forming sensory representations, namely visuomotor and visuotactile, from its own motor activities in the environment. Both multisensory integration models are constructed with Deep Neural Networks (DNNs) in such a way that their outputs are represented in motor space to facilitate the robot's subsequent actions
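The PPS layer's role as a distributed collision predictor, turning the distance and approach velocity of a seen object into a collision probability that drives avoidance, can be sketched as follows. The logistic mapping and all constants are illustrative assumptions, not the learned iCub representation.

```python
import math

def collision_probability(distance, approach_speed, d0=0.3, k=10.0):
    """Map an object's distance (m) and approach speed (m/s) toward a body
    part to a collision probability in (0, 1) via a logistic curve.
    d0 (margin radius) and k (steepness) are illustrative constants."""
    if approach_speed <= 0.0:  # receding or static object: use raw distance
        return 1.0 / (1.0 + math.exp(k * (distance - d0)))
    # Shrink the effective distance for fast-approaching objects,
    # so the same position is judged riskier when closing speed is high.
    effective = distance / (1.0 + approach_speed)
    return 1.0 / (1.0 + math.exp(k * (effective - d0)))

def avoidance_velocity(prob, direction_away, vmax=0.2):
    """Scale a unit vector pointing away from the object by the risk."""
    return [vmax * prob * c for c in direction_away]

near = collision_probability(0.10, approach_speed=0.5)
far = collision_probability(0.60, approach_speed=0.5)
```

A reactive controller of the kind the thesis describes would feed such per-body-part probabilities into its optimization as repulsive terms or constraints; here `avoidance_velocity` stands in for that step.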

    Motion Planning : from Digital Actors to Humanoid Robots

    The goal of this work is to develop motion planning algorithms for human-like figures, taking into account the geometry, kinematics and dynamics of the mechanism and its environment. By motion planning we mean the ability to specify high-level directives and transform them into low-level instructions for the articulations of the human-like figure, while considering obstacle avoidance within the environment. As a result, one can express directives such as “carry this plate from the table to the piano corner” and have them translated into a series of goals and constraints that produce the pertinent motions of the robot's articulations, carrying out the action while avoiding collisions with the obstacles in the room. Our algorithms are based on the observation that humans do not plan their exact motions when getting to a location. We roughly plan our direction and, as we advance, we execute the motions needed to get to the desired place. This has led us to design algorithms that: 1. Produce a rough collision-free path that takes a simplified model of the mechanism to the desired location. 2. Use available controllers to generate a trajectory that assigns values to each of the mechanism's articulations to follow the path. 3. Modify these trajectories iteratively until all the geometric, kinematic and dynamic constraints of the problem are satisfied. Throughout this work, we apply this three-stage approach to the problem of generating motions for human-like figures that manipulate bulky objects while walking. In the process, several interesting problems and their solutions are brought into focus. 
These problems are three-dimensional collision avoidance, two-hand object manipulation, cooperative manipulation among several characters or robots, and the combination of different behaviors. The main contribution of this work is the modeling of the automatic generation of cooperative manipulation motions. This model considers the above difficulties, all in the context of bipedal walking mechanisms. Three principles inform the model: a functional decomposition of the mechanism's limbs, a model for cooperative manipulation, and a simplified model to represent the mechanism when generating the rough path. This work is, above all, one of synthesis. We make use of available techniques for controlling locomotion of bipedal mechanisms (controllers), from the fields of computer graphics and robotics, and connect them to a novel motion planner. This motion planner is controller-agnostic, that is, it is able to produce collision-free motions with any controller, despite any errors introduced by the controller itself. Of course, the performance of our motion planner depends on the quality of the controller used. In this thesis, the motion planner, connected to different controllers, is used and tested on different mechanisms, both virtual and physical. This was done in the context of different research projects in France, Russia and Japan, where we provided the motion planning framework for their controllers. Several papers in peer-reviewed international conferences have resulted from these collaborations. The present work compiles these results and provides a more comprehensive and detailed depiction of the system and its benefits, both when applied to different mechanisms and compared to alternative approaches
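The three-stage approach above (a rough path for a simplified model, a controller-generated trajectory, and iterative modification until the constraints hold) can be condensed into a planner skeleton. Everything here is a toy stand-in, a point mechanism on a line, a controller that overshoots, and a corridor constraint, meant only to show the control flow, not the thesis's planner.

```python
def rough_path(start, goal, n=5):
    """Stage 1: collision-free path for a simplified model (straight line)."""
    return [start + (goal - start) * i / n for i in range(n + 1)]

def run_controller(path):
    """Stage 2: a controller tracks the path, introducing overshoot."""
    return [p * 1.1 for p in path]  # toy tracking error

def violates(traj, lo=-0.1, hi=1.05):
    """Stage 3 check: indices where the corridor constraint is broken."""
    return [i for i, p in enumerate(traj) if not (lo <= p <= hi)]

def plan(start, goal, max_iters=20):
    path = rough_path(start, goal)
    traj = run_controller(path)
    for _ in range(max_iters):
        bad = violates(traj)
        if not bad:
            return traj
        for i in bad:            # Stage 3 repair: pull violating
            path[i] *= 0.9       # waypoints back toward the bounds
        traj = run_controller(path)
    raise RuntimeError("no feasible trajectory found")

traj = plan(0.0, 1.0)
```

The controller-agnostic property the abstract claims corresponds here to `plan` treating `run_controller` as a black box: only its output trajectory is checked and repaired.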