171 research outputs found

    An Object Template Approach to Manipulation for Semi-autonomous Avatar Robots

    The first steps towards using mobile robots to perform manipulation tasks in remote environments have now been taken. This opens new possibilities for research and development, since robots can help humans perform tasks in many scenarios. A remote robot can be used as an avatar in applications such as medicine and industry, in rescue and disaster-recovery tasks in environments that might be hazardous for human beings to enter, as well as in more distant scenarios such as planetary exploration. Among the most typical applications in recent years, research towards deploying robots to mitigate disaster scenarios has been of great interest in the robotics field. Disaster scenarios present challenges that need to be tackled. Their unstructured nature makes them difficult to predict, and even though some assumptions can be made for human-designed scenarios, there is no certainty about the expected conditions. Communication with a robot inside these scenarios might also be challenged: wired communication limits reachability, and wireless communication is limited by bandwidth. Despite the great progress in robotics research, these difficulties have prevented current autonomous robotic approaches from performing efficiently in unstructured remote scenarios. On one side, acquiring physical and abstract information from unknown objects fully autonomously under uncontrolled environmental conditions is still an unsolved problem; several challenges have to be overcome, among them object recognition, grasp planning, manipulation, and mission planning. On the other side, purely teleoperated robots require a reliable communication link, robust with respect to reachability, bandwidth, and latency, which can provide all the feedback a human operator needs in order to achieve sufficiently good situational awareness, e.g., the world model, the robot state, and the forces and torques exerted.
Processing this amount of information, plus the training necessary to perform joint motions with the robot, represents a high mental workload for the operator, which results in very long execution times. Additionally, a purely teleoperated approach is error-prone, given that success in a manipulation task strongly depends on the ability and expertise of the human operating the robot. Both autonomous and teleoperated robotic approaches have pros and cons; for this reason, a middle-ground approach has emerged. In an approach where a human supervises a semi-autonomous remote robot, the strengths of fully autonomous and purely teleoperated approaches can be combined while their weaknesses are mitigated. A remote manipulation task can be divided into sub-tasks such as planning, perception, action, and evaluation. A proper distribution of these sub-tasks between the human operator and the remote robot can increase the efficiency and likelihood of success of a manipulation task. On the one hand, a human operator can trivially plan a task (planning), identify objects in the sensor data acquired by the robot (perception), and verify the completion of a task (evaluation). On the other hand, it is challenging to remotely control, in joint space, a robotic system like a humanoid robot, which can easily have over 25 degrees of freedom (DOF). For this reason, in this approach the complex sub-tasks such as motion planning, motion execution, and obstacle avoidance (action) are performed autonomously by the remote robot. With this distribution of tasks, the challenge of converting the operator's intent into a robot action arises. This thesis investigates how to efficiently provide a remote robot with the operator's intent through a flexible means of interaction. While current approaches focus on an object-grasp-centered means of interaction, this thesis aims at providing physical and abstract properties of the objects of interest.
With this information, the robot can perform autonomous sub-tasks such as locomotion through the environment, grasping objects, and manipulating them at an affordance level while avoiding collisions with the environment, in order to efficiently accomplish the required manipulation task. For this purpose, the concept of the Object Template (OT) has been developed in this thesis. An OT is a virtual representation of an object of interest that contains information a remote robot can use to manipulate that object or similar ones. The object template concept presented here goes beyond state-of-the-art related concepts by extending the robot's capability to use affordance information of the object. The concept includes physical information (mass, center of mass, inertia tensor) as well as abstract information (potential grasps, affordances, and usabilities). Because humans are very good at analysing a situation, planning new ways to solve a task, and even repurposing objects, it is important that the planning and perception performed by the operator can be communicated, so that the robot can execute the action based on the information contained in the OT. This combines human intelligence with robot capabilities. For example, in a 3D environment an OT can be visualized as a 3D geometry mesh that simulates an object of interest. A human operator can manipulate the OT and move it so that it overlaps with the visualized sensor data of the real object. The object template type and its pose can be compressed and sent over a low-bandwidth communication link. The remote robot can then use the information in the OT to approach, grasp, and manipulate the real object. The use of remote humanoid robots as avatars is expected to be intuitive to operators (or potential human response forces), since their kinematic chains and degrees of freedom are similar to those of humans.
This allows operators to visualize themselves in the remote environment and reason about how to solve a task; however, required items such as special tools might not be available. For this reason, a flexible means of interaction that allows improvisation by the operator is also needed. In this approach, improvisation is described as "a change of a plan on how to achieve a certain task, depending on the current situation". A human operator can then improvise by adapting the affordances of known objects to new, unknown objects, for example by applying the affordances defined in an OT to a new object that has similar physical properties or whose manipulation skills belong to the same class. The experimental results presented in this thesis validate the proposed approach by demonstrating the successful achievement of several manipulation tasks using object templates. Systematic laboratory experimentation has been performed to evaluate the individual aspects of this approach. The performance of the approach has been tested on three different humanoid robotic systems (one of these robots belongs to another research laboratory). These three robotic platforms also participated in the renowned international DARPA Robotics Challenge (DRC), which between 2012 and 2015 was considered the most ambitious and challenging robotics competition.
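The low-bandwidth interaction described above — the operator aligns a template with the sensor data, and only the template type and its pose cross the link — can be sketched in a few lines of Python. All names and fields below are illustrative assumptions for this sketch, not the thesis's actual data structures or message format:

```python
import struct
from dataclasses import dataclass, field

@dataclass
class ObjectTemplate:
    """Illustrative virtual representation of an object of interest.

    Physical properties support dynamics-aware motion planning; abstract
    properties (grasps, affordances) encode how the object can be used.
    """
    name: str                  # template type, e.g. "drill" (hypothetical)
    mass: float                # kg
    com: tuple                 # center of mass (x, y, z) in template frame
    inertia: list              # 3x3 inertia tensor, row-major
    grasps: list = field(default_factory=list)       # candidate grasp poses
    affordances: list = field(default_factory=list)  # e.g. "trigger_press"

def encode_for_uplink(template_id: int, pose: tuple) -> bytes:
    """Only the template id and a 6-DOF pose (x, y, z, roll, pitch, yaw)
    need to cross the low-bandwidth link; the full template library is
    assumed to be stored on the robot side."""
    return struct.pack("<i6f", template_id, *pose)

# A 28-byte message stands in for streaming full 3D sensor data:
msg = encode_for_uplink(7, (1.0, 0.2, 0.8, 0.0, 0.0, 1.57))
```

The design point the sketch illustrates is that the expensive perception happens at the operator station, while the bytes on the wire stay constant-size regardless of scene complexity.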

    Artificial Intelligence: Robots, Avatars, and the Demise of the Human Mediator

    Published in cooperation with the American Bar Association Section of Dispute Resolution

    As technology has advanced, many have wondered whether (or simply when) artificially intelligent devices will replace the humans who perform complex, interactive, interpersonal tasks such as dispute resolution. Has science now progressed to the point that artificial intelligence devices can replace human mediators, arbitrators, dispute resolvers and problem solvers? Can humanoid robots, attractive avatars and other relational agents create the requisite level of trust and elicit the truthful, perhaps intimate or painful, disclosures often necessary to resolve a dispute or solve a problem? This article will explore these questions. Regardless of whether the reader is convinced that the demise of the human mediator or arbitrator is imminent, one cannot deny that artificial intelligence now has the capability to assume many of the responsibilities currently being performed by alternative dispute resolution (ADR) practitioners. It is fascinating (and perhaps unsettling) to realize the complexity and seriousness of tasks currently delegated to avatars and robots. This article will review some of those delegations and suggest how the artificial intelligence developed to complete those assignments may be relevant to dispute resolution and problem solving. “Relational Agents,” which can have a physical presence such as a robot, be embodied in an avatar, or have no detectable form whatsoever and exist only as software, are able to create long-term socio-economic relationships with users built on trust, rapport and therapeutic goals. Relational agents are interacting with humans in circumstances that have significant consequences in the physical world. These interactions provide insights as to how robots and avatars can participate productively in dispute resolution processes. Can human mediators and arbitrators be replaced by robots and avatars that not only physically resemble humans, but also act, think, and reason like humans?
And, to raise a particularly interesting question: can robots, avatars, and other relational agents look, move, act, think, and reason even “better” than humans?

    Cognitive Reasoning for Compliant Robot Manipulation

    Physically compliant contact is a major element for many tasks in everyday environments. A universal service robot that is utilized to collect leaves in a park, polish a workpiece, or clean solar panels requires the cognition and manipulation capabilities to facilitate such compliant interaction. Evolution equipped humans with advanced mental abilities to envision physical contact situations and their resulting outcome, dexterous motor skills to perform the actions accordingly, as well as a sense of quality to rate the outcome of the task. In order to achieve human-like performance, a robot must provide the necessary methods to represent, plan, execute, and interpret compliant manipulation tasks. This dissertation covers those four steps of reasoning in the concept of intelligent physical compliance. The contributions advance the capabilities of service robots by combining artificial intelligence reasoning methods and control strategies for compliant manipulation. A classification of manipulation tasks is conducted to identify the central research questions of the addressed topic. Novel representations are derived to describe the properties of physical interaction. Special attention is given to wiping tasks which are predominant in everyday environments. It is investigated how symbolic task descriptions can be translated into meaningful robot commands. A particle distribution model is used to plan goal-oriented wiping actions and predict the quality according to the anticipated result. The planned tool motions are converted into the joint space of the humanoid robot Rollin' Justin to perform the tasks in the real world. In order to execute the motions in a physically compliant fashion, a hierarchical whole-body impedance controller is integrated into the framework. The controller is automatically parameterized with respect to the requirements of the particular task. Haptic feedback is utilized to infer contact and interpret the performance semantically. 
Finally, the robot is able to compensate for possible disturbances, as it plans additional recovery motions while effectively closing the cognitive control loop. Among others, the developed concept is applied in an actual space robotics mission, in which an astronaut aboard the International Space Station (ISS) commands Rollin' Justin to maintain a Martian solar panel farm in a mock-up environment. This application demonstrates the far-reaching impact of the proposed approach and the associated opportunities that emerge with the availability of cognition-enabled service robots.
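The physically compliant execution described in this abstract rests on impedance control, in which the end effector behaves like a virtual spring-damper rather than a rigid position tracker. A minimal one-dimensional sketch of the underlying law follows; the gains and the wiping scenario are illustrative assumptions, not parameters of Rollin' Justin's actual whole-body controller:

```python
def impedance_force(x, x_des, v, v_des, stiffness, damping):
    """One-axis Cartesian impedance law: F = K*(x_des - x) + D*(v_des - v).

    Low stiffness yields compliant contact: a position error against a
    surface produces a bounded contact force instead of a hard push.
    """
    return stiffness * (x_des - x) + damping * (v_des - v)

# A wiping tool pressed 5 mm "into" the surface, at rest, under a
# compliant vs. a stiff parameterization (illustrative gains):
soft = impedance_force(x=0.005, x_des=0.0, v=0.0, v_des=0.0,
                       stiffness=200.0, damping=20.0)    # gentle contact
stiff = impedance_force(x=0.005, x_des=0.0, v=0.0, v_des=0.0,
                        stiffness=5000.0, damping=20.0)  # forceful contact
```

Task-specific parameterization, as the abstract describes, then amounts to choosing stiffness and damping per task: low gains for polishing a delicate workpiece, higher gains where firm contact is required.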

    Towards an understanding of humanoid robots in eLC applications

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of the book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section of the book (Chapters 17 to 22) presents applications related to affective computing.

    Uncannily Human - Experimental Investigation of the Uncanny Valley Phenomenon

    Since its introduction into scientific discourse in 1970 (Mori, 1970; Mori et al., 2012), the uncanny valley has been one of the most discussed and referenced theories in the field of robotics. Although the theory was postulated more than 40 years ago, it has barely been tested empirically. Only in the last seven years have researchers from robotics and other disciplines begun to investigate the uncanny valley more systematically, and many questions remain open. Some of these questions have been addressed in this research project in the course of four consecutive studies. The project focussed on the systematic investigation of how static and dynamic characteristics of robots, such as appearance and movement, determine evaluations of and behavior towards robots.
The work applied a multi-methodological approach, and the various observed effects were examined with regard to their importance for the assumed uncanny valley. In addition, previously proposed explanations for the uncanny valley effect were tested. The first study utilized qualitative interviews in which participants were presented with pictures and videos of humanoid and android robots, to explore participants' evaluations of very human-like robots, their attitudes about these robots, and their emotional reactions towards these robots. Results showed that emotional experiences, if existent, were very individual. The robots' appearance was of great importance to the participants: certain design characteristics were equated with certain abilities, a merely human appearance without connected functionality was not appreciated, and human rules of attractiveness were applied to the android robots. The analysis also demonstrated the importance of the robots' movements and of the social context they were placed in. First evidence was found supporting the assumptions that participants experienced uncertainty about how to categorize android robots (as human or machine) and that they felt uncomfortable at the thought of being replaced by robots. The influence of movement, one of the important factors in the uncanny valley hypothesis, was examined in the second study. In a quasi-experimental observational field study, people were confronted with the android robot Geminoid HI-1, which was either moving or not moving. These interactions between humans and the android robot were analyzed with regard to the participants' nonverbal behavior (e.g. attention paid to the robot, proximity). Results show that participants' behavior towards the android robot was influenced by the behavior the robot displayed: when the robot showed movement behavior, participants engaged in longer interactions, established more eye-contact, and tested the robot's capabilities.
The robot's behavior also served as a cue for the participants to categorize the robot as such. The aspect of robot appearance was examined systematically in the third study, in order to identify robot attractiveness indices or design characteristics that determine how people perceive robots. A web-based survey was conducted with standardized pictures of 40 different mechanoid, humanoid, and android robots. A cluster analysis revealed six clusters of robots, which were rated significantly differently on six dimensions. Possible relationships between design characteristics and the evaluation of robots have been outlined. Moreover, it was tested whether the data of this study are best explained by a cubic function, as would be suggested by the graph proposed by Mori. Results revealed that the data are best explained by linear or quadratic relationships. The last study systematically tested perception-oriented and evolutionary-biological explanations for the uncanny valley. In this multi-methodological study, self-report and behavioral data were combined with functional magnetic resonance imaging in order to examine whether the observed effects in self-report and behavior occur due to a) additional processing during face perception of human and robotic stimuli, b) automatically elicited processes of social cognition, or c) oversensitivity of the behavioral immune system. The study found strong support for perception-oriented explanations of the uncanny valley effect. First, the effects seem to be driven by face perception processes. Further, there were indicators that categorical perception of robots and humans takes place. In contrast, evolutionary-biological explanations, which assume that uncanny-valley-related reactions are due to oversensitivity of the behavioral immune system, were not supported by this work. Altogether, this dissertation explored the characteristics of robots that are relevant for the uncanny valley hypothesis. Uncanny-valley-related responses were examined using a variety of measures, for instance self-report, behavior, and brain activation, allowing conclusions about how the choice of measurement influences the detection of such responses. Most importantly, explanations for the uncanny valley were tested systematically, and support was found for cognitive- and perception-oriented explanations.
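The model-comparison step in the third study — testing whether Mori's cubic curve fits the ratings better than simpler models — amounts to comparing polynomial fits of increasing degree. A minimal sketch with synthetic data follows; the real study used the survey ratings, and these numbers, axis names, and the R-squared criterion are illustrative assumptions only:

```python
import numpy as np

def fit_quality(x, y, degree):
    """Least-squares polynomial fit; returns R^2 for the given degree."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic likability ratings over human-likeness (illustrative only):
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)             # human-likeness of 40 robots
y = 2.0 * x + rng.normal(0.0, 0.1, 40)    # a truly linear relationship

# When the underlying relationship is linear, the cubic fit gains almost
# nothing over the linear one; a genuine uncanny valley would require the
# cubic term to add substantial explained variance.
r2 = {degree: fit_quality(x, y, degree) for degree in (1, 2, 3)}
```

Since the polynomial models are nested, the higher-degree fit can never explain less variance; the study's finding is that the cubic's extra flexibility bought essentially nothing over linear or quadratic fits.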