8 research outputs found

    Context-aware Mission Control for Astronaut-Robot Collaboration

    Space robot assistants are envisaged as semi-autonomous co-workers deployed to lighten the workload of astronauts in cumbersome and dangerous situations. With this in view, this work examines the technology requirements for future space robot operations by presenting a novel mission control concept for close astronaut-robot collaboration. A decentralized approach is proposed in which an astronaut is put in charge of commanding the robot, while a mission control center on Earth maintains a list of authorized robot actions by applying symbolic, geometric, and context-specific filters. The concept is applied to actual space robot operations within the METERON SUPVIS Justin experiment. In particular, it is shown how the concept guides an astronaut aboard the ISS in the mission to survey and maintain a solar panel farm in a simulated Mars environment.
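    The layered-filter idea lends itself to a compact illustration. Below is a minimal sketch, assuming hypothetical names such as `symbolic_filter` and a toy `Action` record; it is not the experiment's actual software, only the pattern of every candidate action having to pass symbolic, geometric, and context-specific checks before it appears in the astronaut's authorized list.

```python
# Illustrative sketch (not the mission's software): a mission control center
# maintains the set of authorized robot actions by chaining symbolic,
# geometric, and context-specific filters over all candidate actions.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str            # e.g. "inspect_panel" (hypothetical)
    target: str          # object the action operates on, e.g. "solar_panel_3"
    reachable: bool      # geometric feasibility, precomputed from robot state
    preconditions: set = field(default_factory=set)  # symbolic facts required

def symbolic_filter(actions, world_facts):
    # Keep actions whose symbolic preconditions hold in the current world state.
    return [a for a in actions if a.preconditions <= world_facts]

def geometric_filter(actions):
    # Keep actions whose target is geometrically reachable by the robot.
    return [a for a in actions if a.reachable]

def context_filter(actions, mission_phase, allowed_by_phase):
    # Keep actions permitted in the current mission phase.
    return [a for a in actions if a.name in allowed_by_phase[mission_phase]]

def authorized_actions(actions, world_facts, mission_phase, allowed_by_phase):
    actions = symbolic_filter(actions, world_facts)
    actions = geometric_filter(actions)
    actions = context_filter(actions, mission_phase, allowed_by_phase)
    return actions  # the astronaut chooses only from this authorized list
```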

    A Knowledge-Driven Shared Autonomy Human-Robot Interface for Tablet Computers

    Autonomous and teleoperated robots have proven capable of solving complex manipulation tasks. However, autonomous robots can only be deployed in predefined domains, and teleoperation requires the full attention of a human operator. Combining the autonomous capabilities of the robot with teleoperation is therefore desirable, yet balancing the workload between robot and operator in intuitive human-robot interfaces is still an open issue. In this paper, we present a knowledge-driven tablet computer application for commanding a robot at a high level of abstraction. The application guides an operator's decisions based on the actual world state of the robot and enables the operator to command object-centered actions, which are autonomously interpreted symbolically and geometrically by the robot. We evaluate our approach using the humanoid robot Rollin' Justin in an elaborate manipulation experiment and a user study. We show that our approach efficiently balances the workload between robot and operator and provides an intuitive interface for human-robot interaction.
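    As a rough illustration of the object-centered commanding idea, the sketch below derives the actions offered to the operator from a toy world state. The `ACTION_RULES` table and all identifiers are hypothetical, not the paper's implementation; the point is that the interface only ever offers actions that are valid for the selected object in its current state.

```python
# Hypothetical sketch: the interface derives, from the robot's current world
# state, which high-level actions apply to each known object, and offers
# only those to the operator.
WORLD_STATE = {
    "valve_1":  {"type": "valve",  "state": "closed"},
    "drawer_2": {"type": "drawer", "state": "open"},
}

# Action applicability rules keyed by object type and state (illustrative).
ACTION_RULES = {
    ("valve", "closed"):  ["open_valve", "inspect"],
    ("valve", "open"):    ["close_valve", "inspect"],
    ("drawer", "open"):   ["close_drawer", "fetch_item"],
    ("drawer", "closed"): ["open_drawer"],
}

def available_actions(obj_id):
    obj = WORLD_STATE[obj_id]
    return ACTION_RULES.get((obj["type"], obj["state"]), [])

def command(obj_id, action):
    # The operator picks an object and one of its valid actions; the robot
    # resolves the symbolic command into geometry (grasping, motion) itself.
    assert action in available_actions(obj_id), "UI never offers invalid actions"
    print(f"dispatching {action} on {obj_id} for autonomous execution")

command("valve_1", "open_valve")
```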

    Robotics Platforms Incorporating Manipulators Having Common Joint Designs

    Manipulators in accordance with various embodiments of the invention can be utilized to implement statically stable robots capable of both dexterous manipulation and versatile mobility. Manipulators in accordance with one embodiment of the invention include: an azimuth actuator; three elbow joints, each including two actuators that are offset to allow greater than 360 degrees of rotation per joint; a first connecting structure that connects the azimuth actuator to the first of the three elbow joints; a second connecting structure that connects the first elbow joint to the second of the three elbow joints; a third connecting structure that connects the second elbow joint to the third of the three elbow joints; and an end-effector interface connected to the third of the three elbow joints.
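    The claimed structure is a serial chain, which can be summarized as plain data. The sketch below is only a schematic reading of the abstract; the offsets and ranges are illustrative placeholders, not figures from the patent.

```python
# Illustrative encoding of the claimed kinematic layout (names and numbers
# are mine, not the patent's): an azimuth actuator, three dual-actuator
# elbow joints joined by connecting structures, and an end-effector interface.
from dataclasses import dataclass

@dataclass
class ElbowJoint:
    # Two actuators offset along the link axis so neither housing blocks the
    # other, allowing the joint to rotate through more than 360 degrees.
    actuator_offsets_m: tuple   # axial offsets of the two actuators (placeholder)
    range_deg: float = 720.0    # > 360 degrees per joint (placeholder)

manipulator = [
    ("azimuth_actuator", None),
    ("connecting_structure_1", ElbowJoint(actuator_offsets_m=(0.0, 0.08))),
    ("connecting_structure_2", ElbowJoint(actuator_offsets_m=(0.0, 0.08))),
    ("connecting_structure_3", ElbowJoint(actuator_offsets_m=(0.0, 0.08))),
    ("end_effector_interface", None),
]
```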

    Model-Augmented Haptic Telemanipulation: Concept, Retrospective Overview, and Current Use Cases

    Certain telerobotic applications, including telerobotics in space, pose particularly demanding challenges to both technology and humans. Traditional bilateral telemanipulation approaches often cannot be used in such applications due to technical and physical limitations such as long and varying delays, packet loss, and limited bandwidth, as well as high reliability, precision, and task duration requirements. In order to close this gap, we research model-augmented haptic telemanipulation (MATM), which uses two kinds of models: a remote model that enables shared autonomous functionality of the teleoperated robot, and a local model that aims to generate assistive augmented haptic feedback for the human operator. Several technological methods that form the backbone of the MATM approach have already been successfully demonstrated in accomplished telerobotic space missions. On this basis, we have applied our approach in more recent research to applications in the fields of orbital robotics, telesurgery, caregiving, and telenavigation. In the course of this work, we have advanced the specific aspects of the approach that were of particular importance for each respective application, especially shared autonomy and haptic augmentation. This overview paper discusses the MATM approach in detail, presents the latest research results of the various technologies encompassed within this approach, provides a retrospective of DLR's telerobotic space missions, demonstrates the broad application potential of MATM based on the aforementioned use cases, and outlines lessons learned and open challenges.
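    The two-model split can be pictured with a small sketch. The code below is a schematic of my own under the paper's description, not DLR's implementation: a remote model refines delayed commands on the robot side, while a simple spring-like local model renders assistive haptic guidance to the operator without waiting for the round trip.

```python
# Schematic sketch of the MATM split (illustrative, not the actual system).

def remote_model_step(commanded_pose, sensed_environment):
    # Runs on the robot: refine the delayed operator command using local
    # sensing, e.g. snap to a detected grasp pose (hypothetical placeholder).
    return sensed_environment.get("refined_grasp_pose", commanded_pose)

def local_model_force(operator_pose, guidance_pose, stiffness=200.0):
    # Runs on the operator side: a spring toward the local model's guidance
    # pose approximates assistive augmented haptic feedback.
    return [stiffness * (g - p) for p, g in zip(operator_pose, guidance_pose)]

# The operator feels guidance immediately from the local model ...
force = local_model_force(operator_pose=[0.0, 0.1, 0.3],
                          guidance_pose=[0.0, 0.0, 0.3])
# ... while the remote model keeps execution robust despite seconds of delay.
```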

    An Object Template Approach to Manipulation for Semi-autonomous Avatar Robots

    The first steps towards using mobile robots to perform manipulation tasks in remote environments have now been made possible. This opens new possibilities for research and development, since robots can help humans perform tasks in many scenarios. A remote robot can be used as an avatar in medical or industrial applications, in rescue and disaster recovery tasks in environments too hazardous for humans to enter, and in more distant scenarios such as planetary exploration. Among the most typical applications in recent years, research towards deploying robots to mitigate disaster scenarios has been of great interest in the robotics field. Disaster scenarios present challenges that need to be tackled. Their unstructured nature makes them difficult to predict, and even though some assumptions can be made for human-designed scenarios, there is no certainty about the expected conditions. Communication with a robot inside these scenarios may also be challenged: wired communications limit reachability, and wireless communications are limited by bandwidth. Despite the great progress in the robotics research field, these difficulties have prevented current autonomous robotic approaches from performing efficiently in unstructured remote scenarios. On the one hand, acquiring physical and abstract information from unknown objects in a fully autonomous way under uncontrolled environmental conditions is still an unsolved problem; several challenges have to be overcome, such as object recognition, grasp planning, manipulation, and mission planning, among others. On the other hand, purely teleoperated robots require a reliable communication link, robust to reachability, bandwidth, and latency constraints, that can provide all the feedback a human operator needs in order to achieve sufficiently good situational awareness, e.g., world model, robot state, and the forces and torques exerted. Processing this amount of information, plus the training necessary to perform joint motions with the robot, imposes a high mental workload on the operator and results in very long execution times. Additionally, a purely teleoperated approach is error-prone, given that success in a manipulation task strongly depends on the ability and expertise of the human operating the robot. Autonomous and teleoperated robotic approaches both have pros and cons; for this reason, a middle-ground approach has emerged. When a human supervises a semi-autonomous remote robot, the strengths of fully autonomous and purely teleoperated approaches can be combined while their weaknesses are mitigated. A remote manipulation task can be divided into sub-tasks such as planning, perception, action, and evaluation. A proper distribution of these sub-tasks between the human operator and the remote robot can increase the efficiency and likelihood of success of a manipulation task. On the one hand, a human operator can trivially plan a task (planning), identify objects in the sensor data acquired by the robot (perception), and verify the completion of a task (evaluation). On the other hand, it is challenging to remotely control, in joint space, a robotic system like a humanoid robot that can easily have over 25 degrees of freedom (DOF). For this reason, in this approach the complex sub-tasks such as motion planning, motion execution, and obstacle avoidance (action) are performed autonomously by the remote robot, as sketched below.
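    The sub-task distribution just described can be stated compactly; the encoding below is illustrative only, with the labels taken from the thesis.

```python
# Minimal sketch of the human/robot sub-task split described above.
TASK_SPLIT = {
    "planning":   "operator",  # trivially plans the task
    "perception": "operator",  # identifies objects in the robot's sensor data
    "action":     "robot",     # motion planning, execution, obstacle avoidance
    "evaluation": "operator",  # verifies completion of the task
}

def assignee(subtask):
    return TASK_SPLIT[subtask]
```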
With this distribution of tasks, the challenge of converting the operator's intent into a robot action arises. This thesis investigates how to efficiently convey the operator's intent to a remote robot through a flexible means of interaction. While current approaches focus on an object-grasp-centered means of interaction, this thesis aims at providing physical and abstract properties of the objects of interest. With this information, the robot can perform autonomous sub-tasks such as locomotion through the environment, grasping objects, and manipulating them at an affordance level while avoiding collisions with the environment, in order to efficiently accomplish the required manipulation task. For this purpose, the concept of the Object Template (OT) has been developed in this thesis. An OT is a virtual representation of an object of interest that contains information a remote robot can use to manipulate that object or other similar objects. The object template concept presented here goes beyond state-of-the-art related concepts by extending the robot's capability to use affordance information about the object. The concept includes physical information (mass, center of mass, inertia tensor) as well as abstract information (potential grasps, affordances, and usabilities). Because humans are very good at analyzing a situation, planning new ways of solving a task, and even using objects for different purposes, it is important to let the operator communicate the planning and perception they have performed, so that the robot can execute the action based on the information contained in the OT. This combines human intelligence with robot capabilities. For example, in a 3D environment an OT can be visualized as a 3D geometry mesh that simulates an object of interest. A human operator can manipulate the OT and move it so that it overlaps with the visualized sensor data of the real object. The object template's type and pose can be compressed and sent over a low-bandwidth communication link. The remote robot can then use the information in the OT to approach, grasp, and manipulate the real object. The use of remote humanoid robots as avatars is expected to be intuitive to operators (or potential human response forces), since their kinematic chains and degrees of freedom are similar to those of humans. This allows operators to visualize themselves in the remote environment and think about how to solve a task; however, task requirements such as special tools might not be available. For this reason, a flexible means of interaction that allows the operator to improvise is also needed. In this approach, improvisation is described as "a change of a plan on how to achieve a certain task, depending on the current situation". A human operator can then improvise by adapting the affordances of known objects to new, unknown objects, for example by applying the affordances defined in an OT to a new object that has similar physical properties or whose manipulation skills belong to the same class. The experimental results presented in this thesis validate the proposed approach by demonstrating the successful achievement of several manipulation tasks using object templates. Systematic laboratory experimentation has been performed to evaluate the individual aspects of this approach. The performance of the approach has been tested on three different humanoid robotic systems (one of which belongs to another research laboratory).
These three robotic platforms also participated in the renowned international DARPA Robotics Challenge (DRC), which ran from 2012 to 2015 and was considered the most ambitious and challenging robotics competition of its time.
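    A minimal sketch of what an OT might look like as a data structure, under the description above; the field names and the `pack_command` helper are hypothetical, but they illustrate why only a template id and pose need to cross the low-bandwidth link.

```python
# Hypothetical encoding of an Object Template (OT): a virtual object model
# combining physical and abstract information, posed by the operator and
# referenced by the robot to plan the manipulation itself.
from dataclasses import dataclass, field

@dataclass
class ObjectTemplate:
    template_type: str                 # e.g. "door_handle" (illustrative)
    mass: float                        # kg
    center_of_mass: tuple              # (x, y, z) in the template frame
    inertia_tensor: tuple              # 3x3 matrix, row-major
    grasps: list = field(default_factory=list)       # candidate grasp poses
    affordances: list = field(default_factory=list)  # e.g. "turn", "pull"
    usabilities: list = field(default_factory=list)  # e.g. "pry", "hook"

def pack_command(template_id: int, pose):
    # Only the template id and its 6-DOF pose cross the link; the robot looks
    # up the full template locally and plans approach, grasp, and manipulation.
    return (template_id, tuple(pose))  # a few dozen bytes, not a point cloud
```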

    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    This research investigates the possibility of improving current teleoperation control for heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as Virtual Reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve the current approach to teleoperating heterogeneous robot teams.

    Adaptive Shared Autonomy between Human and Robot to Assist Mobile Robot Teleoperation

    Teleoperation of mobile robots is widely used when it is impractical or infeasible for a human to be present but human decision-making is still required. On the one hand, controlling the robot without assistance is stressful and error-prone for the human due to time delays and a lack of situational awareness; on the other hand, despite recent achievements, a fully autonomous robot still cannot execute tasks independently based on current models of perception and control. Therefore, both the human and the robot must remain in the control loop, each contributing intelligence to task execution. This means the human should share autonomy with the robot during operation. The challenge, however, is how best to coordinate these two sources of intelligence, human and robot, to ensure safe and efficient task execution in teleoperation. This thesis therefore proposes a novel strategy: it models user intent as a contextual task to complete an action primitive and provides the operator with appropriate motion assistance once the task is recognized. In this way, the robot intelligently handles ongoing tasks based on contextual information, relieves the operator's workload, and improves task performance. To implement this strategy and to account for the uncertainties in sensing and processing environmental information and user input (i.e., the contextual information), a probabilistic shared-autonomy framework is introduced that recognizes, with uncertainty measures, the contextual task the operator is performing with the robot and offers the operator appropriate task-execution assistance according to these measures. Since the way an operator executes a task is implicit, manually modeling the motion patterns of task execution is non-trivial; a set of data-driven approaches is therefore used to derive the patterns of different task executions from human demonstrations and to adapt to the operator's needs in an intuitive way over the long term. The practicality and scalability of the proposed approaches are demonstrated through extensive experiments both in simulation and on the real robot. With the proposed approaches, the operator can be actively and appropriately assisted by increasing the robot's cognitive capability and autonomy flexibility.
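    A minimal sketch of the probabilistic recognition-and-assistance loop described above, assuming a simple Bayesian belief over candidate contextual tasks and confidence-weighted blending of user and robot commands; all names and numbers are illustrative, not the thesis's implementation.

```python
# Illustrative shared-autonomy loop: recognize the task with uncertainty,
# then arbitrate assistance by the confidence in the recognized task.
def update_belief(belief, likelihoods):
    # Bayesian update: P(task | input) is proportional to P(input | task) * P(task).
    posterior = {t: belief[t] * likelihoods[t] for t in belief}
    z = sum(posterior.values()) or 1.0
    return {t: p / z for t, p in posterior.items()}

def blend_command(user_cmd, assist_cmd, confidence):
    # Shared control: the more certain the recognized task, the more the
    # robot's assistive motion contributes to the executed command.
    return [(1 - confidence) * u + confidence * a
            for u, a in zip(user_cmd, assist_cmd)]

belief = {"dock": 0.5, "pass_doorway": 0.5}                    # uniform prior
belief = update_belief(belief, {"dock": 0.9, "pass_doorway": 0.2})
task, conf = max(belief.items(), key=lambda kv: kv[1])
cmd = blend_command(user_cmd=[0.4, 0.0], assist_cmd=[0.5, 0.1], confidence=conf)
```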