6 research outputs found

    Multi-Agent Heterogeneous Digital Twin Framework with Dynamic Responsibility Allocation for Complex Task Simulation

    To become helpful assistants in our daily lives, robots must be able to understand the effects of their actions on their environment. A modern approach to this is physics simulation, often relying on very general simulation engines. As a result, specific modeling features, such as multi-contact simulation or fluid dynamics, may not be well represented. To improve the representativeness of simulations, we propose a framework for combining the estimations of multiple heterogeneous simulations into a single one. The framework couples multiple simulations and reorganizes them based on semantically annotated action-sequence information. While each object in the scene is always covered by a simulation, this simulation responsibility can be reassigned online. In this paper, we introduce the concept of the framework, describe its architecture, and demonstrate two example implementations. Finally, we show how the framework can be used to simulate action executions on the humanoid robot Rollin' Justin with the goal of extracting the semantic state, and how this information is used to assess whether an action sequence was executed successfully.
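    The responsibility-reassignment idea described above can be sketched in a few lines. All names below are hypothetical illustrations, not the authors' implementation: each object is owned by exactly one simulator at a time, and a semantically annotated action can trigger a handover to a simulator that supports the required feature.

```python
# Minimal sketch of dynamic simulation-responsibility allocation:
# each object is always owned by exactly one simulator, and ownership
# can be reassigned online when an annotated action needs a feature.

class ResponsibilityAllocator:
    def __init__(self, simulators, default):
        self.simulators = simulators   # simulator name -> supported features
        self.default = default         # fallback simulator name
        self.assignment = {}           # object -> owning simulator

    def register(self, obj):
        """Every object is covered by a simulation from the start."""
        self.assignment[obj] = self.default

    def on_action(self, obj, required_feature):
        """Reassign the object to a simulator supporting the feature."""
        for name, features in self.simulators.items():
            if required_feature in features:
                self.assignment[obj] = name
                return name
        return self.assignment[obj]    # no match: keep the current owner

sims = {"rigid": {"contact"}, "fluid": {"fluid_dynamics"}}
alloc = ResponsibilityAllocator(sims, default="rigid")
alloc.register("cup")
alloc.on_action("cup", "fluid_dynamics")   # a pouring action needs fluids
print(alloc.assignment["cup"])             # fluid
```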

    Audio Perception in Robotic Assistance for Human Space Exploration: A Feasibility Study

    Future crewed missions beyond low Earth orbit will greatly rely on the support of robotic assistance platforms to perform inspection and manipulation of critical assets. This includes crew habitats, landing sites, or assets for life support and operation. Maintenance and manipulation of a crewed site in extra-terrestrial environments is a complex task, and the system will have to face different challenges during operation. While most may be solved autonomously, on certain occasions human intervention will be required. The telerobotic demonstration mission Surface Avatar, led by the German Aerospace Center (DLR) with its partner, the European Space Agency (ESA), investigates different approaches offering astronauts on board the International Space Station (ISS) control of ground robots in representative scenarios, e.g. a Martian landing and exploration site. In this work, we present a feasibility study on how to integrate auditory information into the mentioned application. We discuss methods for obtaining audio information and localizing audio sources in the environment, as well as fusing auditory and visual information to perform state estimation based on the gathered data. We demonstrate our work in different experiments to show the effectiveness of utilizing audio information, present the results of a spectral analysis of our mission assets, and discuss how this information could help future astronauts reason about the current mission situation.
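    Fusing an auditory and a visual estimate of the same quantity could, under simple Gaussian assumptions, look like classic inverse-variance weighting. The sketch below is illustrative only and is not the estimation method used in the study.

```python
# Illustrative sketch: inverse-variance fusion of an auditory and a
# visual position estimate of the same source (1-D for simplicity).
# The noisier sensor (larger variance) gets the smaller weight.

def fuse(x_audio, var_audio, x_visual, var_visual):
    w_a = 1.0 / var_audio
    w_v = 1.0 / var_visual
    x_fused = (w_a * x_audio + w_v * x_visual) / (w_a + w_v)
    var_fused = 1.0 / (w_a + w_v)   # fused estimate is tighter than either
    return x_fused, var_fused

# Audio says 2.0 m with variance 4.0; vision says 1.0 m with variance 1.0.
x, var = fuse(2.0, 4.0, 1.0, 1.0)
print(round(x, 2), round(var, 2))  # 1.2 0.8
```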

    Extending the Knowledge Driven Approach for Scalable Autonomy Teleoperation of a Robotic Avatar

    Crewed missions to celestial bodies such as the Moon and Mars are the focus of an increasing number of space agencies. Precautions to ensure a safe landing of the crew on the extraterrestrial surface, as well as reliable infrastructure at the remote location for bringing the crew back home, are key considerations for mission planning. The European Space Agency (ESA) identified in its Terrae Novae 2030+ roadmap that robots are needed as precursors and scouts to ensure the success of such missions. An important role these robots will play is supporting the astronaut crew in orbit in carrying out scientific work, and ultimately ensuring nominal operation of the support infrastructure for astronauts on the surface. The METERON SUPVIS Justin ISS experiments demonstrated that supervised autonomy robot command can be used for executing inspection, maintenance, and installation tasks using a robotic co-worker on a planetary surface. The knowledge-driven approach utilized in the experiments reached its limits only when situations arose that were not anticipated by the mission design. In deep-space scenarios, the astronauts must be able to overcome these limitations. An approach towards more direct command of a robot was demonstrated in the METERON ANALOG-1 ISS experiment, in which an astronaut used haptic telepresence to command a robotic avatar on the surface to execute sampling tasks. In this work, we propose a system that combines supervised autonomy and telepresence by extending the knowledge-driven approach. The knowledge management is based on organizing the prior knowledge of the robot in an object-centered context. Action Templates are used to define the knowledge on handling objects on a symbolic and geometric level. This robot-agnostic system can be used for supervisory command of any robotic co-worker.
By integrating the robot itself as an object into the object-centered domain, robot-specific skills and (tele-)operation modes can be injected into the existing knowledge management system by formulating respective Action Templates. To make efficient use of advanced teleoperation modes, such as haptic telepresence, a variety of input devices are integrated into the proposed system. This work shows how the integration of these devices is realized in a way that is agnostic to both the input devices and the operation modes. The proposed system is evaluated in the Surface Avatar ISS experiment. This work shows how the system is integrated into a Robot Command Terminal featuring a 3-degree-of-freedom joystick and a 7-degree-of-freedom haptic input device in the Columbus module of the ISS. In the preliminary experiment sessions of Surface Avatar, two astronauts in orbit took command of the humanoid service robot Rollin' Justin in Germany. This work presents and discusses the results of these ISS-to-ground sessions and derives requirements for extending the scalable autonomy system for use with a heterogeneous robotic team.
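    A minimal sketch of the object-centered Action Template idea follows (hypothetical names, not the actual DLR system): templates are registered per object type, and because the robot is itself an object in the domain, a teleoperation mode plugs into the same registry as an ordinary manipulation skill.

```python
# Sketch: object-centered knowledge with Action Templates (ATs).
# Treating the robot as an object lets robot-specific operation modes
# be injected through the same mechanism as object-handling knowledge.

class ActionTemplate:
    def __init__(self, name, applies_to):
        self.name = name              # symbolic action name
        self.applies_to = applies_to  # object type this AT handles

registry = {}  # object type -> list of ATs

def add_template(at):
    registry.setdefault(at.applies_to, []).append(at)

def available_actions(obj_type):
    return [at.name for at in registry.get(obj_type, [])]

# Ordinary object-handling knowledge...
add_template(ActionTemplate("grasp", "Bottle"))
# ...and a robot-specific teleoperation mode, injected the same way
# because the robot is just another object in the domain.
add_template(ActionTemplate("haptic_telepresence", "RollinJustin"))

print(available_actions("RollinJustin"))  # ['haptic_telepresence']
```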

    On Realizing Multi-Robot Command through Extending the Knowledge Driven Teleoperation Approach

    Future crewed planetary missions will strongly depend on the support of crew-assistance robots for the setup and inspection of critical assets, such as return vehicles, before and after crew arrival. To efficiently accomplish a high variety of tasks, we envision the use of a heterogeneous team of robots commanded at various levels of autonomy. This work presents an intuitive and versatile command concept for such robot teams using a multi-modal Robot Command Terminal (RCT) on board a crewed vessel. We employ object-centered prior knowledge management that stores the information on how to deal with the objects around the robot. This includes knowledge on detecting, reasoning on, and interacting with the objects. The latter is organized in the form of Action Templates (ATs), which allow for hybrid planning of a task, i.e. reasoning on the symbolic and the geometric level to verify feasibility and find a suitable parameterization of the involved actions. Furthermore, by also treating the robots as objects, robot-specific skillsets can easily be integrated by embedding the skills in ATs. A Multi-Robot World State Representation (MRWSR) is used to instantiate actual objects and their properties. The decentralized synchronization of the MRWSR across multiple robots supports task execution when communication between all participants cannot be guaranteed. To account for robot-specific perception properties, information is stored independently for each robot and shared among all participants. This enables continuous robot- and command-specific decisions on which information to use to accomplish a task. A Mission Control instance allows the available command possibilities to be tuned to specific users, robots, or scenarios. The operator uses an RCT to command robots based on the object-based knowledge representation, whereas the MRWSR serves as a robot-agnostic interface to the planetary assets.
The selection of a robot to be commanded serves as the top-level filter for the available commands. A second filter layer is applied by selecting an object instance. These filters reduce the multitude of available commands to a set that is meaningful and manageable for the operator. Robot-specific direct teleoperation skills are accessible via their respective ATs and can be mapped dynamically to available input devices. Using AT-specific parameters provided by the robot for each input device allows robot-agnostic usage, as well as different control modes, e.g. velocity, model-mediated, or domain-based passivity control, depending on the current communication characteristics. The concept will be evaluated on board the ISS within the Surface Avatar experiments.
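    The two-layer filtering (first by robot, then by object instance) can be illustrated with a toy command table; the names below are hypothetical and not the actual RCT data model.

```python
# Sketch of the two-layer command filter: selecting a robot and then an
# object instance narrows the full command set down to a short list
# that is manageable for the operator.

commands = [
    ("justin", "lander", "inspect_panel"),
    ("justin", "bottle", "grasp"),
    ("rover",  "sample", "pick_up"),
    ("rover",  "lander", "dock"),
]

def filter_commands(robot, obj):
    """Apply both filter layers: robot first, then object instance."""
    return [cmd for r, o, cmd in commands if r == robot and o == obj]

# Four commands exist overall, but the operator only ever sees the
# intersection of the two selections.
print(filter_commands("justin", "lander"))  # ['inspect_panel']
```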

    Introduction to Surface Avatar: the First Heterogeneous Robotic Team to be Commanded with Scalable Autonomy from the ISS

    Robotics is vital to the continued development toward Lunar and Martian exploration, in-situ resource utilization, and surface infrastructure construction. Large-scale extra-terrestrial missions will require teams of robots with different, complementary capabilities, together with a powerful, intuitive user interface for effective commanding. We introduce Surface Avatar, the newest ISS-to-Earth telerobotic experiment series, to be conducted in 2022-2024. Spearheaded by DLR together with ESA, Surface Avatar builds on expertise in commanding robots with different levels of autonomy from our past telerobotic experiments: Kontur-2, Haptics, Interact, SUPVIS Justin, and Analog-1. A team of four heterogeneous robots in a multi-site analog environment at DLR is at the command of a crew member on the ISS. The team comprises a humanoid robot for dexterous object handling, construction, and maintenance; a rover for long traverses and sample acquisition; a quadrupedal robot for scouting and exploring difficult terrains; and a lander with a robotic arm for component delivery and sample stowage. The crew's command terminal is multimodal, with an intuitive graphical user interface, a 3-DOF joystick, and a 7-DOF input device with force feedback. The autonomy of any robot can be scaled up and down depending on the task and the astronaut's preference: acting as an avatar of the crew in haptically coupled telepresence, or receiving task-level commands like an intelligent co-worker. Through the crew performing collaborative tasks in exploration and construction scenarios, we hope to gain insight into how to optimally command robots in a future space mission. This paper presents findings from the first preliminary session in June 2022 and discusses the way forward for the planned experiment sessions.
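    Scaling a robot's autonomy between haptically coupled telepresence and task-level command could be modeled as moving along a small ordered set of modes. The sketch below uses hypothetical names and is in no way the flight software.

```python
# Sketch: scalable autonomy as an ordered set of command modes, from
# direct haptic coupling up to task-level commands for a co-worker.

MODES = ["telepresence", "shared_control", "task_level"]

class RobotProxy:
    def __init__(self, name, mode="task_level"):
        self.name = name
        self.mode = mode

    def scale_autonomy(self, step):
        """Move up (+1) or down (-1) the autonomy scale, clamped."""
        i = MODES.index(self.mode) + step
        self.mode = MODES[max(0, min(i, len(MODES) - 1))]
        return self.mode

justin = RobotProxy("Rollin' Justin")
justin.scale_autonomy(-1)   # astronaut takes more direct control
print(justin.mode)          # shared_control
```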

    Multimodal State Observation and Hierarchical Failure Handling for Reactive Task and Motion Planning in Robotics

    In the future, robots are expected to assist in various domains, ranging from care-giving to planetary exploration. Integrated task and motion planning enables a robot to schedule sophisticated action sequences to solve a certain task. However, the slightest deviation from the planned sequence may result in execution failure. To become truly autonomous, robots have to acknowledge feedback and recover accordingly. In this thesis, we propose a novel formalism to observe the current world state during execution and detect deviations, in order to react to uncertainties and prevent failure cases. A hierarchical failure handling system manages reactions on different levels. We introduce Reaction Templates (RTs) to define generic rule-based activation of context-specific reaction patterns. This allows the use of classical deterministic approaches for fast planning, combined with reaction mechanisms at runtime to cope with uncertainties. We integrate feedback in two ways. General assumptions that were made implicitly during action definition are monitored at runtime. In addition, we offer the possibility to explicitly specify reactive behavior in the action definition. We show two tasks on the humanoid robot Rollin' Justin. In the task of pouring a drink, we monitor the implicit assumption that the bottle does not move in the robot's hand after picking it up, and react if the deviation is too high. This enables the robot to cope with deviations of the bottle position in its hand, which would have led to spilling without our method. The second task is the disconnection of a probe from a unit. In this example, we explicitly specify that the robot shall pull the plug until it detects that it has been disconnected successfully. With this feedback, the robot shortens the disconnection time by 58% on average and is able to detect if the disconnection was unsuccessful, preventing it from damaging its hand in subsequent movements.
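    The rule-based nature of a Reaction Template can be sketched as a condition paired with a recovery behavior. Names and the threshold below are hypothetical illustrations, not the thesis implementation.

```python
# Sketch of a Reaction Template (RT): a rule-based trigger that
# activates a context-specific reaction when an execution-time
# assumption (here: the grasped bottle stays put) is violated.

class ReactionTemplate:
    def __init__(self, condition, reaction):
        self.condition = condition   # callable: world_state -> bool
        self.reaction = reaction     # name of the recovery behavior

    def check(self, world_state):
        """Return the reaction to activate, or None if all is nominal."""
        return self.reaction if self.condition(world_state) else None

# Implicit assumption from the pouring task: the bottle does not move
# in the hand after pick-up.  0.02 m is an invented threshold.
bottle_slipped = ReactionTemplate(
    condition=lambda ws: ws["bottle_offset"] > 0.02,
    reaction="regrasp",
)

print(bottle_slipped.check({"bottle_offset": 0.05}))  # regrasp
print(bottle_slipped.check({"bottle_offset": 0.01}))  # None
```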