
    On Realizing Multi-Robot Command through Extending the Knowledge Driven Teleoperation Approach

    Future crewed planetary missions will strongly depend on the support of crew-assistance robots for setup and inspection of critical assets, such as return vehicles, before and after crew arrival. To efficiently accomplish a wide variety of tasks, we envision the use of a heterogeneous team of robots commanded at various levels of autonomy. This work presents an intuitive and versatile command concept for such robot teams using a multi-modal Robot Command Terminal (RCT) on board a crewed vessel. We employ an object-centered prior knowledge management that stores information on how to deal with the objects around the robot. This includes knowledge for detecting, reasoning on, and interacting with the objects. The latter is organized in the form of Action Templates (ATs), which allow for hybrid planning of a task, i.e., reasoning on both the symbolic and the geometric level to verify feasibility and find a suitable parameterization of the involved actions. Furthermore, by also treating the robots as objects, robot-specific skillsets can easily be integrated by embedding the skills in ATs. A Multi-Robot World State Representation (MRWSR) is used to instantiate actual objects and their properties. The decentralized synchronization of the MRWSR across multiple robots supports task execution when communication between all participants cannot be guaranteed. To account for robot-specific perception properties, information is stored independently for each robot and shared among all participants. This enables continuous robot- and command-specific decisions about which information to use to accomplish a task. A Mission Control instance allows tuning the available command possibilities to account for specific users, robots, or scenarios. The operator uses an RCT to command robots based on the object-based knowledge representation, whereas the MRWSR serves as a robot-agnostic interface to the planetary assets.
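    The per-robot storage of object properties in the MRWSR could be sketched as follows. This is a minimal illustration under our own assumptions; the class and field names are hypothetical and not taken from the Surface Avatar implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectInstance:
    """One object in the world state, with perception results kept
    separately per robot: robot name -> {property: value}."""
    object_id: str
    per_robot: dict = field(default_factory=dict)

    def update(self, robot, prop, value):
        # Each robot writes only into its own slot; slots are then
        # shared (synchronized) among all participants.
        self.per_robot.setdefault(robot, {})[prop] = value

    def best_estimate(self, prop, preferred_robot):
        """Command-specific choice of which robot's observation to use:
        prefer the commanded robot's own perception, otherwise fall
        back to any other robot that has observed the property."""
        if prop in self.per_robot.get(preferred_robot, {}):
            return self.per_robot[preferred_robot][prop]
        for props in self.per_robot.values():
            if prop in props:
                return props[prop]
        return None

lander = ObjectInstance("return_vehicle")
lander.update("rover", "pose", (12.0, 3.5, 0.0))
lander.update("humanoid", "hatch_state", "closed")
print(lander.best_estimate("hatch_state", "rover"))  # falls back to the humanoid's observation
```

    Keeping observations per robot, rather than merging them into one global value, is what allows the continuous, command-specific decision described above.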
The selection of a robot to be commanded serves as the top-level filter for the available commands. A second filter layer is applied by selecting an object instance. Together, these filters reduce the multitude of available commands to a set that is meaningful and manageable for the operator. Robot-specific direct teleoperation skills are accessible via their respective ATs and can be mapped dynamically to the available input devices. Using AT-specific parameters provided by the robot for each input device allows robot-agnostic usage, as well as different control modes, e.g., velocity, model-mediated, or time-domain passivity control, selected based on the current communication characteristics. The concept will be evaluated on board the ISS within the Surface Avatar experiments.
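The two-stage command filtering described above could be sketched as follows. This is only an illustrative sketch; the class names, fields, and example commands are assumptions, not the RCT's actual interface:

```python
from dataclasses import dataclass

@dataclass
class ActionTemplate:
    name: str           # command exposed to the operator
    robot: str          # robot whose skillset provides this action
    target_type: str    # object class the action applies to

@dataclass
class CommandTerminal:
    templates: list     # all Action Templates known to the terminal

    def available_commands(self, robot, obj_type):
        """Apply the two filter layers: first the selected robot,
        then the type of the selected object instance."""
        return [t for t in self.templates
                if t.robot == robot and t.target_type == obj_type]

rct = CommandTerminal(templates=[
    ActionTemplate("grasp_sample", "humanoid", "rock"),
    ActionTemplate("drive_to", "rover", "waypoint"),
    ActionTemplate("inspect_hatch", "humanoid", "lander"),
])
print([t.name for t in rct.available_commands("humanoid", "rock")])
```

The point of the cascade is that the operator never scrolls through every skill of every robot: each selection narrows the command set before it is displayed.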

    Introduction to Surface Avatar: the First Heterogeneous Robotic Team to be Commanded with Scalable Autonomy from the ISS

    Robotics is vital to the continued development toward Lunar and Martian exploration, in-situ resource utilization, and surface infrastructure construction. Large-scale extra-terrestrial missions will require teams of robots with different, complementary capabilities, together with a powerful, intuitive user interface for effective commanding. We introduce Surface Avatar, the newest ISS-to-Earth telerobotic experiment series, to be conducted in 2022-2024. Spearheaded by DLR, together with ESA, Surface Avatar builds on expertise in commanding robots with different levels of autonomy gained in our past telerobotic experiments: Kontur-2, Haptics, Interact, SUPVIS Justin, and Analog-1. A team of four heterogeneous robots in a multi-site analog environment at DLR is placed at the command of a crew member on the ISS. The team has a humanoid robot for dexterous object handling, construction, and maintenance; a rover for long traverses and sample acquisition; a quadrupedal robot for scouting and exploring difficult terrain; and a lander with a robotic arm for component delivery and sample stowage. The crew's command terminal is multimodal, with an intuitive graphical user interface, a 3-DOF joystick, and a 7-DOF input device with force feedback. The autonomy of any robot can be scaled up and down depending on the task and the astronaut's preference: acting as an avatar of the crew in haptically-coupled telepresence, or receiving task-level commands like an intelligent co-worker. By having the crew perform collaborative tasks in exploration and construction scenarios, we hope to gain insight into how to optimally command robots in a future space mission. This paper presents findings from the first preliminary session in June 2022 and discusses the way forward in the planned experiment sessions.
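    The scalable-autonomy idea, where the same robot accepts anything from haptically-coupled telepresence to task-level commands, could be sketched as a dispatch on the selected autonomy level. The level names and routing below are illustrative assumptions, not the Surface Avatar command stack:

```python
from enum import Enum

class AutonomyLevel(Enum):
    TELEPRESENCE = 1   # haptically-coupled direct control ("avatar" mode)
    SHARED = 2         # operator guides while the robot assists
    SUPERVISED = 3     # task-level commands; robot plans and executes

def dispatch(level, command):
    """Route an operator input according to the selected autonomy level
    (illustrative only)."""
    if level is AutonomyLevel.TELEPRESENCE:
        return f"stream operator input to robot: {command}"
    if level is AutonomyLevel.SHARED:
        return f"blend operator input with robot assistance: {command}"
    return f"plan and execute autonomously: {command}"

print(dispatch(AutonomyLevel.SUPERVISED, "stow sample in lander"))
```

    Scaling autonomy up or down then amounts to changing the selected level, without changing which robot or which command is chosen.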