    A Newcomer's Guide to the Challenges of a Complex Space-to-Ground Experiment, With Lessons from Analog-1

    An astronaut controlling a complex robot on the surface of the Earth from the ISS: this is exactly what we have done in ANALOG-1, in which Luca Parmitano teleoperated a rover in a moon-analogue geological mission scenario. At first sight the primary technical challenges seem to be the design of the robotic systems for space and ground. On a second look - with the perspective of using the system with an astronaut on the ISS in the loop with an operations team in different ground centers - the scope and the challenges increase drastically. In this paper we take a look behind the scenes and give insights which could guide future payload developers embarking on a similar endeavour. The paper outlines the Analog-1 experiment itself, what it aimed to achieve and how it was done, and uses it as a case study of the challenges and solutions a project team - and particularly the payload developer - will have to overcome when designing an ISS experiment. This article may be especially insightful, and a good starting point, for those from a small research team at a university or other research institution working under budget and time pressure. We present it from the payload developer's perspective and with concrete examples of the payloads we flew

    On Realizing Multi-Robot Command through Extending the Knowledge Driven Teleoperation Approach

    Future crewed planetary missions will strongly depend on the support of crew-assistance robots for setup and inspection of critical assets, such as return vehicles, before and after crew arrival. To efficiently accomplish a high variety of tasks, we envision the use of a heterogeneous team of robots commanded at various levels of autonomy. This work presents an intuitive and versatile command concept for such robot teams using a multi-modal Robot Command Terminal (RCT) on board a crewed vessel. We employ an object-centered prior knowledge management that stores the information on how to deal with objects around the robot. This includes knowledge on detecting, reasoning on, and interacting with the objects. The latter is organized in the form of Action Templates (ATs), which allow for hybrid planning of a task, i.e. reasoning on the symbolic and the geometric level to verify the feasibility and find a suitable parameterization of the involved actions. Furthermore, by also treating the robots as objects, robot-specific skillsets can easily be integrated by embedding the skills in ATs. A Multi-Robot World State Representation (MRWSR) is used to instantiate actual objects and their properties. The decentralized synchronization of the MRWSR across multiple robots supports task execution when communication between all participants cannot be guaranteed. To account for robot-specific perception properties, information is stored independently for each robot and shared among all participants. This enables continuous, robot- and command-specific decisions on which information to use to accomplish a task. A Mission Control instance allows tuning of the available command possibilities to account for specific users, robots, or scenarios. The operator uses an RCT to command robots based on the object-based knowledge representation, whereas the MRWSR serves as a robot-agnostic interface to the planetary assets. The selection of a robot to be commanded serves as the top-level filter for the available commands; a second filter layer is applied by selecting an object instance. These filters reduce the multitude of available commands to a set that is meaningful and manageable for the operator. Robot-specific direct teleoperation skills are accessible via their respective AT and can be mapped dynamically to available input devices. Using AT-specific parameters provided by the robot for each input device allows robot-agnostic usage, as well as different control modes, e.g. velocity, model-mediated, or domain-based passivity control, based on the current communication characteristics. The concept will be evaluated on board the ISS within the Surface Avatar experiments
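
    As an illustration of the command-filtering idea above, here is a minimal Python sketch. All class and function names (ActionTemplate, Robot, WorldObject, available_commands) and the example skills are hypothetical and not taken from the described RCT software; the sketch only shows how selecting a robot and then an object instance could narrow the offered Action Templates to those the robot can actually execute.

```python
from dataclasses import dataclass, field

@dataclass
class ActionTemplate:
    """Knowledge on how to interact with one class of objects (hypothetical structure)."""
    name: str
    object_type: str        # object class this AT applies to, e.g. "SampleContainer"
    required_skills: set    # robot skills needed to execute this AT

@dataclass
class Robot:
    name: str
    skills: set = field(default_factory=set)  # robot-specific skillset

@dataclass
class WorldObject:
    """An object instance as it might appear in a world-state representation."""
    instance_id: str
    object_type: str

def available_commands(robot, selected_object, action_templates):
    """Two-stage filter: the chosen robot and the chosen object instance reduce
    the full set of Action Templates to the commands that are actually executable."""
    return [at for at in action_templates
            if at.object_type == selected_object.object_type
            and at.required_skills <= robot.skills]

# Usage example (all names illustrative)
ats = [ActionTemplate("pick_up", "SampleContainer", {"grasping"}),
       ActionTemplate("inspect", "SampleContainer", {"camera"}),
       ActionTemplate("drive_to", "Waypoint", {"mobility"})]
rover = Robot("rover", {"mobility", "camera"})
container = WorldObject("container_01", "SampleContainer")
print([at.name for at in available_commands(rover, container, ats)])  # -> ['inspect']
```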

    ANALOG-1 ISS - The first part of an analogue mission to guide ESA's robotic moon exploration efforts

    The METERON project is a European initiative to prepare for future human-robotic exploration missions to the Moon, Mars and other celestial bodies. The project aims to implement infrastructure and tools to test and evaluate communications, operations and robotic control strategies in the context of future exploration missions. It is a collaboration between three directorates of the European Space Agency (ESA): Human and Robotic Exploration (HRE), Technology, Engineering and Quality (TEC), and Operations (OPS). This paper presents the first part of the ongoing ANALOG-1 experiment, which is the culmination of the METERON project, implementing the knowledge gained in the 12 distinct METERON experiments conducted between 2011 and 2020. These all address aspects of teleoperating a robotic asset from an orbital platform, i.e. technical implementation, user interfaces, autonomy and operations. The ANALOG-1 technology demonstration and operations concept experiment is based upon the surface mission scenario segment of the notional EL3 sample return mission. This segment focuses on the control of a lunar surface robotic asset from the Earth and from the Lunar Gateway. In November 2019, the first part of this experiment was successfully completed from the ISS. It assessed the effectiveness of a state-of-the-art robotic control interface to control a complex mobile robot from orbit, as well as evaluating the scientific interactions, during robotic-assisted geology exploration, between crew in orbit and scientists on the ground. While on the ISS, Luca Parmitano drove this robot at a lunar analogue site in the Netherlands and controlled its arms. For this experiment, a complex control station had been installed on the ISS, including a sigma.7 haptic device. This device allowed the astronaut to feel the forces experienced by the robotic arm. The experiment demonstrated the advantage of having an immersive control station and a high level of robotic dexterity, with Luca finishing all his assigned and secondary geology targets ahead of time. The second part of Analog-1 extends the ISS experiment with a full ground-based analogue, in which further technical experiments and a full mission scenario will be played out. The analogue is run in cooperation with the DLR ARCHES space demo mission and includes a rover operations centre based at ESOC as well as an outdoor lunar analogue site on Mount Etna. The astronaut, in this case, is on the ground. We expect to further demonstrate the advantages of a state-of-the-art interface for both fully teleoperated and semi-autonomous rover and robotic arm control for lunar missions, in order to guide ESA's Moon exploration efforts

    METERON Analog-1: A Touch Remote

    The METERON project (Multipurpose End-To-End Robotics Operations Network) was implemented by the European Space Agency as an initiative to prepare Europe for future human-robotic exploration scenarios. In particular, it focused on examining the human-robotic partnership and how this partnership could be optimized, through an evaluation of the tools and methodologies utilized in the experiments in the domains of operations, communications and robotics (specifically with respect to control strategies)

    Introduction to Surface Avatar: the First Heterogeneous Robotic Team to be Commanded with Scalable Autonomy from the ISS

    Robotics is vital to the continued development toward Lunar and Martian exploration, in-situ resource utilization, and surface infrastructure construction. Large-scale extra-terrestrial missions will require teams of robots with different, complementary capabilities, together with a powerful, intuitive user interface for effective commanding. We introduce Surface Avatar, the newest ISS-to-Earth telerobotic experiment series, to be conducted in 2022-2024. Spearheaded by DLR, together with ESA, Surface Avatar builds on expertise in commanding robots with different levels of autonomy from our past telerobotic experiments: Kontur-2, Haptics, Interact, SUPVIS Justin, and Analog-1. A team of four heterogeneous robots in a multi-site analog environment at DLR is at the command of a crew member on the ISS. The team has a humanoid robot for dexterous object handling, construction and maintenance; a rover for long traverses and sample acquisition; a quadrupedal robot for scouting and exploring difficult terrains; and a lander with a robotic arm for component delivery and sample stowage. The crew's command terminal is multimodal, with an intuitive graphical user interface, a 3-DOF joystick, and a 7-DOF input device with force feedback. The autonomy of any robot can be scaled up and down depending on the task and the astronaut's preference: the robot can act as an avatar of the crew in haptically coupled telepresence, or receive task-level commands like an intelligent co-worker. Through the crew performing collaborative tasks in exploration and construction scenarios, we hope to gain insight into how to optimally command robots in a future space mission. This paper presents findings from the first preliminary session in June 2022 and discusses the way forward in the planned experiment sessions
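
    To make the notion of scalable autonomy above concrete, the following Python sketch (with hypothetical names; it is not the Surface Avatar command software) shows a robot commander that can be switched between haptically coupled telepresence and supervised, task-level commanding.

```python
from enum import Enum, auto

class AutonomyLevel(Enum):
    TELEPRESENCE = auto()  # operator in the loop, streaming input-device poses
    SUPERVISED = auto()    # operator issues task-level commands, robot plans itself

class RobotCommander:
    """Hypothetical dispatcher that lets the crew scale a robot's autonomy up or down."""

    def __init__(self, robot_name):
        self.robot_name = robot_name
        self.level = AutonomyLevel.SUPERVISED

    def set_autonomy(self, level):
        self.level = level

    def command(self, task=None, device_pose=None):
        if self.level is AutonomyLevel.TELEPRESENCE:
            # forward the input-device pose directly; force feedback would flow back
            return f"{self.robot_name}: tracking operator pose {device_pose}"
        # supervised autonomy: hand a symbolic task to the robot's own planner
        return f"{self.robot_name}: planning and executing task '{task}'"

# Example: scale the humanoid down to telepresence for a delicate manipulation
commander = RobotCommander("humanoid")
print(commander.command(task="fetch sample tube"))
commander.set_autonomy(AutonomyLevel.TELEPRESENCE)
print(commander.command(device_pose=(0.42, 0.10, 0.95)))
```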

    Toward Multi User Knowledge Driven Teleoperation of a Robotic Team with Scalable Autonomy

    This paper proposes a knowledge-driven teleoperation framework that enables multiple operators to command a team of robots to execute complex tasks in an efficient and intuitive manner. The framework leverages a shared knowledge base that captures domain-specific information and procedural knowledge about the task at hand. This knowledge base is used by a hybrid planner to generate contextually relevant commands for supervised-autonomy robot commanding as well as for direct teleoperation modes. By filtering the available commands, the operators are guided in their decision-making towards efficient task completion. This paper further extends our knowledge-driven approach to address switching between multiple operators and robotic assets, with the aim of scaling up the human-robot team for space exploration. Overall, this work represents a step towards more intelligent and collaborative teleoperation systems. The described system will be used in the Surface Avatar ISS-to-ground experiments slated for July 2023
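
    The operator-robot switching described above could, for instance, be handled by an exclusive assignment table. The sketch below is a hypothetical Python illustration, not the described system: it only guarantees that each robot is commanded by at most one operator at a time.

```python
class RobotAssignment:
    """Tracks which operator currently commands which robot (illustrative only)."""

    def __init__(self, robots):
        self._owner = {robot: None for robot in robots}

    def acquire(self, operator, robot):
        """Give 'operator' exclusive command of 'robot' if it is currently free."""
        if self._owner.get(robot) not in (None, operator):
            raise RuntimeError(f"{robot} is already commanded by {self._owner[robot]}")
        self._owner[robot] = operator

    def release(self, operator, robot):
        """Hand the robot back so another operator can take over."""
        if self._owner.get(robot) == operator:
            self._owner[robot] = None

    def robots_of(self, operator):
        return [r for r, o in self._owner.items() if o == operator]

# Example: two operators share a three-robot team
team = RobotAssignment(["rover", "humanoid", "lander_arm"])
team.acquire("astronaut", "humanoid")
team.acquire("ground_operator", "rover")
team.release("astronaut", "humanoid")
team.acquire("ground_operator", "humanoid")
print(team.robots_of("ground_operator"))  # -> ['rover', 'humanoid']
```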

    Designing and Testing a Robotic Avatar for Space-to-Ground Teleoperation: the Developers' Insights

    In late 2019, astronaut Luca Parmitano remotely controlled a rover equipped with a robotic manipulator, performing geology tasks on a moon-analog site from the ISS. One year and seven months later, in July 2021, he will control the same rover in a more realistic moon-analog environment: a field of volcanic rock and regolith on Mount Etna, Italy. These experiments constitute the Analog-1 campaign in the frame of ESA's METERON project. As payload developers, we want to create an interface for astronauts to intuitively operate robotic systems on a planetary or lunar surface: how can we maximise task efficiency and the sense of immersion/transparency? At the same time, how can we minimise operator fatigue and physical and mental effort? And how do we do this while constrained by the framework of human spaceflight, with upmass and software requirements, and with delayed, low-bandwidth and unreliable communications? We show how we created a telerobotic system featuring an intuitive graphical and haptic user interface. This included a force-feedback device and a custom joystick controlling a mobile robotic platform. The robotic platform consisted of an all-terrain chassis and two 7-DOF robotic arms with torque sensing. One arm was mounted on the front of the rover and used for manipulation; the other was mounted on top and used to reposition a camera. With this system, the astronaut was fully in control of the robot to collect rock samples. The only external input was from a ground team of scientists over voice loop and text messenger, concerning the choice of geological samples. Full, stable 6-DOF force feedback for the manipulation arm was provided via a sigma.7 haptic input device. This meant that the astronaut could feel (for the first time from space) not only full-DOF contact with the planet's surface from orbit, but also the weight of the rocks they grasped. System status feedback, as well as the views from two cameras, was presented visually and intuitively on the user interface running on a laptop on board the ISS. During development we continuously integrated requirements from various stakeholders and feedback from astronauts and astronaut trainers to improve the user interface. The analog tests delivered valuable insights into how to design a telepresence system to control robots on a planet's surface from orbit. We expect these insights to be useful for future development of teleoperated planetary robotics as well as for terrestrial applications in similar scenarios

    Exploring planet geology through force-feedback telemanipulation from orbit

    Current space exploration roadmaps envision exploring the surface geology of celestial bodies with robots, for both scientific research and in-situ resource utilization. In such unstructured, poorly lit, complex and remote environments, automation is not always possible, and some tasks, such as geological sampling, require direct teleoperation aided by force feedback. The operator would be on an orbiting spacecraft, and poor bandwidth, high latency and packet loss from orbit to ground mean that safe, stable, and transparent interaction is a substantial technical challenge. For this scenario, a control method was developed which ensures stability at high delay without reduction in speed or loss of positioning accuracy. At the same time, a new level of safety is achieved not only through force feedback itself but also through an intrinsic property of the approach that prevents hard impacts. Based on this method, a tele-exploration scenario was simulated in the Analog-1 experiment, with an astronaut on the International Space Station (ISS) using a 6-degree-of-freedom (DoF) force-feedback (FF) capable haptic input device to control a mobile robot with a manipulator on Earth to collect rock samples. The 6-DoF FF telemanipulation from space was performed at a round-trip communication delay consistently between 770 and 850 milliseconds and an average packet loss of 1.27%. This experiment showcases the feasibility of a complete space exploration scenario via haptic telemanipulation under spaceflight conditions. The results underline the benefits of this control method for safe and accurate interactions, and of haptic feedback in general
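
    The abstract does not detail the control method itself. Purely as an illustration of the broader family of energy-based techniques used for delayed teleoperation, the sketch below shows a simplified, single-axis time-domain passivity observer/controller that injects damping whenever the observed channel energy would become negative; it is not the method developed for Analog-1, and all names, numbers, and sign conventions are illustrative.

```python
# Generic, single-axis time-domain passivity observer/controller sketch.
# NOTE: this is a textbook-style illustration, not the control method developed
# for Analog-1, which the abstract does not describe in detail.

def passivity_controlled_step(v_cmd, f_remote, energy_budget, dt, damping_max=200.0):
    """One control step on the remote side of a delayed channel.

    v_cmd         : delayed velocity command received from the operator [m/s]
    f_remote      : force currently measured at the remote manipulator [N]
    energy_budget : energy observed in the channel so far [J]
    Returns (force to apply/feed back, updated energy budget).
    """
    # Passivity observer: power leaving the channel during this step
    energy_budget -= f_remote * v_cmd * dt

    f_out = f_remote
    if energy_budget < 0.0 and abs(v_cmd) > 1e-6:
        # Passivity controller: add a variable damper to dissipate the deficit
        alpha = min(-energy_budget / (dt * v_cmd ** 2), damping_max)
        f_out = f_remote + alpha * v_cmd
        energy_budget += alpha * v_cmd ** 2 * dt

    return f_out, energy_budget

# Example: a short contact phase at a 1 kHz control rate (made-up numbers)
dt, budget = 0.001, 0.0
for v, f in [(0.05, 2.0), (0.04, 8.0), (0.03, 15.0)]:
    f_applied, budget = passivity_controlled_step(v, f, budget, dt)
    print(f"force applied: {f_applied:.2f} N, energy budget: {budget * 1000:.3f} mJ")
```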