44 research outputs found

    Context-aware Mission Control for Astronaut-Robot Collaboration

    Space robot assistants are envisaged as semi-autonomous co-workers deployed to lighten the workload of astronauts in cumbersome and dangerous situations. In view of this, this work considers the technology requirements for future space robot operations by presenting a novel mission control concept for close astronaut-robot collaboration. A decentralized approach is proposed, in which an astronaut is put in charge of commanding the robot, while a mission control center on Earth maintains a list of authorized robot actions by applying symbolic, geometric, and context-specific filters. The concept is applied to actual space robot operations within the METERON SUPVIS Justin experiment. In particular, it is shown how the concept is utilized to guide an astronaut aboard the ISS in the mission to survey and maintain a solar panel farm in a simulated Mars environment.
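The three-stage filtering of authorized actions described in the abstract can be sketched roughly as follows; the `Action` fields, the reach threshold, and the `authorized` function are illustrative assumptions, not the experiment's actual interface:

```python
from dataclasses import dataclass

# Hypothetical action model: each candidate robot action carries symbolic
# preconditions, a workspace target position, and the mission phases in
# which it makes sense.
@dataclass
class Action:
    name: str
    requires: set      # symbolic preconditions, e.g. {"panel_visible"}
    position: tuple    # (x, y, z) target in metres
    phases: set        # mission phases in which the action is applicable

def authorized(actions, world_facts, robot_pos, reach_m, phase):
    """Keep only actions that pass all three filter stages."""
    out = []
    for a in actions:
        if not a.requires <= world_facts:      # symbolic filter
            continue
        dist = sum((p - q) ** 2 for p, q in zip(a.position, robot_pos)) ** 0.5
        if dist > reach_m:                     # geometric filter
            continue
        if phase not in a.phases:              # context-specific filter
            continue
        out.append(a.name)
    return out
```

In this sketch the mission control center would re-run the filters as the world state, robot pose, and mission phase change, so the astronaut only ever sees currently valid commands.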

    Building on the Moon Using Additive Manufacturing: Discussion of Robotic Approaches

    We present two concepts for building structures on the Moon. The first involves an all-purpose robot with exchangeable tools, able to excavate regolith, produce slurry, and print a structure. The second involves a team of robots: one excavating, one sintering bricks using sunlight, and another gathering the sintered bricks and assembling a structure. ESA's Space Resources Strategy sets the challenge of achieving a human presence at the Moon, sustained by local resources, by 2040, which includes in-situ manufacturing and construction. We take a realistic look at the feasibility of these concepts and their component technologies, and at their challenges with respect to the state of the art and the technological readiness expected in the next 2-5 years.

    Model Mediated Teleoperation with a Hand-Arm Exoskeleton in Long Time Delays Using Reinforcement Learning

    Telerobotic systems must adapt to new environmental conditions and deal with the high uncertainty caused by long time delays. As one of the best alternatives to human-level intelligence, Reinforcement Learning (RL) may offer a solution to cope with these issues. This paper proposes to integrate RL with the Model Mediated Teleoperation (MMT) concept. The teleoperator interacts with a simulated virtual environment, which provides instant feedback: whereas feedback from the real environment is delayed, feedback from the model is instantaneous, leading to high transparency. The MMT is realized in combination with an intelligent system with two layers. The first layer utilizes Dynamic Movement Primitives (DMPs), which account for certain changes in the avatar environment; the second layer addresses the problems caused by uncertainty in the model using RL methods. Augmented reality is also provided to fuse the avatar device and virtual environment models for the teleoperator. Implemented on DLR's Exodex Adam hand-arm haptic exoskeleton, the results show that RL methods are able to find different solutions when changes are applied to the object position after the demonstration. The results also show DMPs to be effective at adapting to new conditions where no uncertainty is involved.
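As a rough illustration of the first layer, a one-dimensional discrete DMP can be sketched as a goal-attracting second-order system driven by a phase variable; the gains, the canonical decay rate, and the `dmp_rollout` name are assumptions for illustration, and the optional forcing term is what a learned demonstration would supply:

```python
def dmp_rollout(y0, g, steps=500, dt=0.01, alpha=25.0, beta=25.0 / 4, forcing=None):
    """Integrate a 1-D discrete Dynamic Movement Primitive.

    With forcing=None the system is a critically damped spring-damper that
    converges to the goal g; a forcing term learned from a demonstration
    would shape the path while the phase x decays to zero, so convergence
    to g is preserved even after the goal is changed."""
    y, z, x = y0, 0.0, 1.0      # position, scaled velocity, canonical phase
    ax = 3.0                    # canonical-system decay rate
    for _ in range(steps):
        f = forcing(x) * x * (g - y0) if forcing else 0.0
        dz = alpha * (beta * (g - y) - z) + f
        z += dz * dt            # simple Euler integration
        y += z * dt
        x += -ax * x * dt
    return y
```

Because the forcing term is gated by the decaying phase `x`, moving the goal `g` after the demonstration still yields a trajectory that ends at the new goal, which mirrors the adaptation behavior the abstract reports.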

    Exodex Adam—A Reconfigurable Dexterous Haptic User Interface for the Whole Hand

    Applications for dexterous robot teleoperation and immersive virtual reality are growing. Haptic user input devices need to allow the user to intuitively command and seamlessly “feel” the environment they work in, whether virtual or a remote site through an avatar. We introduce the DLR Exodex Adam, a reconfigurable, dexterous, whole-hand haptic input device. The device comprises multiple modular, three-degrees-of-freedom (3-DOF) robotic fingers, whose placement on the device can be adjusted to optimize manipulability for different user hand sizes. Additionally, the device is mounted on a 7-DOF robot arm to increase the user’s workspace. Exodex Adam uses a front-facing interface, with robotic fingers coupled to two of the user’s fingertips, the thumb, and two points on the palm. Including the palm, as opposed to only the fingertips as is common in existing devices, enables accurate tracking of the whole hand without additional sensors such as a data glove or motion capture. By providing “whole-hand” interaction with omnidirectional force-feedback at the attachment points, we enable the user to experience the environment with the complete hand instead of only the fingertips, thus realizing deeper immersion. Interaction using Exodex Adam can range from palpation of objects and surfaces to manipulation using both power and precision grasps, all while receiving haptic feedback. This article details the concept and design of the Exodex Adam, as well as use cases where it is deployed with different command modalities. These include mixed-media interaction in a virtual environment, gesture-based telemanipulation, and robotic hand–arm teleoperation using adaptive model-mediated teleoperation. Finally, we share the insights gained during our development process and use case deployments.

    A Newcomer's Guide to the Challenges of a Complex Space-to-Ground Experiment, With Lessons from Analog-1

    An astronaut controlling a complex robot on the surface of Earth from the ISS: this is exactly what we did in ANALOG-1. Luca Parmitano teleoperated a rover in a Moon-analogue geological mission scenario. At first sight, the primary technical challenges seem to be the design of the robotic systems for space and ground. On a second look - with the perspective of using the system with an astronaut on the ISS in the loop with an operations team in different ground centers - the scope and the challenges increase drastically. In this paper we take a look behind the scenes and give insights that could guide future payload developers embarking on a similar endeavour. The paper outlines the Analog-1 experiment itself, what it aimed to achieve, and how it was done, and uses it as a case study of the challenges and solutions a project team - and particularly the payload developer - will have to overcome when designing an ISS experiment. This article may be especially insightful and a good starting point for those from a small research team at a university or other research institution under budget and time pressure. We present it from the payload developer's perspective and with concrete examples of the payloads we flew.

    Extending the Knowledge Driven Approach for Scalable Autonomy Teleoperation of a Robotic Avatar

    Crewed missions to celestial bodies such as the Moon and Mars are the focus of an increasing number of space agencies. Precautions to ensure a safe landing of the crew on the extraterrestrial surface, as well as reliable infrastructure at the remote location for bringing the crew back home, are key considerations for mission planning. The European Space Agency (ESA) identified in its Terrae Novae 2030+ roadmap that robots are needed as precursors and scouts to ensure the success of such missions. An important role these robots will play is supporting the astronaut crew in orbit in carrying out scientific work, and ultimately ensuring nominal operation of the support infrastructure for astronauts on the surface. The METERON SUPVIS Justin ISS experiments demonstrated that supervised autonomy robot command can be used for executing inspection, maintenance, and installation tasks using a robotic co-worker on the planetary surface. The knowledge driven approach utilized in the experiments only reached its limits when situations arose that were not anticipated by the mission design. In deep space scenarios, the astronauts must be able to overcome these limitations. An approach towards more direct command of a robot was demonstrated in the METERON ANALOG-1 ISS experiment: in this technical demonstration, an astronaut used haptic telepresence to command a robotic avatar on the surface to execute sampling tasks. In this work, we propose a system that combines supervised autonomy and telepresence by extending the knowledge driven approach. The knowledge management is based on organizing the prior knowledge of the robot in an object-centered context. Action Templates are used to define the knowledge on the handling of the objects on a symbolic and geometric level. This robot-agnostic system can be used for supervisory command of any robotic co-worker.
    By integrating the robot itself as an object into the object-centered domain, robot-specific skills and (tele-)operation modes can be injected into the existing knowledge management system by formulating respective Action Templates. In order to efficiently use advanced teleoperation modes, such as haptic telepresence, a variety of input devices are integrated into the proposed system. This work shows how the integration of these devices is realized in a way that is agnostic to the input devices and operation modes. The proposed system is evaluated in the Surface Avatar ISS experiment. This work shows how the system is integrated into a Robot Command Terminal featuring a 3-degree-of-freedom joystick and a 7-degree-of-freedom haptic input device in the Columbus module of the ISS. In the preliminary experiment sessions of Surface Avatar, two astronauts in orbit took command of the humanoid service robot Rollin' Justin in Germany. This work presents and discusses the results of these ISS-to-ground sessions and derives requirements for extending the scalable autonomy system for use with a heterogeneous robotic team.
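The object-centered idea of Action Templates, including registering the robot itself as an object to inject operation modes, can be sketched minimally as follows; the class names, the fact set, and the object names are hypothetical illustrations, not the actual Surface Avatar interfaces:

```python
# Minimal sketch of an object-centred knowledge base. Each object carries
# Action Templates defining what can be done with it: a symbolic level
# (name, precondition facts) plus a geometric/skill-level routine.
class ActionTemplate:
    def __init__(self, name, preconditions, execute):
        self.name = name
        self.preconditions = preconditions  # symbolic facts required
        self.execute = execute              # geometric/skill-level routine

class KnowledgeBase:
    def __init__(self):
        self.objects = {}

    def register(self, obj_name, templates):
        self.objects[obj_name] = templates

    def available_actions(self, facts):
        # An action is offered when its symbolic preconditions hold.
        return [(obj, t.name) for obj, ts in self.objects.items()
                for t in ts if t.preconditions <= facts]

kb = KnowledgeBase()
kb.register("solar_panel",
            [ActionTemplate("inspect", {"panel_in_view"}, lambda: "scan")])
# Registering the robot itself as an object injects robot-specific
# (tele-)operation modes through the same Action Template mechanism.
kb.register("rollin_justin",
            [ActionTemplate("telepresence_mode", {"haptic_device_ready"},
                            lambda: "direct control")])
```

Because operation modes enter through the same template mechanism as object manipulations, the supervisory interface stays agnostic to which robot or input device is behind a given action, which is the robot-agnostic property the abstract emphasizes.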