
    Multi-Agent Heterogeneous Digital Twin Framework with Dynamic Responsibility Allocation for Complex Task Simulation

    To become helpful assistants in our daily lives, robots must be able to understand the effects of their actions on their environment. A modern approach to this is the use of physics simulation, where often very general simulation engines are utilized. As a result, specific modeling features, such as multi-contact simulation or fluid dynamics, may not be well represented. To improve the representativeness of simulations, we propose a framework for combining the estimates of multiple heterogeneous simulations into a single one. The framework couples multiple simulations and reorganizes them based on semantically annotated action sequence information. While each object in the scene is always covered by a simulation, this simulation responsibility can be reassigned on-line. In this paper, we introduce the concept of the framework, describe the architecture, and demonstrate two example implementations. Finally, we demonstrate how the framework can be used to simulate action executions on the humanoid robot Rollin' Justin with the goal of extracting the semantic state, and how this information is used to assess whether an action sequence is executed successfully or not.
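
    As a rough illustration of the responsibility-allocation idea, the following Python sketch couples two simulators and reassigns an object when an annotated action demands a capability the current simulator lacks. All class names, capability tags, and the matching rule are illustrative assumptions, not the framework's actual interfaces.

        # Minimal sketch of on-line simulation responsibility reassignment.
        # Simulator, Coordinator, and the capability tags are assumptions.

        class Simulator:
            def __init__(self, name, capabilities):
                self.name = name                  # e.g. "rigid_body", "fluid"
                self.capabilities = capabilities  # phenomena it models well
                self.objects = set()

        class Coordinator:
            def __init__(self, simulators):
                self.simulators = simulators

            def assign(self, obj, required):
                """Move obj to the simulator best matching the annotated action."""
                best = max(self.simulators,
                           key=lambda s: len(s.capabilities & required))
                for s in self.simulators:
                    s.objects.discard(obj)  # each object is covered exactly once
                best.objects.add(obj)
                return best

        rigid = Simulator("rigid_body", {"contact", "multi_contact"})
        fluid = Simulator("fluid", {"fluid_dynamics"})
        coord = Coordinator([rigid, fluid])
        # A semantically annotated "pour" action needs fluid dynamics:
        coord.assign("water", {"fluid_dynamics"})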

    Probabilistic Effect Prediction through Semantic Augmentation and Physical Simulation

    Nowadays, robots are mechanically able to perform highly demanding tasks, where AI-based planning methods are used to schedule a sequence of actions that result in the desired effect. However, it is not always possible to know the exact outcome of an action in advance, as failure situations may occur at any time. To enhance failure tolerance, we propose to predict the effects of robot actions by augmenting collected experience with semantic knowledge and leveraging realistic physics simulations. That is, we consider the semantic similarity of actions in order to predict outcome probabilities for previously unknown tasks. Furthermore, physical simulation is used to gather simulated experience that makes the approach robust even in extreme cases. We show how this concept is used to predict action success probabilities and how this information can be exploited throughout future planning trials. The concept is evaluated in a series of real-world experiments conducted with the humanoid robot Rollin' Justin.
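
    The similarity-weighted prediction can be pictured with a short Python sketch: the success probability of an unseen action is estimated as a semantic-similarity-weighted average of the observed success rates of known actions. The experience table and the toy similarity measure are invented for illustration; the paper's actual similarity computation and experience base differ.

        # Hedged sketch: success probability of an unseen action estimated as
        # a similarity-weighted average over previously observed actions.

        experience = {            # action -> (successes, trials)
            "pick_cup":   (18, 20),
            "pick_plate": (12, 20),
        }

        def similarity(a, b):
            # Placeholder semantic similarity in [0, 1]; the paper's differs.
            return 1.0 if a.split("_")[0] == b.split("_")[0] else 0.1

        def predicted_success(new_action):
            num = den = 0.0
            for known, (s, n) in experience.items():
                w = similarity(new_action, known)
                num += w * (s / n)
                den += w
            return num / den if den else 0.5  # uninformed prior as fallback

        print(predicted_success("pick_bowl"))  # estimate for an unknown task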

    Explainability and Knowledge Representation in Robotics: The Green Button Challenge

    As robots get closer to human environments, a fundamental task for the community is to design system behaviors that foster trust. In this context, we have posed the "Green Button Challenge": every robot should have a green button that, when pressed, makes the robot explain what it is doing and why, in natural language. In this paper, we motivate why explainability is important in robotics, and why explicit knowledge representations are essential to achieving it. We highlight this with a concrete proof-of-concept implementation on our humanoid space assistant Rollin' Justin, which interprets its PDDL plans to explain what it is doing and why.
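
    A minimal Python sketch of the underlying idea, turning a heavily simplified plan step into a "what and why" sentence; the plan representation and the causal annotations are assumptions, as real PDDL plan interpretation involves tracing preconditions and goals.

        # Illustrative only: rendering a simplified plan step as an
        # explanation. Real PDDL interpretation is richer than this.

        plan = [
            ("move_to", ["base", "table"], "grasp(cup) requires reachable(cup)"),
            ("grasp",   ["cup"],           "deliver(cup) requires holding(cup)"),
        ]

        def explain(step):
            action, args, reason = step
            return f"I am executing {action}({', '.join(args)}) because {reason}."

        for step in plan:
            print(explain(step))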

    Autonomous Robot Planning System for In-Space Assembly of Reconfigurable Structures

    Large-scale space structures, such as telescopes or spacecraft, require suitable in-situ assembly technologies in order to overcome the limitations on payload size and mass of current launch vehicles. In many application scenarios, manual assembly by astronauts is either highly cost-inefficient or not feasible at all due to orbital constraints. However, (semi-) autonomous robotic assembly systems may provide the means to construct larger structures in space in the near future. Modularity is a key concept for such structures, and also for reducing costs in novel spacecraft designs. The advantage of the modular approach lies in the capability to generate a high number of unique assets from a reduced number of building blocks. Thus, spacecraft can be easily adapted to particular use cases, and could even be reconfigured during their lifetime using a robotic manipulation system. These ideas lie at the core of our current EU project MOSAR (MOdular Spacecraft Assembly and Reconfiguration). Teleoperating a space robotic system from Earth to assemble a modular structure is not straightforward. Major difficulties are related to time delays, communication losses, limited control modalities, and low immersion for the operator. Autonomous robotic operations are then preferred, and with this goal we propose a fully autonomous system for planning in-space assembly tasks. Our system is able to generate assembly and reconfiguration plans for modular structures in terms of high-level actions that can autonomously be executed by a robot. Through multiple simulation layers, the system automatically verifies the feasibility and correctness of action sequences created by the planner. The layers implement different levels of abstraction, hierarchically stacked to detect infeasible transitions and initiate replanning at an early stage. Levels of abstraction increase in complexity, ranging from a basic geometric description of the spacecraft, through kinematics of the robotic setup, to full representations of the actions. The system reuses information from failed checks in all layers to avoid similar situations during replanning. We use a hybrid approach where symbolic reasoning is combined with considerations of physical constraints to generate a holistic sequence of actions. We demonstrate our planner for large space structures in a simulation environment. In particular, we consider the reconfiguration of a given modular structure, i.e., disassembling parts and reassembling them in a new configuration. The adaptability of our planning system is shown by executing the assembly plans on robots with different sets of skills and in scenarios with simulated hardware failures.
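
    The layered verification can be sketched as follows in Python: checks are ordered from cheap and abstract to expensive and concrete, the first failing layer aborts early, and failure information is fed back as a constraint for replanning. The individual check functions are placeholders, not the project's actual simulation layers.

        # Sketch of hierarchical plan verification with early rejection.
        # The three checks stand in for the project's simulation layers.

        def geometric_check(plan):   return all(a != "collide" for a in plan)
        def kinematic_check(plan):   return "unreachable" not in plan
        def full_action_check(plan): return True  # placeholder for full sim

        LAYERS = [geometric_check, kinematic_check, full_action_check]

        def verify(plan):
            """Return the index of the first failing layer, or None if feasible."""
            for i, check in enumerate(LAYERS):
                if not check(plan):
                    return i  # early rejection avoids costlier layers below
            return None

        def plan_with_replanning(planner, constraints):
            while True:
                plan = planner(constraints)
                failed = verify(plan)
                if failed is None:
                    return plan
                # Reuse the failure so replanning avoids a similar situation:
                constraints.add(("avoid", tuple(plan), failed))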

    Audio Perception in Robotic Assistance for Human Space Exploration: A Feasibility Study

    Future crewed missions beyond low Earth orbit will greatly rely on the support of robotic assistance platforms to perform inspection and manipulation of critical assets. This includes crew habitats, landing sites, or assets for life support and operation. Maintenance and manipulation of a crewed site in extra-terrestrial environments is a complex task, and the system will have to face different challenges during operation. While most may be solved autonomously, on certain occasions human intervention will be required. The telerobotic demonstration mission Surface Avatar, led by the German Aerospace Center (DLR) in partnership with the European Space Agency (ESA), investigates different approaches offering astronauts on board the International Space Station (ISS) control of ground robots in representative scenarios, e.g. a Martian landing and exploration site. In this work we present a feasibility study on how to integrate auditory information into the mentioned application. We discuss methods for obtaining audio information and localizing audio sources in the environment, as well as fusing auditory and visual information to perform state estimation based on the gathered data. We demonstrate our work in different experiments to show the effectiveness of utilizing audio information, the results of spectral analysis of our mission assets, and how this information could help future astronauts reason about the current mission situation.
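
    One simple way to picture the audio-visual fusion for state estimation is naive Bayesian fusion over discrete asset states, as in the Python sketch below; the states and likelihood values are invented, and the study's actual estimator may be considerably more sophisticated.

        # Sketch: naive Bayesian fusion of audio and visual evidence over
        # discrete asset states. All states and likelihoods are invented.

        states = ["pump_nominal", "pump_degraded", "pump_off"]
        prior  = {s: 1 / 3 for s in states}

        # P(observation | state) from hypothetical spectral analysis of the
        # asset's sound, and from a hypothetical visual classifier:
        p_audio  = {"pump_nominal": 0.7, "pump_degraded": 0.25, "pump_off": 0.05}
        p_visual = {"pump_nominal": 0.5, "pump_degraded": 0.4,  "pump_off": 0.1}

        def fuse(prior, *likelihoods):
            post = {s: prior[s] for s in prior}
            for lik in likelihoods:
                for s in post:
                    post[s] *= lik[s]
            z = sum(post.values())
            return {s: p / z for s, p in post.items()}  # normalize

        print(fuse(prior, p_audio, p_visual))  # audio sharpens the estimate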

    Extending the Knowledge Driven Approach for Scalable Autonomy Teleoperation of a Robotic Avatar

    Crewed missions to celestial bodies such as the Moon and Mars are the focus of an increasing number of space agencies. Precautions to ensure a safe landing of the crew on the extraterrestrial surface, as well as reliable infrastructure at the remote location for bringing the crew back home, are key considerations for mission planning. The European Space Agency (ESA) identified in its Terrae Novae 2030+ roadmap that robots are needed as precursors and scouts to ensure the success of such missions. An important role these robots will play is supporting the astronaut crew in orbit in carrying out scientific work, and ultimately ensuring nominal operation of the support infrastructure for astronauts on the surface. The METERON SUPVIS Justin ISS experiments demonstrated that supervised autonomy robot command can be used for executing inspection, maintenance and installation tasks using a robotic co-worker on the planetary surface. The knowledge driven approach utilized in the experiments only reached its limits when situations arose that were not anticipated by the mission design. In deep space scenarios, the astronauts must be able to overcome these limitations. An approach towards more direct command of a robot was demonstrated in the METERON ANALOG-1 ISS experiment. In this technical demonstration, an astronaut used haptic telepresence to command a robotic avatar on the surface to execute sampling tasks. In this work, we propose a system that combines supervised autonomy and telepresence by extending the knowledge driven approach. The knowledge management is based on organizing the prior knowledge of the robot in an object-centered context. Action Templates are used to define the knowledge on the handling of the objects on a symbolic and geometric level. This robot-agnostic system can be used for supervisory command of any robotic co-worker. By integrating the robot itself as an object into the object-centered domain, robot-specific skills and (tele-)operation modes can be injected into the existing knowledge management system by formulating respective Action Templates. In order to efficiently use advanced teleoperation modes, such as haptic telepresence, a variety of input devices are integrated into the proposed system. This work shows how the integration of these devices is realized in a way that is agnostic to the input devices and operation modes. The proposed system is evaluated in the Surface Avatar ISS experiment. This work shows how the system is integrated into a Robot Command Terminal featuring a 3-Degree-of-Freedom Joystick and a 7-Degree-of-Freedom haptic input device in the Columbus module of the ISS. In the preliminary experiment sessions of Surface Avatar, two astronauts in orbit took command of the humanoid service robot Rollin' Justin in Germany. This work presents and discusses the results of these ISS-to-ground sessions and derives requirements for extending the scalable autonomy system for use with a heterogeneous robotic team.
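
    The object-centered idea can be sketched in a few lines of Python: Action Templates attach to objects, and because the robot is itself modeled as an object, a teleoperation mode plugs into the same mechanism as an ordinary manipulation action. All names and the feasibility checks here are illustrative assumptions, not the system's actual interfaces.

        # Sketch: Action Templates attached to objects; the robot is also an
        # object, so teleoperation modes are injected the same way.

        class ActionTemplate:
            def __init__(self, name, check, execute):
                self.name = name
                self.check = check      # symbolic/geometric feasibility test
                self.execute = execute

        class WorldObject:
            def __init__(self, name, templates=()):
                self.name = name
                self.templates = list(templates)

            def available_actions(self, world):
                return [t for t in self.templates if t.check(world)]

        drawer = WorldObject("drawer", [ActionTemplate(
            "open", check=lambda w: True,
            execute=lambda w: print("opening drawer"))])
        justin = WorldObject("rollin_justin", [ActionTemplate(
            "haptic_telepresence",
            check=lambda w: w.get("operator_ready", False),
            execute=lambda w: print("entering telepresence"))])

        world = {"operator_ready": True}
        for obj in (drawer, justin):
            for t in obj.available_actions(world):
                print(obj.name, "->", t.name)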

    On Realizing Multi-Robot Command through Extending the Knowledge Driven Teleoperation Approach

    Future crewed planetary missions will strongly depend on the support of crew-assistance robots for setup and inspection of critical assets, such as return vehicles, before and after crew arrival. To efficiently accomplish a high variety of tasks, we envision the use of a heterogeneous team of robots to be commanded at various levels of autonomy. This work presents an intuitive and versatile command concept for such robot teams using a multi-modal Robot Command Terminal (RCT) on board a crewed vessel. We employ an object-centered prior knowledge management that stores the information on how to deal with objects around the robot. This includes knowledge on detecting, reasoning on, and interacting with the objects. The latter is organized in the form of Action Templates (ATs), which allow for hybrid planning of a task, i.e. reasoning on the symbolic and the geometric level to verify the feasibility and find a suitable parameterization of the involved actions. Furthermore, by also treating the robots as objects, robot-specific skillsets can easily be integrated by embedding the skills in ATs. A Multi-Robot World State Representation (MRWSR) is used to instantiate actual objects and their properties. The decentralized synchronization of the MRWSR of multiple robots supports task execution when communication between all participants cannot be guaranteed. To account for robot-specific perception properties, information is stored independently for each robot, and shared among all participants. This enables continuous robot- and command-specific decisions on which information to use to accomplish a task. A Mission Control instance allows tuning the available command possibilities to account for specific users, robots, or scenarios. The operator uses an RCT to command robots based on the object-based knowledge representation, whereas the MRWSR serves as a robot-agnostic interface to the planetary assets. The selection of a robot to be commanded serves as a top-level filter for the available commands. A second filter layer is applied by selecting an object instance. These filters reduce the multitude of available commands to a set that is meaningful and manageable for the operator. Robot-specific direct teleoperation skills are accessible via their respective AT, and can be mapped dynamically to available input devices. Using AT-specific parameters provided by the robot for each input device allows robot-agnostic usage, as well as different control modes, e.g. velocity, model-mediated, or domain-based passivity control, based on the current communication characteristics. The concept will be evaluated on board the ISS within the Surface Avatar experiments.
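
    The two-stage command filtering can be pictured as a set intersection, as in the Python sketch below: selecting a robot filters by its skillset, and selecting an object instance filters by what is meaningful for that object. The robots, objects, and command sets are invented for illustration.

        # Sketch: two-stage filtering of operator commands, first by the
        # selected robot, then by the selected object instance.

        skills = {                      # robot -> commands it can execute
            "rollin_justin": {"grasp", "inspect", "telepresence"},
            "rover":         {"inspect", "drive_to"},
        }
        affordances = {                 # object -> commands meaningful for it
            "return_vehicle": {"inspect", "drive_to"},
            "sample_box":     {"grasp", "inspect"},
        }

        def available_commands(robot, obj):
            """Intersect robot skillset with object affordances."""
            return sorted(skills[robot] & affordances[obj])

        print(available_commands("rollin_justin", "sample_box"))  # grasp, inspect
        print(available_commands("rover", "return_vehicle"))      # drive_to, inspect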

    The state of the Martian climate

    The average annual surface air temperature (SAT) anomaly for 2016 for land stations north of 60°N was +2.0°C, relative to the 1981–2010 average value (Fig. 5.1). This marks a new high for the record, which starts in 1900, and is a significant increase over the previous highest value of +1.2°C, observed in 2007, 2011, and 2015. Average global annual temperatures also showed record values in 2015 and 2016. Currently, the Arctic is warming at more than twice the rate of lower latitudes.

    Interpreting EEG alpha activity

    Exploring EEG alpha oscillations has generated considerable interest, in particular with regard to the role they play in cognitive, psychomotor, psycho-emotional and physiological aspects of human life. However, there is no clearly agreed upon definition of what constitutes ‘alpha activity’ or which of the many indices should be used to characterize it. To address these issues this review attempts to delineate EEG alpha activity, its physical, molecular and morphological nature, and examines the following indices: (1) the individual alpha peak frequency; (2) activation magnitude, as measured by alpha amplitude suppression across the individual alpha bandwidth in response to eyes opening; and (3) alpha “auto-rhythmicity” indices, which include intra-spindle amplitude variability, spindle length and steepness. Throughout, the article offers a number of suggestions regarding the mechanism(s) of alpha activity related to inter- and intra-individual variability. In addition, it provides some insights into the various psychophysiological indices of alpha activity and highlights their role in optimal functioning and behavior.
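
    For concreteness, the sketch below computes rough versions of two of the listed indices with numpy/scipy: the individual alpha peak frequency from a Welch spectrum, and eyes-open alpha suppression as the relative drop in alpha-band power. The band limits and the synthetic signals are assumptions for illustration only.

        # Sketch: individual alpha peak frequency and eyes-open suppression
        # from synthetic signals; real pipelines use recorded EEG epochs.

        import numpy as np
        from scipy.signal import welch

        fs = 250                                   # sampling rate in Hz
        t = np.arange(0, 60, 1 / fs)
        eyes_closed = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
        eyes_open   = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

        def alpha_peak_and_power(x, band=(7.0, 13.0)):
            f, pxx = welch(x, fs=fs, nperseg=4 * fs)
            m = (f >= band[0]) & (f <= band[1])
            return f[m][np.argmax(pxx[m])], pxx[m].sum()

        iapf, p_closed = alpha_peak_and_power(eyes_closed)
        _,    p_open   = alpha_peak_and_power(eyes_open)
        print(f"individual alpha peak: {iapf:.1f} Hz")
        print(f"alpha suppression: {(1 - p_open / p_closed) * 100:.0f} %")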

    Consensus on the reporting and experimental design of clinical and cognitive-behavioural neurofeedback studies (CRED-nf checklist)

    Neurofeedback has begun to attract the attention and scrutiny of the scientific and medical mainstream. Here, neurofeedback researchers present a consensus-derived checklist that aims to improve the reporting and experimental design standards in the field.