13 research outputs found

    Introduction to Surface Avatar: the First Heterogeneous Robotic Team to be Commanded with Scalable Autonomy from the ISS

Robotics is vital to the continued development toward Lunar and Martian exploration, in-situ resource utilization, and surface infrastructure construction. Large-scale extra-terrestrial missions will require teams of robots with different, complementary capabilities, together with a powerful, intuitive user interface for effective commanding. We introduce Surface Avatar, the newest ISS-to-Earth telerobotic experiment series, to be conducted in 2022-2024. Spearheaded by DLR together with ESA, Surface Avatar builds on expertise in commanding robots with different levels of autonomy gained in our past telerobotic experiments: Kontur-2, Haptics, Interact, SUPVIS Justin, and Analog-1. A team of four heterogeneous robots in a multi-site analog environment at DLR is at the command of a crew member on the ISS. The team has a humanoid robot for dexterous object handling, construction, and maintenance; a rover for long traverses and sample acquisition; a quadrupedal robot for scouting and exploring difficult terrain; and a lander with a robotic arm for component delivery and sample stowage. The crew's command terminal is multimodal, with an intuitive graphical user interface, a 3-DOF joystick, and a 7-DOF input device with force feedback. The autonomy of any robot can be scaled up and down depending on the task and the astronaut's preference: a robot can act as an avatar of the crew in haptically coupled telepresence, or receive task-level commands like an intelligent co-worker. Through the crew performing collaborative tasks in exploration and construction scenarios, we hope to gain insight into how to optimally command robots in a future space mission. This paper presents findings from the first preliminary session in June 2022 and discusses the way forward for the planned experiment sessions.
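The scalable-autonomy concept is the architectural core here: the same robot must accept anything from a streamed 6-DOF pose in telepresence down to a single symbolic task command. A minimal sketch of how a command terminal might tag commands with an autonomy level follows; the enum values, `RobotCommand` fields, and `dispatch` routine are illustrative assumptions, not the actual Surface Avatar software.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AutonomyLevel(Enum):
    """Command granularity, from direct teleoperation to task-level autonomy."""
    TELEPRESENCE = auto()   # haptically coupled; 7-DOF device streams poses/forces
    SHARED = auto()         # operator guides, robot handles local control
    SUPERVISED = auto()     # operator issues task-level goals

@dataclass
class RobotCommand:
    robot_id: str           # e.g. "humanoid", "rover", "quadruped", "lander_arm"
    level: AutonomyLevel
    payload: dict           # pose stream, waypoint, or symbolic task, per level

def dispatch(cmd: RobotCommand) -> None:
    """Route a command to the matching control pipeline (placeholder logic)."""
    if cmd.level is AutonomyLevel.TELEPRESENCE:
        print(f"{cmd.robot_id}: streaming 6-DOF pose with force feedback")
    elif cmd.level is AutonomyLevel.SUPERVISED:
        print(f"{cmd.robot_id}: executing task '{cmd.payload.get('task')}'")
    else:
        print(f"{cmd.robot_id}: shared-control blend")

dispatch(RobotCommand("rover", AutonomyLevel.SUPERVISED, {"task": "acquire_sample"}))
```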

    VibroTac: An Ergonomic And Versatile Usable Vibrotactile Feedback Device

This paper presents an ergonomic vibrotactile feedback device for the human arm. Thanks to the developed concept, the device can be used for a large spectrum of applications and a wide range of arm diameters, since the vibration segments self-align to their intended positions. Furthermore, the device improves user convenience and freedom of movement, as it is battery powered and controlled through a wireless communication interface. Vibrotactile stimuli are used to give collision feedback or guidance information to the human arm when interacting with a Virtual Reality scenario. The usefulness of the device has been shown in a Virtual Reality automotive assembly verification and in a telerobotic system.
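As an illustration of the kind of mapping such a device needs, here is a minimal sketch that turns a collision direction and penetration depth into a segment choice and vibration intensity; the segment count, angular convention, and scaling are assumptions, not the VibroTac firmware.

```python
import math

NUM_SEGMENTS = 6  # assumption: a ring of tactor segments around the forearm

def segment_for_direction(angle_rad: float) -> int:
    """Map a collision direction (angle around the arm axis) to the nearest
    vibration segment. Because segments self-align when the device is donned,
    only the angular spacing matters, not the absolute arm diameter."""
    step = 2.0 * math.pi / NUM_SEGMENTS
    return round((angle_rad % (2.0 * math.pi)) / step) % NUM_SEGMENTS

def intensity_for_penetration(depth_m: float, max_depth_m: float = 0.05) -> float:
    """Scale vibration intensity (0..1) with virtual penetration depth."""
    return min(max(depth_m / max_depth_m, 0.0), 1.0)

# Example: contact from the left (90 degrees), 2 cm virtual penetration
seg = segment_for_direction(math.pi / 2)
amp = intensity_for_penetration(0.02)
print(f"drive segment {seg} at {amp:.0%} intensity")
```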

    DLR VR-SCAN: A versatile and robust miniaturized laser scanner for short range 3D-modelling and exploration in robotics

Precise and robust perception of the environment is crucial for highly integrated, autonomous robot systems. This paper presents the dedicated design of a triangulation-based laser range scanner optimized for 3D-modelling and autonomous exploration in robotics. The design is based on an extremely small MEMS scan head, permitting a compact, lightweight, and highly integrated implementation that allows for hand-eye operation. Special capabilities such as a variable measurement range and confidence ratings for the measured values increase robustness. The design considerations and a prototype are described, and experimental results are presented.
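The underlying measurement principle is classical laser triangulation. A hedged sketch of the generic law-of-sines range computation follows; this is the textbook geometry, not the VR-SCAN's calibrated sensor model.

```python
import math

def triangulation_range(baseline_m: float, laser_angle_rad: float,
                        camera_angle_rad: float) -> float:
    """Range from the camera to the laser spot via the law of sines.

    baseline_m        distance between laser emitter and camera center
    laser_angle_rad   angle of the laser beam relative to the baseline
    camera_angle_rad  angle at which the camera sees the spot, from the baseline
    """
    # Triangle: baseline plus two viewing rays; the third angle follows from pi.
    spot_angle = math.pi - laser_angle_rad - camera_angle_rad
    # Law of sines: range / sin(laser_angle) = baseline / sin(spot_angle)
    return baseline_m * math.sin(laser_angle_rad) / math.sin(spot_angle)

# Example: 5 cm baseline, laser at 80 deg, spot seen at 70 deg from the baseline
print(f"{triangulation_range(0.05, math.radians(80), math.radians(70)):.3f} m")
```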

A Human-Centered Approach to Robot Gesture Based Communication within Collaborative Working Processes

The increasing ability of industrial robots to perform complex tasks in collaboration with humans requires more capable ways of communication and interaction. Traditional systems use separate interfaces such as touchscreens or control panels to operate the robot, or to communicate its state and prospective actions to the user. Transferring human communication, such as gestures, to technical non-humanoid robots creates various opportunities for more intuitive human-robot interaction. Interaction should no longer require a separate interface such as a control panel; instead, it should take place directly between human and robot. To explore intuitive interaction, we identified gestures relevant for co-working tasks from human observations. Based on a decomposition approach, we transferred them to robotic systems of increasing abstraction and experimentally evaluated how well these gestures are recognized by humans. We created a human-robot interaction use case around the task of handling a dangerous liquid. Results indicate that several gestures are well perceived when displayed with context information regarding the task.

    Integration von Produktdesign in Entwicklungsprozesse der angewandten Forschung

At the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR), an in-house product designer is employed to highlight the excellence of research and development and to better identify, evaluate, and realize the transfer potential of research projects. Using three projects as examples, the working methods and the manner of the designer's integration are presented for the areas of concept development, design and engineering, as well as design management.

    Robot Integrated User Interface for Physical Interaction with the DLR MIRO in Versatile Medical Procedures

To enhance the capability of the DLR MIRO for physical human-robot interaction (pHRI), six buttons were integrated along the robot structure as an additional input interface. A ring of eight RGB LEDs at the instrument interface serves as an additional output interface, informing the user about the robot's state. The mechatronic design, which is transferable to other robots, adapts to the robot's existing communication infrastructure and therefore offers real-time capability. Besides interaction with the robot itself, the interface also allows control of third-party devices connected to the robot's communication network. Both interfaces can be flexibly programmed, e.g. in C++ or Simulink.
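To make the input/output split concrete, here is an illustrative Python model of such a robot-mounted interface: buttons that trigger registered callbacks, and an LED ring that displays a state color. All class and method names are assumptions; the actual MIRO interface is programmed in C++ or Simulink against the robot's real-time communication infrastructure.

```python
from enum import Enum

class RobotState(Enum):
    IDLE = (0, 0, 255)       # blue
    HANDS_ON = (0, 255, 0)   # green: compliant, ready for physical guidance
    FAULT = (255, 0, 0)      # red

class InteractionPanel:
    """Illustrative model of a robot-mounted I/O interface: a few buttons as
    inputs, an RGB-LED ring as output (names are assumptions)."""

    def __init__(self, num_buttons: int = 6, num_leds: int = 8):
        self.callbacks = {i: None for i in range(num_buttons)}
        self.led_ring = [(0, 0, 0)] * num_leds

    def on_button(self, index: int, callback) -> None:
        """Register a handler, e.g. toggling gravity compensation."""
        self.callbacks[index] = callback

    def press(self, index: int) -> None:
        if self.callbacks.get(index):
            self.callbacks[index]()

    def show_state(self, state: RobotState) -> None:
        """Color the whole ring with the state color."""
        self.led_ring = [state.value] * len(self.led_ring)

panel = InteractionPanel()
panel.on_button(0, lambda: panel.show_state(RobotState.HANDS_ON))
panel.press(0)
print(panel.led_ring[0])  # (0, 255, 0)
```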

    Portable 3-D Modeling using Visual Pose Tracking

This work deals with the passive tracking of the pose of a close-range 3-D modeling device using its own high-rate images in real time, concurrently with customary 3-D modeling of the scene. This novel development makes it possible to dispense with inconvenient, expensive external trackers, achieving a portable and inexpensive solution. The approach comprises efficient tracking of natural features following the Active Matching paradigm, frugal use of interleaved feature-based stereo triangulation, visual odometry using the robustified V-GPS algorithm, graph optimization by local bundle adjustment, appearance-based relocalization using a bank of parallel three-point-perspective pose solvers on SURF features, and online reconstruction of the scene in the form of textured triangle meshes to provide visual feedback to the user. Ideally, objects are completely digitized by browsing around the scene; in the event of closing the motion loop, a hybrid graph optimization takes place, which delivers a highly accurate motion history to refine the whole 3-D model within a second. The method has been implemented on the DLR 3D-Modeler; demonstrations and abundant video material validate the approach. These types of low-cost systems have the potential to enhance traditional 3-D modeling and conquer new markets owing to their mobility, passivity, and accuracy.
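As a rough illustration of one stage of such a pipeline, the following sketch runs a single perspective-three-point (P3P) pose solve using OpenCV's generic `solvePnP` API, which in P3P mode needs exactly four correspondences. The landmark coordinates, pixel measurements, and camera intrinsics are made up; this is not the DLR implementation with its bank of parallel solvers on SURF features.

```python
import numpy as np
import cv2  # OpenCV; cv2.solvePnP with the P3P flag requires exactly 4 points

# Hypothetical data: 4 known 3-D landmarks (meters) and their detected pixels.
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.0, 0.1, 0.0],
                          [0.1, 0.1, 0.02]], dtype=np.float64)
image_points = np.array([[320.0, 240.0],
                         [400.0, 238.0],
                         [322.0, 160.0],
                         [401.0, 158.0]], dtype=np.float64)

# Assumed pinhole intrinsics (fx = fy = 600 px, principal point at image center).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_P3P)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the recovered camera pose
    print("camera translation:", tvec.ravel())
```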

    Annotated Bibliography: Grande Bretagne / Great Britain: Part Four: The Literature on Parliament
