    Space Science Opportunities Augmented by Exploration Telepresence

    Since the end of the Apollo missions to the lunar surface in December 1972, humanity has conducted scientific studies on distant planetary surfaces exclusively with teleprogrammed robots. Operations and science return for all of these missions are constrained by two issues related to the great distances between terrestrial scientists and their exploration targets: high communication latencies and limited data bandwidth. Despite the proven successes of in-situ science conducted with teleprogrammed robotic assets such as the Spirit, Opportunity, and Curiosity rovers on the surface of Mars, future planetary field research may substantially overcome latency and bandwidth constraints by employing a variety of alternative strategies: 1) placing scientists/astronauts directly on planetary surfaces, as was done in the Apollo era; 2) developing fully autonomous robotic systems capable of conducting in-situ field science research; or 3) teleoperation of robotic assets by humans sufficiently proximal to the exploration targets to drastically reduce latencies and significantly increase bandwidth, thereby achieving effective human telepresence. This third strategy was the focus of experts in telerobotics, telepresence, planetary science, and human spaceflight during two workshops held October 3–7, 2016, and July 7–13, 2017, at the Keck Institute for Space Studies (KISS). Based on findings from these workshops, this document describes the conceptual and practical foundations of low-latency telepresence (LLT), opportunities for using derivative approaches for scientific exploration of planetary surfaces, and circumstances under which employing telepresence would be especially productive for planetary science. An important finding of these workshops is that there has been limited study of the advantages of planetary science via LLT. A major recommendation is that space agencies such as NASA should make greater investments in this promising strategy to substantially increase science return at distant exploration sites.
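
    As a rough illustration of the latency constraint described above (back-of-the-envelope numbers, not figures from the KISS report), the one-way signal travel time scales directly with the Earth–Mars distance, which ranges from roughly 0.37 AU at closest approach to about 2.67 AU near conjunction:

```python
# Back-of-the-envelope one-way light time for Earth-Mars teleoperation.
# The distances are approximate orbital extremes, not values from the KISS report.
SPEED_OF_LIGHT_KM_S = 299_792.458      # km/s
AU_KM = 149_597_870.7                  # kilometres per astronomical unit

def one_way_light_time_minutes(distance_au: float) -> float:
    """Return the one-way signal travel time in minutes for a distance in AU."""
    return distance_au * AU_KM / SPEED_OF_LIGHT_KM_S / 60.0

for label, d_au in [("closest approach", 0.37), ("near conjunction", 2.67)]:
    print(f"Earth-Mars {label}: ~{one_way_light_time_minutes(d_au):.1f} min one way")

# Round-trip command/response latency is double these values (~6 to ~44 minutes),
# which is why teleoperation from a proximal vantage point (e.g. Mars orbit)
# can reduce latency to milliseconds and enable effective telepresence.
```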

    Healthcare Robotics

    Robots have the potential to be a game changer in healthcare: improving health and well-being, filling care gaps, supporting caregivers, and aiding healthcare workers. However, before robots can be widely deployed, it is crucial that the research and industrial communities work together to establish a strong evidence base for healthcare robotics and to surmount likely adoption barriers. This article presents a broad contextualization of robots in healthcare by identifying key stakeholders, care settings, and tasks; reviewing recent advances in healthcare robotics; and outlining major challenges and opportunities for their adoption. (Comment: 8 pages, Communications of the ACM, 201)

    Anthropomorphic Robot Design and User Interaction Associated with Motion

    Though in its original concept a robot was conceived to have a somewhat human-like shape, most robots now in use have specific industrial purposes and do not closely resemble humans. Nevertheless, robots that resemble the human form in some way have continued to be introduced; they are called anthropomorphic robots. The fact that the user interface to all robots is now highly mediated means that the form of the user interface is not necessarily connected to the robot's form, human or otherwise. Consequently, the unique way the design of anthropomorphic robots affects user interaction is through their general appearance and the way they move. These robots' human-like appearance acts as a kind of generalized predictor that gives their operators, and those with whom they may directly work, the expectation that they will behave to some extent like a human. This expectation is especially prominent for interactions with social robots, which are built to enhance it. Often interaction with them may be mainly cognitive because they are not necessarily kinematically intricate enough for complex physical interaction; their body movement, for example, may be limited to simple wheeled locomotion. An anthropomorphic robot with human form, however, can be kinematically complex and designed, for example, to reproduce the details of human limb, torso, and head movement. Because of the mediated nature of robot control, there remains in general no necessary connection between the specific form of the user interface and the anthropomorphic form of the robot. But their anthropomorphic kinematics and dynamics imply that the impact of their design shows up in the way the robot moves. The central finding of this report is that the control of this motion is a basic design element through which the anthropomorphic form can affect user interaction. In particular, designers of anthropomorphic robots can take advantage of the inherent human-like movement to 1) improve the user's direct manual control over robot limb and body positions, 2) improve users' ability to detect anomalous robot behavior that could signal malfunction, and 3) enable users to better infer the intent of robot movement. These three benefits of anthropomorphic design are inherent implications of the anthropomorphic form, but they need to be recognized by designers as part of anthropomorphic design and explicitly enhanced to maximize their beneficial impact. Examples of such enhancements are provided in this report. If implemented, these benefits of anthropomorphic design can help reduce the risk of Inadequate Design of Human and Automation Robotic Integration (HARI) associated with the HARI-01 gap by providing efficient and dexterous operator control over robots and by improving operator ability to detect malfunctions and understand the intention of robot movement.
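
    One hedged way to make the second benefit concrete (not an example from the report): compare an observed joint trajectory against an idealized human-like minimum-jerk motion profile and flag large deviations as potentially anomalous. The threshold and the trajectories below are hypothetical.

```python
import numpy as np

def minimum_jerk(x0: float, xf: float, t: np.ndarray, duration: float) -> np.ndarray:
    """Idealized human-like point-to-point motion (minimum-jerk profile)."""
    tau = np.clip(t / duration, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def is_anomalous(observed: np.ndarray, t: np.ndarray, duration: float,
                 rms_threshold: float = 0.05) -> bool:
    """Flag a joint trajectory whose RMS deviation (radians) from the expected
    human-like profile exceeds a hypothetical tolerance."""
    expected = minimum_jerk(observed[0], observed[-1], t, duration)
    return float(np.sqrt(np.mean((observed - expected) ** 2))) > rms_threshold

# A trajectory that stalls halfway deviates from the smooth motion a user
# implicitly expects from a human-like arm and is flagged.
t = np.linspace(0.0, 1.0, 101)
nominal = minimum_jerk(0.0, 1.2, t, 1.0)
stalled = np.where(t < 0.5, nominal, nominal[50])   # joint freezes at mid-motion
print(is_anomalous(nominal, t, 1.0), is_anomalous(stalled, t, 1.0))  # False True
```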

    Multisensory wearable interface for immersion and telepresence in robotics

    The idea of being present in a remote location has inspired researchers to develop robotic devices that allow humans to experience the feeling of telepresence. These devices need multiple forms of sensory feedback to provide a more realistic telepresence experience. In this work, we develop a wearable interface for immersion and telepresence that provides humans with the capability both to receive multisensory feedback from vision, touch, and audio and to remotely control a robot platform. Multimodal feedback from the remote environment is based on the integration of sensor technologies coupled to the sensory system of the robot platform. Remote control of the robot is achieved by a modularised architecture, which allows the user to visually explore the remote environment. We validated our work with multiple experiments in which participants, located at different venues, were able to successfully control the robot platform while visually exploring, touching, and listening to a remote environment. In our experiments we used two different robotic platforms: the iCub humanoid robot and the Pioneer LX mobile robot. These experiments show that our wearable interface is comfortable, easy to use, and adaptable to different robotic platforms. Furthermore, we observed that our approach allows humans to experience a vivid feeling of being present in a remote environment.
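
    The control/feedback loop described above can be sketched, under assumptions about the interfaces involved, roughly as follows. The class and method names (WearableInterface, RobotPlatform, and so on) are hypothetical placeholders rather than the authors' actual API; the paper's architecture targets the iCub and Pioneer LX platforms.

```python
import time

class WearableInterface:
    """Hypothetical stand-in for the operator's headset, headphones and tactile sleeve."""
    def read_head_orientation(self):   # e.g. from an IMU on the headset
        return {"yaw": 0.0, "pitch": 0.0}
    def display(self, image): ...      # stereo view of the remote scene
    def play_audio(self, samples): ... # remote microphone stream
    def vibrate(self, touch): ...      # touch events sensed by the robot

class RobotPlatform:
    """Hypothetical adapter wrapping a specific robot (e.g. iCub or Pioneer LX)."""
    def set_gaze(self, orientation): ...   # point the robot's cameras
    def sense(self):                       # gather multimodal feedback
        return {"image": None, "audio": None, "touch": None}

def telepresence_loop(wearable: WearableInterface, robot: RobotPlatform, rate_hz: float = 30.0):
    """One plausible cycle: map head motion to robot gaze, then route
    vision, audio and touch back to the operator."""
    while True:
        robot.set_gaze(wearable.read_head_orientation())
        feedback = robot.sense()
        wearable.display(feedback["image"])
        wearable.play_audio(feedback["audio"])
        wearable.vibrate(feedback["touch"])
        time.sleep(1.0 / rate_hz)
```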

    Haptic Guidance for Extended Range Telepresence

    A novel navigation assistance scheme for extended range telepresence is presented. The haptic information from the target environment is augmented with guidance commands to assist the user in reaching desired goals in the arbitrarily large target environment from the spatially restricted user environment. Furthermore, a semi-mobile haptic interface was developed, whose lightweight design and setup atop the user provide absolutely safe operation and high force-display quality.
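
    A minimal sketch of the guidance idea, assuming the guidance command is a saturated attractive force toward the current goal superimposed on the haptic feedback from the target environment; the gain and saturation values are illustrative, not those of the actual interface.

```python
import numpy as np

def guided_force(env_force: np.ndarray, user_pos: np.ndarray, goal_pos: np.ndarray,
                 gain: float = 20.0, max_guidance: float = 5.0) -> np.ndarray:
    """Augment the rendered environment force with a guidance command
    pulling the user toward the goal (gain in N/m, saturation in N)."""
    guidance = gain * (goal_pos - user_pos)           # spring-like pull toward the goal
    norm = np.linalg.norm(guidance)
    if norm > max_guidance:                           # cap it so guidance never
        guidance *= max_guidance / norm               # overwhelms the environment force
    return env_force + guidance

# Free space, user 0.5 m to the left of the goal: the user feels a gentle,
# capped pull of 5 N along +x on top of whatever the environment renders.
print(guided_force(np.zeros(3), np.array([-0.5, 0.0, 0.0]), np.zeros(3)))
```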

    The Shape of Damping: Optimizing Damping Coefficients to Improve Transparency on Bilateral Telemanipulation

    This thesis presents a novel optimization-based passivity control algorithm for haptic-enabled bilateral teleoperation systems involving multiple degrees of freedom. In particular, in the context of energy-bounding control, the contribution focuses on the implementation of a passivity layer for an existing time-domain scheme, ensuring optimal transparency of the interaction along the subsets of the environment space that are preponderant for the given task, while preserving the energy bounds required for passivity. The involved optimization problem is convex and amenable to real-time implementation. The effectiveness of the proposed design is validated via an experiment performed on a virtual teleoperated environment. The interplay between transparency and stability is a critical aspect of haptic-enabled bilateral teleoperation control. While it is important to present the user with the true impedance of the environment, destabilizing factors such as time delays, stiff environments, and a relaxed grasp on the master device may compromise the stability and safety of the system. Passivity has been exploited as one of the main tools for providing sufficient conditions for stable teleoperation in several controller design approaches, such as the scattering algorithm, time-domain passivity control, the energy-bounding algorithm, and passive set-position modulation. This work presents an innovative energy-based approach, which builds upon existing time-domain passivity controllers, improving and extending their effectiveness and functionality. The damping coefficients are prioritized in each degree of freedom, so that the resulting transparency yields realistic force feedback along the prioritized directions in comparison to the others; the prioritization is carried out with a quadratic programming algorithm that finds the optimal values for the damping. Finally, the energy-tank approach to passivity control is used to ensure stability of the bilateral robotic manipulation system: the bilateral telemanipulation must remain passive at all times to preserve the system's stability. This work also presents a brief introduction to haptic devices as the master component of the telemanipulation chain; the end effector on the slave side interacts with an object in the environment, with a force sensor providing the feedback signal. The whole interface is implemented in the cross-platform framework ROS, through which the user interacts with the system. Experimental results are presented.
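
    One way to read the damping-prioritization step, sketched under my own assumptions rather than as the thesis's exact formulation: at each control period, choose per-axis damping coefficients that dissipate at least the energy the passivity layer demands, while penalizing damping most heavily along the task-preponderant directions so transparency is preserved there. The weights, bounds and numbers below are illustrative; the convex problem is posed with the cvxpy modeling library.

```python
import numpy as np
import cvxpy as cp

def prioritized_damping(velocity, energy_deficit, dt, weights, d_max=50.0):
    """Pick per-DoF damping d >= 0 that dissipates at least `energy_deficit`
    joules over one period `dt`, while penalizing damping on high-weight
    (task-preponderant) axes so they stay transparent."""
    v2 = np.asarray(velocity, dtype=float) ** 2
    w = np.asarray(weights, dtype=float)
    d = cp.Variable(len(v2), nonneg=True)

    dissipated = cp.sum(cp.multiply(v2 * dt, d))            # sum_i d_i * v_i^2 * dt
    objective = cp.Minimize(cp.sum(cp.multiply(w, cp.square(d))))
    problem = cp.Problem(objective, [dissipated >= energy_deficit, d <= d_max])
    problem.solve()
    return d.value

# The x axis (first entry) is task-preponderant, so it carries the largest
# weight and receives the least added damping; the required dissipation is
# shifted onto the other axes, where reduced transparency is acceptable.
print(prioritized_damping(velocity=[0.1, 0.1, 0.1],
                          energy_deficit=1e-3, dt=1e-3,
                          weights=[10.0, 1.0, 1.0]))
```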

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore ways of performing teleoperation using mixed reality techniques. I propose a new type of display: the hybrid-reality display (HRD) system, which utilizes a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach lies in the fact that no wearable device is needed, providing minimal intrusiveness and accommodating the user's eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents, incidents, and user research in military reconnaissance and similar domains. Teleoperation in these environments is compromised by the keyhole effect, which results from the limited field of view of the reference frame. The technical contribution of the proposed HRD system is the multi-system calibration, which mainly involves the motion sensor, projector, cameras, and robotic arm. Given the purpose of the system, the calibration accuracy must be kept within the millimeter level. Follow-up research on the HRD focused on high-accuracy 3D reconstruction of the replica via commodity devices for better alignment of the video frame. Conventional 3D scanners either lack depth resolution or are very expensive. We propose a structured-light-scanning 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies demonstrate the performance of the proposed algorithm. To compensate for the desynchronization between the local and remote stations caused by latency introduced during data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a group of linear equations with a smoothing coefficient ranging from 0 to 1, and the predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts, such as low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
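
    The 1-step-ahead predictor is described only at a high level above; one possible reading (an assumption, not necessarily the author's exact formulation) is a blend, governed by a smoothing coefficient in [0, 1], between the latest measured operator command and a linear extrapolation of its recent trend, so the command sent to the remote robot leads the measurement by roughly one control period.

```python
import numpy as np

def predict_one_step_ahead(history: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Blend the newest command with a first-order extrapolation of its trend.

    history -- past operator commands, shape (n_samples, n_dof), newest last
    alpha   -- smoothing coefficient in [0, 1]: 0 keeps the raw latest command,
               1 fully trusts the extrapolated (one step ahead) command
    """
    latest = history[-1]
    if len(history) < 2:
        return latest
    trend = history[-1] - history[-2]          # recent motion per control step
    extrapolated = latest + trend              # likely command one step from now
    return (1.0 - alpha) * latest + alpha * extrapolated

# Operator hand moving steadily along x: the prediction leads the last
# measurement, partially compensating one control period of latency.
cmds = np.array([[0.00, 0.0], [0.01, 0.0], [0.02, 0.0]])
print(predict_one_step_ahead(cmds))            # approximately [0.027, 0.0]
```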