
    Teleoperation Methods for High-Risk, High-Latency Environments

    Get PDF
    In-Space Servicing, Assembly, and Manufacturing (ISAM) can enable larger-scale and longer-lived infrastructure projects in space, with interest from commercial entities as well as the US government. Servicing, in particular, has the potential to vastly increase the usable lifetimes of satellites. However, the vast majority of spacecraft in low Earth orbit today were not designed to be serviced on orbit. As such, several of the manipulations required during servicing cannot easily be automated and instead require ground-based teleoperation. Ground-based teleoperation of on-orbit robots brings its own challenges: high-latency communication, with telemetry delays of several seconds, and difficulty visualizing the remote environment due to limited camera views. We explore teleoperation methods to alleviate these difficulties, increase task success, and reduce operator load. First, we investigate a model-based teleoperation interface intended to provide the benefits of direct teleoperation even in the presence of time delay. We evaluate the model-based teleoperation method with professional robot operators, then use feedback from that study to inform the design of a visual planning tool for this task, Interactive Planning and Supervised Execution (IPSE). We describe and evaluate the IPSE system and two interfaces, one 2D using a traditional mouse and keyboard and one 3D using an Intuitive Surgical da Vinci master console. We then describe and evaluate an alternative 3D interface using a Meta Quest head-mounted display. Finally, we describe an extension of IPSE that allows human-in-the-loop planning for a redundant robot. Overall, we find that IPSE improves task success rate and decreases operator workload compared to a conventional teleoperation interface.
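
    The interactive-planning / supervised-execution pattern described above can be illustrated with a short sketch: the operator plans and previews motions against a ground-side model, and only an approved plan is uplinked, so the multi-second round trip is paid once per plan rather than once per hand motion. This is a minimal illustration only; the class, function, and parameter names below are assumptions made for the sketch, not the actual IPSE implementation.

        # Hypothetical sketch of plan-locally / execute-remotely teleoperation under delay.
        class LocalPlanner:
            """Plans against the ground-side model of the remote scene."""
            def plan(self, goal_pose):
                # A real planner would return a collision-checked list of waypoints.
                return [goal_pose]

        def plan_and_supervise(planner, uplink, goal_pose, operator_approves):
            trajectory = planner.plan(goal_pose)      # instant: uses only the local model
            if not operator_approves(trajectory):     # operator previews the plan before commit
                return False
            uplink.send(trajectory)                   # one delayed round trip per approved plan
            return True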

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    Get PDF
    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robotic-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation applications in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot working space through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable intuitive control of the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation. An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control and enables spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation while letting the operator readily adjust the maneuvering velocity. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide the user with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot working space through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints to guide the operator’s hand movements following the conical guidance, to effectively align the welding torch for welding and to constrain the welding operation within a collision-free area.
    Overall, this thesis presents a complete telerobotic application that uses mixed reality and immersive elements to effectively translate the operator into the robot’s space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies. The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
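
    As a concrete illustration of the velocity-centric motion mapping described above, the sketch below maps an operator hand velocity to a robot tool-center-point velocity command in ROS. It is a minimal sketch under assumptions: the topic names, the scaling gain, and the presence of a Cartesian velocity controller for the UR5 are illustrative, not taken from the thesis.

        # Minimal ROS sketch: imitation-based, velocity-centric mapping from operator
        # hand velocity (published by a hypothetical MR front end) to a TCP twist command.
        import rospy
        from geometry_msgs.msg import Twist

        SCALE = 0.5  # illustrative gain from hand velocity to TCP velocity

        def on_hand_velocity(hand_twist, pub):
            cmd = Twist()
            cmd.linear.x = SCALE * hand_twist.linear.x
            cmd.linear.y = SCALE * hand_twist.linear.y
            cmd.linear.z = SCALE * hand_twist.linear.z
            cmd.angular = hand_twist.angular   # orientation rate passed through unscaled
            pub.publish(cmd)

        if __name__ == "__main__":
            rospy.init_node("mr_velocity_mapping")
            pub = rospy.Publisher("/tcp_velocity_cmd", Twist, queue_size=1)   # hypothetical topic
            rospy.Subscriber("/mr/hand_velocity", Twist, on_hand_velocity, callback_args=pub)
            rospy.spin()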

    Accelerating Surgical Robotics Research: A Review of 10 Years With the da Vinci Research Kit

    Get PDF
    Robotic-assisted surgery is now well established in clinical practice and has become the gold-standard treatment option for several clinical indications. The field of robotic-assisted surgery is expected to grow substantially in the next decade, with a range of new robotic devices emerging to address unmet clinical needs across different specialities. A vibrant surgical robotics research community is pivotal for conceptualizing such new systems as well as for developing and training the engineers and scientists who translate them into practice. The da Vinci Research Kit (dVRK), an academic and industry collaborative effort to re-purpose decommissioned da Vinci surgical systems (Intuitive Surgical Inc, CA, USA) as a research platform, has been a key initiative for lowering the barrier to entry for new research groups in surgical robotics. In this paper, we present an extensive review of the publications that the dVRK has facilitated over the past decade. We classify research efforts into different categories and outline some of the major challenges and needs for the robotics community to maintain this initiative and build upon it.

    Augmented Reality Navigation Interfaces Improve Human Performance In End-Effector Controlled Telerobotics

    Get PDF
    On the International Space Station (ISS) and space shuttles, the National Aeronautics and Space Administration (NASA) has used robotic manipulators extensively to perform payload handling and maintenance tasks. Teleoperating these robots requires expert skill, and optimal performance is crucial to mission completion and crew safety. Performance degrades when manual control is mediated through remote camera views, resulting in poor end-effector navigation quality and extended task completion times. This thesis explores the application of three-dimensional augmented reality (AR) interfaces specifically designed to improve human performance during end-effector controlled teleoperation. A modular telerobotic test bed was developed for this purpose and several experiments were conducted. In the first experiment, the effect of camera placement on end-effector manipulation performance was evaluated. Results show that increasing misalignment between the displayed end-effector axes and the hand-controller axes (display-control misalignment) increases the time required to process a movement input. Simple AR movement cues were found to mitigate the adverse effects of camera-based teleoperation and made performance invariant to misalignment. Applying these movement cues to payload transport tasks correspondingly demonstrated improvements in free-space navigation quality over conventional end-effector control using multiple cameras. Collision-free teleoperation is also a critical requirement in space. To help operators guide robots safely, a novel method was evaluated: navigation plans computed by a planning agent are presented to the operator sequentially through an AR interface. The plans, in combination with the interface, allow the operator to guide the end-effector safely through collision-free regions of the remote environment. Experimental results show significant benefits in control performance, including reduced path deviation and travel distance. Overall, the results show that AR interfaces can improve performance during manual control of remote robots and have tremendous potential in current and future teleoperated space robotic systems, as well as in contemporary military and surgical applications.
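
    The thesis addresses display-control misalignment with AR movement cues; a related, purely software-side way to see the same geometry is to rotate hand-controller deflections into the frame the operator is viewing, so the commanded motion matches what appears on screen. The numpy sketch below is illustrative only, with assumed frame conventions and variable names; it is not the cue-based method evaluated in the thesis.

        # Re-express a hand-controller deflection (given in the display/camera frame)
        # in the end-effector control frame, using the known camera-to-effector rotation.
        import numpy as np

        def align_input_to_display(hand_input_xyz, R_display_to_effector):
            return R_display_to_effector @ np.asarray(hand_input_xyz, dtype=float)

        # Example: the camera is yawed 90 degrees about the vertical axis relative to the
        # end-effector frame, so "right" on the display maps to "forward" at the robot.
        yaw_90 = np.array([[0.0, -1.0, 0.0],
                           [1.0,  0.0, 0.0],
                           [0.0,  0.0, 1.0]])
        print(align_input_to_display([1.0, 0.0, 0.0], yaw_90))   # -> [0. 1. 0.]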

    A white paper: NASA virtual environment research, applications, and technology

    Get PDF
    Research support for Virtual Environment technology development has been a part of NASA's human factors research program since 1985. Under the auspices of the Office of Aeronautics and Space Technology (OAST), initial funding was provided to the Aerospace Human Factors Research Division at Ames Research Center, where this technology originated. Since 1985, other Centers have begun using and developing this technology, and at each research and space flight center, NASA missions have been major drivers of the work. This White Paper was the joint effort of all the Centers that have been involved in developing the technology and applying it to their unique missions. Appendix A lists those who worked to prepare the document, directed by Dr. Cynthia H. Null, Ames Research Center, and Dr. James P. Jenkins, NASA Headquarters. The White Paper describes the technology and its applications at NASA Centers (Chapters 1, 2, and 3), the potential roles it can take in NASA (Chapters 4 and 5), and a roadmap for the next five years (FY 1994-1998). The intended audience consists of managers, engineers, scientists, and members of the general public with an interest in Virtual Environment technology. Those who read the paper will determine whether this roadmap, or another, is to be followed.

    EFFECTS OF AUGMENTED REALITY BASED OBJECT ILLUMINATION ON HUMAN PERFORMANCE

    Get PDF
    Extravehicular Activities (EVAs) in space are generally considered to be high-risk, costly activities, due to the nature of the working environment and the limitations imposed on astronaut mobility and dexterity. Procedures are scheduled and rehearsed far in advance, with time considered a precious commodity during missions. Providing artificial task guidance to astronauts could potentially improve their efficiency, enabling shorter-duration EVAs and/or a larger number of completed tasks. This research quantitatively measured the effects of virtually illuminating or “cueing” objects of interest on a user’s ability to complete a predefined task, through the use of augmented reality (AR) “active display” symbology implemented on a Microsoft HoloLens™ head-mounted display. It was demonstrated that, after controlling for a variety of factors, virtual illumination techniques improved task completion speed by approximately 100% and reduced perceived mental workload, with no adverse effects on accuracy.

    The Application of Mixed Reality Within Civil Nuclear Manufacturing and Operational Environments

    Get PDF
    This thesis documents the design and application of Mixed Reality (MR) within a nuclear manufacturing cell through the creation of a Digitally Assisted Assembly Cell (DAAC). The DAAC is a proof-of-concept system combining full-body tracking within a room-sized environment and a bi-directional feedback mechanism to allow communication between users within the Virtual Environment (VE) and a manufacturing cell. This allows for training, remote assistance, delivery of work instructions, and data capture within a manufacturing cell. The research underpinning the DAAC encompasses four main areas: the nuclear industry, Virtual Reality (VR) and MR technology, MR within manufacturing, and the 4th Industrial Revolution (IR4.0). Using an array of Kinect sensors, the DAAC was designed to capture user movements within a real manufacturing cell, which can be transferred in real time to a VE, creating a digital twin of the real cell. Users can interact with each other via digital assets and laser pointers projected into the cell, accompanied by a built-in Voice over Internet Protocol (VoIP) system. This allows implicit knowledge to be captured from operators within the real manufacturing cell and transferred to future operators. Additionally, users can connect to the VE from anywhere in the world, so experts are able to communicate with the users in the real manufacturing cell and assist with their training. The human tracking data fills an identified gap in the IR4.0 network of Cyber Physical Systems (CPS), and could allow for future optimisations within manufacturing systems, Material Resource Planning (MRP), and Enterprise Resource Planning (ERP). This project demonstrates how MR could prove valuable within nuclear manufacture. The DAAC is designed to be low cost, which it is hoped will allow its use by groups who have traditionally been priced out of MR technology and help Small to Medium Enterprises (SMEs) close the double digital divide between themselves and larger global corporations. For larger corporations it offers the benefit of being low cost and, consequently, easier to roll out across the value chain. Skills developed in one area can also be transferred to others across the internet, as users from one manufacturing cell can watch and communicate with those in another. However, as a proof of concept, the DAAC is at Technology Readiness Level (TRL) five or six and, prior to its wider application, further testing is required to assess and improve the technology. The work was patented in the UK (S. Reddish et al., 2017a), the US (S. Reddish et al., 2017b), and China (S. Reddish et al., 2017c). The patents are owned by Rolls-Royce and cover the methods of bi-directional feedback through which users can interact from the digital to the real and vice versa. Keywords: Mixed Mode Reality, Virtual Reality, Augmented Reality, Nuclear, Manufacture, Digital Twin, Cyber Physical System.
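
    As an illustration of the real-time link between the tracked manufacturing cell and the VE, the sketch below streams tracked joint positions to a VE host as JSON over UDP. All names, the wire format, and the addresses are assumptions made for the sketch; the DAAC's actual transport and data model are not described in this abstract.

        # Hypothetical sketch: stream body-tracking frames from a sensor PC to the VE host.
        import json
        import socket
        import time

        VE_HOST = ("ve-host.example.local", 9400)   # placeholder address of the VE renderer

        def read_skeleton_frame():
            """Stand-in for a sensor SDK call; returns joint name -> (x, y, z) in metres."""
            return {"head": (0.0, 1.7, 0.5), "hand_right": (0.3, 1.1, 0.7)}

        def stream_tracking(rate_hz=30):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            while True:
                frame = {"t": time.time(), "joints": read_skeleton_frame()}
                sock.sendto(json.dumps(frame).encode("utf-8"), VE_HOST)
                time.sleep(1.0 / rate_hz)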

    Scene Modeling And Augmented Virtuality Interface For Telerobotic Satellite Servicing

    No full text
    Teleoperation in extreme environments can be hindered by limitations in telemetry and in operator perception of the remote environment. Often, the primary mode of perception is visual feedback from remote cameras, which do not always provide suitable views and are subject to telemetry delays. To address these challenges, we propose to build a model of the remote environment and to provide an augmented virtuality visualization system that augments the model with projections of real camera images. The approach is demonstrated in a satellite servicing scenario, with a multi-second round-trip telemetry delay between the operator on Earth and the satellite on orbit. The scene model enables both virtual fixtures to assist the human operator and an augmented virtuality visualization that allows the operator to teleoperate a virtual robot from a convenient virtual viewpoint, with the delayed camera images projected onto the three-dimensional model. Experiments on a ground-based telerobotic platform, with software-created telemetry delays, indicate that the proposed method leads to better teleoperation performance, with 30% better blade alignment and a 50% reduction in task execution time compared to the baseline case where visualization is restricted to the available camera views.
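
    The core geometric step in projecting a delayed camera image onto the scene model can be sketched as a standard pinhole projection: each model vertex is projected into the image captured at the (delayed) camera pose, yielding per-vertex texture coordinates for the renderer. The sketch below assumes a calibrated camera with intrinsics K and pose [R | t]; the variable names are illustrative, and the paper's actual implementation may differ.

        # Project scene-model vertices into a (delayed) camera image to obtain
        # per-vertex texture coordinates for augmented virtuality rendering.
        import numpy as np

        def project_vertices(vertices_world, K, R, t, image_size):
            """Return (u, v) pixel coordinates and a visibility mask per vertex."""
            pts_cam = (R @ vertices_world.T + t.reshape(3, 1)).T      # world -> camera frame
            in_front = pts_cam[:, 2] > 1e-6                           # keep points ahead of the camera
            z = np.where(in_front, pts_cam[:, 2], 1.0)                # avoid divide-by-zero for culled points
            uv = (K @ pts_cam.T).T[:, :2] / z[:, None]
            w, h = image_size
            visible = in_front & (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
            return uv, visible

        # A renderer can then texture the visible triangles with the delayed image while the
        # operator drives a virtual robot model from any convenient virtual viewpoint.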
