
    Commercialization of JPL Virtual Reality calibration and redundant manipulator control technologies

    Within NASA's thrust for industrial collaboration, JPL (Jet Propulsion Laboratory) has recently established two technology cooperation agreements in the robotics area: one on virtual reality (VR) calibration with Deneb Robotics, Inc., and the other on redundant manipulator control with Robotics Research Corporation (RRC). These technology transfer tasks will enable both Deneb and RRC to commercialize enhanced versions of their products that will greatly benefit both space and terrestrial telerobotic applications.

    Demonstration of a High-Fidelity Predictive/Preview Display Technique for Telerobotic Servicing in Space

    A highly effective predictive/preview display technique for telerobotic servicing in space under several seconds of communication time delay was demonstrated at large laboratory scale in May 1993, with the Jet Propulsion Laboratory serving as the simulated ground control station and, 2500 miles away, the Goddard Space Flight Center as the simulated satellite servicing setup. The technique is based on a high-fidelity calibration procedure that enables an accurate overlay of 3-D graphics models of the robot arm and objects onto given 2-D TV camera images of the work site. To generate robot arm motions, the operator can confidently interact in real time with the graphics models overlaid on an actual camera view of the remote work site. The technique also enables the operator to generate high-fidelity synthetic TV camera views showing motion events that are hidden in a given TV camera view or for which no TV camera views are available. The positioning accuracy achieved with a zoomed-in camera setting was about +/-5 mm, well within the allowable +/-12 mm error margin for inserting a 45 cm long tool in the servicing task.
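    The core of the technique is estimating the TV camera's pose so that graphics models project onto the image where the real hardware appears. The snippet below is a minimal, hypothetical sketch of that calibration-and-overlay step using an OpenCV-style pinhole camera model; the point correspondences, intrinsics, and model vertex are illustrative placeholders, not values from the JPL/GSFC demonstration.

```python
# Hypothetical sketch of camera calibration followed by graphics overlay.
# All numeric values are illustrative placeholders.
import numpy as np
import cv2

# Known 3-D reference points (e.g., markers on the task fixture, in the
# work-site frame) and their operator-designated 2-D pixel locations in
# the TV camera image. Four coplanar points suffice for a pose estimate.
object_pts = np.array([[0.0, 0.0, 0.0],
                       [0.5, 0.0, 0.0],
                       [0.5, 0.3, 0.0],
                       [0.0, 0.3, 0.0]], dtype=np.float64)
image_pts = np.array([[310.0, 260.0],
                      [470.0, 255.0],
                      [468.0, 160.0],
                      [312.0, 165.0]], dtype=np.float64)

# Camera intrinsics would normally come from an offline lens calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

# Estimate the camera pose (rotation, translation) relative to the work site.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)

# Any vertex of the graphics robot-arm/object model can now be drawn over
# the live TV image by projecting it through the calibrated camera.
model_vertex = np.array([[0.25, 0.15, 0.10]])
pixel, _ = cv2.projectPoints(model_vertex, rvec, tvec, K, dist)
print("overlay pixel:", pixel.ravel())
```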

    Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    This presentation focuses on the application of computer graphics, or 'virtual reality' (VR), techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer valuable aids for planning, previewing, and predicting robotic actions, for operator training, and for visual perception of non-visible events such as contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration even permits the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

    Advanced teleoperation: Technology innovations and applications

    The capability to perform space assembly, inspection, servicing, and science functions remotely and robotically would rapidly expand our presence in space, and the cost efficiency of being there. There is considerable interest in developing 'telerobotic' technologies, which also have comparably important terrestrial applications in health care, underwater salvage, nuclear waste remediation, and other domains. Such tasks, both space and terrestrial, require a robot and operator interface that is highly flexible and adaptive, i.e., capable of working efficiently in changing and often casually structured environments. One systems approach to this requirement is to augment traditional teleoperation with computer assists -- advanced teleoperation. We have spent a number of years pursuing this approach, and here highlight some key technology developments and their potential commercial impact. This paper is an illustrative summary rather than a self-contained presentation; for completeness, we include representative technical references to our work that allow the reader to follow up on items of particular interest.

    Applying robotics to HAZMAT

    The use of robotics in situations involving hazardous materials can significantly reduce the risk of human injuries. The Emergency Response Robotics Project, begun in October 1990 at the Jet Propulsion Laboratory, is developing a teleoperated mobile robot that allows HAZMAT (hazardous materials) teams to respond remotely to incidents involving hazardous materials. The current robot, called HAZBOT III, can assist in locating, characterizing, identifying, and mitigating hazardous material incidents without risking entry team personnel. The active involvement of the JPL Fire Department HAZMAT team has been vital in developing a robotic system that enables them to perform remote reconnaissance of a HAZMAT incident site. This paper provides a brief review of the history of the project, discusses the current system in detail, and presents other areas in which robotics can be applied to remove people from hazardous environments and operations.

    Model Driven Robotic Assistance for Human-Robot Collaboration

    While robots routinely perform complex assembly tasks in highly structured factory environments, it is challenging to apply completely autonomous robotic systems to less structured manipulation tasks, such as surgery and machine assembly/repair, due to the limitations of machine intelligence, sensor data interpretation, and environment modeling. A practical yet effective approach to accomplishing these tasks is human-robot collaboration, in which the human operator and the robot form a partnership and complement each other in performing a complex task. We recognize that humans excel at determining task goals and recognizing constraints, if given sufficient feedback about the interaction between the tool (e.g., the end-effector of the robot) and the environment. Robots are precise, unaffected by fatigue, and able to work in environments not suitable for humans. We hypothesize that by providing the operator with adequate information about the task, through visual and force (haptic) feedback, the operator can: (1) define the task model, in terms of task goals and virtual fixture constraints, through an interactive or immersive augmented reality interface, and (2) have the robot actively assist the operator to improve the execution time, quality, and precision of the task. We validate our approaches through implementations of both cooperative (i.e., hands-on) control and telerobotic systems, for image-guided robotic neurosurgery and for telerobotic manipulation tasks in satellite servicing under significant time delay.
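    As one concrete illustration of the virtual fixture constraints mentioned above, the sketch below implements a simple "guidance" fixture that attenuates commanded motion orthogonal to a preferred task direction. The function name, gain, and vectors are hypothetical; a real system would derive the fixture geometry from the task model rather than hard-code it.

```python
# Hypothetical sketch of a guidance-type virtual fixture: the operator's
# commanded velocity is split into components along and off a preferred
# task direction, and the off-axis component is attenuated so the robot
# actively assists the operator in staying on the task axis.
import numpy as np

def apply_guidance_fixture(v_cmd, preferred_dir, off_axis_gain=0.1):
    """Attenuate motion orthogonal to the preferred task direction.

    v_cmd: commanded Cartesian velocity (3-vector) from the operator.
    preferred_dir: direction along which motion is unconstrained.
    off_axis_gain: 0 gives a hard fixture, 1 gives no assistance.
    """
    d = preferred_dir / np.linalg.norm(preferred_dir)
    v_along = np.dot(v_cmd, d) * d   # component along the fixture
    v_off = v_cmd - v_along          # component fighting the fixture
    return v_along + off_axis_gain * v_off

# Example: the operator pushes diagonally, but the fixture keeps the
# tool moving mostly along the task axis (here, the x axis).
v = apply_guidance_fixture(np.array([0.10, 0.05, 0.02]),
                           np.array([1.0, 0.0, 0.0]))
print(v)  # -> [0.1, 0.005, 0.002]
```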

    Virtual and Mixed Reality in Telerobotics: A Survey


    Third International Symposium on Artificial Intelligence, Robotics, and Automation for Space 1994

    The Third International Symposium on Artificial Intelligence, Robotics, and Automation for Space (i-SAIRAS 94), held October 18-20, 1994, in Pasadena, California, was jointly sponsored by NASA, ESA, and Japan's National Space Development Agency, and was hosted by the Jet Propulsion Laboratory (JPL) of the California Institute of Technology. i-SAIRAS 94 featured presentations covering a variety of technical and programmatic topics, ranging from underlying basic technology to specific applications of artificial intelligence and robotics to space missions. The symposium also included a special workshop on planning and scheduling, and provided scientists, engineers, and managers with the opportunity to exchange theoretical ideas, practical results, and program plans in such areas as space mission control, space vehicle processing, data analysis, autonomous spacecraft, space robots and rovers, satellite servicing, and intelligent instruments.

    Teleoperation Methods for High-Risk, High-Latency Environments

    In-Space Servicing, Assembly, and Manufacturing (ISAM) can enable larger-scale and longer-lived infrastructure projects in space, with interest ranging from commercial entities to the US government. Servicing, in particular, has the potential to vastly increase the usable lifetimes of satellites. However, the vast majority of spacecraft in low Earth orbit today were not designed to be serviced on orbit. As a result, several of the manipulations required during servicing cannot easily be automated and instead require ground-based teleoperation. Ground-based teleoperation of on-orbit robots brings its own challenges: high-latency communication, with telemetry delays of several seconds, and difficulty visualizing the remote environment due to limited camera views. We explore teleoperation methods to alleviate these difficulties, increase task success, and reduce operator load. First, we investigate a model-based teleoperation interface intended to provide the benefits of direct teleoperation even in the presence of time delay. We evaluate the model-based teleoperation method with professional robot operators, then use feedback from that study to inform the design of a visual planning tool for this task, Interactive Planning and Supervised Execution (IPSE). We describe and evaluate the IPSE system and two interfaces, one 2D using a traditional mouse and keyboard and one 3D using an Intuitive Surgical da Vinci master console. We then describe and evaluate an alternative 3D interface using a Meta Quest head-mounted display. Finally, we describe an extension of IPSE that allows human-in-the-loop planning for a redundant robot. Overall, we find that IPSE improves task success rate and decreases operator workload compared to a conventional teleoperation interface.
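    To make the time-delay problem concrete, the following toy sketch contrasts a local predictive model, which integrates operator commands immediately (what a model-based interface would display), with the delayed state of the remote robot. The 1-D kinematics, rates, and delay value are illustrative stand-ins, not the dissertation's actual system.

```python
# Hypothetical sketch of predictive display under communication delay:
# commands drive a local model instantly, while the remote robot only
# executes commands that arrive after the uplink delay.
from collections import deque

ROUND_TRIP_DELAY_STEPS = 30   # e.g., 3 s of delay at a 10 Hz command rate
DT = 0.1                      # control period in seconds

predicted_state = 0.0         # local model shown to the operator
remote_state = 0.0            # true robot state, updated late
uplink = deque([0.0] * ROUND_TRIP_DELAY_STEPS)  # commands in flight

def step(operator_velocity_cmd):
    global predicted_state, remote_state
    # Predictive display: integrate the command locally with no delay.
    predicted_state += operator_velocity_cmd * DT
    # The real robot executes the command sent DELAY steps ago.
    uplink.append(operator_velocity_cmd)
    remote_state += uplink.popleft() * DT
    return predicted_state, remote_state

for t in range(40):
    pred, actual = step(0.5)  # constant 0.5 m/s operator command
print(f"predicted: {pred:.2f} m, delayed actual: {actual:.2f} m")
# The gap between the two states is exactly the motion still "in flight",
# which is what a predictive interface lets the operator see past.
```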