
    Operator vision aids for space teleoperation assembly and servicing

    This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to specify the appropriate requirements. This paper presents these visual aids in three general categories: camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions, concepts are discussed for: (1) automatic end-effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer-assisted camera/lighting placement; and (5) voice control. In the area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues, such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation, and location are discussed.
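
    To make the zone-display idea concrete, the sketch below shows one hypothetical way an imminent-collision zone cue could be derived from tracked end-effector and obstacle positions. The thresholds, function name, and zone labels are illustrative assumptions, not taken from the paper.

        import numpy as np

        # Illustrative zone thresholds in metres; the paper does not specify values.
        WARNING_DIST = 0.30
        COLLISION_DIST = 0.10

        def classify_zone(end_effector_pos, obstacle_positions):
            """Return a display cue ('safe', 'warning', 'imminent_collision')
            based on the closest obstacle to the end effector."""
            distances = [np.linalg.norm(np.asarray(end_effector_pos) - np.asarray(p))
                         for p in obstacle_positions]
            closest = min(distances) if distances else float("inf")
            if closest < COLLISION_DIST:
                return "imminent_collision"
            if closest < WARNING_DIST:
                return "warning"
            return "safe"

        # Example: end effector at the origin with one nearby and one distant obstacle.
        print(classify_zone([0.0, 0.0, 0.0], [[0.05, 0.0, 0.0], [1.0, 1.0, 1.0]]))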

    Remote Programming of Multirobot Systems within the UPC-UJI Telelaboratories: System Architecture and Agent-Based Multirobot Control

    One of the areas within Internet-based E-Learning environments that needs further improvement, and where a considerable effort is still required, is allowing students to access and practice real experiments in a real laboratory instead of using simulations [1]. Real laboratories allow students to acquire methods, skills, and experience related to real equipment, in a manner that is very close to the way such equipment is used in industry. The purpose of the project is the study, development, and implementation of an E-Learning environment that allows undergraduate students to practice subjects related to Robotics and Artificial Intelligence. The system, which is now at a preliminary stage, will allow remote experimentation with real robotic devices (i.e., robots, cameras, etc.). It will enable the student to learn in a collaborative manner (remote participation with other students), where it will be possible to combine on-site activities (performed in situ within the real laboratory during the normal practical sessions) with on-line ones (performed remotely from home via the Internet). Moreover, the remote experiments within the E-Laboratory to control the real robots can be performed by both students and scientists. This project is under development and is carried out jointly by two universities (UPC and UJI). In this article we present the system architecture and the way students and researchers have been able to perform remote programming of multirobot systems via the web.
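
    As a purely illustrative sketch of the remote-programming idea (the abstract does not describe the actual UPC-UJI interface), the snippet below shows how a student-side client might submit a motion command to a hypothetical telelaboratory web server. The URL, endpoint, and payload fields are assumptions.

        import json
        import urllib.request

        # Hypothetical telelaboratory endpoint; the real interface is not
        # described in the abstract, so this URL and payload are assumptions.
        SERVER_URL = "http://telelab.example.org/api/robots/1/command"

        def send_joint_command(joint_angles_deg):
            """Submit a joint-space motion command to the remote robot server."""
            payload = json.dumps({"command": "move_joints",
                                  "angles_deg": joint_angles_deg}).encode("utf-8")
            request = urllib.request.Request(
                SERVER_URL, data=payload,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(request) as response:
                return json.loads(response.read().decode("utf-8"))

        # Example: ask the remote manipulator to move to a home-like configuration.
        # send_joint_command([0.0, -90.0, 90.0, 0.0, 90.0, 0.0])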

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 344)

    This bibliography lists 125 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System during January 1989. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance.

    A general framework for shared control in robot teleoperation with force and visual feedback

    In the last decade, the topic of human-robot interaction has received increasing interest from research and industry, as robots must now interface with human users to accomplish complex tasks. In this scenario, robotics engineers are required to take the human component into account in robot design and control. This is especially true in telerobotics, where interaction with the user plays an important role in the stability of the controlled system. By means of a thorough analysis and practical experiments, this contribution aims at giving a concrete idea of the aspects that need to be considered in the design of a complete control framework for teleoperated systems that can seamlessly integrate with a human operator through shared control.
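
    As a rough illustration of the shared-control idea (a generic sketch, not the authors' actual formulation), the snippet below blends an operator velocity command with an autonomous assistance command and returns a simple spring-like guidance force for the haptic master; the blending gain and stiffness are assumed values.

        import numpy as np

        def shared_control_step(operator_vel, autonomous_vel, master_pos, target_pos,
                                alpha=0.5, k_guidance=50.0):
            """Blend operator and autonomous velocity commands (shared control) and
            compute a spring-like guidance force rendered on the haptic master.

            alpha      -- authority split between operator (1.0) and autonomy (0.0)
            k_guidance -- guidance stiffness in N/m (illustrative value)
            """
            operator_vel = np.asarray(operator_vel, dtype=float)
            autonomous_vel = np.asarray(autonomous_vel, dtype=float)
            # Commanded slave velocity: convex combination of the two inputs.
            commanded_vel = alpha * operator_vel + (1.0 - alpha) * autonomous_vel
            # Guidance force pulling the master device towards the suggested target.
            guidance_force = k_guidance * (np.asarray(target_pos) - np.asarray(master_pos))
            return commanded_vel, guidance_force

        # Example: operator pushes along +x while the autonomy suggests +y.
        vel, force = shared_control_step([0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                                         master_pos=[0.0, 0.0, 0.0],
                                         target_pos=[0.0, 0.05, 0.0])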

    Automated visual direction: LDRD 38623 final report.


    Visually guided object grasping


    An intelligent, free-flying robot

    The ground-based demonstration of the extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.
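
    The seven-step sequence above is essentially a mission state machine; the sketch below shows one hypothetical way such a sequence could be organized in software. The phase names and placeholder functions are illustrative, not drawn from the actual EVA Retriever implementation.

        # Hypothetical skeleton of the retrieval sequence as a simple state machine.
        # run_phase is a placeholder for real sensing and control code.

        PHASES = [
            "activation_and_checkout",
            "search_and_acquire_target",
            "rendezvous_and_track",
            "avoid_obstacles",
            "reach_and_grapple",
            "return_with_object",
            "return_to_base",
        ]

        def run_phase(name):
            """Placeholder: execute one mission phase and report success or failure."""
            print(f"executing phase: {name}")
            return True  # a real system would return the outcome of the phase

        def run_retrieval_mission():
            for phase in PHASES:
                if not run_phase(phase):
                    print(f"phase '{phase}' failed; aborting and returning to base")
                    return run_phase("return_to_base")
            return True

        run_retrieval_mission()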

    Elderly Assist Robot

    This project aimed to create a robot capable of assisting elderly people with tasks in their everyday lives. The project focused on the design, simulation, and implementation of a mobile robotic base with an attached robotic arm. The project culminated in a prototype robot capable of performing basic chassis and arm control, which can serve as a platform for future development.

    Haptic-Based Shared-Control Methods for a Dual-Arm System

    We propose novel haptic guidance methods for a dual-arm telerobotic manipulation system, which are able to deal with several different constraints, such as collisions, joint limits, and singularities. We combine the haptic guidance with shared-control algorithms for autonomous orientation control and collision avoidance, meant to further simplify the execution of grasping tasks. The stability of the overall system in the various control modalities is presented and analyzed via passivity arguments. In addition, a human-subject study is carried out to assess the effectiveness and applicability of the proposed control approaches in both simulated and real scenarios. Results show that the proposed haptic-enabled shared-control methods significantly improve the performance of grasping tasks with respect to classic teleoperation with neither haptic guidance nor shared control.
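
    As an illustration of one kind of constraint-aware haptic guidance (again a generic sketch, not the authors' method), the snippet below computes a repulsive cue that grows as a joint approaches its limit; the activation margin and gain are assumed values.

        def joint_limit_guidance(q, q_min, q_max, margin=0.2, gain=5.0):
            """Return a scalar guidance torque cue that pushes the joint away from
            its limits once it enters the activation margin (rad). The margin and
            gain are illustrative, not taken from the paper."""
            if q < q_min + margin:
                # Close to the lower limit: push in the positive direction.
                return gain * ((q_min + margin) - q)
            if q > q_max - margin:
                # Close to the upper limit: push in the negative direction.
                return -gain * (q - (q_max - margin))
            return 0.0

        # Example: a joint at 1.4 rad with limits [-1.5, 1.5] rad gets a negative cue.
        print(joint_limit_guidance(1.4, -1.5, 1.5))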