407 research outputs found

    NASA space station automation: AI-based technology review. Executive summary

    Research and Development projects in automation technology for the Space Station are described. Artificial Intelligence (AI) based technologies are planned to enhance crew safety through a reduced need for EVA, to increase crew productivity by reducing routine operations, to increase space station autonomy, and to augment space station capability through the use of teleoperation and robotics.

    NASA space station automation: AI-based technology review

    Research and Development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA, to increase crew productivity by reducing routine operations, to increase space station autonomy, and to augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Human-robot Interaction For Multi-robot Systems

    Designing an effective human-robot interaction paradigm is particularly important for complex tasks such as multi-robot manipulation that require the human and robot to work together in a tightly coupled fashion. Although increasing the number of robots can expand the area that the robots can cover within a bounded period of time, a poor human-robot interface will ultimately compromise the performance of the team of robots. However, introducing a human operator to the team of robots does not automatically improve performance, due to the difficulty of teleoperating mobile robots with manipulators. The human operator's concentration is divided not only among multiple robots but also between controlling each robot's base and arm. This complexity substantially increases the potential neglect time, since the operator's inability to effectively attend to each robot during a critical phase of the task leads to a significant degradation in task performance. There are several proven paradigms for increasing the efficacy of human-robot interaction: 1) multimodal interfaces in which the user controls the robots using voice and gesture; 2) configurable interfaces which allow the user to create new commands by demonstrating them; 3) adaptive interfaces which reduce the operator's workload as necessary through increasing robot autonomy. This dissertation presents an evaluation of the relative benefits of different types of user interfaces for multi-robot systems composed of robots with wheeled bases and three-degree-of-freedom arms. It describes a design for constructing low-cost multi-robot manipulation systems from off-the-shelf parts. User expertise was measured along three axes (navigation, manipulation, and coordination), and participants who performed above threshold on two out of three dimensions on a calibration task were rated as expert. Our experiments reveal that the relative expertise of the user was the key determinant of the best performing interface paradigm for that user, indicating that good user modeling is essential for designing a human-robot interaction system that will be used for an extended period of time. The contributions of the dissertation include: 1) a model for detecting operator distraction from robot motion trajectories; 2) adjustable autonomy paradigms for reducing operator workload; 3) a method for creating coordinated multi-robot behaviors from demonstrations with a single robot; 4) a user modeling approach for identifying expert-novice differences from short teleoperation traces.
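
    The expert-rating rule stated in the abstract (above threshold on two of three calibration axes) is concrete enough to sketch. The axis names come from the abstract; the threshold values and function below are illustrative assumptions for exposition, not the dissertation's actual code.

```python
# Hypothetical sketch of the expert/novice rating rule described above:
# a participant is rated "expert" when they score above threshold on at
# least two of the three calibration axes. Threshold values are assumed.

THRESHOLDS = {"navigation": 0.7, "manipulation": 0.7, "coordination": 0.7}

def rate_expertise(scores: dict[str, float]) -> str:
    """Return 'expert' when >= 2 of the 3 axis scores exceed their threshold."""
    above = sum(scores[axis] > cut for axis, cut in THRESHOLDS.items())
    return "expert" if above >= 2 else "novice"

print(rate_expertise({"navigation": 0.9, "manipulation": 0.8, "coordination": 0.4}))
# -> expert
```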

    Telescience testbedding: An implementation approach

    Telescience is the term used to describe a concept being developed by NASA's Office of Space Science and Applications (OSSA) under the Science and Applications Information System (SAIS) Program. This concept focuses on developing the ability for all OSSA users to interact remotely with all provided information system services in the Space Station era. It includes access to services provided by both the flight and ground components of the system and emphasizes accommodating users at their home institutions. Key to the development of the telescience capability is an implementation approach called rapid-prototype testbedding, which is used to validate the concept and to test the applicability of emerging technologies and operational methodologies. Testbedding will be used first to determine the feasibility of an idea and then its applicability to real science usage. Once a concept is deemed viable, it will be integrated into the operational system for real-time support. It is believed that this approach will greatly decrease the expense of implementing the eventual system and will enhance the capabilities of the delivered system.

    Mixed-initiative Multirobot Control in USAR

    Sharing and Trading in a Human-Robot System

    Object-based task-level control: A hierarchical control architecture for remote operation of space robots

    Expanding man's presence in space requires capable, dexterous robots that can be controlled from Earth. Traditional 'hand-in-glove' control paradigms require the human operator to directly control virtually every aspect of the robot's operation. While the human provides excellent judgment and perception, human interaction is limited by low-bandwidth, delayed communications. These delays make 'hand-in-glove' operation from Earth impractical. In order to alleviate many of the problems inherent in remote operation, Stanford University's Aerospace Robotics Laboratory (ARL) has developed the Object-Based Task-Level Control architecture. Object-Based Task-Level Control (OBTLC) removes the burden of teleoperation from the human operator and enables execution of tasks not possible with current techniques. OBTLC is a hierarchical approach to control in which the human operator specifies high-level, object-related tasks through an intuitive graphical user interface. Infrequent task-level commands replace constant joystick operation, eliminating communications bandwidth and time delay problems. The details of robot control and task execution are handled entirely by the robot and the computer control system. The ARL has implemented the OBTLC architecture on a set of Free-Flying Space Robots. The capability of the OBTLC architecture has been demonstrated by controlling the ARL Free-Flying Space Robots from NASA Ames Research Center.
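
    As a rough illustration of the task-level idea, the sketch below shows one object-level command being expanded into low-level motion primitives onboard, so that only an infrequent command crosses the delayed Earth link. The command vocabulary and decomposition here are our illustrative assumptions, not the ARL implementation.

```python
# A minimal sketch of task-level command handling in the spirit of OBTLC:
# the operator sends one object-level command across the delayed link,
# and the onboard controller expands it into low-level motions locally.
# Verbs, targets, and primitives below are assumed for illustration.

from dataclasses import dataclass

@dataclass
class TaskCommand:
    verb: str      # e.g. "grasp"
    target: str    # object name, resolved onboard by the robot's perception

def decompose(cmd: TaskCommand) -> list[str]:
    """Expand an object-level task into low-level motion primitives onboard."""
    if cmd.verb == "grasp":
        return [f"locate {cmd.target}", f"plan approach to {cmd.target}",
                "open gripper", "servo to pre-grasp pose", "close gripper"]
    raise ValueError(f"unknown task verb: {cmd.verb}")

# One infrequent command replaces a continuous joystick stream:
for step in decompose(TaskCommand("grasp", "beacon_module")):
    print(step)  # executed by the onboard control loop, not the operator
```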

    Advancing automation and robotics technology for the Space Station Freedom and for the US economy

    The progress made by Levels 1, 2, and 3 of the Office of Space Station in developing and applying advanced automation and robotics technology is described. Emphasis is placed upon the Space Station Freedom Program's responses to specific recommendations made in Advanced Technology Advisory Committee (ATAC) progress report 10, the Flight Telerobotic Servicer, and the Advanced Development Program. Assessments are presented for these and other areas as they apply to the advancement of automation and robotics technology for Space Station Freedom.

    Scalable target detection for large robot teams

    In this paper, we present an asynchronous display method, termed the image queue, which allows operators to search through the large amount of data gathered by autonomous robot teams. We discuss and investigate the advantages of an asynchronous display for foraging tasks, with emphasis on Urban Search and Rescue. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment in order to identify targets of interest such as injured victims. It fills the gap for comprehensive and scalable displays to obtain a network-centric perspective for UGVs. We compared the image queue to a traditional synchronous display with live video feeds and found that the image queue reduces errors and operator workload. Furthermore, it disentangles target detection from concurrent system operations and enables a call-center approach to target detection. With such an approach, we can scale up to very large multi-robot systems gathering large amounts of data that are then distributed to multiple operators.
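
    A minimal sketch of such an asynchronous image queue follows, under our assumption (not the paper's) that frame relevance can be summarized as a single scalar score: robots push scored frames, and any free operator pops the most relevant unseen frame instead of monitoring live video streams.

```python
# Illustrative sketch of an asynchronous "image queue" display: a
# priority queue of video frames ordered by an assumed relevance score,
# served best-first to whichever operator is free (call-center style).

import heapq
import itertools

class ImageQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal scores

    def push(self, frame_id: str, relevance: float) -> None:
        # heapq is a min-heap, so negate relevance to pop best-first
        heapq.heappush(self._heap, (-relevance, next(self._counter), frame_id))

    def pop_most_relevant(self) -> str | None:
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

q = ImageQueue()
q.push("robot3/frame_0412", relevance=0.91)  # possible victim detected
q.push("robot1/frame_0033", relevance=0.12)  # empty corridor
print(q.pop_most_relevant())  # -> robot3/frame_0412
```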