
    Information for the user in design of intelligent systems

    Recommendations are made for improving intelligent system reliability and usability based on the use of information requirements in system development. Information requirements define the task-relevant messages exchanged between the intelligent system and the user by means of the user interface medium. Thus, these requirements affect the design of both the intelligent system and its user interface. Many difficulties that users have in interacting with intelligent systems are caused by information problems. These information problems result from the following: (1) not providing the right information to support domain tasks; and (2) not recognizing that using an intelligent system introduces new user supervisory tasks that require new types of information. These problems are especially prevalent in intelligent systems used for real-time space operations, where data problems and unexpected situations are common. Information problems can be solved by deriving information requirements from a description of user tasks. Using information requirements embeds human-computer interaction design into intelligent system prototyping, resulting in intelligent systems that are more robust and easier to use.
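    The abstract describes deriving information requirements from user task descriptions, covering both domain-task information and the new supervisory information an intelligent system introduces. The paper's actual notation is not reproduced here; the following is a minimal sketch of that idea, in which the task names, the message fields, and the helper `derive_requirements` are all hypothetical illustrations:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InformationRequirement:
    """One task-relevant message exchanged between the user and the system."""
    task: str       # the user task this message supports
    direction: str  # "system->user" or "user->system"
    content: str    # what information the message must carry

def derive_requirements(task_descriptions):
    """Derive information requirements from user task descriptions.

    Each task contributes domain-task information (what the user needs to
    do the task itself) plus supervisory information (the new oversight
    task that using an intelligent system introduces).
    """
    requirements = []
    for task in task_descriptions:
        # Domain-task information for performing the task.
        requirements.append(InformationRequirement(
            task, "system->user", f"data needed to perform '{task}'"))
        # Supervisory information so the user can oversee the system.
        requirements.append(InformationRequirement(
            task, "system->user",
            f"system status and confidence while supporting '{task}'"))
    return requirements

reqs = derive_requirements(["monitor cabin pressure"])
for r in reqs:
    print(r.direction, "|", r.content)
```

    The point of the sketch is that every user task yields at least two kinds of requirements, making the supervisory messages explicit rather than an afterthought of interface design.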

    Making intelligent systems team players: Overview for designers

    This report is a guide and companion to the NASA Technical Memorandum 104738, 'Making Intelligent Systems Team Players,' Volumes 1 and 2. The first two volumes of this Technical Memorandum provide comprehensive guidance to designers of intelligent systems for real-time fault management of space systems, with the objective of achieving more effective human interaction. This report provides an analysis of the material discussed in the Technical Memorandum. It clarifies what it means for an intelligent system to be a team player, and how such systems are designed. It identifies significant intelligent system design problems and their impacts on reliability and usability. Where common design practice is not effective in solving these problems, we make recommendations. In this report, we summarize the main points in the Technical Memorandum and identify where to look for further information.

    Architectures and Evaluation for Adjustable Control Autonomy for Space-Based Life Support Systems

    In the past five years, a number of automation applications for control of crew life support systems have been developed and evaluated in the Adjustable Autonomy Testbed at NASA's Johnson Space Center. This paper surveys progress on an adjustable autonomous control architecture for situations where software and human operators work together to manage anomalies and other system problems. When problems occur, the level of control autonomy can be adjusted, so that operators and software agents can work together on diagnosis and recovery. In 1997, adjustable autonomy software was developed to manage gas transfer and storage in a closed life support test. Four crewmembers lived and worked in a chamber for 91 days, with both air and water recycling. CO2 was converted to O2 by gas processing systems and wheat crops. With the automation software, significantly fewer hours were spent monitoring operations. System-level validation testing of the software by interactive hybrid simulation revealed problems both in software requirements and implementation. Since that time, we have been developing multi-agent approaches in which automation software and human operators cooperatively control systems and manage problems. Each new capability has been tested and demonstrated in realistic dynamic anomaly scenarios, using the hybrid simulation tool.
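    The core mechanism described, adjusting the level of control autonomy when problems occur so that operators and software share diagnosis and recovery, can be sketched as a small state machine. The level names, the severity scale, and the escalation thresholds below are hypothetical illustrations, not the testbed's actual design:

```python
from enum import Enum

class AutonomyLevel(Enum):
    MANUAL = 1      # human operator issues all commands
    ADVISORY = 2    # software suggests actions; human approves each one
    SUPERVISED = 3  # software acts; human monitors and can intervene
    AUTONOMOUS = 4  # software acts without routine human involvement

class AdjustableController:
    """Lower the autonomy level when anomalies occur so operators and
    software agents can work on diagnosis and recovery together."""

    def __init__(self):
        self.level = AutonomyLevel.AUTONOMOUS

    def report_anomaly(self, severity):
        # severity in [0, 1]; more severe anomalies hand more control
        # back to the human operator (thresholds are illustrative).
        if severity >= 0.8:
            self.level = AutonomyLevel.MANUAL
        elif severity >= 0.4:
            self.level = AutonomyLevel.ADVISORY
        else:
            self.level = AutonomyLevel.SUPERVISED

ctl = AdjustableController()
ctl.report_anomaly(0.5)
print(ctl.level)
```

    The design choice worth noting is that autonomy is a runtime parameter rather than a fixed property of the system, which is what allows the mixed human/software recovery the abstract describes.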

    Assessment of Alternative Interfaces for Manual Commanding of Spacecraft Systems: Compatibility with Flexible Allocation Policies

    Astronauts will be responsible for executing a much larger body of procedures as human exploration moves further from Earth and Mission Control. Efficient, reliable methods for executing these procedures, including manual, automated, and mixed execution, will be important. Our interface integrates step-by-step instruction with the means for execution. The research reported here compared manual execution using the new system to a system analogous to the manual-only system currently in use on the International Space Station, to assess whether user performance in manual operations would be as good as or better with the new system than with the legacy system. The new system also allows flexible automated execution. The system and our data lay the foundation for integrating automated execution into the flow of procedures designed for humans. In our formative study, we found that the speed and accuracy of manual procedure execution were better with the new, integrated interface than with the legacy design.
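    Mixed execution, where some procedure steps run automatically and others wait for crew action, can be sketched as a simple step runner. The `Step` structure, the `confirm` callback, and the step names are hypothetical; the paper's interface is not reproduced here:

```python
class Step:
    """One procedure step: instruction text paired with its command."""

    def __init__(self, instruction, command, automated=False):
        self.instruction = instruction  # text shown to the crew
        self.command = command          # callable that issues the command
        self.automated = automated      # True: software executes directly

def run_procedure(steps, confirm):
    """Execute steps in order. Automated steps run directly; manual steps
    wait for crew confirmation via the `confirm` callback, which returns
    True when the crewmember approves the step."""
    log = []
    for step in steps:
        if step.automated or confirm(step.instruction):
            step.command()
            log.append(step.instruction)
    return log

steps = [
    Step("open valve", lambda: None, automated=True),
    Step("verify gauge reading", lambda: None),  # manual: crew confirms
]
log = run_procedure(steps, confirm=lambda text: True)
print(log)
```

    Integrating instruction and execution this way means the same procedure definition supports manual, automated, and mixed modes by toggling per-step flags, which is the flexibility the abstract highlights.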

    Human Supervision of Robotic Site Surveys


    Making intelligent systems team players. A guide to developing intelligent monitoring systems

    This reference guide for developers of intelligent monitoring systems is based on lessons learned by developers of the DEcision Support SYstem (DESSY), an expert system that monitors Space Shuttle telemetry data in real time. DESSY makes inferences about commands, state transitions, and simple failures. It performs failure detection rather than in-depth failure diagnostics. A listing of rules from DESSY and cue cards from DESSY subsystems are included to give the development community a better understanding of the selected model system. The G-2 programming tool used in developing DESSY provides an object-oriented, rule-based environment, but many of the principles in use here can be applied to any type of intelligent monitoring system. The step-by-step instructions and examples given for each stage of development are in G-2, but can be used with other development tools. This guide first defines the authors' concept of real-time monitoring systems, then tells prospective developers how to determine system requirements, how to build the system through a combined design/development process, and how to solve problems involved in working with real-time data. It explains the relationships among operational prototyping, software evolution, and the user interface. It also explains methods of testing, verification, and validation. It includes suggestions for preparing reference documentation and training users.
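    DESSY's actual G-2 rules are listed in the guide itself; as a rough illustration of the pattern it describes, rule-based failure detection over telemetry (inferring state transitions and simple failures, not performing in-depth diagnosis), a sketch with entirely hypothetical telemetry names and rules might look like:

```python
# Each rule pairs a telemetry predicate with the inference it supports.
# Predicates and signal names here are illustrative, not DESSY's.
RULES = [
    (lambda t: t["valve_cmd"] == "OPEN" and t["flow"] > 0.0,
     "valve open (state transition confirmed)"),
    (lambda t: t["valve_cmd"] == "OPEN" and t["flow"] == 0.0,
     "FAILURE: valve commanded open but no flow detected"),
    (lambda t: t["valve_cmd"] == "CLOSED" and t["flow"] > 0.0,
     "FAILURE: flow detected with valve commanded closed"),
]

def monitor(telemetry):
    """Run every rule against the latest telemetry sample and return the
    inferences that fire, in rule order. This is failure *detection*
    only; no attempt is made at in-depth diagnosis."""
    return [message for predicate, message in RULES if predicate(telemetry)]

alerts = monitor({"valve_cmd": "OPEN", "flow": 0.0})
print(alerts)
```

    The same shape, a flat rule base evaluated against each real-time sample, carries over to rule-based environments like G-2, where rules are objects rather than Python tuples.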

    Making intelligent systems team players: Additional case studies

    Observations from a case study of intelligent systems are reported as part of a multi-year interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. A series of studies were conducted to investigate issues in designing intelligent fault management systems in aerospace applications for effective human-computer interaction. The results of the initial study are documented in two NASA technical memoranda: TM 104738, Making Intelligent Systems Team Players: Case Studies and Design Issues, Volumes 1 and 2; and TM 104751, Making Intelligent Systems Team Players: Overview for Designers. The objective of this additional study was to broaden the investigation of human-computer interaction design issues beyond the focus on monitoring and fault detection in the initial study. The results of this second study are documented in this report, which is intended as a supplement to the original design guidance documents. These results should be of interest to designers of intelligent systems for use in real-time operations, and to researchers in the areas of human-computer interaction and artificial intelligence.

    Making intelligent systems team players: Case studies and design issues. Volume 1: Human-computer interaction design

    Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real-time fault management capabilities. Intelligent fault management systems within NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real-time fault management in aerospace domains; (2) recommendations and examples for improving intelligent system design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology integrating HCI design into intelligent system design.

    Results from Testing Crew-Controlled Surface Telerobotics on the International Space Station

    During Summer 2013, the Intelligent Robotics Group at NASA Ames Research Center conducted a series of tests to examine how astronauts in the International Space Station (ISS) can remotely operate a planetary rover. The tests simulated portions of a proposed lunar mission, in which an astronaut in lunar orbit would remotely operate a planetary rover to deploy a radio telescope on the lunar far side. Over the course of Expedition 36, three ISS astronauts remotely operated the NASA "K10" planetary rover in an analogue lunar terrain located at the NASA Ames Research Center in California. The astronauts used a "Space Station Computer" (crew laptop), a combination of supervisory control (command sequencing) and manual control (discrete commanding), and Ku-band data communications to command and monitor K10 for 11 hours. In this paper, we present and analyze test results, summarize user feedback, and describe directions for future research.
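    The control split described, supervisory control via command sequencing alongside manual control via discrete commanding, can be sketched as an interface that mixes a queued sequence with immediate single commands. The class, method names, and commands below are hypothetical, not the K10 ground data system:

```python
import queue

class RoverInterface:
    """Sketch of mixing supervisory control (a queued command sequence
    executed step by step) with manual control (single discrete commands
    executed immediately)."""

    def __init__(self):
        self.sequence = queue.Queue()
        self.executed = []

    def upload_sequence(self, commands):
        # Supervisory mode: queue a whole command sequence for the rover.
        for command in commands:
            self.sequence.put(command)

    def step(self):
        # Execute the next queued command, if any remain.
        if not self.sequence.empty():
            self.executed.append(self.sequence.get())

    def discrete_command(self, command):
        # Manual mode: execute one command immediately, ahead of the queue.
        self.executed.append(command)

rover = RoverInterface()
rover.upload_sequence(["drive 2m", "pan camera"])
rover.step()                     # runs "drive 2m" from the sequence
rover.discrete_command("stop")   # crew interjects a manual command
print(rover.executed)
```

    The useful property of this shape is that the operator can interleave discrete commands with a running sequence, which is what makes the supervisory/manual combination workable over a constrained Ku-band link.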

    The Planning Execution Monitoring Architecture

    The Planning Execution Monitoring (PEM) architecture is a design concept for developing autonomous cockpit command and control software. The PEM architecture is designed to reduce operations costs in the space transportation system through the use of automation, while improving safety and operability of the system. Specifically, the PEM autonomous framework enables automatic performance of many vehicle operations that would typically be performed by a human. This framework also supports varying levels of autonomous control, ranging from fully automatic to fully manual. The PEM autonomous framework interfaces with the core flight software to perform flight procedures. It can either assist human operators in performing procedures or autonomously execute routine cockpit procedures based on the operational context. Most importantly, the PEM autonomous framework promotes and simplifies the capture, verification, and validation of flight operations knowledge. Through a hierarchical decomposition of the domain knowledge, the vehicle command and control capabilities are divided into manageable functional "chunks" that can be captured and verified separately. These functional units, each of which has the responsibility to manage part of the vehicle command and control, are modular, re-usable, and extensible. The functional units are also self-contained and have the ability to plan and execute the necessary steps for accomplishing a task based upon the current mission state and available resources. The PEM architecture has potential for application outside the realm of spaceflight, including management of complex industrial processes, nuclear control, and control of complex vehicles such as submarines or unmanned air vehicles.
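    The hierarchical decomposition described, self-contained functional units that each plan and execute the steps for their own part of vehicle command and control, can be sketched as below. The unit names, the planning callback, and the mission-state dictionary are hypothetical illustrations, not the PEM implementation:

```python
class FunctionalUnit:
    """Self-contained 'chunk' of command-and-control knowledge. Each unit
    plans and executes the steps for its own tasks based on the current
    mission state, so it can be captured and verified separately."""

    def __init__(self, name, plan_fn):
        self.name = name
        self.plan_fn = plan_fn  # mission_state -> list of step names

    def execute(self, mission_state):
        steps = self.plan_fn(mission_state)
        # A real unit would command the vehicle; here we just record steps.
        return [f"{self.name}: {step}" for step in steps]

class PEMController:
    """Top level of the hierarchy: delegates tasks to modular units."""

    def __init__(self, units):
        self.units = {unit.name: unit for unit in units}

    def perform(self, unit_name, mission_state):
        return self.units[unit_name].execute(mission_state)

power = FunctionalUnit(
    "power",
    lambda state: ["check bus voltage"] if state.get("phase") == "ascent" else [],
)
pem = PEMController([power])
print(pem.perform("power", {"phase": "ascent"}))
```

    Because each unit owns its own planning logic, verification and validation can proceed unit by unit, which is the knowledge-capture benefit the abstract emphasizes.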