
    PI-in-a-box: Intelligent onboard assistance for spaceborne experiments in vestibular physiology

    Under construction is a knowledge-based system that will aid astronauts in the performance of vestibular experiments in two ways: it will provide real-time monitoring and control of signals, and it will optimize the quality of the data obtained by helping the mission specialists and payload specialists make decisions that are normally the province of a principal investigator, hence the name PI-in-a-box. An important and desirable side effect of this tool will be to make the astronauts more productive and better-integrated members of the scientific team. The vestibular experiments are planned by Prof. Larry Young of MIT, whose team has already performed similar experiments on Spacelab missions SL-1 and D-1 and has experiments planned for SLS-1 and SLS-2. The knowledge-based system development work, performed in collaboration with MIT, Stanford University, and the NASA-Ames Research Center, addresses six major related functions: (1) signal quality monitoring; (2) fault diagnosis; (3) signal analysis; (4) interesting-case detection; (5) experiment replanning; and (6) integration of all of these functions within a real-time data acquisition environment. Initial prototyping work has been done on functions (1) through (4).

    Medics: Medical Decision Support System for Long-Duration Space Exploration

    The Autonomous Medical Operations (AMO) group at NASA Ames is developing a medical decision support system to enable astronauts on long-duration exploration missions to operate autonomously. The system will support clinical actions by providing medical interpretation advice and procedural recommendations during emergent care and clinical work performed by crew. The current state of development of the system, called MedICS (Medical Interpretation Classification and Segmentation), includes two separate aspects: a set of machine learning diagnostic models trained to analyze organ images and patient health records, and an interface to ultrasound diagnostic hardware and to medical repositories. Three sets of organ images and a set of medical records were utilized for training machine learning models for various analyses, as follows: 1. Pneumothorax condition (collapsed lung). The trained model provides a positive or negative diagnosis of the condition. 2. Carotid artery occlusion. The trained model produces a diagnosis of 5 different occlusion levels (including normal). 3. Ocular retinal images. The model extracts optic disc pixels (image segmentation). This is a precursor step for advanced autonomous fundus clinical evaluation algorithms to be implemented in FY20. 4. Medical health records. The model produces a differential diagnosis for any particular individual, based on symptoms and other health and demographic information. A probability is calculated for each of the 25 most common conditions. The same model provides the likelihood of survival. All results are provided with a confidence level. Item 1 images were provided by the US Army and were part of a data set for the clinical treatment of injured battlefield soldiers. This condition is relevant to possible space mishaps, due to pressure management issues. Item 2 images were provided by Houston Methodist Hospital, and item 4 health records were acquired from the MIT Laboratory for Computational Physiology.
The machine learning technology utilized is deep multilayer networks (Deep Learning), and new models will continue to be produced as relevant data is made available and specific health needs of astronaut crews are identified. The interfacing aspects of the system include a GUI for running the different models and for retrieving and storing data, as well as support for integration with an augmented reality (AR) system deployed at JSC by Tietronix Software Inc. (HoloLens). The AR system provides guidance for the placement of an ultrasound transducer that captures images to be sent to the MedICS system for diagnosis. The captured image and the associated diagnosis appear in the technician's AR visual display.
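The pattern of reporting every result together with a confidence level can be sketched as follows. This is a minimal illustration of a classifier head for the five-level carotid occlusion model; the label names, the logits, and the softmax-based confidence are illustrative assumptions, not the actual MedICS models or outputs:

```python
import math

# Hypothetical label set for the five-level carotid occlusion model.
OCCLUSION_LEVELS = ["normal", "mild", "moderate", "severe", "occluded"]

def softmax(logits):
    """Convert raw model outputs into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels=OCCLUSION_LEVELS):
    """Return (predicted label, confidence), mirroring the way MedICS
    reports a diagnosis together with a confidence level."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Illustrative logits, as if produced by a trained deep network.
label, conf = classify([0.2, 3.1, 0.5, -1.0, 0.1])
```

In practice the logits would come from the trained deep multilayer network; the point of the sketch is that the interface returns a label plus a probability that the GUI can surface to the crew.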

    From Diagnosis to Action: An Automated Failure Advisor for Human Deep Space Missions

    The major goal of current space system development at NASA is to enable human travel to deep space destinations such as Mars and asteroids. At that distance, round-trip communication with ground operators may take close to an hour, so it becomes infeasible to seek ground operator advice for problems that require immediate attention, whether for crew safety or for activities that must be performed at specific times for the attainment of scientific results. Major reliance will therefore need to be placed on automation systems capable of aiding the crew in detecting and diagnosing failures, assessing the consequences of those failures, and providing guidance in any repair activities that may be required. We report here on the most recent step in the continuing development of such a system: the addition of a Failure Response Advisor. In simple terms, we have a system in place, the Advanced Caution and Warning System (ACAWS), to tell us what happened (failure diagnosis) and what happened because that happened (failure effects). The Failure Response Advisor will tell us what to do about it, how long until something must be done, and why it's important that something be done, and it will begin to approach the complex reasoning that is generally required for an optimal approach to automated system health management. This advice is based on criticality and various timing elements, such as durations of activities and of component repairs, failure effects delay, and other factors. The failure advice is provided to operators (crew and mission controllers) together with the diagnostic and effects information. The operators also have the option to drill down for more information about the failure and the reasons for any suggested priorities.
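The advisor's ranking of responses by criticality and timing can be sketched as follows. The field names, the example failures, and the specific ordering rule (criticality first, then smallest time margin) are illustrative assumptions, not the published ACAWS logic:

```python
from dataclasses import dataclass

@dataclass
class Failure:
    name: str
    criticality: int        # 1 = most critical (hypothetical scale)
    effect_delay_min: float  # minutes until the failure effect manifests
    repair_min: float        # estimated repair duration, minutes

def time_margin(f: Failure) -> float:
    """Slack before action must begin: the delay until the effect
    appears, minus the time the repair itself takes."""
    return f.effect_delay_min - f.repair_min

def advise(failures):
    """Order failures by criticality first, then by smallest time margin,
    mirroring 'what to do and how long until something must be done'."""
    return sorted(failures, key=lambda f: (f.criticality, time_margin(f)))

# Illustrative failure set, not from any real mission scenario.
queue = advise([
    Failure("cabin fan", criticality=3, effect_delay_min=120, repair_min=20),
    Failure("O2 generator", criticality=1, effect_delay_min=45, repair_min=30),
    Failure("comm antenna", criticality=2, effect_delay_min=90, repair_min=10),
])
```

A real advisor would fold in more factors (activity durations, cascading effects), but the core idea is a deterministic priority queue the crew can inspect and drill into.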

    Principal investigator in a box: Version 1.2 documentation

    Principal Investigator (PI) in a box is a computer system designed to help optimize the scientific results of experiments that are performed in space. The system will assist the astronaut experimenters in the collection and analysis of experimental data, recognition and pursuit of 'interesting' results, optimal use of the time allocated to the experiment, and troubleshooting of the experiment apparatus. This document discusses the problems that motivate development of 'PI-in-a-box', and presents a high-level system overview and a detailed description of each of the modules that comprise the current version of the system.

    PI in the sky: The astronaut science advisor on SLS-2

    The Astronaut Science Advisor (ASA, also known as Principal-Investigator-in-a-Box) is an advanced engineering effort to apply expert systems technology to experiment monitoring and control. Its goal is to increase the scientific value of information returned from experiments on manned space missions. The first in-space test of the system will be in conjunction with Professor Larry Young's (MIT) vestibulo-ocular 'Rotating Dome' experiment on the Spacelab Life Sciences 2 mission (STS-58) in the Fall of 1993. In a cost-saving effort, off-the-shelf equipment was employed wherever possible. Several modifications were necessary in order to make the system flight-worthy. The software consists of three interlocking modules. A real-time data acquisition system digitizes and stores all experiment data and then characterizes the signals in symbolic form; a rule-based expert system uses the symbolic signal characteristics to make decisions concerning the experiment; and a highly graphic user interface requiring a minimum of user intervention presents information to the astronaut operator. Much has been learned about the design of software and user interfaces for interactive computing in space. In addition, we gained a great deal of knowledge about building relatively inexpensive hardware and software for use in space. New technologies are being assessed to make the system a much more powerful ally in future scientific research in space and on the ground
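The three-module pipeline described above — digitize the signal, characterize it symbolically, then let a rule base decide — can be sketched as follows. The thresholds, symbol names, and rules are illustrative assumptions, not the ASA's flight rule base:

```python
def characterize(samples, low=0.5, high=4.5):
    """Reduce a digitized signal to a symbolic label, as the ASA's
    acquisition module does before handing data to the expert system.
    The thresholds here are illustrative, not flight values."""
    mean = sum(samples) / len(samples)
    if mean < low:
        return "flat"        # possible sensor dropout
    if mean > high:
        return "saturated"   # possible amplifier clipping
    return "nominal"

# Toy rule base mapping symbolic signal characteristics to advice
# for the astronaut operator (hypothetical rules).
RULES = {
    "flat": "check electrode contact",
    "saturated": "reduce gain",
    "nominal": "continue run",
}

def advise(samples):
    """End-to-end pipeline: numeric samples -> symbol -> decision."""
    return RULES[characterize(samples)]
```

The separation matters for the design: the real-time acquisition code runs continuously, while the rule-based reasoning operates only on the compact symbolic summaries, keeping the expert system out of the high-rate data path.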

    A System for Fault Management and Fault Consequences Analysis for NASA's Deep Space Habitat

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.
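The System Effects Analysis element — determining what is impacted downstream of a diagnosed root cause — can be sketched as a walk over a component dependency graph. The component names and graph here are hypothetical, not an actual DSH system model:

```python
# Hypothetical dependency graph: each key powers or feeds the
# components it maps to (illustrative, not a real DSH model).
DEPENDS_ON = {
    "solar_array": ["power_bus"],
    "power_bus": ["pump_A", "avionics"],
    "pump_A": ["cooling_loop"],
    "avionics": [],
    "cooling_loop": [],
}

def effects_of(failed, graph=DEPENDS_ON):
    """System Effects Analysis sketch: collect everything downstream
    of a diagnosed root cause ('what happened because that happened')."""
    impacted, frontier = set(), [failed]
    while frontier:
        node = frontier.pop()
        for child in graph.get(node, []):
            if child not in impacted:
                impacted.add(child)
                frontier.append(child)
    return impacted
```

In ACAWS terms, Fault Isolation would supply the `failed` node and the GUI would render the `impacted` set for the controllers and crew; the graph traversal itself is the simple, fast core of impact assessment.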

    A System for Fault Management for NASA's Deep Space Habitat

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy

    An Architecture to Enable Autonomous Control of Spacecraft

    Autonomy is required for manned spacecraft missions distant enough that light-time communication delays make ground-based mission control infeasible. Presently, ground controllers develop a complete schedule of power modes for all spacecraft components based on a large number of factors. The proposed architecture is an early attempt to formalize and automate this process using on-vehicle computation resources. In order to demonstrate this architecture, an autonomous electrical power system controller and a vehicle Mission Manager are constructed. These two components are designed to work together in order to plan upcoming load use as well as respond to unanticipated deviations from the plan. The communication protocol was developed using "paper" simulations prior to formally encoding the messages and developing software to implement the required functionality. These software routines exchange data via TCP/IP sockets, with the Mission Manager operating at NASA Ames Research Center and the autonomous power controller running at NASA Glenn Research Center. The interconnected systems are tested and shown to be effective at planning the operation of a simulated quasi-steady-state spacecraft power system and responding to unexpected disturbances.
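The plan-request/plan-reply exchange between the Mission Manager and the power controller can be sketched as follows. The message fields, the JSON encoding, and the greedy priority-order budget allocation are all illustrative assumptions; the actual protocol and scheduling logic are not described in the abstract:

```python
import json

def plan_request(loads):
    """Mission Manager side: ask the power controller to schedule loads.
    Message shape is hypothetical."""
    return json.dumps({"type": "PLAN_REQUEST", "loads": loads})

def handle(message, available_watts):
    """Power-controller side: grant loads in priority order until the
    quasi-steady-state power budget is exhausted (illustrative policy)."""
    msg = json.loads(message)
    if msg["type"] != "PLAN_REQUEST":
        return json.dumps({"type": "ERROR", "reason": "unknown message"})
    granted, remaining = [], available_watts
    for load in sorted(msg["loads"], key=lambda l: l["priority"]):
        if load["watts"] <= remaining:
            granted.append(load["name"])
            remaining -= load["watts"]
    return json.dumps({"type": "PLAN_REPLY", "granted": granted})

# Illustrative exchange: three loads against a 700 W budget.
reply = handle(plan_request([
    {"name": "heater", "watts": 300, "priority": 1},
    {"name": "camera", "watts": 150, "priority": 3},
    {"name": "drill", "watts": 500, "priority": 2},
]), available_watts=700)
```

In the demonstration described above, messages like these would travel over TCP/IP sockets between Ames and Glenn; the sketch keeps only the encode/decode and planning step, which is where the protocol design work lives.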

    AI's Philosophical Underpinnings: A Thinking Person's Walk through the Twists and Turns of Artificial Intelligence's Meandering Path

    Few human endeavors can be viewed as both extremely successful and unsuccessful at the same time. This is typically the case when goals have not been well defined or have shifted over time. It has certainly been true of Artificial Intelligence (AI). The nature of intelligence has been the object of much thought and speculation throughout the history of philosophy. It is in the nature of philosophy that real headway is sometimes made only when appropriate tools become available. Similarly, the computer, coupled with the ability to program (at least in principle) any function, appeared to be the tool that could tackle the notion of intelligence. To suit the tool, the problem of the nature of intelligence was soon sidestepped in favor of this notion: if a probing conversation with a computer could not be distinguished from a conversation with a human, then AI had been achieved. This notion became known as the Turing test, after the mathematician Alan Turing, who proposed it in 1950. Conceptually rich and interesting, these early efforts gave rise to a large portion of the field's framework. The key to AI was seen not as the 'number crunching' typical of computers until then, but as the ability to manipulate symbols and make logical inferences. To facilitate these tasks, AI languages such as LISP and Prolog were invented and widely used in the field. One idea that emerged and enabled some success with real-world problems was the notion that 'most intelligence' really resided in knowledge. A phrase attributed to Feigenbaum, one of the pioneers, was 'knowledge is power.' With this premise, the problem shifted from 'how do we solve problems' to 'how do we represent knowledge.' A good knowledge representation scheme could allow one to draw conclusions from given premises. Such schemes took forms such as rules, frames, and scripts, and they allowed the building of what became known as expert systems or knowledge-based systems (KBS).