    Automatic Gaze Classification for Aviators: Using Multi-task Convolutional Networks as a Proxy for Flight Instructor Observation

    In this work, we investigate how flight instructors observe aviator scan patterns and assign quality ratings to an aviator's gaze. We first establish that instructors reliably assign similar quality ratings to an aviator's scan patterns, and then investigate methods to automate this quality assessment using machine learning. In particular, we focus on the classification of gaze for aviators in a mixed-reality flight simulation. We create and evaluate two machine learning models for classifying the gaze quality of aviators: a task-agnostic model and a multi-task model. Both models use deep convolutional neural networks to classify the quality of pilot gaze patterns for 40 pilots, operators, and novices, as compared to visual inspection by three experienced flight instructors. Our multi-task model can automate the process of gaze inspection with an average accuracy of over 93.0% across three separate flight tasks. Our approach could assist flight instructors in providing feedback to learners, or it could open the door to more automated feedback for pilots learning to carry out different maneuvers.
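    To make the multi-task architecture concrete, the following is a minimal sketch of what a shared-trunk, per-task-head convolutional classifier for gaze quality could look like. The input encoding (a single-channel gaze heatmap), layer sizes, and number of quality classes are illustrative assumptions, not the authors' actual model.

```python
# Minimal sketch of a multi-task CNN for gaze-quality classification.
# Architecture details (input encoding, layer sizes, number of quality
# classes) are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class MultiTaskGazeNet(nn.Module):
    def __init__(self, n_tasks: int = 3, n_classes: int = 2):
        super().__init__()
        # Shared convolutional trunk over a gaze-pattern image
        # (e.g., a 1-channel fixation heatmap).
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One classification head per flight task.
        self.heads = nn.ModuleList(
            nn.Linear(32, n_classes) for _ in range(n_tasks)
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        return self.heads[task_id](self.trunk(x))

model = MultiTaskGazeNet()
heatmaps = torch.randn(8, 1, 64, 64)   # batch of gaze heatmaps
logits = model(heatmaps, task_id=0)    # quality logits for task 0
```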

    Human Guidance Behavior Decomposition and Modeling

    University of Minnesota Ph.D. dissertation. December 2017. Major: Aerospace Engineering. Advisor: Berenice Mettler. 1 computer file (PDF); x, 128 pages. Trained humans are capable of high-performance, adaptable, and robust first-person dynamic motion guidance behavior. This behavior is exhibited in a wide variety of activities such as driving, piloting aircraft, skiing, and biking. Human performance in such activities far exceeds the current capability of autonomous systems in terms of adaptability to new tasks, real-time motion planning, robustness, and trading safety for performance. The present work investigates the structure of human dynamic motion guidance that enables these performance qualities. This work uses a first-person experimental framework that presents a driving task to the subject, measuring control inputs, vehicle motion, and operator visual gaze movement. The resulting data are decomposed into subspace segment clusters that form primitive elements of action-perception interactive behavior. Subspace clusters are defined by both agent-environment system dynamic constraints and operator control strategies. A key contribution of this work is to define transitions between subspace cluster segments, or subgoals, as points where the set of active constraints, whether system- or operator-defined, changes. This definition provides necessary conditions to determine transition points for a given task-environment scenario, allowing a solution trajectory to be planned from known behavior elements. In addition, human gaze behavior during this task contains predictive elements, indicating that the identified control modes are internally modeled. Based on these ideas, a generative, autonomous guidance framework is introduced that efficiently generates optimal dynamic motion behavior in new tasks. The new subgoal planning algorithm is shown to generate solutions to certain tasks more quickly than existing approaches currently used in robotics.
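    The subgoal definition above (a transition occurs wherever the set of active constraints changes) lends itself to a short sketch. The constraint predicates below (a speed cap and an obstacle-clearance threshold) are invented for illustration; the dissertation's actual constraints come from the agent-environment dynamics and operator strategies.

```python
# Illustrative sketch: declare a subgoal wherever the set of active
# constraints changes along a recorded trajectory. The constraints
# here are hypothetical placeholders, not the dissertation's.
import numpy as np

def active_constraints(state, speed_cap=10.0, clearance=1.0):
    """Return the set of constraints active at one trajectory state."""
    x, y, v, d_obs = state
    active = set()
    if v >= speed_cap:
        active.add("speed_limit")
    if d_obs <= clearance:
        active.add("obstacle_clearance")
    return active

def find_subgoals(trajectory):
    """Indices where the active-constraint set changes (subgoal points)."""
    subgoals = []
    prev = active_constraints(trajectory[0])
    for i, state in enumerate(trajectory[1:], start=1):
        curr = active_constraints(state)
        if curr != prev:
            subgoals.append(i)
            prev = curr
    return subgoals

# States are (x, y, speed, distance-to-obstacle) samples.
traj = np.array([[0, 0, 5, 3], [1, 0, 11, 3], [2, 0, 11, 0.5], [3, 0, 6, 2]])
print(find_subgoals(traj))  # [1, 2, 3]
```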

    Modeling the Human Visuo-Motor System for Remote-Control Operation

    University of Minnesota Ph.D. dissertation. 2018. Major: Computer Science. Advisors: Nikolaos Papanikolopoulos, Berenice Mettler. 1 computer file (PDF); 172 pages. Successful operation of a teleoperated miniature rotorcraft relies on capabilities including guidance, trajectory following, feedback control, and environmental perception. For many operating scenarios, fragile automation systems are unable to provide adequate performance. In contrast, human-in-the-loop systems demonstrate an ability to adapt to changing and complex environments, stability in control response, high-level goal selection and planning, and the ability to perceive and process large amounts of information. Modeling the perceptual processes of the human operator provides the foundation necessary for a systems-based approach to the design of control and display systems used by remotely operated vehicles. In this work we consider flight tasks for remotely controlled miniature rotorcraft operating in indoor environments. Operation of agile robotic systems in three-dimensional spaces requires a detailed understanding of the perceptual aspects of the problem as well as knowledge of the task and models of the operator response. When modeling the human-in-the-loop, the dynamics of the vehicle, environment, and human perception-action are tightly coupled in space and time; the dynamic response of the overall system emerges from the interplay of perception and action. The main questions to be answered in this work are: i) what approach does the human operator implement when generating a control and guidance response? ii) how is information about the vehicle and environment extracted by the human? iii) can the gaze patterns of the pilot be decoded to provide information for estimation and control? This work differs from existing research by focusing on fast-acting dynamic systems in multiple dimensions and investigating how gaze can be exploited to provide action-relevant information. To study human-in-the-loop systems, the development and integration of the experimental infrastructure is described. Using this infrastructure, a theoretical framework for computational modeling of the human pilot’s perception-action is proposed and verified experimentally. The benefits of the human visuo-motor model are demonstrated through application examples in which the perceptual and control functions of a teleoperation system are augmented to reduce workload and provide a more natural human-machine interface.
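    As a rough illustration of the perception-action coupling described above, the sketch below closes a loop in which a gaze-derived target drives a delayed proportional controller acting on a first-order vehicle model. The perceptual delay, gain, and vehicle dynamics are illustrative assumptions only, not the dissertation's identified operator model.

```python
# Minimal closed-loop sketch of gaze-driven perception-action:
# the operator acts on a delayed percept of the target, and the
# command passes through a first-order vehicle lag. Delay, gain,
# and dynamics are hypothetical values chosen for illustration.
import collections

def simulate(gaze_targets, k_p=0.8, delay_steps=3, dt=0.05):
    """Track gaze-derived targets through a perceptual delay."""
    pos, vel = 0.0, 0.0
    percept = collections.deque([gaze_targets[0]] * delay_steps,
                                maxlen=delay_steps)
    history = []
    for target in gaze_targets:
        delayed_target = percept[0]          # operator sees an old percept
        percept.append(target)
        cmd = k_p * (delayed_target - pos)   # proportional response
        vel += (cmd - vel) * dt              # first-order vehicle lag
        pos += vel * dt
        history.append(pos)
    return history

print(simulate([1.0] * 50)[-1])  # position converging toward the target
```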