
    Functional requirements for the man-vehicle systems research facility

    The NASA Ames Research Center proposed a man-vehicle systems research facility to support the flight simulation studies needed to identify and correct the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed, and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for the quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.

    Close Formation Flight Missions Using Vision-Based Position Detection System

    In this thesis, a formation flight architecture is described, along with the implementation and evaluation of a state-of-the-art vision-based algorithm for estimating and tracking a leader vehicle within a close-formation configuration. A vision-based algorithm that uses the Darknet architecture, together with a formation flight control law that tracks and follows a leader with a desired clearance in the forward and lateral directions, is developed and implemented. The architecture runs on a flight computer that handles processing in real time while integrating navigation sensors and a stereo camera. Numerical simulations, along with indoor and outdoor flight tests, demonstrate the detection and tracking capabilities, providing a low-cost, compact, and lightweight solution to the problem of estimating the location of other cooperative or non-cooperative flying vehicles within a formation architecture.
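    The abstract does not give the control law itself, but the idea of holding a desired forward/lateral clearance from an estimated leader position can be illustrated with a minimal proportional station-keeping sketch. The gain and saturation values below are assumptions, not the thesis's design.

```python
def formation_velocity_command(leader_pos, follower_pos, desired_offset,
                               gain=0.8, v_max=2.0):
    """Proportional formation keeping (illustrative sketch only).

    leader_pos, follower_pos: (x, y) positions in a shared frame [m]
    desired_offset: desired follower station relative to the leader [m]
    gain, v_max: assumed proportional gain and velocity saturation limit
    """
    # Error between the desired station (leader + offset) and the follower.
    ex = leader_pos[0] + desired_offset[0] - follower_pos[0]  # forward error
    ey = leader_pos[1] + desired_offset[1] - follower_pos[1]  # lateral error
    # Proportional velocity command, saturated to the vehicle's limits.
    vx = max(-v_max, min(v_max, gain * ex))
    vy = max(-v_max, min(v_max, gain * ey))
    return vx, vy
```

    In practice, the position error would come from the vision-based detector rather than ground truth, and a real design would add damping and filtering of the noisy estimates.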

    Tracking Visual Scanning Techniques in Training Simulation for Helicopter Landing

    Research has shown no consistent findings about how scanning techniques differ between experienced and inexperienced helicopter pilots depending on mission demands. To explore this question, 33 military pilots performed two different landing maneuvers in a flight simulator. The data included scanning data (eye tracking) as well as performance, workload, and a self-assessment of scanning techniques (interviews). Fifty-four percent of the scanning-related differences between pilots resulted from the combination of expertise and mission demands. A comparison of the eye-tracking and interview data revealed that pilots were not always clearly aware of their actual scanning techniques. Eye tracking as a feedback tool for pilots offers a new opportunity to substantiate training as well as research interests within the German Armed Forces.
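    A basic building block of such scanning analyses is the share of gaze samples falling in each area of interest (AOI). The study's actual metrics are not specified in the abstract; the following is a simplified sketch assuming one AOI label per fixed-rate eye-tracker sample.

```python
from collections import Counter

def aoi_dwell_shares(gaze_samples):
    """Fraction of gaze samples in each area of interest (AOI).

    gaze_samples: list of AOI labels, one per eye-tracker sample at a
    fixed sampling rate, so sample counts are proportional to dwell time.
    Returns a dict mapping each AOI label to its dwell-time share.
    """
    counts = Counter(gaze_samples)
    n = len(gaze_samples)
    return {aoi: c / n for aoi, c in counts.items()}
```

    Comparing such shares between expert and novice pilots, separately per maneuver, is one simple way to test whether scanning depends on both expertise and mission demands.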

    Towards Naturalistic Interfaces of Virtual Reality Systems

    Interaction plays a key role in achieving a realistic experience in virtual reality (VR). Its realization depends on interpreting the intent of human motions to give inputs to VR systems. Thus, understanding human motion from a computational perspective is essential to the design of naturalistic interfaces for VR. This dissertation studied three types of human motion in the context of VR: locomotion (walking), head motion, and hand motion. For locomotion, the dissertation presented a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a novel large-scale projective display, called the Wide-Field Immersive Stereoscopic Environment (WISE). The usability of the proposed approach was assessed through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. In addition, the dissertation studied the role of stereopsis in avoiding virtual obstacles while walking by asking participants to step over obstacles and gaps under both stereoscopic and non-stereoscopic viewing conditions in VR experiments. In terms of head motion, the dissertation presented a head gesture interface for interaction in VR that recognizes real-time head gestures on head-mounted displays (HMDs) using Cascaded Hidden Markov Models. Two experiments were conducted to evaluate the proposed approach: the first assessed its offline classification performance, while the second estimated the latency of the algorithm in recognizing head gestures. The dissertation also conducted a user study that investigated the effects of visual and control latency on the teleoperation of a quadcopter using head motion tracked by a head-mounted display. As part of the study, a method for objectively estimating the end-to-end latency in HMDs was presented.
    For hand motion, the dissertation presented an approach that recognizes dynamic hand gestures to implement a hand gesture interface for VR based on a static head gesture recognition algorithm. The proposed algorithm was evaluated offline in terms of its classification performance. A user study was conducted to compare the performance and usability of the head gesture interface, the hand gesture interface, and a conventional gamepad interface for answering Yes/No questions in VR. Overall, the dissertation makes two main contributions toward improving the naturalism of interaction in VR systems. First, the interaction techniques presented in the dissertation can be directly integrated into existing VR systems, offering end users of VR technology more choices for interaction. Second, the results of the user studies of the presented VR interfaces also serve as guidelines for VR researchers and engineers designing future VR systems.
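    The dissertation's classifier uses Cascaded Hidden Markov Models, whose details are not given here. As a much simpler illustration of the underlying signal, a nod versus head-shake decision can be sketched from the relative amplitude of pitch and yaw motion; the threshold below is an assumption.

```python
def classify_head_gesture(yaw, pitch, amp_thresh=5.0):
    """Toy head-gesture classifier (illustration only; the dissertation
    uses Cascaded Hidden Markov Models, not this threshold rule).

    yaw, pitch: equal-length head-angle sequences in degrees for one
    gesture window. A nod shows dominant pitch motion, a head shake
    dominant yaw motion; small motion in both axes classifies as "none".
    """
    yaw_range = max(yaw) - min(yaw)      # peak-to-peak yaw amplitude
    pitch_range = max(pitch) - min(pitch)  # peak-to-peak pitch amplitude
    if max(yaw_range, pitch_range) < amp_thresh:
        return "none"
    return "nod" if pitch_range >= yaw_range else "shake"
```

    An HMM-based recognizer improves on this kind of rule by modeling the temporal shape of the motion, which is what makes real-time segmentation of continuous head tracking feasible.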

    Virtual Cockpit Instruments - How Head-Worn Displays Can Enhance the Obstacle Awareness of Helicopter Pilots

    The rise of augmented reality glasses and related technologies offers new possibilities for the human-machine interface design of future aircraft. Today, head-worn displays (HWDs) are mainly used by military pilots, for instance by helicopter crews for low-visibility operations close to the ground and obstacles. Nevertheless, recent technological advances in this area suggest that these systems could become available to more pilots in the future. This article presents a concept for how state-of-the-art HWD symbology can be expanded to draw even more benefit from this technology. With so-called "virtual cockpit instruments" (VCIs), an HWD can show information that is conventionally rendered on panel-mounted displays. These VCIs can be imagined as virtual display screens that can be positioned freely around the pilot. Their major benefit is that they create many new options for the design of a flexible, situation-adaptive cockpit environment. This article introduces the general concept and presents several options for how such an approach can be put into practice. Here, the concept is applied to helicopter operations in offshore wind parks. We implemented a VCI-adapted obstacle awareness display and assessed a set of positioning variants for the new VCI. Two simulator studies -- with 11 and 7 participants -- provide insights into the realization of this concept. In addition to high subjective ratings, the VCI significantly increased the pilot's head-up, eyes-out time, an important measure for challenging maneuvers close to obstacles. Overall, this article illustrates a promising concept for the human-machine interface design of future cockpits and discusses its potentials and limitations.
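    The head-up, eyes-out time measure mentioned above can be approximated from head-tracking logs. The article's exact definition is not given; this sketch assumes one head-pitch sample per fixed tracker tick and a single assumed pitch threshold separating head-down instrument glances from out-the-window viewing.

```python
def eyes_out_fraction(head_pitch_deg, down_thresh=-20.0):
    """Share of time spent head-up / eyes-out (simplified sketch).

    head_pitch_deg: head pitch angles in degrees, one per fixed-rate
    tracker sample (negative = looking down).
    down_thresh: assumed boundary below which the pilot is considered
    head-down on the instrument panel.
    """
    up = sum(1 for p in head_pitch_deg if p > down_thresh)
    return up / len(head_pitch_deg)
```

    With VCIs placed near the out-the-window scene, this fraction would be expected to rise, since instrument information no longer requires looking down at the panel.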

    Helicopter Handling Qualities

    Helicopters are used by the military and civilian communities for a variety of tasks and must be capable of operating in poor weather conditions and at night. Accompanying extended helicopter operations are a significant increase in pilot workload and a need for better handling qualities. An overview of the status of, and problems in, the development and specification of helicopter handling-qualities criteria is presented. Topics for future research efforts by government and industry are highlighted.

    Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning

    We introduce a method for following high-level navigation instructions by mapping directly from images, instructions, and pose estimates to continuous low-level velocity commands for real-time control. The Grounded Semantic Mapping Network (GSMN) is a fully differentiable neural network architecture that builds an explicit semantic map in the world reference frame by incorporating a pinhole camera projection model within the network. The information stored in the map is learned from experience, while the local-to-world transformation is computed explicitly. We train the model using DAggerFM, a modified variant of DAgger that trades tabular convergence guarantees for improved training speed and memory use. We test GSMN in virtual environments on a realistic quadcopter simulator and show that incorporating explicit mapping and grounding modules allows GSMN to outperform strong neural baselines and almost reach expert policy performance. Finally, we analyze the learned map representations and show that using an explicit map leads to an interpretable instruction-following model.
    Comment: To appear in Robotics: Science and Systems (RSS), 2018.
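    The local-to-world transformation at the heart of such a mapping module rests on pinhole camera geometry. As a hedged illustration only (GSMN embeds this projection inside a neural network, and a real camera also carries a rotation term omitted here), a downward-facing pinhole camera maps an image pixel to a ground-plane point as follows:

```python
def pixel_to_ground(u, v, cam_pos, f, cx, cy):
    """Project a pixel onto the world ground plane z = 0 (illustrative
    sketch for a downward-facing pinhole camera; camera attitude is
    assumed level, which a real system would not assume).

    u, v: pixel coordinates; cam_pos: camera (x, y, altitude) [m]
    f: focal length in pixels; cx, cy: principal point in pixels
    """
    x, y, h = cam_pos
    # The ray through pixel (u, v) has direction ((u-cx)/f, (v-cy)/f, 1)
    # in the camera frame; intersect it with the ground h meters below.
    gx = x + h * (u - cx) / f
    gy = y + h * (v - cy) / f
    return gx, gy
```

    Because this geometric step is computed explicitly rather than learned, the network only has to learn what to store at each map cell, which is part of what makes the resulting maps interpretable.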

    Proceedings, MSVSCC 2012

    Proceedings of the 6th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 19, 2012 at VMASC in Suffolk, Virginia.

    Evaluation of a Head-Mounted Display and Advanced Flight Control Laws for Helicopter Ship Deck Landing

    Within the maritime environment, helicopters can be used for a wide variety of missions, including rescue missions, transport of personnel and material, as well as surveillance and reconnaissance. To perform such tasks on the open sea and to extend the onshore refueling range, ship deck landings are necessary. Adverse weather conditions, such as high winds, fog, and precipitation, lead to strong ship movements and create a turbulent environment on the ship's landing deck. Combined with few visual cues, ship deck operations put a high workload on pilots, which can compromise flight safety. To support pilots during ship deck operations, a symbology concept was integrated into the previously developed head-mounted display (HMD) based on a Microsoft HoloLens 2. Three advanced flight control modes were developed for the approach phase. Results from a simulator campaign with pilots in a realistic scenario indicate that handling qualities can degrade with the HMD and that only the relative translational rate command (RTRC) is suitable as an advanced control mode for ship deck operations.
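    In a translational rate command (TRC) mode of the kind named above, stick deflection commands a ground-referenced velocity rather than an attitude, and the closed-loop vehicle approaches that command with roughly first-order dynamics. The paper's RTRC design is not detailed in the abstract; the gains below are illustrative assumptions.

```python
def trc_step(v, stick, k_v=2.0, tau=1.5, dt=0.02):
    """One integration step of an idealized TRC response (sketch only;
    not the paper's RTRC implementation).

    v: current velocity [m/s]; stick: deflection in [-1, 1]
    k_v: assumed stick-to-velocity gain [m/s per unit deflection]
    tau: assumed closed-loop time constant [s]; dt: time step [s]
    """
    v_cmd = k_v * stick                 # commanded velocity for this stick input
    return v + (v_cmd - v) * dt / tau   # first-order lag toward the command
```

    A relative variant references the commanded velocity to the moving ship deck instead of the ground, which is what makes station keeping over a heaving deck manageable.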