
    Unmanned Aerial Systems: Research, Development, Education & Training at Embry-Riddle Aeronautical University

    With technological breakthroughs in miniaturized aircraft-related components, including but not limited to communications, computer systems, and sensors, state-of-the-art unmanned aerial systems (UAS) have become a reality. This fast-growing industry is anticipating and responding to a myriad of societal applications, providing new, more cost-effective solutions that previous technologies could not, or replacing activities that placed humans in flight at risk. Embry-Riddle Aeronautical University has a long history of aviation-related research and education and is heavily engaged in UAS activities. This document provides a summary of these activities and is divided into two parts. The first part provides a brief summary of each activity, while the second part lists the faculty associated with those activities. Within the first part, we have separated UAS activities into two broad areas, Engineering and Applications; each of these broad areas is then further broken down into six sub-areas, which are listed in the Table of Contents. The second part lists the faculty associated with the UAS activities, sorted by campus (Daytona Beach-D, Prescott-P, and Worldwide-W). The UAS activities and the corresponding faculty are cross-referenced. We have chosen to provide very short summaries of the UAS activities rather than lengthy descriptions. If more information is desired, please contact me directly, visit our research website (https://erau.edu/research), or contact the appropriate faculty member using the e-mail address provided at the end of this document.

    The Underpinnings of Workload in Unmanned Vehicle Systems

    This paper identifies and characterizes factors that contribute to operator workload in unmanned vehicle systems. Our objective is to provide a basis for developing models of workload for use in the design and operation of complex human-machine systems. In 1986, Hart developed a foundational conceptual model of workload, which formed the basis for arguably the most widely used workload measurement technique, the NASA Task Load Index. Since that time, however, there have been many advances in models and factor identification, as well as in workload control measures. Additionally, there is a need to further inventory and describe the factors that contribute to human workload in light of technological advances, including automation and autonomy. Thus, we propose a conceptual framework for the workload construct and present a taxonomy of factors that can contribute to operator workload. These factors, referred to as workload drivers, are associated with a variety of system elements, including the environment, task, equipment, and operator. In addition, we discuss how workload moderators, such as automation and interface design, can be manipulated to influence operator workload. We contend that workload drivers, workload moderators, and the interactions among drivers and moderators all need to be accounted for when building complex human-machine systems.
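
    The driver/moderator distinction lends itself to a simple data model. Below is a minimal, hypothetical sketch, not the paper's framework; the class names, fields, and the aggregation rule are illustrative assumptions showing how drivers attached to system elements add demand while moderators such as automation attenuate it.

```python
# Hypothetical sketch of a driver/moderator taxonomy; names and the
# aggregation rule are illustrative assumptions, not from the paper.
from dataclasses import dataclass
from enum import Enum


class DriverSource(Enum):
    ENVIRONMENT = "environment"
    TASK = "task"
    EQUIPMENT = "equipment"
    OPERATOR = "operator"


@dataclass
class WorkloadDriver:
    name: str
    source: DriverSource
    demand: float  # notional contribution to workload, 0..1


@dataclass
class WorkloadModerator:
    name: str           # e.g., "automation", "interface design"
    attenuation: float  # notional fraction of demand absorbed, 0..1


def estimate_workload(drivers, moderators):
    """Toy aggregate: sum driver demands, then apply moderators
    multiplicatively. Real workload models are far richer; this only
    illustrates that drivers add demand while moderators scale it."""
    raw = sum(d.demand for d in drivers)
    for m in moderators:
        raw *= (1.0 - m.attenuation)
    return raw


drivers = [
    WorkloadDriver("degraded comms link", DriverSource.ENVIRONMENT, 0.4),
    WorkloadDriver("concurrent monitoring tasks", DriverSource.TASK, 0.6),
]
moderators = [WorkloadModerator("automation", 0.3)]
print(estimate_workload(drivers, moderators))  # 0.7 after 30% attenuation
```

    A fuller model would also represent the driver-moderator interactions the paper argues must be accounted for, rather than treating moderators as uniform scaling factors.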

    An integrated task manager for virtual command and control

    The Task Manager is a desktop/tablet PC interface to the Battlespace research project that provides interactions and displays for supervisory control of unmanned aerial vehicles. Utilizing a north-up map display, the Task Manager provides a direct-manipulation interface to the units involved in an engagement. It operates in two primary modes: a planning/review mode for generating mission scenarios, and a live-streaming mode that connects to a live Battlespace simulation over a network connection to edit and update path information on the fly. The goal of this research is to combine the precision of 2D mouse and pen-based interaction with the increased situational awareness provided by 3D battlefield visualizations like the Battlespace application. Combined use of these interfaces, either by a single operator or by a small team of operators with task-specific roles, is proposed to produce a more favorable ratio of operators to units in field operations, with superior decision-making capabilities due to the task-specific nature of the interfaces.
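
    To make the two modes concrete, here is a minimal, hypothetical sketch of how such a dual-mode client might be structured; the host, port, and JSON message format are assumptions for illustration, not the Battlespace project's actual protocol or API.

```python
# Illustrative sketch only: separating offline planning from live path
# updates in a dual-mode interface. The wire format and endpoint below
# are assumptions, not the Battlespace project's API.
import json
import socket
from enum import Enum


class Mode(Enum):
    PLANNING = "planning"  # generate/review mission scenarios offline
    LIVE = "live"          # stream path edits to a running simulation


class TaskManagerClient:
    def __init__(self, mode, host="localhost", port=9000):
        self.mode = mode
        self.scenario = []  # accumulated path edits
        if mode is Mode.LIVE:
            self.sock = socket.create_connection((host, port))

    def update_path(self, unit_id, waypoints):
        """Record a path edit; in live mode, also push it over the network."""
        edit = {"unit": unit_id, "waypoints": waypoints}
        self.scenario.append(edit)
        if self.mode is Mode.LIVE:
            self.sock.sendall((json.dumps(edit) + "\n").encode())


# Planning mode: build a scenario without any network connection.
tm = TaskManagerClient(Mode.PLANNING)
tm.update_path("uav-1", [(29.19, -81.05), (29.21, -81.07)])
```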

    Symbiotic interaction between humans and robot swarms

    Comprising a potentially large team of autonomous, cooperative robots that locally interact and communicate with each other, robot swarms provide a natural diversity of parallel and distributed functionalities, high flexibility, potential for redundancy, and fault tolerance. The use of autonomous mobile robots is expected to increase in the future, and swarm robotic systems are envisioned to play important roles in tasks such as search and rescue (SAR) missions, transportation of objects, surveillance, and reconnaissance operations. To robustly deploy robot swarms in the field with humans, this research addresses fundamental problems in the relatively new field of human-swarm interaction (HSI). Four core classes of problems have been addressed for proximal interaction between humans and robot swarms: interaction and communication; swarm-level sensing and classification; swarm coordination; and swarm-level learning.

    The primary contribution of this research is a bidirectional human-swarm communication system for non-verbal interaction between humans and heterogeneous robot swarms, with SAR missions as the guiding field of application. The core challenges and issues in HSI include: How can human operators interact and communicate with robot swarms? Which interaction modalities can humans use? How can human operators instruct and command robots from a swarm? Which mechanisms can robot swarms use to convey feedback to human operators, and which types of feedback can they convey? To start answering these questions, hand gestures were chosen as the human interaction modality, since gestures are simple to use, easily recognized, and possess spatial-addressing properties. To facilitate bidirectional interaction and communication, a dialogue-based interaction system is introduced that consists of: (i) a grammar-based gesture language with a vocabulary of non-verbal commands that allows humans to efficiently provide mission instructions to swarms, and (ii) a swarm-coordinated multi-modal feedback language that enables robot swarms to robustly convey swarm-level decisions, status, and intentions to humans using multiple individual and group modalities. The gesture language allows humans to select and address single and multiple robots from a swarm, provide commands to perform tasks, specify spatial directions and application-specific parameters, and build iconic grammar-based sentences by combining individual gesture commands. Swarms convey different types of multi-modal feedback to humans using on-board lights, sounds, and locally coordinated robot movements. The swarm-to-human feedback conveys the swarm's understanding of the recognized commands, allows swarms to assess their decisions (i.e., to correct mistakes made by humans in providing instructions and errors made by swarms in recognizing commands), and guides humans through the interaction process.

    The second contribution addresses swarm-level sensing and classification: How can robot swarms collectively sense and recognize hand gestures given as visual signals by humans? Distributed sensing, cooperative recognition, and decision-making mechanisms have been developed that allow robot swarms to collectively recognize visual instructions and commands given by humans in the form of gestures. These mechanisms rely on decentralized data fusion strategies and multi-hop message-passing algorithms to robustly build swarm-level consensus decisions. Measures have been introduced into the cooperative recognition protocol that provide a trade-off between the accuracy of swarm-level consensus decisions and the time taken to build them.

    The third contribution addresses swarm-level cooperation: How can humans select spatially distributed robots from a swarm, and how do the robots understand that they have been selected? How can robot swarms be spatially deployed for proximal interaction with humans? With the introduction of spatially-addressed instructions (pointing gestures), humans can robustly address and select spatially-situated individuals and groups of robots from a swarm. A cascaded classification scheme is adopted in which the robot swarm first identifies the selection command (e.g., individual or group selection), and the robots then coordinate with each other to identify whether they have been selected. To obtain better views of gestures issued by humans, distributed mobility strategies have been introduced for the coordinated deployment of heterogeneous robot swarms (i.e., ground and flying robots) and for reshaping the spatial distribution of swarms.

    The fourth contribution addresses collective learning in robot swarms: How can robot swarms learn the hand gestures given by human operators? How can humans be included in the loop of swarm learning? How can robot swarms cooperatively learn as a team? Online incremental learning algorithms have been developed that allow robot swarms to learn individual gestures and grammar-based gesture sentences in real time, supervised by human instructors. Humans provide different types of feedback (i.e., full or partial feedback) to swarms to improve swarm-level learning. To speed up the learning rate of robot swarms, cooperative learning strategies have been introduced that enable individual robots in a swarm to intelligently select locally sensed information and share (exchange) that information with other robots in the swarm.

    The final contribution is a systemic one: it aims at building a complete HSI system for potential use in real-world applications by integrating the algorithms, techniques, mechanisms, and strategies discussed above. The effectiveness of the global HSI system is demonstrated in a number of interactive scenarios using emulation tests (i.e., simulations using gesture images acquired by a heterogeneous robotic swarm) and through experiments with real robots, using both ground and flying robots.
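
    As an illustration of the consensus-building idea described above, the following sketch shows one simple way a swarm could fuse local gesture classifications through repeated neighbor-to-neighbor vote exchange. The topology, vote format, and fusion rule are assumptions for illustration, not the dissertation's algorithms.

```python
# Hypothetical sketch of swarm-level consensus by decentralized vote
# fusion over multi-hop message passing; all details are illustrative.
from collections import Counter


def swarm_consensus(local_votes, neighbors, rounds=3):
    """Each robot starts with its own gesture classification
    (local_votes[i]) and repeatedly merges the vote tallies of its
    one-hop neighbors. After `rounds` exchanges, tallies have propagated
    `rounds` hops, so more rounds mean better agreement at the cost of
    decision time (the accuracy/latency trade-off noted above)."""
    tallies = [Counter([v]) for v in local_votes]
    for _ in range(rounds):
        merged = []
        for i in range(len(local_votes)):
            c = Counter(tallies[i])
            for j in neighbors[i]:
                c.update(tallies[j])
            merged.append(c)
        tallies = merged
    # Each robot decides on the majority label it has accumulated.
    return [t.most_common(1)[0][0] for t in tallies]


# 5 robots in a line; robots 0-3 saw "select", robot 4 misread "stop".
votes = ["select", "select", "select", "select", "stop"]
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(swarm_consensus(votes, nbrs))  # converges to "select" everywhere
```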

    Assessing the Impact of Haptic Peripheral Displays for UAV Operators

    Objectives: A pilot study was conducted to investigate the effectiveness of continuous haptic peripheral displays in supporting supervisory control of multiple UAVs. Background: Previous research shows that continuous auditory peripheral displays can enhance operator performance in monitoring events that are continuous in nature, such as monitoring how well UAVs stay on their pre-planned courses. This research also shows that auditory alerts can be masked by other auditory information, and command and control operations are generally performed in noisy environments with multiple auditory alerts presented to the operators. To avoid this masking problem, another potentially useful sensory channel for providing redundant information to UAV operators is the haptic channel. Method: A pilot experiment was conducted with 13 participants using a simulated multiple-UAV supervisory control task. All participants completed two haptic feedback conditions (continuous and threshold), in which they received alerts based on UAV course deviations and late arrivals to targets. Results: Threshold haptic feedback was found to be more effective for late target arrivals, whereas continuous haptic feedback resulted in faster reactions to course deviations. Conclusions: Continuous haptic feedback appears to be more appropriate for monitoring events that are continuous in nature (i.e., how well a UAV keeps its course). In contrast, threshold haptic feedback appears to better support response to discrete events (i.e., late target arrivals). Future research: Because this is a pilot study, more research is needed to validate these preliminary findings. A direct comparison between auditory and haptic feedback is also needed to provide better insights into the potential benefits of multi-modal peripheral displays in command and control of multiple UAVs. Prepared for Charles River Analytics, Inc.
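
    A minimal sketch contrasting the two alerting schemes, assuming a normalized 0-1 actuator intensity; the parameters and mapping are illustrative, not the study's implementation.

```python
# Illustrative contrast (not from the study): continuous feedback scales
# with the monitored quantity, while threshold feedback fires only once
# a limit is crossed. The 0..1 "intensity" would drive a haptic actuator.
def continuous_feedback(deviation_m, max_dev_m=100.0):
    """Vibration intensity grows smoothly with course deviation."""
    return min(deviation_m / max_dev_m, 1.0)


def threshold_feedback(deviation_m, limit_m=50.0):
    """Silent below the limit, full-strength alert above it."""
    return 1.0 if deviation_m >= limit_m else 0.0


for dev in (10.0, 49.0, 60.0):
    print(dev, continuous_feedback(dev), threshold_feedback(dev))
# The continuous signal rises with deviation, suiting continuous
# monitoring; the threshold alert appears only at 60 m, matching its
# suitability for discrete events.
```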

    A Realistic Simulation for Swarm UAVs and Performance Metrics for Operator User Interfaces

    Robots have been utilized to support disaster mitigation missions by exploring areas that are either unreachable or hazardous for human rescuers [1]. The great potential for robotics in disaster mitigation has been recognized by the research community, and over the last decade much research has focused on developing robotic systems for this purpose. In this thesis, we present a description of the usage and classification of UAVs and of the performance metrics that affect the control of UAVs. We also present new contributions to the UAV simulator developed by ECSL and RRL: the integration of the flight dynamics of the Hummingbird quadcopter, and distance optimization using a genetic algorithm.
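
    As an illustration of the genetic-algorithm approach mentioned above, the sketch below minimizes the total distance of a waypoint tour. The operators (order crossover, swap mutation, truncation selection) and parameters are generic choices for illustration, not necessarily those used in the thesis.

```python
# Illustrative genetic algorithm for waypoint distance minimization;
# operators and parameters are generic assumptions, not the thesis code.
import math
import random


def tour_length(order, pts):
    return sum(math.dist(pts[order[i]], pts[order[i + 1]])
               for i in range(len(order) - 1))


def crossover(a, b):
    """Order crossover: copy a slice from parent a, fill the rest from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child


def mutate(order, rate=0.1):
    if random.random() < rate:  # swap two waypoints
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order


def ga_route(pts, pop_size=50, generations=200):
    pop = [random.sample(range(len(pts)), len(pts)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: tour_length(o, pts))
        elite = pop[: pop_size // 2]  # truncation selection
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda o: tour_length(o, pts))


pts = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 2)]
best = ga_route(pts)
print(best, round(tour_length(best, pts), 2))
```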

    Future Roles for Autonomous Vertical Lift in Disaster Relief and Emergency Response

    System analysis concepts are applied to assessing the potential collaborative contributions of autonomous-system and vertical-lift (a.k.a. rotorcraft, VTOL, powered-lift, etc.) technologies to the important, and perhaps underemphasized, application domain of disaster relief and emergency response. In particular, an analytic framework is outlined whereby system design functional requirements for an application domain can be derived from defined societal-good goals and objectives.