
    Discrete Event Simulation of Distributed Team Communication Architecture

    As the United States Department of Defense continues to increase the number of Remotely Piloted Aircraft (RPA) operations overseas, improved Human Systems Integration becomes increasingly important. RPA systems rely heavily on distributed team communications determined by systems architecture. Two studies examine the effects of systems architecture on the workload of US Air Force MQ-1/9 operators. The first study ascertains the effects of communication modality changes on mental workload, using the Improved Performance Research Integration Tool (IMPRINT) to estimate pilot workload; allocating communication across modalities minimizes workload. The second study uses IMPRINT to model Mission Intelligence Controllers (MICs) and the effect of the system architecture upon them. Four system configurations were simulated at four mission activity levels. Mental workload, monitoring time, and the number of delayed tasks were estimated to determine the effect of changing system architecture parameters. Literature and MIC interviews provided parameters for the model. The analysis demonstrates that the proposed changes have significant effects which, in some conditions, bring the overall workload function toward a proposed theoretical optimum.
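The kind of discrete event simulation the abstract describes can be illustrated with a toy single-operator queue model. This is only a sketch: IMPRINT's workload model is far richer, and the arrival rate, service time, and "delayed task" definition below are hypothetical.

```python
import heapq
import random

def simulate_workload(arrival_rate, service_time, sim_time, seed=0):
    """Toy discrete-event simulation of one operator's task stream.

    Tasks arrive at `arrival_rate` per minute; each demands
    `service_time` minutes of attention. Tasks arriving while the
    operator is busy are counted as delayed. Returns (utilization,
    number of delayed tasks).
    """
    rng = random.Random(seed)
    busy_until = 0.0   # time until which the operator is occupied
    busy_time = 0.0    # total minutes spent on tasks
    delayed = 0
    events = []
    heapq.heappush(events, rng.expovariate(arrival_rate))  # first arrival
    while events:
        t = heapq.heappop(events)
        if t > sim_time:
            break
        if t < busy_until:
            delayed += 1            # operator occupied: task must wait
            start = busy_until
        else:
            start = t
        busy_until = start + service_time
        busy_time += service_time
        # Schedule the next arrival.
        heapq.heappush(events, t + rng.expovariate(arrival_rate))
    return min(busy_time / sim_time, 1.0), delayed
```

Sweeping the arrival rate across activity levels and comparing utilization and delayed-task counts mirrors, in miniature, the configuration comparison the study performs.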

    MULTI-MODAL TASK INSTRUCTIONS TO ROBOTS BY NAIVE USERS

    This thesis presents a theoretical framework for the design of user-programmable robots. The objective of the work is to investigate multi-modal, unconstrained natural instructions given to robots in order to design a learning robot. A corpus-centred approach is used to design an agent that can reason, learn and interact with a human in a natural, unconstrained way. The corpus-centred design approach is formalised and developed in detail. It requires the developer to record a human during interaction and analyse the recordings to find instruction primitives. These are then implemented in a robot. The focus of this work has been on how to combine speech and gesture using rules extracted from the analysis of a corpus. A multi-modal integration algorithm is presented that uses timing and semantics to group, match and unify gesture and language. The algorithm always achieves correct pairings on the corpus and initiates questions to the user in cases of ambiguity or missing information. The domain of card games has been investigated because of its variety of games, which are rich in rules and contain sequences. A further focus of the work is the translation of rule-based instructions; most multi-modal interfaces to date have only considered sequential instructions. A combination of frame-based reasoning, a knowledge base organised as an ontology, and a problem-solver engine is used to store these rules. The understanding of rule instructions, which contain conditional and imaginary situations, requires an agent with complex reasoning capabilities. A test system of the agent implementation is also described. Tests that confirm the implementation by playing back the corpus are presented. Furthermore, deployment test results with the implemented agent and human subjects are presented and discussed.
The tests showed that the rate of errors caused by sentences not being defined in the grammar does not decrease at an acceptable rate when new grammar is introduced. This was particularly the case for complex verbal rule instructions, which can be expressed in a wide variety of ways.
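The timing-and-semantics fusion the abstract describes can be sketched as follows. The event representation, the 1.5-second window, and the semantic labels are all hypothetical; the thesis derives its actual grouping rules from the corpus.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str                    # "speech" or "gesture"
    start: float                 # seconds
    end: float
    semantics: set = field(default_factory=set)  # e.g. {"card"}

def pair_events(speech, gestures, max_gap=1.5):
    """Pair each speech event with the one gesture that is close in
    time AND shares a semantic label. If zero or several gestures
    qualify, the speech event is returned as unresolved, modelling the
    agent asking the user a clarification question.
    """
    pairs, unresolved = [], []
    free = list(gestures)
    for s in speech:
        candidates = [g for g in free
                      if abs(g.start - s.start) <= max_gap
                      and g.semantics & s.semantics]
        if len(candidates) == 1:
            free.remove(candidates[0])
            pairs.append((s, candidates[0]))
        else:
            unresolved.append(s)   # ambiguous or missing -> ask user
    return pairs, unresolved
```

Requiring a unique candidate, rather than greedily taking the nearest one, is what lets the sketch fall back to a question in ambiguous cases, matching the behaviour the abstract reports.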

    Procedural-Reasoning Architecture for Applied Behavior Analysis-based Instructions

    Autism Spectrum Disorder (ASD) is a complex developmental disability affecting as many as 1 in every 88 children. While there is no known cure for ASD, there are known behavioral and developmental interventions, based on demonstrated efficacy, that have become the predominant treatments for improving social, adaptive, and behavioral functions in children. Applied Behavior Analysis (ABA)-based early childhood interventions are evidence-based, efficacious therapies for autism that are widely recognized as effective approaches to remediation of the symptoms of ASD. They are, however, labor intensive and consequently often inaccessible at the recommended levels. Recent advancements in socially assistive robotics and applications of virtual intelligent agents have shown that children with ASD accept intelligent agents as effective and often preferred substitutes for human therapists. This research is nascent and highly experimental, with no unifying, interdisciplinary, and integral approach to the development of intelligent-agent-based therapies, especially not in the area of behavioral interventions. Motivated by the absence of a unifying framework, we developed a conceptual procedural-reasoning agent architecture (PRA-ABA) that, we propose, could serve as a foundation for ABA-based assistive technologies involving virtual, mixed, or embodied agents, including robots. This architecture and the related research presented in this dissertation encompass two main areas: (a) knowledge representation and a computational model of the behavioral aspects of ABA as applicable to autism intervention practices, and (b) an abstract architecture for multi-modal, agent-mediated implementation of these practices.

    The Effectiveness of Augmented Reality as a Facilitator of Information Acquisition in Aviation Maintenance Applications

    Until recently, little research attention had been paid to the cognitive benefits of Augmented Reality (AR), an emerging technology. AR, the synthesis of computer images and text with the real world, affords a supplement to normal information acquisition that has yet to be fully explored and exploited. AR achieves a smoother, more seamless interface by complementing human cognitive networks and aiding information integration through multimodal sensory elaboration (visual, verbal, proprioceptive, and tactile memory) while the user performs real-world tasks. AR also incorporates visuo-spatial ability, which involves the representation of spatial information in memory; the use of this type of information is an extremely powerful form of elaboration. This study examined four learning paradigms: print (printed material) mode, observe (videotape) mode, interact (text annotations activated by mouse interaction) mode, and select (AR) mode. The results of the experiment indicated that the select (AR) mode resulted in better learning and recall than the other three conventional learning modes.

    VISION-BASED URBAN NAVIGATION PROCEDURES FOR VERBALLY INSTRUCTED ROBOTS

    The work presented in this thesis is part of a project in instruction-based learning (IBL) for mobile robots, where a robot is designed that can be instructed by its users through unconstrained natural language. The robot uses vision guidance to follow route instructions in a miniature town model. The aim of the work presented here was to determine the functional vocabulary of the robot in the form of "primitive procedures". In contrast to previous work in the field of instructable robots, this was done following a "user-centred" approach, where the main concern was to create primitive procedures that can be directly associated with natural language instructions. To achieve this, a corpus of human-to-human natural language instructions was collected and analysed. A set of primitive actions was found with which the collected corpus could be represented. These primitive actions were then implemented as robot-executable procedures. Natural language instructions are under-specified when destined to be executed by a robot. This is because instructors omit information that they consider "commonsense" and rely on the listener's sensory-motor capabilities to determine the details of the task execution. In this thesis the under-specification problem is solved by determining the missing information, either during the learning of new routes or during their execution by the robot. During learning, the missing information is determined by imitating the commonsense approach human listeners take to achieve the same purpose. During execution, missing information, such as the location of road layout features mentioned in route instructions, is determined from the robot's view by using image template matching. The original contribution of this thesis, in both these methods, lies in the fact that they are driven by the natural language examples found in the corpus collected for the IBL project.
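Template matching of the kind used to locate road-layout features can be sketched as a sliding-window comparison. This minimal pure-Python version uses sum of squared differences over 2-D lists of pixel intensities; a real system would use normalized cross-correlation on camera frames (e.g. OpenCV's `matchTemplate`), and the feature patches here are hypothetical.

```python
def find_feature(view, template):
    """Locate `template` (e.g. a junction-corner patch) inside `view`
    by exhaustive sliding-window sum of squared differences (SSD).
    Both arguments are 2-D lists of pixel intensities. Returns the
    (row, col) of the best-matching position.
    """
    th, tw = len(template), len(template[0])
    best_score, best_pos = float("inf"), (0, 0)
    for r in range(len(view) - th + 1):
        for c in range(len(view[0]) - tw + 1):
            # Lower SSD means a closer match to the template.
            ssd = sum((view[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best_score:
                best_score, best_pos = ssd, (r, c)
    return best_pos
```

In the route-following setting, the returned position of a feature such as a junction would then anchor the under-specified part of an instruction like "turn left at the junction".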
During the testing phase, a high success rate of primitive calls, when these were considered individually, showed that the under-specification problem has overall been solved. A novel method for testing the primitive procedures, as part of complete route descriptions, is also proposed in this thesis. This was done by comparing the performance of human subjects driving the robot while following route descriptions with the performance of the robot executing the same route descriptions. The results obtained from this comparison clearly indicated where errors occur between the time a human speaker gives a route description and the time the task is executed by a human listener or by the robot. Finally, a software speed controller is proposed in this thesis to control the wheel speeds of the robot used in this project. The controller employs PI (Proportional and Integral) and PID (Proportional, Integral and Differential) control and provides a good alternative to expensive hardware.
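A discrete PI/PID controller of the kind the abstract mentions can be sketched as below. The gains, sample time, and saturation limit are hypothetical, not the values tuned in the thesis; setting `kd=0` gives pure PI control.

```python
class PIDController:
    """Discrete PI/PID wheel-speed controller (illustrative sketch)."""

    def __init__(self, kp, ki, kd=0.0, dt=0.02, limit=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt               # sample period in seconds
        self.limit = limit         # actuator saturation bound
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """One control step: returns the clamped command signal."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = (self.kp * error
               + self.ki * self.integral
               + self.kd * derivative)
        # Clamp to the actuator's range.
        return max(-self.limit, min(self.limit, out))
```

Calling `update(target_speed, measured_speed)` once per sample period yields the wheel command; implementing this in software is what makes the controller a cheap substitute for dedicated motor-control hardware.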