2 research outputs found

    Application of Human-Autonomy Teaming (HAT) Patterns to Reduce Crew Operations (RCO)

    Get PDF
    Unmanned aerial systems, advanced cockpits, and air traffic management are all seeing dramatic increases in automation. However, while automation may take on some tasks previously performed by humans, humans will still be required to remain in the system for the foreseeable future. The collaboration between humans and these increasingly autonomous systems will begin to resemble cooperation between teammates rather than simple task allocation. It is critical to understand this human-autonomy teaming (HAT) in order to optimize these systems in the future. One way to understand HAT is to identify recurring patterns of HAT that share similar characteristics and solutions. This paper applies a methodology for identifying HAT patterns to an advanced cockpit project.

    Speech understanding in open tasks

    No full text
    The Air Traffic Information Service task is currently used by DARPA as a common evaluation task for Spoken Language Systems. This task is an example of an open-type task: subjects are given a goal and allowed to interact spontaneously with the system by voice. There is no fixed lexicon or grammar, and subjects are likely to exceed those used by any given system. In order to evaluate system performance on such tasks, a common corpus of training data has been gathered and annotated. An independent test corpus was created in a similar fashion. This paper explains the techniques used in our system and reports performance results on the standard set of tests used to evaluate such systems.