Human Automation Teaming: Lessons Learned and Future Directions

Abstract

Full autonomy seems to be the goal for system developers in almost every sector of the economy. However, as we move from automated systems to autonomous systems, designers have needed to insert humans to oversee automation that has traditionally been brittle or incomplete. This creates its own problems, as the operator is usually out of the loop when the automation hands over situations it cannot handle. To better manage these situations, it has been proposed that we develop human-automation teams that have shared goals and objectives to support task performance. This paper describes an initial model of Human Automation Teaming (HAT) with three elements: transparency, bi-directional communication, and human-directed execution. In our model, transparency is a method for giving insight into the reasoning behind automated recommendations and actions; bi-directional communication allows the operator to communicate directly with the automation; and human-directed execution means the automation defers execution of actions to the human. The model was implemented through a number of features on an electronic flight bag (EFB), which are described in the paper. The EFB was installed in a mid-fidelity flight simulator and used by 12 airline pilots to support diversion decisions during off-nominal flight scenarios. Pilots reported that working with the HAT automation made diversion decisions easier and reduced their workload. They also reported that the information provided about diversion airports was similar to what they would receive from ground dispatch, making coordination with dispatch easier and less time consuming. These HAT features engender more trust in the automation when appropriate, and less when not, allowing improved supervision of automated functions by flight crews.
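The three HAT elements above can be sketched as a minimal software pattern. This is an illustrative sketch only, not the paper's EFB implementation; the class names, the diversion-candidate fields, and the scoring are assumptions introduced here to show how transparency, bi-directional communication, and human-directed execution might be separated in code.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An automated recommendation plus the reasoning behind it (transparency)."""
    action: str
    rationale: list  # factors the automation weighed, exposed to the operator

class HATAutomation:
    """Minimal sketch of the three HAT elements; hypothetical, not from the paper."""

    def __init__(self):
        self.operator_inputs = []  # bi-directional channel: operator -> automation

    def recommend(self, candidates):
        # Transparency: return not just the top-ranked choice but the reasoning.
        best = max(candidates, key=lambda c: c["score"])
        return Recommendation(
            action=f"divert to {best['airport']}",
            rationale=[f"{k}={v}" for k, v in best.items()],
        )

    def receive(self, message):
        # Bi-directional communication: the operator can query or constrain
        # the automation rather than only receive its outputs.
        self.operator_inputs.append(message)

    def execute(self, recommendation, operator_approved):
        # Human-directed execution: nothing is carried out without consent.
        if not operator_approved:
            return "awaiting operator decision"
        return f"executing: {recommendation.action}"
```

A caller would inspect `recommendation.rationale` before approving, mirroring how the model intends transparency to support calibrated trust rather than blind acceptance.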
