Contextual Inquiry of a Major US Airline Systems Operation Center
A contextual inquiry was conducted at the airline’s Systems Operations Control (SOC)
from 13 to 15 November 2006. A total of 26 hours of direct observation were conducted
with various members of the SOC staff, including several of the Operations Coordinators,
the ATC Coordinators, and the Operations Manager. During the inquiry a wide
variety of situations occurred: unscheduled maintenance delays, estimated ready time slips,
multiple hub ground delay programs, severely reduced arrival rates due to cross-directional
winds, ground delay program revisions, and diversions of international flights.
The vast majority of these situations were handled as if they were no different from routine
operations; however, there were moments when key SOC personnel were fully involved
in the situation and the normal coordination and collaboration between the ATCCs,
OCs, MOC, and crew coordinators reverted to top-down command and control. Moreover, the
workload is not evenly distributed across all SOC personnel because of the geographic
distribution of responsibilities. In addition to these observations, this inquiry identified
three issues with specific design implications, all centered on the OCs' work practices:
overly involved coordination sessions with the MOC, lack of control over printer output, and the
use of schedule printouts as a primary source of solution information.
All three of these issues lead to inefficiencies in the SOC operation; despite them, however,
the SOC in general and the OCs in particular remain effective. This report
suggests that the OCs could become more efficient by shedding some of their printer maintenance
tasks and extended MOC coordination sessions, and by using software
tools more effectively. To achieve this high level of effectiveness, SOC personnel actively adapt
their roles and the balance of power depending on the level of operational disruption. With
the addition of an MOC representative in the SOC, or with the availability of key maintenance-related
scheduling data, increased effectiveness may also be achievable under conditions of
limited disruption. Changing the flow of messages from the printer to an on-screen system
would help minimize the 'busy' work associated with maintaining the printer and keeping up
with the printouts. Introducing new hardware and software tools to aid with schedule
sorting and filtering may also provide increased efficiency, especially for the more junior
OCs.
Converging Measures and an Emergent Model: A Meta-Analysis of Human-Automation Trust Questionnaires
A significant challenge to measuring human-automation trust is the proliferation
of constructs, models, and questionnaires with highly variable
validation. Nonetheless, there is broad agreement that trust is a crucial element of technological
acceptance, continued usage, fluency, and teamwork. Herein, we synthesize a
consensus model for trust in human-automation interaction by performing a
meta-analysis of validated and reliable trust survey instruments. To accomplish
this objective, this work identifies the most frequently cited and
best-validated human-automation and human-robot trust questionnaires, as well
as the most well-established factors, which form the dimensions and antecedents
of such trust. To reduce both confusion and construct proliferation, we provide
a detailed mapping of terminology between questionnaires. Furthermore, we
perform a meta-analysis of the regression models that emerged from those
experiments which used multi-factorial survey instruments. Based on this
meta-analysis, we demonstrate a convergent experimentally validated model of
human-automation trust. This convergent model establishes an integrated
framework for future research. It identifies the current boundaries of trust
measurement and where further investigation is necessary. We close by
discussing choosing and designing an appropriate trust survey instrument. By
comparing, mapping, and analyzing well-constructed trust survey instruments, a
consensus structure of trust in human-automation interaction is identified.
Doing so discloses a more complete, widely applicable basis for measuring
trust. It integrates the academic idea of trust with the
colloquial, common-sense one. Given the increasingly recognized importance of
trust, especially in human-automation interaction, this work leaves us better
positioned to understand and measure it.Comment: 44 pages, 6 figures. Submitted, in part, to ACM Transactions on
Human-Robot Interaction (THRI
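As a hedged illustration of the kind of meta-analytic pooling described above, the sketch below combines standardized regression coefficients relating candidate trust factors to overall trust into sample-size-weighted estimates. The studies, factor names, coefficients, and the simple fixed-effect weighting are illustrative assumptions, not the authors' dataset or procedure.

```python
# Minimal sketch: pool standardized regression coefficients (beta weights)
# relating candidate trust factors to overall trust, weighted by sample size.
# Studies, factors, and values are hypothetical placeholders.
from collections import defaultdict

# (study, factor, standardized beta, sample size) -- illustrative only
study_effects = [
    ("study_A", "reliability",    0.52, 120),
    ("study_B", "reliability",    0.61,  80),
    ("study_A", "predictability", 0.34, 120),
    ("study_C", "predictability", 0.28, 150),
    ("study_B", "intent",         0.19,  80),
]

def pooled_betas(effects):
    """Sample-size-weighted mean beta per factor (a crude fixed-effect pooling)."""
    num = defaultdict(float)
    den = defaultdict(float)
    for _, factor, beta, n in effects:
        num[factor] += beta * n
        den[factor] += n
    return {factor: num[factor] / den[factor] for factor in num}

if __name__ == "__main__":
    for factor, beta in sorted(pooled_betas(study_effects).items()):
        print(f"{factor:>15}: pooled beta = {beta:.2f}")
```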
Formal Modeling and Analysis for Interactive Hybrid Systems
An effective strategy for discovering certain kinds of automation
surprise and other problems in interactive systems is to build models
of the participating (automated and human) agents and then explore all
reachable states of the composed system, looking for divergences
between the human agents' mental states and the actual states of the automation. Various kinds of
model checking provide ways to automate this approach when the agents
can be modeled as discrete automata. But when some of the agents are
continuous dynamical systems (e.g., airplanes), the composed model is
a hybrid (i.e., mixed continuous and discrete) system and these are
notoriously hard to analyze.
We describe an approach for very abstract modeling of hybrid systems
using relational approximations and their automated analysis using
infinite bounded model checking supported by an SMT solver. When
counterexamples are found, we describe how additional constraints can
be supplied to direct counterexamples toward plausible scenarios that
can be confirmed in high-fidelity simulation. The approach is
illustrated through application to a known (and now corrected)
human-automation interaction problem in Airbus aircraft.
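For readers unfamiliar with the technique, the sketch below illustrates bounded model checking over a relational abstraction using the Z3 SMT solver (via the z3-solver Python bindings). The two-mode automation, the pilot's mental-model update rule, the altitude intervals, and the thresholds are invented stand-ins, not the abstraction or property analyzed in the paper.

```python
# Minimal sketch: bounded model checking of a relational abstraction with Z3.
# Modes, intervals, and the "mental model" rule are illustrative assumptions.
from z3 import Solver, Real, Bool, And, Or, Not, Implies, sat

K = 6  # unrolling depth (number of steps)
s = Solver()

alt    = [Real(f"alt_{k}")    for k in range(K + 1)]  # abstract altitude
climb  = [Bool(f"climb_{k}")  for k in range(K + 1)]  # automation mode: climbing?
belief = [Bool(f"belief_{k}") for k in range(K + 1)]  # pilot believes it is climbing

# Initial state: 10000 ft, climbing, beliefs aligned with the automation.
s.add(alt[0] == 10000, climb[0], belief[0])

for k in range(K):
    # Relational abstraction of the dynamics: the per-step altitude change lies
    # in a mode-dependent interval rather than following exact differential equations.
    s.add(Implies(climb[k],      And(alt[k + 1] - alt[k] >= 500,  alt[k + 1] - alt[k] <= 1500)))
    s.add(Implies(Not(climb[k]), And(alt[k + 1] - alt[k] >= -200, alt[k + 1] - alt[k] <= 200)))
    # Automation may silently revert to level flight once above 14000 ft.
    s.add(Implies(alt[k + 1] > 14000, Not(climb[k + 1])))
    s.add(Implies(alt[k + 1] <= 14000, climb[k + 1] == climb[k]))
    # Pilot's mental model: the mode never changes without pilot action.
    s.add(belief[k + 1] == belief[k])

# Search for a reachable divergence between belief and actual mode.
s.add(Or(*[belief[k] != climb[k] for k in range(K + 1)]))

if s.check() == sat:
    m = s.model()
    print("Counterexample trace (potential automation surprise):")
    for k in range(K + 1):
        print(f"  step {k}: alt={m.eval(alt[k])}, climb={m.eval(climb[k])}, belief={m.eval(belief[k])}")
else:
    print("No divergence within", K, "steps.")
```

In the paper's setting the same pattern would be applied to relational abstractions of the actual aircraft dynamics and mode logic, with any counterexamples then constrained toward plausible scenarios for confirmation in high-fidelity simulation.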
Design of Support Systems for Dynamic Decision Making in Airline Operations
Presented at the Institute of Electrical and Electronics Engineers (IEEE) Systems and Information Engineering Design Symposium, Charlottesville, Virginia, April 2006, and published in the Proceedings of the 2006 IEEE Systems and Information Engineering Design Symposium. ©2006 IEEE. To date, there has been very little research conducted on the design of support systems for dynamic decision environments, such as airline operations. The paper discusses
the idea that the regulation of dynamic systems has implications for both "internal" and "external" dynamic systems with respect
to the human operator. Hollnagel's Contextual Control Modes are suggested as a framework for designing such support
systems, noting that they can identify requirements specific to different contextual control modes.
LTL-D*: Incrementally Optimal Replanning for Feasible and Infeasible Tasks in Linear Temporal Logic Specifications
This paper presents an incremental replanning algorithm, dubbed LTL-D*, for
temporal-logic-based task planning in a dynamically changing environment.
Unexpected changes in the environment may lead to failures in satisfying a task
specification given as a Linear Temporal Logic (LTL) formula. In this study, the
considered failures are categorized into two classes: (i) the desired LTL
specification can be satisfied via replanning, and (ii) the desired LTL
specification is infeasible to meet strictly and can only be satisfied in a
"relaxed" fashion. To address these failures, the proposed algorithm finds an
optimal replanning solution that minimally violates desired task
specifications. In particular, our approach leverages the D* Lite algorithm and
employs a distance metric within the synthesized automaton to quantify the
degree of the task violation and then replan incrementally. This ensures plan
optimality and reduces planning time, especially when frequent replanning is
required. Our approach is implemented in a robot navigation simulation to
demonstrate a significant improvement in the computational efficiency for
replanning by two orders of magnitude.
Comment: 8 pages, 9 figures.
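As a rough, simplified illustration of planning over a product of the workspace and a specification automaton with violation costs (not the authors' LTL-D* or D* Lite implementation), the sketch below runs a uniform-cost search in which transitions that violate the task incur a large penalty, so a minimally violating "relaxed" plan is returned when a strictly satisfying one does not exist. The grid, automaton, and costs are invented.

```python
# Minimal sketch: uniform-cost search over the product of a grid workspace and a
# tiny "reach the goal" automaton. Hazard cells are allowed but heavily penalized,
# so the planner returns a minimally violating plan when the strict task is
# infeasible. All structures here are illustrative assumptions.
import heapq

GRID_W, GRID_H = 5, 5
GOAL = (4, 4)
HAZARDS = {(x, 2) for x in range(GRID_W)}   # a hazard "wall" that must be crossed

VIOLATION_COST = 100.0   # penalty per hazard cell entered (degree of violation)
STEP_COST = 1.0

def neighbors(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H:
            yield (nx, ny)

def automaton_step(q, cell):
    """Two-state acceptance automaton: q=0 until the goal is reached, then q=1."""
    return 1 if (q == 1 or cell == GOAL) else 0

def plan(start):
    """Dijkstra over product states (cell, q); returns (cost, violations, path)."""
    frontier = [(0.0, 0, start, 0, [start])]   # (cost, violations, cell, q, path)
    best = {}
    while frontier:
        cost, viol, cell, q, path = heapq.heappop(frontier)
        if q == 1:                              # accepting automaton state reached
            return cost, viol, path
        if best.get((cell, q), float("inf")) <= cost:
            continue
        best[(cell, q)] = cost
        for nxt in neighbors(cell):
            penalty = VIOLATION_COST if nxt in HAZARDS else 0.0
            heapq.heappush(frontier, (cost + STEP_COST + penalty,
                                      viol + (1 if nxt in HAZARDS else 0),
                                      nxt, automaton_step(q, nxt), path + [nxt]))
    return None

if __name__ == "__main__":
    cost, violations, path = plan((0, 0))
    print(f"cost={cost}, hazard cells entered={violations}")
    print(" -> ".join(map(str, path)))
```

Penalizing rather than forbidding the violating transitions is what lets the same search answer both cases: a zero-penalty plan when the specification is satisfiable, and a minimally violating plan otherwise.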
Modeling the Work of Humans and Automation in Complex Operations
Humans have always been the vital components of complex operations, notably including aviation. They remain so even as sophisticated automation systems are introduced, changing, but not eliminating, the role of the human relative to the collective work required to achieve mission performance. Automation designers and certification agencies are interested in methods to predict and model how complex operations can be performed by teams of humans and automated agents. This paper proposes that the combined activities of both human and automation required by a proposed design can best be captured by focusing on modeling the work inherent to a complex operation. As a fundamental first step, the overall concept of operations spanning all the work activities can be examined for its feasibility in nominal and off-nominal conditions. These activities can then also be examined to see whether the demands they place upon the human agents in the system are feasible and facilitate the human's ability to contribute, rather than assuming unreasonable situations such as excessive workload, boredom, incoherent task descriptions, excessive monitoring requirements, etc. Further, trade-offs in distributing these activities across agents (both human and automated) can be evaluated in terms of task-interleaving created by the distribution of activity and in terms of the 'interaction overhead' associated with communication and coordination between agents required for a given distribution. A description of a modeling and simulation framework capable of modeling work is provided along with an analysis framework to evaluate proposed complex operations.
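To make the kind of trade-off analysis described above concrete, here is a small, hypothetical sketch (not the paper's modeling and simulation framework) that scores one candidate allocation of work activities to human and automated agents by per-agent workload and by the coordination "interaction overhead" incurred whenever dependent activities are split across agents. All activities, durations, and costs are invented.

```python
# Minimal sketch: evaluate one candidate distribution of work activities across
# human and automated agents. Activities, durations, dependencies, and the
# overhead constant are illustrative assumptions, not a validated work model.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    duration_min: float          # nominal time demand of the activity
    depends_on: tuple = ()       # activities whose output this one consumes

ACTIVITIES = {
    "monitor_traffic":    Activity("monitor_traffic", 20.0),
    "detect_conflict":    Activity("detect_conflict", 5.0, ("monitor_traffic",)),
    "propose_resolution": Activity("propose_resolution", 8.0, ("detect_conflict",)),
    "approve_resolution": Activity("approve_resolution", 3.0, ("propose_resolution",)),
    "execute_maneuver":   Activity("execute_maneuver", 4.0, ("approve_resolution",)),
}

# Candidate allocation: automation monitors/detects/proposes, human approves/executes.
ALLOCATION = {
    "monitor_traffic": "automation",
    "detect_conflict": "automation",
    "propose_resolution": "automation",
    "approve_resolution": "pilot",
    "execute_maneuver": "pilot",
}

COORDINATION_COST_MIN = 2.0   # assumed overhead per cross-agent hand-off

def evaluate(activities, allocation):
    workload = {}
    overhead = 0.0
    for act in activities.values():
        agent = allocation[act.name]
        workload[agent] = workload.get(agent, 0.0) + act.duration_min
        # Every dependency crossing an agent boundary adds interaction overhead.
        for dep in act.depends_on:
            if allocation[dep] != agent:
                overhead += COORDINATION_COST_MIN
    return workload, overhead

if __name__ == "__main__":
    workload, overhead = evaluate(ACTIVITIES, ALLOCATION)
    for agent, minutes in workload.items():
        print(f"{agent}: {minutes:.1f} min of task demand")
    print(f"interaction overhead: {overhead:.1f} min of coordination")
```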
Example of a Complementary Use of Model Checking and Agent-Based Simulation
To identify problems that may arise between pilots and automation, methods are needed that can uncover potential problems with automation early in the design process. Such potential problems include automation surprises, which describe events when pilots are surprised by the actions of the automation. In this work, agent-based, hybrid-time simulation and model checking are combined and their respective advantages leveraged in an original manner to find problematic human-automation interaction (HAI) early in the design process. The Tarom 381 incident involving the former Airbus automatic speed protection logic, leading to an automation surprise, was used as a common case study for both methodology validation and further analysis. Results of this case study show why model checking alone has difficulty analyzing such systems and how the incorporation of simulation can be used in a complementary fashion. The results indicate that the method is suitable to examine problematic HAI, such as automation surprises, allowing automation designers to improve their design. Index Terms: simulation, model checking, automation surprise, mental model, formal methods
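As a toy illustration of the simulation side of such a combined approach (and emphatically not the authors' framework or the actual Airbus protection logic), the sketch below steps a continuous speed variable forward in time while a simplified "speed protection" changes the automation mode; the pilot agent's mental model is updated only by its own inputs, so the run flags the step at which expectation and automation diverge. All dynamics, thresholds, and mode names are assumptions.

```python
# Minimal sketch: agent-based, fixed-step simulation that flags an automation
# surprise when the pilot's expected mode no longer matches the automation's mode.
DT = 1.0                  # simulation step [s]
SPEED_LIMIT = 195.0       # assumed protection threshold [kt]

def automation_update(mode, speed):
    """Simplified protection: revert from OPEN_DES to VS_HOLD above the limit."""
    if mode == "OPEN_DES" and speed > SPEED_LIMIT:
        return "VS_HOLD"
    return mode

def pilot_expectation(expected_mode, pilot_input):
    """The pilot's mental model changes only through deliberate pilot input."""
    return pilot_input if pilot_input is not None else expected_mode

def simulate(steps=60):
    speed, accel = 180.0, 0.5          # kt, kt/s while descending
    mode = expected = "OPEN_DES"
    for t in range(steps):
        speed += accel * DT            # continuous part of the hybrid model
        mode = automation_update(mode, speed)
        expected = pilot_expectation(expected, pilot_input=None)
        if mode != expected:
            return t * DT, speed, mode, expected
    return None

if __name__ == "__main__":
    result = simulate()
    if result:
        t, speed, mode, expected = result
        print(f"Automation surprise at t={t:.0f}s, speed={speed:.0f}kt: "
              f"automation in {mode}, pilot expects {expected}")
    else:
        print("No divergence in this run.")
```

A model checker would explore all such runs exhaustively; simulation complements it by producing concrete traces that can be inspected or replayed at higher fidelity.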
Operational Assessment of Apollo Lunar Surface Extravehicular Activity
Quantifying the operational variability of extravehicular activity (EVA) execution is critical to help design and build future support systems to enable astronauts to monitor and manage operations in deep space, where ground support operators will no longer be able to react instantly and manage execution deviations due to the significant communication latency. This study quantifies the operational variability exhibited during Apollo 14-17 lunar surface EVA operations to better understand the challenges and natural tendencies of timeline execution and life support system performance involved in surface operations. Each EVA (11 in total) is individually summarized as well as aggregated to provide descriptive trends exhibited throughout the Apollo missions. This work extends previous EVA task analyses by calculating deviations between planned and as-performed timelines as well as examining metabolic rate and consumables usage throughout the execution of each EVA. The intent of this work is to convey the natural variability of EVA operations and to provide operational context for coping with the variability inherent to EVA execution as a means to support future concepts of operations.
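A minimal sketch of the planned-versus-as-performed comparison described above, using made-up task names and durations rather than Apollo data, might look like this:

```python
# Minimal sketch: deviation between planned and as-performed EVA task durations.
# Task names and times are fabricated placeholders, not actual Apollo timelines.
PLANNED = {   # minutes
    "egress_and_setup": 25,
    "alsep_deploy": 55,
    "sample_collection": 40,
    "ingress_and_closeout": 20,
}
AS_PERFORMED = {
    "egress_and_setup": 31,
    "alsep_deploy": 72,
    "sample_collection": 28,
    "ingress_and_closeout": 22,
}

def timeline_deviation(planned, performed):
    """Per-task and total deviation (performed minus planned), in minutes."""
    per_task = {task: performed[task] - planned[task] for task in planned}
    return per_task, sum(per_task.values())

if __name__ == "__main__":
    per_task, total = timeline_deviation(PLANNED, AS_PERFORMED)
    for task, delta in per_task.items():
        print(f"{task:>22}: {delta:+d} min")
    print(f"{'total deviation':>22}: {total:+d} min")
```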
Supporting Multiple Cognitive Processing Styles Using Tailored Support Systems
According to theories of cognitive processing style or cognitive control mode, human performance is more effective when an individual's cognitive state (e.g., intuition/scramble vs. deliberate/strategic) matches his/her ecological constraints or context (e.g., utilizing intuition to strive for a "good-enough" response instead of deliberating for the "best" response under high time pressure). Ill-mapping between cognitive state and ecological constraints is believed to lead to degraded task performance. Consequently, incorporating support systems that are designed to specifically address multiple cognitive and functional states (e.g., high workload, stress, boredom) and initiate appropriate mitigation strategies (e.g., reducing information load) is essential to reduce plant risk. Utilizing the concept of Cognitive Control Models, this paper will discuss the importance of tailoring support systems to match an operator's cognitive state, and will further discuss the importance of these ecological constraints in selecting and implementing mitigation strategies for safe and effective system performance. An example from the nuclear power plant industry illustrating how a support system might be tailored to support different cognitive states is included.
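As a purely illustrative sketch of how a tailored support system might map an assessed cognitive/functional state to a mitigation strategy, the snippet below uses invented states, thresholds, and strategies; it is not a validated assessment scheme from the paper or from Hollnagel's model.

```python
# Minimal sketch: choose a mitigation strategy from a crude assessment of the
# operator's cognitive/functional state. States, thresholds, and strategies are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class OperatorState:
    workload: float       # 0 (idle) .. 1 (saturated)
    time_pressure: float  # 0 (none) .. 1 (extreme)
    vigilance: float      # 0 (disengaged/bored) .. 1 (fully engaged)

def assess_mode(state: OperatorState) -> str:
    """Very rough stand-in for a contextual control mode assessment."""
    if state.time_pressure > 0.8 or state.workload > 0.8:
        return "scrambled"
    if state.time_pressure > 0.5:
        return "opportunistic"
    if state.vigilance < 0.3:
        return "under-aroused"
    return "strategic"

MITIGATIONS = {
    "scrambled":     "reduce information load; surface only the highest-priority cue",
    "opportunistic": "offer a pre-computed 'good-enough' option instead of the full trade space",
    "under-aroused": "increase the salience of changes; re-engage with an active cross-check task",
    "strategic":     "expose full decision-support detail and what-if exploration",
}

if __name__ == "__main__":
    state = OperatorState(workload=0.9, time_pressure=0.6, vigilance=0.7)
    mode = assess_mode(state)
    print(f"assessed mode: {mode}\nsuggested support: {MITIGATIONS[mode]}")
```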