
    Progress towards Automated Human Factors Evaluation

    Cao, S. (2015). Progress towards Automated Human Factors Evaluation. 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, AHFE 2015, 3, 4266–4272. https://doi.org/10.1016/j.promfg.2015.07.414. This work is made available through a CC-BY-NC-ND 4.0 license; the licensor is not represented as endorsing the use made of this work. https://creativecommons.org/licenses/by-nc-nd/4.0/

    Human factors tests are important components of systems design. Designers need to evaluate users' performance and workload while using a system and compare different design options to determine the optimal design choice. Currently, human factors evaluation and testing rely mainly on empirical user studies, which add a heavy cost to the design process. In addition, it is difficult to conduct comprehensive user tests at early design stages, when no physical interfaces have yet been implemented. To address these issues, I develop computational human performance modeling techniques that can simulate users' interaction with machine systems. This method uses a general cognitive architecture to computationally represent human cognitive capabilities and constraints. Task-specific models can be built from specifications of user knowledge, user strategies, and user group differences. The simulation results include performance measures, such as task completion time and error rate, as well as workload measures. Completed studies have modeled multitasking scenarios in a wide range of domains, including transportation, healthcare, and human-computer interaction; their success demonstrates the modeling capabilities of this method. Cognitive-architecture-based models are useful, but building a cognitive model can itself be difficult to learn and master: it usually requires at least intermediate programming skills to understand and use the language and syntax that specify the task. For example, to build a model that simulates a driving task, a modeler needs to build a driving simulation environment so that the model can interact with the simulated vehicle. To simplify this process, I have conducted preliminary programming work that connects the mental model directly to existing task environment simulation programs. The model is then able to obtain perceptual information directly from the task program and send control commands back to it. With cognitive-model-based tools, designers will be able to watch the model perform tasks in real time and obtain a report of the evaluation. Automated human factors evaluation methods have tremendous value in supporting systems design and evaluation.
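The perceive-decide-act coupling described above can be sketched as a simple simulation loop. This is an illustrative sketch only; the class names, the lane-offset state, and the proportional control rule are hypothetical stand-ins, not the actual model-to-simulator interface.

```python
class TaskEnvironment:
    """Stand-in for an external task simulation (e.g., a driving simulator)."""
    def __init__(self):
        self.lane_offset = 0.5  # meters from lane center

    def percept(self):
        # Perceptual information the cognitive model can read
        return {"lane_offset": self.lane_offset}

    def apply(self, steering):
        # Control command sent back by the model (toy vehicle dynamics)
        self.lane_offset += steering


class CognitiveModel:
    """Stand-in for a cognitive-architecture model of the driver."""
    def decide(self, percept):
        # Simple proportional correction toward the lane center
        return -0.5 * percept["lane_offset"]


env, model = TaskEnvironment(), CognitiveModel()
for _ in range(10):  # simulation cycles: perceive -> decide -> act
    env.apply(model.decide(env.percept()))
```

Each cycle the model reads a percept and returns a control command, so the lane offset decays toward zero; a real coupling would exchange much richer perceptual and motor messages with the simulation program.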

    Queueing Network Modeling of Human Performance and Mental Workload in Perceptual-Motor Tasks.

    Integrated with mathematical modeling approaches, this thesis uses the Queueing Network-Model Human Processor (QN-MHP) as a simulation platform to quantify human performance and mental workload in four representative perceptual-motor tasks of both theoretical and practical importance: discrete perceptual-motor tasks (transcription typing and the psychological refractory period) and continuous perceptual-motor tasks (visual-manual tracking and vehicle steering with secondary tasks). The properties of queueing networks (queuing/waiting in information processing, serial and parallel information processing capability, overall mathematical structure, and entity-based network arrangement) allow QN-MHP to quantify several important aspects of perceptual-motor tasks and unify them in one cognitive architecture. In modeling the discrete perceptual-motor task in a single-task situation (transcription typing), QN-MHP quantifies and unifies 32 transcription typing phenomena involving many aspects of human performance--interkey time, typing units and spans, typing errors, concurrent task performance, eye movements, and skill effects, providing an alternative way to model these basic and common activities in human-machine interaction. In quantifying the discrete perceptual-motor task in a dual-task situation (the psychological refractory period, PRP), the queueing network model is able to account for various experimental findings in PRP, including all of the major counterexamples to existing models, with fewer or equally many free parameters and no task-specific lock/unlock assumptions, demonstrating its unique advantages in modeling discrete dual-task performance. In modeling human performance and mental workload in the continuous perceptual-motor tasks (visual-manual tracking and vehicle steering), QN-MHP is used as a simulation platform, and a set of equations is developed to establish quantitative relationships between queueing network measures (e.g., subnetwork utilization and arrival rate) and P300 amplitude measured by ERP techniques and subjective mental workload measured by NASA-TLX, predicting and visualizing mental workload in real time. Moreover, this thesis applies QN-MHP to the design of an adaptive workload management system in vehicles and integrates QN-MHP with scheduling methods to devise multimodal in-vehicle systems. Further development of the cognitive architecture in theory and practice is also discussed.

    Ph.D., Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/55678/2/changxuw_1.pd
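The subnetwork utilization and arrival-rate quantities mentioned above can be illustrated with a minimal queueing sketch. The rates, the three-subnetwork split, and the max-based workload index below are illustrative assumptions, not QN-MHP's actual equations or fitted parameters.

```python
def utilization(arrival_rate, service_rate):
    """Utilization (rho = lambda / mu) of a single-server subnetwork."""
    return arrival_rate / service_rate

# Hypothetical (arrival, service) rates in items/s for three subnetworks --
# illustrative values only, not QN-MHP's parameters
subnetworks = {
    "perceptual": (4.0, 10.0),
    "cognitive":  (4.0, 6.0),
    "motor":      (2.0, 5.0),
}

rhos = {name: utilization(lam, mu) for name, (lam, mu) in subnetworks.items()}
# One simple workload index: the busiest subnetwork bounds overall load
workload_index = max(rhos.values())
```

Under this sketch the cognitive subnetwork is the bottleneck (rho = 2/3), and a workload measure tied to utilization would rise as arrival rates approach service rates.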

    Queueing Network Modeling of Human Performance in Complex Cognitive Multi-task Scenarios.

    As the complexity of human-machine systems grows rapidly, there is an increasing need for human factors theories and computational methods that can quantitatively model and simulate human performance and mental workload in complex multi-task scenarios. In response to this need, I have developed and evaluated an integrated cognitive architecture named QN-ACTR, which integrates two previously isolated but complementary cognitive architectures: Queueing Network (QN) and Adaptive Control of Thought-Rational (ACT-R). Combining their advantages and overcoming the limitations of each, QN-ACTR can model a wider range of tasks, including multitask scenarios with complex cognitive activities that existing methods have difficulty modeling. These benefits have been evaluated and demonstrated by comparing model results with human results in simulations of multi-task scenarios including skilled transcription typing and reading comprehension (human-computer interaction), medical decision making with concurrent tasks (healthcare), and driving with a secondary speech comprehension task (transportation), all of which involve important and practical human factors issues. QN-ACTR models produced performance and mental workload results similar to the human results. To support industrial applications of QN-ACTR, I have also developed usability features that facilitate the use of this cognitive engineering tool by industrial and human factors engineers. Future research can apply QN-ACTR, a generic computational modeling theory and method, to other domains with important human factors issues.

    Ph.D., Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/102477/1/shicao_1.pd

    Queuing Network Modeling of Human Multitask Performance and its Application to Usability Testing of In-Vehicle Infotainment Systems.

    Human performance of a primary continuous task (e.g., steering a vehicle) and a secondary discrete task (e.g., tuning radio stations) simultaneously is a common scenario in many domains. A good understanding of the mechanisms of human multitasking behavior is essential for designing task environments and user interfaces (UIs) that facilitate human performance and minimize potential safety hazards. In this dissertation I investigated and modeled human multitask performance with a vehicle-steering task and several typical in-vehicle secondary tasks. Two experiments were conducted to investigate how various display designs and control modules affect the driver's eye glance behavior and performance. A computational model based on the Queueing Network-Model Human Processor (QN-MHP) cognitive architecture was built to account for the experimental findings. In contrast to most existing studies, which focus on visual search in single-task situations, this dissertation investigated visual search in multitask situations experimentally. A modeling mechanism for flexible task activation (rather than strict serial activation) was developed to allow the activation of a task component to depend on the completion status of other task components. A task-switching scheme was built to model the time-sharing nature of multitasking. These extensions offer new theoretical insights into visual search in multitask situations and enable the model to simulate parallel processing both within one task and among multiple tasks. The validation results show that the model could account for the observed performance differences in the empirical data. Based on this model, a computer-aided engineering toolkit was developed that allows UI designers to make quantitative predictions of the usability of design concepts and prototypes. Scientifically, the results of this dissertation research offer additional insight into the mechanisms of human multitask performance. From an engineering and practical perspective, the new modeling mechanism and toolkit have advantages over traditional usability testing with human subjects, enabling UI designers to explore a larger design space and address usability issues at early design stages at lower cost in both time and manpower.

    Ph.D., Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113590/1/fredfeng_1.pd

    Computational Modeling and Experimental Research on Touchscreen Gestures, Audio/Speech Interaction, and Driving

    As humans are exposed to rapidly evolving complex systems, there is a growing need for humans and systems to communicate through multiple modalities, such as auditory, vocal (speech), gesture, or visual channels; it is therefore important to evaluate multimodal human-machine interaction under multitasking conditions so as to improve human performance and safety. However, traditional methods of evaluating human performance and safety rely on experiments with human subjects, which are costly and time-consuming to conduct. To overcome the limitations of traditional usability tests, digital human models are often developed and used; they also help us better understand the underlying human mental processes so as to improve safety and avoid mental overload. In this dissertation research, I combined computational cognitive modeling and experimental methods to study mental processes and identify differences in human performance and workload under various conditions. The computational cognitive models were implemented by extending the Queueing Network-Model Human Processor (QN-MHP) architecture, which enables simulation of human multi-task behaviors and multimodal interactions in human-machine systems. Three experiments were conducted to investigate human behavior in multimodal and multitasking scenarios, addressing three specific research aims: to understand (1) how humans use finger movements to input information on touchscreen devices (touchscreen gestures), (2) how humans use auditory and vocal signals to interact with machines (audio/speech interaction), and (3) how humans drive vehicles (driving controls). Future applications of computational modeling and experimental research are also discussed. Scientifically, the results of this dissertation research contribute to a better understanding of the nature of touchscreen gestures, audio/speech interaction, and driving controls in human-machine systems, and of whether they benefit or jeopardize human performance and safety in multimodal and concurrent task environments. Moreover, in contrast to previous models of multitasking scenarios, which focus mainly on visual processes, this study develops quantitative models of the combined effects of auditory, tactile, and visual factors on multitasking performance. From a practical perspective, the modeling work conducted in this research may help multimodal interface designers overcome the limitations of traditional usability tests and make quick design comparisons, less constrained by time-consuming factors such as developing prototypes and running human subjects. Furthermore, this research may help identify which elements of multimodal and multitasking scenarios increase workload and completion time, which can in turn be used to reduce the number of accidents and injuries caused by distraction.

    Ph.D., Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143903/1/heejinj_1.pd

    Predicting drivers' direction sign reading reaction time using an integrated cognitive architecture

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Drivers' reaction time when reading signs on expressways is a fundamental component of sight distance design requirements, and reaction time is affected by many factors, such as information volume and concurrent tasks. We built cognitive simulation models to predict drivers' direction sign reading reaction time. The models were built using the queueing network-adaptive control of thought rational (QN-ACTR) cognitive architecture. Drivers' task-specific knowledge and skills were programmed as production rules. Two assumptions about drivers' strategies were proposed and tested. The models were connected to a driving simulator program to produce reaction time predictions. Model results were compared to human results in a sign-reading single-task condition and a reading-while-driving dual-task condition. The models were built using existing modelling methods without adjusting any parameter to fit the human data. The models' predictions were similar to the human data and captured the different reaction times across task conditions with different numbers of road names on the direction signs. Root mean square error (RMSE) was 0.3 s, and mean absolute percentage error (MAPE) was 12%. The results demonstrated the models' predictive power. The models provide a useful tool for predicting driver performance and evaluating direction sign design.

    The research was supported by the National Natural Science Foundation of China (51678460, U1664262); the Open Project of the Key Laboratory of the Ministry of Public Security for Road Traffic Safety (2017ZDSYSKFKT02); the Natural Science Foundation of Hubei Province, China (ZRMS2017001571); the Wuhan Youth Science and Technology Plan (2017050304010268); and the Fundamental Research Funds for the Central Universities (2017-JL-003). This work was also supported in part by Natural Sciences and Engineering Research Council of Canada Discovery Grant RGPIN-2015-04134 (to SC).
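The RMSE and MAPE figures above are standard fit measures between model predictions and human data. A minimal sketch of how they are computed follows; the reaction-time values are illustrative placeholders, not the study's actual measurements.

```python
import math

def rmse(predicted, observed):
    """Root mean square error between model predictions and human data."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

def mape(predicted, observed):
    """Mean absolute percentage error, in percent."""
    return (100 * sum(abs(p - o) / o for p, o in zip(predicted, observed))
            / len(observed))

# Hypothetical reaction times (s) for four sign conditions -- illustrative only
model_rt = [2.1, 2.6, 3.0, 3.4]
human_rt = [2.0, 2.5, 3.2, 3.5]

fit_rmse = rmse(model_rt, human_rt)
fit_mape = mape(model_rt, human_rt)
```

RMSE penalizes large deviations in the units of the data (seconds here), while MAPE normalizes each error by the observed value, which is why the paper can report both a 0.3 s error and a 12% error for the same comparison.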

    Modeling and Evaluating Pilot Performance in NextGen: Review of and Recommendations Regarding Pilot Modeling Efforts, Architectures, and Validation Studies

    NextGen operations are associated with a variety of changes to the national airspace system (NAS), including changes to the allocation of roles and responsibilities between operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the entire concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes for human performance and the potential for error. To ensure continued safety of the NAS, researchers will need to evaluate design concepts and potential NextGen scenarios well before implementation. One approach to such evaluations is human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, and (3) they do not require experimental participants and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment, and ConOps. Models also vary in how they approach human performance (e.g., some focus on cognitive processing, others on discrete tasks performed by a human, and others on perceptual processes) and in their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research that could guide future research opportunities. This research effort is intended to help the FAA evaluate pilot modeling efforts and select the appropriate tools for future modeling efforts to predict pilot performance in NextGen operations.

    Discrete Event Simulation of Distributed Team Communication Architecture

    As the United States Department of Defense continues to increase the number of Remotely Piloted Aircraft (RPA) operations overseas, improved Human Systems Integration becomes increasingly important. RPA systems rely heavily on distributed team communications determined by the systems architecture. Two studies examine the effects of systems architecture on the workload of US Air Force MQ-1/9 operators. The first study ascertains the effects of communication modality changes on mental workload, using the Improved Performance Research Integration Tool (IMPRINT) to estimate pilot workload; allocating communication among modalities minimizes workload. The second study uses IMPRINT to model Mission Intelligence Controllers (MICs) and the effect of the system architecture upon them. Four system configurations were simulated at four mission activity levels. Mental workload, monitoring time, and the number of delayed tasks were estimated to determine the effect of changing system architecture parameters. Literature and MIC interviews provided parameters for the model. The analysis demonstrates that the proposed changes have significant effects which, in some conditions, bring the overall workload function toward a proposed theoretical optimum.
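A task-network workload estimate of the kind described above can be sketched, in highly simplified form, as a discrete-event scan over task intervals. This is not IMPRINT's algorithm; the task timings, demand values, and the peak-demand summary are hypothetical illustrations.

```python
def peak_workload(tasks):
    """Scan task intervals as discrete events and return the peak summed
    workload demand across concurrently active tasks."""
    events = []
    for start, duration, demand in tasks:
        events.append((start, demand))              # task begins: add demand
        events.append((start + duration, -demand))  # task ends: remove demand
    # Sorting places removals (negative deltas) before additions at the same
    # timestamp, so back-to-back tasks do not inflate the peak.
    events.sort()
    load = peak = 0.0
    for _, delta in events:
        load += delta
        peak = max(peak, load)
    return peak

# Hypothetical operator tasks: (start_s, duration_s, workload demand)
tasks = [(0, 5, 3.0), (2, 4, 4.0), (3, 2, 2.5)]
overload_threshold_exceeded = peak_workload(tasks) > 7.0
```

A workload model of this shape makes the effect of architecture changes concrete: moving a communication task to a different modality or time slot changes which intervals overlap, and therefore the peak demand placed on the operator.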

    Modeling Dual-Task Concurrency and Effort in QN-ACTR and IMPRINT.

    Computational cognitive models have wide-ranging applications, from reducing the time and cost of task and interface analyses to the discovery of new human cognitive phenomena. We investigate the use and limitations of IMPRINT, a task network simulation tool, and develop an extension to improve the modeling of task component execution limits in multi-task performance under high workload. The extension is implemented as a Soar agent that moderates task execution, akin to the executive processes in EPIC. We show that an IMPRINT model of a UAV operation task with the extension exhibits qualitatively distinct workload management strategies also observed in human performance of the same task. Next, we develop QN-ACTR models of a concurrent addition and targeting task and collect empirical data of human performance on the tasks to validate the models' predictions of execution time and a time-sharing concurrency metric. We also use the empirical data to validate an IMPRINT model of the addition and targeting tasks. Both the QN-ACTR and IMPRINT models capture the primary effects of variable task difficulty parameters on execution time and concurrency. Model inaccuracy at the subtask level provides evidence for the use of visual-spatial memory during complex addition. In a second experiment with similar tasks, we introduce an incentive to examine the effects of effort on execution time and concurrency in dual-task performance. Incentive-induced effort is found to increase performance on the rewarded dimension without an increase in the time-sharing concurrency metric, suggesting that the performance improvements derive not from an increase in task scheduling efficiency or resource sharing but from the same improvements found in single-task conditions. The QN-ACTR task models are modified to account for the increased effort by adjusting base-level parameters and are validated with the empirical data.

    Ph.D., Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/102330/1/cjbest_1.pd

    Quantifying Cognitive Efficiency of Display in Human-Machine Systems

    As a side effect of rapidly growing information technology, information overload has become prevalent in the operation of many human-machine systems. Overwhelming information can degrade operational performance because it imposes a large mental workload on human operators. One way to address this issue is to improve the cognitive efficiency of the display. A cognitively efficient display should be more informative while demanding fewer mental resources, so that an operator can process more displayed information within limited working memory and achieve better performance. To quantitatively evaluate this display property, a Cognitive Efficiency (CE) metric is formulated as the ratio of measures along two dimensions: display informativeness and required mental resources (each dimension can be affected by display, human, and contextual factors). The first segment of the dissertation discusses the measurement techniques available to construct the CE metric and provides an initial validation of the metric with basic discrete displays. The second segment demonstrates that displays with higher cognitive efficiency improve multitask performance; this part also identifies the version of the CE metric that is most predictive of multitask performance. The last segment applies the CE metric in driving scenarios to evaluate novel speedometer displays; however, it finds that the most efficient display may not better enhance concurrent tracking performance while driving. Although the findings of the dissertation have several limitations, they provide valuable insight into the complicated relationship among display, human cognition, and multitask performance in human-machine systems.
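The CE ratio described above can be sketched as follows. The display names and the informativeness and resource values are illustrative assumptions, not measurements from the dissertation; the metric's defining structure is simply the ratio of the two dimensions.

```python
def cognitive_efficiency(informativeness, mental_resources):
    """CE as the ratio of display informativeness to required mental
    resources, each measured on whatever scale the evaluation uses."""
    return informativeness / mental_resources

# Hypothetical display candidates: (informativeness, required resources) --
# illustrative values only
displays = {
    "display_A": (5.2, 2.0),
    "display_B": (4.8, 3.0),
}
best = max(displays, key=lambda d: cognitive_efficiency(*displays[d]))
```

Note the comparison is only as meaningful as the two underlying measures: a slightly less informative display can still win on CE if it demands substantially fewer mental resources, which mirrors the trade-off the dissertation examines.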