
    Evaluating Scenarios That Can Startle and Surprise Pilots

    Startle and surprise on the flight deck have been recognized by multiple aviation safety boards as contributing factors in aviation accidents. This study identified the effects of startle and surprise on commercial pilots holding single- and multi-engine ratings. Surprise is defined here as something unexpected (e.g., an engine failure), while startle is the exaggerated reaction associated with an unexpected stimulus (e.g., the sound of thunder). Forty pilots were tested in a basic aviation training device configured as a Cessna 172 (single-engine) and a Baron 58 (multi-engine). Each pilot flew both aircraft in scenarios that induced an uninformed surprise emergency condition, an uninformed surprise-and-startle emergency condition, and an informed emergency condition. During each condition, heart rate, respiration rate, flight performance, and subjective workload measures were collected. The startle-and-surprise condition produced the highest heart and respiration rates for both aircraft. However, there was no difference in heart or respiration rates between the two aircraft for the informed condition. The subjective measures of mental, physical, and temporal demand, effort, and frustration were higher for the twin-engine aircraft than for the single-engine aircraft in all conditions. Subjective performance did not differ between the single- and multi-engine aircraft for the surprise condition only. Objective flight performance, evaluated as (a) participants' adherence to the engine-failure checklist steps for the single-engine aircraft and (b) altitude deviation for the multi-engine aircraft, showed that pilots performed better in the informed emergency condition. Startle and surprise can be measured using heart and respiration rate as physiological markers, which can be used to evaluate whether a given flight simulator scenario is startling, surprising, or neither. Potential applications of this study include developing flight simulator scenarios for various unexpected conditions in different aircraft. The results may also help pave the way for federal regulations that require training for startle and surprise.
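The abstract proposes heart and respiration rate as physiological markers for deciding whether a simulator scenario is startling, surprising, or neither. As a purely hypothetical sketch of what such a classification rule might look like (the function, thresholds, and numbers below are invented for illustration and are not taken from the study):

```python
# Hypothetical sketch: label a simulator scenario from mean heart-rate (HR)
# and respiration-rate (RR) elevation over a resting baseline.
# Threshold percentages are illustrative assumptions, not study results.

def classify_scenario(hr_baseline, rr_baseline, hr_scenario, rr_scenario,
                      surprise_pct=0.10, startle_pct=0.25):
    """Return 'startle', 'surprise', or 'neither' from relative elevation."""
    hr_rise = (hr_scenario - hr_baseline) / hr_baseline
    rr_rise = (rr_scenario - rr_baseline) / rr_baseline
    # Startle: an exaggerated response on both physiological markers.
    if hr_rise >= startle_pct and rr_rise >= startle_pct:
        return "startle"
    # Surprise: a milder but still clear elevation on either marker.
    if hr_rise >= surprise_pct or rr_rise >= surprise_pct:
        return "surprise"
    return "neither"

print(classify_scenario(70, 14, 92, 19))  # strong elevation on both -> startle
print(classify_scenario(70, 14, 72, 14))  # near baseline -> neither
```

A validated rule would of course need empirically derived thresholds per pilot and per scenario type; this only illustrates the shape of the evaluation the authors suggest.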

    Look Who's Talking Now: Implications of AV's Explanations on Driver's Trust, AV Preference, Anxiety and Mental Workload

    Explanations given by automation are often used to promote its adoption. However, it remains unclear whether explanations promote acceptance of automated vehicles (AVs). In this study, we conducted a within-subject experiment in a driving simulator with 32 participants under four conditions: (1) no explanation; (2) an explanation given before the AV acted; (3) an explanation given after the AV acted; and (4) the option for the driver to approve or disapprove the AV's action after hearing the explanation. We examined four outcomes: trust, preference for the AV, anxiety, and mental workload. Results suggest that explanations provided before the AV acted were associated with higher trust in and preference for the AV, but there was no difference in anxiety or workload. These results have important implications for the adoption of AVs. (Comment: 42 pages, 5 figures, 3 tables.)

    Task Handoff Between Humans and Automation

    The Department of Defense (DOD) seeks to incorporate human-automation teaming to decrease human operators' cognitive workload, especially in the context of future vertical lift (FVL). Researchers created a "Wizard of Oz" study to observe changes in human behavior as task difficulty and levels of automation increased. The platform used for the study was C3Fire, a firefighting strategy software game. Participants were paired with a confederate acting as an automated agent so that the participant's behavior in a human-automation team could be observed. The independent variables were automation level (within subjects: low, medium, high) and cueing (between subjects: uncued, cued). The dependent variables were the number of messages transmitted to the confederate, the number of tasks embedded in those messages (tasks handed off), and the participant's self-reported cognitive workload score. The results indicated that as the confederate increased its scripted level of automation, the number of tasks handed off to automation increased. However, the number of messages transmitted to automation and the subjective cognitive workload remained the same. The findings suggest that while human operators were able to bundle tasks, cognitive workload remained relatively unchanged, implying that automation level may have less impact on cognitive workload than anticipated. Approved for public release; distribution is unlimited.

    Mitigating User Frustration through Adaptive Feedback based on Human-Automation Etiquette Strategies

    The objective of this study is to investigate the effects of feedback and user frustration in human-computer interaction (HCI) and examine how to mitigate user frustration through feedback based on human-automation etiquette strategies. User frustration in HCI is a negative feeling that occurs when efforts to achieve a goal are impeded. It impacts not only communication with the computer itself, but also productivity, learning, and cognitive workload. Affect-aware systems have been studied as a way to recognize user emotions and respond in different ways; such systems need to be adaptive, changing their behavior depending on users' emotions. Adaptive systems have four categories of adaptation. Previous research has focused primarily on function allocation and, to a lesser extent, on information content and task scheduling. The fourth approach, changing the interaction style, is the least explored because of the interplay of human factors considerations. Three interlinked studies were conducted to investigate the consequences of user frustration and explore mitigation techniques. Study 1 showed that delayed feedback from the system led to higher user frustration, anger, cognitive workload, and physiological arousal; delayed feedback also decreased task performance and system usability in a human-robot interaction (HRI) context. Study 2 evaluated a possible approach to mitigating user frustration by applying human-human etiquette strategies in a tutoring context. Its results showed that changing etiquette strategies led to changes in performance, motivation, confidence, and satisfaction, and that the most effective etiquette strategies changed when users were frustrated. Based on these results, an adaptive tutoring system prototype was developed and evaluated in Study 3. By utilizing a rule set derived from Study 2, the tutor was able to use different automation etiquette strategies to target and improve motivation, confidence, satisfaction, and performance under different levels of user frustration. This work establishes that changing the interaction style of a computer tutor alone can affect a user's motivation, confidence, satisfaction, and performance. Furthermore, the beneficial effect of changing etiquette strategies is greater when users are frustrated. This work provides a basis for future work developing affect-aware adaptive systems that mitigate user frustration.
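The prototype described above selects etiquette strategies according to the user's frustration state and the outcome being targeted. A minimal, hypothetical sketch of such a rule-driven selector follows; the strategy names come from common politeness-theory categories, and the specific mapping is invented for illustration, not the study's actual rule set:

```python
# Illustrative sketch (NOT the study's rule set): an adaptive tutor that
# picks a feedback etiquette strategy from the user's detected frustration
# state and the outcome the tutor is trying to improve.

RULES = {
    # (frustrated?, target outcome) -> etiquette strategy (hypothetical mapping)
    (True,  "motivation"):  "positive politeness",   # encourage, praise effort
    (True,  "performance"): "negative politeness",   # hedge, soften corrections
    (False, "motivation"):  "bald on-record",        # direct, efficient feedback
    (False, "performance"): "bald on-record",
}

def select_strategy(frustrated: bool, target: str) -> str:
    """Look up a strategy, falling back to direct feedback."""
    return RULES.get((frustrated, target), "bald on-record")

print(select_strategy(True, "motivation"))   # -> positive politeness
print(select_strategy(False, "performance")) # -> bald on-record
```

The key design point the abstract supports is that the mapping is conditional on frustration: the same target outcome may call for a different strategy once the user is frustrated.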

    When Robots Enter Our Workplace: Understanding Employee Trust in Assistive Robots

    This study concerns assistive robots as internal service providers within the company Merck KGaA and examines how the physical appearance of a service representative (humanoid robot, android robot, or human) affects employees' trust. Based on the uncanny valley paradigm, we argue that employees' trust is lowest for the android robot and highest for the human. Further, we examine the effects of task complexity and requirements for self-disclosure on employees' trust in assistive robots. Drawing on script theory and media equation theory, we propose that high task complexity and high requirements for self-disclosure increase employees' trust. We developed a research design to test our model by deploying a humanoid robot and an android robot within a company as robotic assistants, in comparison to a human employee. As a next step, we will run a corresponding study with 300 employees.

    Selecting Metrics to Evaluate Human Supervisory Control Applications

    The goal of this research is to develop a methodology for selecting supervisory control metrics. The methodology is based on cost-benefit analyses and generic metric classes. In the context of this research, a metric class is defined as the set of metrics that quantify a certain aspect or component of a system. Generic metric classes are developed because metrics are mission-specific, whereas metric classes generalize across different missions. Cost-benefit analyses are used because each metric set has advantages, limitations, and costs; the added value of different sets for a given context can therefore be calculated to select the set that maximizes value and minimizes cost. This report summarizes the findings of the first part of this research effort, which focused on developing a supervisory control metric taxonomy that defines generic metric classes and categorizes existing metrics. Future research will focus on applying cost-benefit analysis methodologies to metric selection. Five main metric classes have been identified that apply to supervisory control teams composed of humans and autonomous platforms: mission effectiveness, autonomous platform behavior efficiency, human behavior efficiency, human behavior precursors, and collaborative metrics. Mission effectiveness measures how well the mission goals are achieved. Autonomous platform and human behavior efficiency measure the actions and decisions made by the humans and the automation that compose the team. Human behavior precursors measure the human's initial state, including attitudes and cognitive constructs that can cause and drive a given behavior. Collaborative metrics address three different aspects of collaboration: collaboration between the human and the autonomous platform being controlled, collaboration among the humans that compose the team, and autonomous collaboration among platforms. These five metric classes have been populated with metrics and measuring techniques from the existing literature. Which specific metrics should be used to evaluate a system depends on many factors, but as a rule of thumb, we propose that at a minimum one metric from each class should be used to provide a multi-dimensional assessment of the human-automation team. To determine the impact of not having followed such a principled approach, we evaluated recent large-scale supervisory control experiments conducted in the MIT Humans and Automation Laboratory. The results show that prior to adopting this metric classification approach, we were fairly consistent in measuring mission effectiveness and human behavior through metrics such as reaction times and decision accuracies. However, despite our supervisory control focus, we were remiss in gathering attention allocation and collaboration metrics, and we often gathered too many correlated metrics that were redundant and wasteful. This meta-analysis of our experimental shortcomings reflects those of the general research population, in that we tended to gravitate toward popular metrics that are relatively easy to gather, without a clear understanding of exactly what aspect of the system we were measuring and how the various metrics informed an overall research question.
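The proposed rule of thumb, at least one metric per class combined with a cost-benefit score over the candidate set, can be sketched as follows. All metric names, benefit values, and costs below are invented for illustration; the report's actual cost-benefit methodology is left to future work:

```python
# Hypothetical sketch of the selection principle: score a candidate metric
# set as total benefit minus total cost, and reject any set that fails to
# cover all five metric classes. Numbers and metric names are made up.

CLASSES = {"mission", "platform", "human_behavior", "human_precursor", "collab"}

# metric -> (metric class, benefit, collection cost); illustrative values only
METRICS = {
    "mission_success_rate": ("mission",         9, 2),
    "replan_frequency":     ("platform",        5, 3),
    "reaction_time":        ("human_behavior",  7, 2),
    "attention_allocation": ("human_precursor", 6, 5),
    "team_chat_volume":     ("collab",          4, 1),
    "decision_accuracy":    ("human_behavior",  7, 2),  # correlated with reaction_time
}

def evaluate(metric_set):
    """Return net value of a metric set, or None if a class is uncovered."""
    covered = {METRICS[m][0] for m in metric_set}
    if covered != CLASSES:
        return None  # violates the one-metric-per-class rule of thumb
    return sum(METRICS[m][1] - METRICS[m][2] for m in metric_set)

full_set = ["mission_success_rate", "replan_frequency", "reaction_time",
            "attention_allocation", "team_chat_volume"]
print(evaluate(full_set))            # covers all five classes
print(evaluate(["reaction_time"]))   # incomplete coverage -> None
```

A real application would also penalize redundant, highly correlated metrics (the report's own criticism of gathering both reaction time and decision accuracy), which this sketch omits.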

    Evaluation of Etiquette Strategies to Adapt Feedback in Affect-Aware Tutoring

    The purpose of this research is to investigate how to mitigate user frustration and improve task performance in the context of human-computer interaction (HCI). Even though user frustration plays a role in many aspects of HCI and studies have examined the consequences of frustration in various fields, ways to mitigate frustration are less deeply examined. Once a system has the ability to understand and include user emotions as factors in HCI, the interaction between the user and the computer system could be adapted, provided the computer is able to modify its behavior with users in appropriate ways to further joint performance. Specifically, a preliminary study was conducted to explore the task performance, motivation, and confidence implications of changing the interaction between the human and the computer via different etiquette strategies. Participants solved a total of twenty mathematics problems under different frustration conditions, with feedback given in different styles of etiquette. Changing etiquette strategies in tutoring led to changes in performance, motivation, and confidence, and the most effective etiquette strategies changed when users were frustrated. This work provides the foundation for the design of adaptive intelligent tutoring systems based on etiquette strategies. Copyright Human Factors and Ergonomics Society 2016. Posted with permission.

    What we can and cannot (yet) do with functional near infrared spectroscopy

    Functional near infrared spectroscopy (NIRS) is a relatively new technique, complementary to EEG, for the development of brain-computer interfaces (BCIs). NIRS-based systems for detecting various cognitive and affective states, such as mental and emotional stress, have already been demonstrated in a range of adaptive human–computer interaction (HCI) applications. However, before NIRS-BCIs can be used reliably in realistic HCI settings, substantial challenges concerning signal processing and modeling must be addressed. Although many of those challenges have been identified previously, solutions to overcome them remain scant. In this paper, we first review what can currently be done with NIRS, specifically NIRS-based approaches to measuring cognitive and affective user states, as well as demonstrations of passive NIRS-BCIs. We then discuss some of the primary challenges these systems would face if deployed in more realistic settings, including detection latencies and motion artifacts. Lastly, we investigate the effects of some of these challenges on signal reliability via a quantitative comparison of three NIRS models. The hope is that this paper will actively engage researchers and facilitate the advancement of NIRS as a more robust and useful tool for the BCI community.
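Motion artifacts are one of the deployment challenges the abstract names. As a deliberately simplified illustration (real NIRS pipelines use far more sophisticated artifact-removal methods, such as filtering informed by accelerometer data; nothing here is taken from the paper), a simple moving average attenuates a transient spike in a noisy signal:

```python
# Toy sketch: smooth a noisy NIRS-like time series with a running mean to
# attenuate a brief, high-amplitude motion artifact. Illustration only;
# not a production NIRS preprocessing step.

def moving_average(signal, window=5):
    """Return the signal smoothed by a roughly centered running mean."""
    smoothed = []
    for i in range(len(signal)):
        lo = max(0, i - window // 2)
        hi = min(len(signal), i + window // 2 + 1)
        chunk = signal[lo:hi]              # neighborhood around sample i
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

raw = [0.0, 0.1, 0.0, 3.0, 0.1, 0.0, 0.1]  # spike simulates a motion artifact
print(moving_average(raw))                  # spike at index 3 is attenuated
```

The trade-off this makes visible is exactly the paper's latency concern: heavier smoothing suppresses artifacts but delays and blurs the genuine hemodynamic response a passive BCI is trying to detect.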

    Interactions Between Humans, Virtual Agent Characters and Virtual Avatars

    Simulations allow people to experience events as if they were happening in the real world, in a way that is safer and less expensive than live training. Despite improvements in the realism of simulated environments, one area that still presents a challenge is interpersonal interaction. The subtleties of what makes an interaction rich are difficult to define. We may never fully understand the complexity of human interchanges; however, there is value in building on existing research into how individuals react to virtual characters to inform future investments. Virtual characters can either be automated through computational processes, referred to as agents, or controlled by a human, referred to as avatars. Knowledge of interactions with virtual characters will facilitate the building of simulated characters that support training tasks in a manner that appropriately engages learners. Ultimately, the goal is to understand what might cause people to engage or disengage with virtual characters. To answer that question, it is important to establish metrics that would indicate when people believe their interaction partner is real, or has agency. This study used three types of measures: objective, behavioral, and self-report. The objective measures were neural activity, galvanic skin response, and heart rate; the behavioral measures were gestures and facial expressions; and surveys provided self-report data. The objective of this research was to determine which metrics could be used during social interactions to gauge the sense of agency attributed to an interaction partner. The results provide valuable feedback on how users need to see and be seen by their interaction partner to ensure non-verbal cues provide context and additional meaning to the dialog. This study offers insight into areas of future research, providing a foundation of knowledge and lessons learned for further exploration. This can lead to more realistic experiences that open the door to human-dimension training.