Safety Culture, Training, Understanding, Aviation Passion: The Impact on Manual Flight and Operational Performance
The objective of this study was to understand pilots’ proclivity toward automation usage by identifying the relationship among pilot training, aircraft and systems understanding, safety culture, manual flight behavior, and aviation passion. A survey instrument titled Manual Flight Inventory (MFI) was designed to gather and assess self-reported variables of manual flight behavior, aviation passion, safety culture perception, pilot training, and pilot understanding. Demographic data and automation opinion-based questions were also asked to fully understand pilots’ thoughts on automation, safety culture, policies, procedures, training methodologies and assessment measures, levels of understanding, and study techniques. Exploratory Factor Analysis (EFA) was utilized to identify underlying factors from the data, followed by confirmatory factor analysis (CFA) to confirm the factor structure. Structural Equation Modeling (SEM) was utilized to test the relationships between the variables. All hypotheses were significant; however, four of the thirteen hypotheses were not supported due to a negative relationship. The significant predictors of manual flight were identified to be pilot understanding, pilot training, aviation passion, and safety culture. Pilots’ understanding of the aircraft operating systems was determined to have the greatest influence over a pilot’s decision to manually fly. Aviation passion was identified as the second largest influencing factor. Pilot training had the greatest influence over pilot understanding, and safety culture presented the greatest influence over pilot training. Results identified that safety culture was negatively impacting pilot training, and pilot training had a negative influence over pilots’ decision to manually fly. 
The contributions of this research have identified the significance of safety culture, as associated with Safety Management Systems (SMS), as an influencing factor over pilot training and resultant operational performance. Pilot understanding is a direct result of pilot training, and current training practices are negatively influencing the decision for manual flight. Therefore, a solution to the industry problem—operational confusion (understanding), guidance versus control (Abbott, 2015), and the lack of hand-flying skills and monitoring ability (OIG, 2016)—can now be addressed by improving training practices. Future research directions and recommendations were provided.
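The EFA step described in the abstract above (extracting latent factors from correlated survey items, then retaining those that satisfy the Kaiser criterion) can be sketched on synthetic data. This is an illustrative sketch only: the factor names, item counts, and all numbers below are assumptions, not the MFI data or the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 300 respondents answering 6 survey items driven by 2
# hypothetical latent factors (e.g. "training" and "passion").
n = 300
training = rng.normal(size=n)
passion = rng.normal(size=n)
items = np.column_stack([
    training + rng.normal(scale=0.5, size=n),  # items 1-3 load on factor 1
    training + rng.normal(scale=0.5, size=n),
    training + rng.normal(scale=0.5, size=n),
    passion + rng.normal(scale=0.5, size=n),   # items 4-6 load on factor 2
    passion + rng.normal(scale=0.5, size=n),
    passion + rng.normal(scale=0.5, size=n),
])

# Factor extraction from the item correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]          # eigh returns ascending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Kaiser criterion: retain factors with eigenvalue > 1.
n_factors = int(np.sum(eigvals > 1.0))
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
print(n_factors)  # → 2 (the two simulated factors are recovered)
```

A CFA would then fix this loading pattern and test its fit on a fresh sample, and SEM would add the hypothesised paths between the latent variables, as the study describes.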
Automation bias and prescribing decision support – rates, mediators and mitigators
Purpose: Computerised clinical decision support systems (CDSS) are implemented within healthcare settings as a method to improve clinical decision quality, safety and effectiveness, and ultimately patient outcomes. Though CDSSs tend to improve practitioner performance and clinical outcomes, relatively little is known about the specific impact of inaccurate CDSS output on clinicians. Although there is high heterogeneity between CDSS types and studies, reviews of the ability of CDSS to prevent medication errors arising from incorrect decisions have been consistently positive, with CDSS working by improving clinical judgement and decision making. However, it is known that occasional incorrect advice may tempt users to reverse a correct decision, and thus introduce new errors. These systematic errors can stem from Automation Bias (AB), an effect that has had little investigation within the healthcare field, in which users tend to use automated advice heuristically.
Research is required to assess the rate of AB, identify the factors and situations involved in overreliance, and propose ways to mitigate risk and refine the appropriate usage of CDSS; this can promote awareness of the effect and ensure that the benefits gained from the implementation of CDSS are maximised.
Background: A broader literature review was carried out, coupled with a systematic review of studies investigating the impact of automated decision support on user decisions across various clinical and non-clinical domains. This aimed to identify gaps in the literature and build an evidence-based model of reliance on Decision Support Systems (DSS), particularly a bias towards over-using automation. The reviews yielded a number of postulates: that CDSS are socio-technical systems, and that the factors involved in CDSS misuse can range from overarching social or cultural factors and individual cognitive variables to more specific technology design issues. However, the systematic review revealed a paucity of direct empirical evidence for this effect.
The reviews identified the variables involved in automation bias, informing a conceptual model of overreliance, the initial development of an ontology for AB, and ultimately an empirical study investigating the potential factors involved: task difficulty, time pressure, CDSS trust, decision confidence, CDSS experience and clinical experience. The domain of primary care prescribing was chosen for the empirical study, due to the evidence supporting CDSS usefulness in prescribing and the high rate of prescribing error.
Empirical Study Methodology: Twenty simulated prescribing scenarios with associated correct and incorrect answers were developed and validated by prescribing experts. An online Clinical Decision Support Simulator was used to display scenarios to users. NHS General Practitioners (GPs) were contacted via emails through associates of the Centre for Health Informatics, and through a healthcare mailing list company.
Twenty-six GPs participated in the empirical study. The study was designed so each participant viewed and gave prescriptions for 20 prescribing scenarios, 10 coded as “hard” and 10 coded as “medium” prescribing scenarios (N = 520 prescribing cases were answered overall). Scenarios were accompanied by correct advice 70% of the time, and incorrect advice 30% of the time (in equal proportions in either task difficulty condition). Both the order of scenario presentation and the correct/incorrect nature of advice were randomised to prevent order effects.
The planned time pressure condition was dropped due to low response rate.
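A hypothetical sketch of the trial schedule described above; the field names and structure are assumptions, and only the counts (20 scenarios, 10 per difficulty, 70% correct / 30% incorrect advice within each difficulty, randomised order) come from the text.

```python
import random

def build_schedule(seed=None):
    """Build one participant's randomised run of 20 prescribing scenarios."""
    rng = random.Random(seed)
    trials = []
    for difficulty in ("hard", "medium"):
        # 70/30 correct/incorrect advice split within each difficulty.
        advice = ["correct"] * 7 + ["incorrect"] * 3
        for a in advice:
            trials.append({"difficulty": difficulty, "advice": a})
    rng.shuffle(trials)  # randomise presentation order to prevent order effects
    return trials

schedule = build_schedule(seed=1)
print(len(schedule))  # → 20
```

Shuffling the full list randomises both the scenario order and where correct/incorrect advice falls, matching the two randomisations the methodology describes.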
Results: To allow comparison with previous literature, which took overall decisions into account, individual cases were first analysed in aggregate (N = 520): the pre-advice accuracy rate of the clinicians was 50.4%, which improved to 58.3% post-advice. The CDSS improved the decision accuracy in 13.1% of prescribing cases. The rate of AB, as measured by decision switches from correct pre-advice to incorrect post-advice, was 5.2% of all cases at a CDSS accuracy rate of 70%, leading to a net improvement of 8%.
However, this by-case type of analysis may not enable generalisation of results (though it illustrates rates in this specific situation); individual participant differences must be taken into account. By participant (N = 26), when advice was correct, decisions were more likely to be switched to a correct prescription; when advice was incorrect, decisions were more likely to be switched to an incorrect prescription.
There was a significant correlation between decision switching and AB error.
By participant, more immediate factors such as trust in the specific CDSS, decision confidence, and task difficulty influenced the rate of decision switching. Lower clinical experience was associated with more decision switching (but not a higher AB rate). The rate of AB was problematic to analyse due to the low number of instances – the effect could potentially have been greater. The between-subjects effect of time pressure could not be investigated due to the low response rate.
Age, DSS experience and trust in CDSS generally were not significantly associated with decision switching.
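The headline rates above fit together arithmetically: the net benefit of the CDSS is the proportion of cases it fixed minus the automation-bias errors it induced. A quick check, using only the numbers reported in the results above:

```python
# Values taken from the reported results; this just makes the
# arithmetic explicit, it is not new data.
n_cases = 520
pre_accuracy = 0.504    # correct before seeing CDSS advice
post_accuracy = 0.583   # correct after seeing advice

gain = 0.131            # cases switched incorrect -> correct by the CDSS
ab_error = 0.052        # AB cases switched correct -> incorrect

# Net improvement = gains minus automation-bias losses,
# which matches the reported pre-to-post shift (~8 points).
net = gain - ab_error
print(round(net * 100, 1))  # → 7.9
```

So the "net improvement of 8%" is the 13.1% of decisions the advice corrected, offset by the 5.2% it degraded.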
Conclusion: There is a gap in the current literature investigating inappropriate CDSS use, but the general literature supports an interactive, multi-factorial aetiology for automation misuse. Automation bias is a consistent effect with various potential direct and indirect causal factors. It may be mitigated by altering advice characteristics to aid clinicians’ awareness of advice correctness and support their own informed judgement – this needs further empirical investigation. Users’ own clinical judgement must always be maintained, and systems should not be followed unquestioningly.
Application of neuroergonomics in the industrial design of mining equipment.
Neuroergonomics is an interdisciplinary field merging neuroscience and ergonomics to optimize performance. In order to design an optimal user interface, we must understand the cognitive processing involved. Traditional methodology incorporates self-assessment from the user. This dissertation examines the use of neurophysiological techniques in quantifying the cognitive processing involved in allocating cognitive resources. Attentional resources, cognitive processing, memory and visual scanning are examined to test the ecological validity of theoretical laboratory settings and how they translate to real-life settings. By incorporating a non-invasive measurement technique, such as the quantitative electroencephalogram (QEEG), we are able to examine connectivity patterns in the brain during operation and discern whether or not a user has obtained expert status. Understanding the activation patterns during each phase of design will allow us to gauge whether our design has balanced the cognitive requirements of the user.
Doctor of Philosophy (PhD) in Natural Resources Engineering
Aerospace medicine and biology: A continuing bibliography with indexes (supplement 324)
This bibliography lists 200 reports, articles and other documents introduced into the NASA Scientific and Technical Information System in May 1989. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance.
The Effects of Automation Expertise, System Confidence, and Image Quality on Trust, Compliance, and Performance
This study examined the effects of automation expertise, system confidence, and image quality on automation trust, compliance, and detection performance. One hundred and fifteen participants completed a simulated military target detection task while receiving advice from an imperfect diagnostic aid that varied in expertise (expert vs. novice) and confidence (75% vs. 50% vs. 25% vs. no aid). The task required participants to detect covert enemy targets in simulated synthetic aperture radar (SAR) images. Participants reported whether a target was present or absent, their decision confidence, and their trust in the diagnostic system’s advice. Results indicated that system confidence and automation expertise influenced automation trust, compliance, and measures of detection performance, particularly when image quality was poor. Results also highlighted several incurred costs of system confidence and automation expertise. Participants were more apt to generate false alarms as system confidence increased and when receiving diagnostic advice from the expert system. Data also suggest participants adopted an analogical trust-tuning strategy rather than an analytical strategy when evaluating system confidence ratings. This resulted in inappropriate trust when system confidence was low. Theoretical and practical implications regarding the effects of system confidence and automation expertise on automation trust and the design of diagnostic automation are discussed.
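Detection performance in a present/absent task like the one above is commonly summarised with signal detection theory, separating sensitivity from response bias. A minimal sketch, using made-up counts rather than the study's data:

```python
from statistics import NormalDist

# Sensitivity index: d' = z(hit rate) - z(false-alarm rate),
# where z is the inverse of the standard normal CDF.
z = NormalDist().inv_cdf

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Hypothetical observer: 80% hits, 20% false alarms.
print(round(d_prime(40, 10, 10, 40), 2))  # → 1.68
```

In this framework, the reported pattern (more false alarms under high system confidence and expert advice) is a shift in response bias toward "target present" rather than a change in sensitivity.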
Human performance in agile production systems : a longitudinal study in system outcomes, human cognition, and quality of work life.
This dissertation examines a research objective associated with human performance in agile production systems, with specific attention to the hypothesis that system outcomes are the causal result of worker human cognition and quality-of-work-life attributes experienced in an agile production system. The development and adoption of world-class agile production systems has been an immediate economic answer to the worldwide competitive call for more efficient, more cost-effective, and more quality-laden production processes, but has the human element of these processes been fully understood and optimized? The current literature suggests that the recent movement toward higher standards in system outcomes (i.e. increased quality, decreased costs, improved delivery schedules, etc.) has not been truly evaluated. The human-machine interaction has not been fully comprehended, not to mention quantified; the role of human cognition is still under evaluation; and the coupling of the entire production system with respect to the human quality of life has yielded conflicting messages. The dissertation research conducted a longitudinal study to evaluate the interrelationships occurring between system outcomes, applicable elements of human cognition, and the quality-of-work-life issues associated with human performance in agile production systems. A structural equation modeling analysis aided the evaluation of the hypotheses of the dissertation by synthesizing the three instruments measuring the appropriate latent variables (1. system outcomes: empirical data; 2. human cognition: cognitive task analysis; 3. quality of work life: questionnaires) into a single hypothesized model. These instruments were administered in four waves during the eight-month longitudinal study. The latent variables of system outcomes, human cognition, and quality of work life were shown to be quantifiable and causal in nature.
System outcomes were indicated to be a causal result of the combined, yet uncorrelated, effect of human cognition and quality-of-work-life attributes experienced by workers in agile production systems. In addition, this latent variable relationship is situational, varying with the context of, but not necessarily the time exposed to, the particular task the worker is involved with. An implication of this study is that quality-of-work-life attributes are long-term determinants of human performance, whereas human cognition attributes are immediate, activity-based determinants of human performance in agile production systems.
The Effects Of Modulating Accommodative-Vergence Stress Within The Context Of Operator Performance On Automated System Tasks
Automated systems (e.g., self-driving cars, autopilot) can reduce an operator’s (i.e., driver, pilot, baggage screener) task engagement, which can result in mind wandering, distraction, and loss of concentration. Consequently, unfavorable performance outcomes, such as missed critical signals and slow responses to emergency events, can occur. Because automation reverts the operator to a “visual monitoring” role, the oculomotor accommodative-vergence responses (the oculomotor responses that maintain a single focused image on the retina) may play a vital role in human-automation interactions. Prior research has shown that individuals with deficits in the accommodative-vergence responses can exhibit inattentive symptoms (e.g., poor concentration) characteristic of attention-deficit/hyperactivity disorder (ADHD) while performing prolonged close work (e.g., reading). Given the behavioral symptoms present in those experiencing accommodative-vergence stress, automated systems may exacerbate these negative effects. The current study examined the effects of accommodative-vergence stress in combination with automation on aspects of operator task engagement. Participants (N = 95), viewing either under accommodative-vergence stress (wearing -2.0 diopter lenses) or under normal conditions, completed a 40 min flight simulation task either with or without automation. Physiological dependent measures included electroencephalographic (EEG) parietal-occipital alpha power spectral density (PSD), an EEG multivariate metric of engagement, and pupil diameter. Self-report measures of task engagement, cognitive fatigue, and visual fatigue symptoms were also collected, along with oculomotor measurements (accommodation and convergence) and flight simulation task performance. Multivariate analyses indicated that the application of -2.0 diopter lenses did not significantly alter oculomotor measurements or subjective reports of visual fatigue.
Oculomotor stress modestly affected task performance and tended to result in increased EEG measures of engagement, while subsequently increasing feelings of fatigue, potentially indicating a compensatory effort response. Participants performing the simulation with automation exhibited significantly lower task engagement, as indicated by greater parietal-occipital alpha PSD, less multivariate EEG engagement, smaller pupil diameter, and lower self-reported engagement. Overall, oculomotor stress and automation did not interact synergistically to affect task engagement and associated performance outcomes. Automation and time on task were the main determinants of task engagement. These results underscore the negative effects automation can have on underlying operator cognitive states and the associated need to carefully design automation to combat reduced task engagement. Applications for system design and the use of EEG in augmented cognition systems involving automation are discussed.
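The parietal-occipital alpha PSD measure used as an engagement index above (more alpha power indicating lower engagement) can be sketched on a synthetic trace. The sampling rate, duration, and signal below are illustrative assumptions, not study parameters.

```python
import numpy as np

fs = 256                      # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)   # 4 s epoch

# Synthetic "EEG": a 10 Hz alpha rhythm buried in broadband noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Periodogram estimate of the power spectral density.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * t.size)

# Alpha band (8-12 Hz) power relative to total power.
alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[alpha].sum()
total_power = psd[freqs > 0].sum()
print(round(alpha_power / total_power, 2))  # alpha carries most of the power here
```

In practice an averaged estimator (e.g. Welch's method over overlapping windows) would be used instead of a single periodogram, but the band-power ratio is the same idea.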
The development of a human-robot interface for an industrial collaborative system
Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, there are a number of manufacturing applications involving complex tasks and inconsistent components which prohibit the use of fully automated solutions in the foreseeable future.
A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution as an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot can perform simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of the human hand. Robots in such a system will operate as “intelligent assistants”.
In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interaction requires effective ways of communicating and collaborating to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system is capable of communicating with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance.
The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in close proximity. The system is developed in conjunction with a small-scale collaborative robot system which has been integrated using off-the-shelf components. The system should be capable of receiving input from the human user via an intuitive method, as well as indicating its status to the user effectively. The HRI was developed using a combination of hardware integration and software development. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
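The gesture-command exchange described above can be sketched as a simple mapping with a safe fallback. The gesture names and commands here are hypothetical illustrations, not the thesis's actual vocabulary:

```python
# Hypothetical gesture vocabulary for a collaborative cell.
GESTURE_COMMANDS = {
    "open_palm": "STOP",
    "thumbs_up": "RESUME",
    "point_left": "MOVE_LEFT",
    "point_right": "MOVE_RIGHT",
}

def command_for(gesture: str) -> str:
    # Default to STOP for anything unrecognised: when a human shares
    # the workspace, halting is the safe fallback, never guessing.
    return GESTURE_COMMANDS.get(gesture, "STOP")

print(command_for("thumbs_up"))  # → RESUME
```

Mapping unknown input to a halt, rather than rejecting it silently, reflects the safety-first stance a close-proximity collaborative system needs; the robot's visual status signal would then show the operator which command was understood.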