Automation in Surgery: The Surgeons' Perspective on Human Factors Issues of Image-Guided Navigation
This publication is freely accessible with the permission of the rights owner, due to an Alliance licence and a national licence (funded by the DFG, German Research Foundation).
Image-guided navigation (IGN) systems support the surgeon in navigating through the patient's anatomy. Previous research on IGN has focused on technical feasibility and clinical applications. Yet, as the introduction of IGN corresponds to a partial automation of the surgeon's task, well-known issues of human-automation interaction might play a crucial role for the success of IGN as well. The present study represents a first attempt to assess the impact of IGN on four key issues of human-automation interaction, i.e., workload, situation awareness, trust, and skill degradation, from the surgeons' perspective. A nation-wide survey among 213 German surgeons from 94 different hospitals was conducted. Results revealed (1) a workload shift due to IGN rather than a reduction of workload, (2) benefits of IGN with respect to situation awareness, (3) comparatively high levels of perceived reliability, trust and reliance, and (4) skill degradation as a possible risk, albeit only for inexperienced surgeons.
Why Are People's Decisions Sometimes Worse with Computer Support?
In many applications of computerised decision support, a recognised source of undesired outcomes is operators' apparent over-reliance on automation. For instance, an operator may fail to react to a potentially dangerous situation because a computer fails to generate an alarm. However, the very use of terms like "over-reliance" betrays possible misunderstandings of these phenomena and their causes, which may lead to ineffective corrective action (e.g. training or procedures that do not counteract all the causes of the apparently "over-reliant" behaviour). We review relevant literature in the area of "automation bias" and describe the diverse mechanisms that may be involved in human errors when using computer support. We discuss these mechanisms, with reference to errors of omission when using "alerting systems", with the help of examples of novel counterintuitive findings we obtained from a case study in a health care application, as well as other examples from the literature.
Theoretical, Measured and Subjective Responsibility in Aided Decision Making
When humans interact with intelligent systems, their causal responsibility for outcomes becomes equivocal. We analyze the descriptive abilities of a newly developed responsibility quantification model (ResQu) to predict actual human responsibility and perceptions of responsibility in the interaction with intelligent systems. In two laboratory experiments, participants performed a classification task. They were aided by classification systems with different capabilities. We compared the predicted theoretical responsibility values to the actual measured responsibility participants took on and to their subjective rankings of responsibility. The model predictions were strongly correlated with both measured and subjective responsibility. A bias existed only when participants with poor classification capabilities relied less than optimally on a system that had superior classification capabilities and assumed higher-than-optimal responsibility. The study implies that when humans interact with advanced intelligent systems, with capabilities that greatly exceed their own, their comparative causal responsibility will be small, even if formally the human is assigned major roles. Simply putting a human into the loop does not assure that the human will meaningfully contribute to the outcomes. The results demonstrate the descriptive value of the ResQu model to predict behavior and perceptions of responsibility by considering the characteristics of the human, the intelligent system, the environment, and some systematic behavioral biases. The ResQu model is a new quantitative method that can be used in system design and can guide policy and legal decisions regarding human responsibility in events involving intelligent systems.
Assessing the communication gap between AI models and healthcare professionals: explainability, utility and trust in AI-driven clinical decision-making
This paper contributes a pragmatic evaluation framework for explainable Machine Learning (ML) models for clinical decision support. The study revealed a more nuanced role for ML explanation models when these are pragmatically embedded in the clinical context. Despite the generally positive attitude of healthcare professionals (HCPs) towards explanations as a safety and trust mechanism, for a significant set of participants there were negative effects associated with confirmation bias, accentuating model over-reliance and increased effort to interact with the model. Also, contradicting one of its main intended functions, standard explanatory models showed limited ability to support a critical understanding of the limitations of the model. However, we found new significant positive effects which reposition the role of explanations within a clinical context: these include reduction of automation bias, addressing ambiguous clinical cases (cases where HCPs were not certain about their decision), and support of less experienced HCPs in the acquisition of new domain knowledge.
ATM automation: guidance on human technology integration
© Civil Aviation Authority 2016. Human interaction with technology and automation is a key area of interest to industry and safety regulators alike. In February 2014, a joint CAA/industry workshop considered perspectives on present and future implementation of advanced automated systems. The conclusion was that whilst no additional regulation was necessary, guidance material for industry and regulators was required. Development of this guidance document was completed in 2015 by a working group consisting of the CAA, UK industry, academia and industry associations (see Appendix B). This enabled a collaborative approach to be taken, and for regulatory, industry, and workforce perspectives to be collectively considered and addressed. The processes used in developing this guidance included: review of the themes identified from the February 2014 CAA/industry workshop; review of academic papers, textbooks on automation, and incidents and accidents involving automation; identification of key safety issues associated with automated systems; analysis of current and emerging ATM regulatory requirements and guidance material; and presentation of emerging findings for critical review at UK and European aviation safety conferences. In December 2015, a workshop of senior management from project partner organisations reviewed the findings and proposals. EASA were briefed on the project before its commencement, and Eurocontrol contributed through membership of the Working Group.