Does A Loss of Social Credibility Impact Robot Safety?
This position paper discusses the safety-related functions performed by assistive robots and explores the relationship between trust and effective safety risk mitigation. We identify a measure of the robot's social effectiveness, termed social credibility, and discuss how social credibility may be gained and lost. This paper's contribution is the identification of a link between social credibility and safety-related performance. Accordingly, we draw on analyses of existing systems to demonstrate how an assistive robot's safety-critical functionality can be impaired by a loss of social credibility. In addition, we discuss some of the consequences of prioritising either safety-related functionality or social engagement. We propose identifying a mixed-criticality scheduling algorithm in order to maximise both safety-related performance and social engagement.
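The abstract only proposes such an algorithm; as a minimal sketch of the underlying idea, a mixed-criticality scheduler can always run safety-critical tasks ahead of social tasks while still dispatching social tasks in the remaining slack. All task names, criticality levels, and timings below are hypothetical, not taken from the paper.

```python
# Hypothetical mixed-criticality scheduling sketch for an assistive robot:
# safety-critical tasks always preempt social tasks at dispatch time, but
# social tasks still run when no safety work is pending. Illustrative only.
import heapq

SAFETY, SOCIAL = 0, 1  # lower number = higher criticality


def schedule(tasks, horizon):
    """tasks: list of (criticality, release, duration, name). Returns run order."""
    pending = sorted(tasks, key=lambda x: x[1])  # by release time
    queue, order, t, i = [], [], 0, 0
    while t < horizon and (queue or i < len(pending)):
        # admit every task released by time t
        while i < len(pending) and pending[i][1] <= t:
            heapq.heappush(queue, pending[i])
            i += 1
        if not queue:          # idle until the next release
            t = pending[i][1]
            continue
        crit, rel, dur, name = heapq.heappop(queue)  # lowest criticality value first
        order.append(name)
        t += dur
    return order


tasks = [
    (SOCIAL, 0, 2, "greet_user"),
    (SAFETY, 1, 1, "obstacle_check"),
    (SAFETY, 0, 1, "fall_monitor"),
    (SOCIAL, 0, 3, "small_talk"),
]
print(schedule(tasks, horizon=10))
# -> ['fall_monitor', 'obstacle_check', 'greet_user', 'small_talk']
```

Both safety tasks are dispatched before either social task, yet the social tasks still execute once the safety queue is empty, which is the trade-off the abstract aims to balance.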
Analysis of Human and Agent Characteristics on Human-Agent Team Performance and Trust
The human-agent team represents a new construct in how the United States Department of Defense is orchestrating mission planning and mission accomplishment. In order for mission planning and accomplishment to be successful, several requirements must be met: a firm understanding of human trust in automated agents, of how human and automated agent characteristics influence human-agent team performance, and of how humans behave. This thesis applies a combination of modeling techniques and human experimentation to understand the aforementioned concepts. The modeling techniques used include static modeling in SysML activity diagrams and dynamic modeling of both human and agent behavior in IMPRINT. Additionally, this research included human experimentation in a dynamic, event-driven teaming environment known as Space Navigator. Both the modeling and the experimentation show that the agent's reliability has a significant effect on human-agent team performance. Additionally, this research found that the age, gender, and education level of the human user have a relationship with the user's perceived trust in the agent. Finally, it was found that patterns of compliant human behavior, or archetypes, can be created to classify human users.
A proposed psychological model of driving automation
This paper considers psychological variables pertinent to driving automation. It is anticipated that driving with automated systems is likely to have a major impact on drivers, and a multiplicity of factors needs to be taken into account. A systems analysis of the driver, vehicle and automation served as the basis for eliciting psychological factors. The main variables considered were: feedback, locus of control, mental workload, driver stress, situational awareness and mental representations. It is expected that anticipating the effects of vehicle automation on the driver could lead to improved design strategies. Based on research evidence in the literature, the psychological factors were assembled into a model for further investigation.
The Effects of Age and Working Memory Demands on Automation-Induced Complacency
Complacency refers to a type of automation use expressed as insufficient monitoring and verification of automated functions. Previous studies have attempted to identify the age-related factors that influence complacency during interaction with automation. However, little is known about the role of age-related differences in working memory capacity and its connection to complacent behaviors. The current study examined whether the working memory demand of an automated task and age-related differences in cognitive ability influence complacency. Working memory demand was manipulated in the task with two degrees of automation (i.e., information and decision). A younger and an older age group were included to observe the effects of differences in working memory capacity on performance in a targeting task using an automated aid. The results of the study show that younger and older adults did not significantly differ in complacent behavior for information or decision automation. Also, individual differences in working memory capacity did not predict complacency in the automated task. However, these findings do not disprove the role of working memory in automation-induced complacency. Both age groups were more complacent with automation that had less working memory demand. Our findings suggest that systems utilizing both higher and lower degrees of automation could limit overdependence. These results provide implications for the design of automated interfaces.
Impact of Information and Communication Technology (ICT) on construction projects
The changing face of construction projects has resulted in a movement towards the use of technology as a primary means of communication. One consequence of this rise in the use of information and communication technology (ICT) is a loss of interpersonal communication skills. A number of resulting issues within the human–electronic and human–human interfaces are identified in an attempt to define the efficiency of communication in projects. The research shows how ICT affects the social environment of construction project teams and the project outcome. The study seeks to confirm the need for further work in order to develop new forms of communication protocols and behaviour. An initial literature review was undertaken to develop a theoretical review of the impacts of ICT on construction project teams. This review identified a number of issues that were then tested in the field through an observation and two verification interviews. The research confirms the existence of tensions and conflicts in the human–electronic and human–human communication interfaces within the studied environment. It is proposed that the increasing use of ICT occurs at the expense of soft-system communication. The principal impact of this is a form of "human distraction" which adversely affects the performance of project teams. The limited theory exploring these issues suggests that the problems identified are not well understood, indicating a gap in knowledge.
Automation bias and prescribing decision support – rates, mediators and mitigators
Purpose: Computerised clinical decision support systems (CDSS) are implemented within healthcare settings as a method to improve clinical decision quality, safety and effectiveness, and ultimately patient outcomes. Though CDSSs tend to improve practitioner performance and clinical outcomes, relatively little is known about the specific impact of inaccurate CDSS output on clinicians. Although there is high heterogeneity between CDSS types and studies, reviews of the ability of CDSS to prevent medication errors arising from incorrect decisions have been consistently positive, with CDSS working by improving clinical judgement and decision making. However, it is known that occasional incorrect advice may tempt users to reverse a correct decision, and thus introduce new errors. These systematic errors can stem from Automation Bias (AB), an effect which has had little investigation within the healthcare field, whereby users tend to use automated advice heuristically.
Research is required to assess the rate of AB, identify the factors and situations involved in overreliance, and propose ways to mitigate risk and refine the appropriate usage of CDSS; this can provide information to promote awareness of the effect and ensure that the benefits gained from the implementation of CDSS are maximised.
Background: A broader literature review was carried out, coupled with a systematic review of studies investigating the impact of automated decision support on user decisions across various clinical and non-clinical domains. This aimed to identify gaps in the literature and build an evidence-based model of reliance on Decision Support Systems (DSS), particularly a bias towards over-using automation. The literature review and systematic review revealed a number of postulates: that CDSS are socio-technical systems, and that the factors involved in CDSS misuse can range from overarching social or cultural factors, through individual cognitive variables, to more specific technology design issues. However, the systematic review revealed a paucity of deliberate empirical evidence for this effect.
The reviews identified the variables involved in automation bias, informing a conceptual model of overreliance, the initial development of an ontology for AB, and ultimately an empirical study to investigate the potential factors involved: task difficulty, time pressure, CDSS trust, decision confidence, CDSS experience and clinical experience. The domain of primary care prescribing was chosen for the empirical study, due to the evidence supporting CDSS usefulness in prescribing and the high rate of prescribing error.
Empirical Study Methodology: Twenty simulated prescribing scenarios with associated correct and incorrect answers were developed and validated by prescribing experts. An online Clinical Decision Support Simulator was used to display scenarios to users. NHS General Practitioners (GPs) were contacted via emails through associates of the Centre for Health Informatics, and through a healthcare mailing list company.
Twenty-six GPs participated in the empirical study. The study was designed so that each participant viewed and gave prescriptions for 20 prescribing scenarios, 10 coded as 'hard' and 10 coded as 'medium' (N = 520 prescribing cases were answered overall). Scenarios were accompanied by correct advice 70% of the time and incorrect advice 30% of the time (in equal proportions in each task-difficulty condition). Both the order of scenario presentation and the correct/incorrect nature of advice were randomised to prevent order effects.
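The design described above can be sketched as a small generator: 10 scenarios per difficulty, 7 of 10 accompanied by correct advice (70%) in each condition, with both the advice assignment and the presentation order shuffled. This is an illustrative reconstruction of the reported design, not the authors' actual materials.

```python
# Sketch of the reported study design: 20 scenarios (10 'hard', 10 'medium'),
# correct advice for 70% of scenarios within each difficulty condition,
# presentation order randomised to prevent order effects. Illustrative only.
import random


def build_session(seed=None):
    rng = random.Random(seed)
    session = []
    for difficulty in ("hard", "medium"):
        advice = ["correct"] * 7 + ["incorrect"] * 3  # 7/10 = 70% correct
        rng.shuffle(advice)                            # randomise assignment
        session += [(difficulty, a) for a in advice]
    rng.shuffle(session)                               # randomise order
    return session


session = build_session(seed=1)
assert len(session) == 20
assert sum(a == "correct" for _, a in session) == 14   # 70% of 20
assert sum(d == "hard" for d, _ in session) == 10
```

With 26 participants each answering one such session, this yields the 520 prescribing cases analysed below.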
The planned time pressure condition was dropped due to low response rate.
Results: To enable comparison with previous literature, analysis first took individual cases into account (N = 520). The pre-advice accuracy rate of the clinicians was 50.4%, which improved to 58.3% post-advice. The CDSS improved the decision accuracy in 13.1% of prescribing cases. The rate of AB, as measured by decision switches from correct pre-advice to incorrect post-advice, was 5.2% of all cases at a CDSS accuracy rate of 70%, leading to a net improvement of 8%.
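These reported rates are internally consistent: the 13.1% of cases switched to correct, minus the 5.2% switched to incorrect through AB, matches the rise from 50.4% to 58.3% accuracy (both about 7.9 percentage points, reported as ~8%). A quick check:

```python
# Reproduce the net-improvement arithmetic reported for the 520 cases.
n_cases = 520
pre_accuracy = 0.504   # correct before seeing advice
post_accuracy = 0.583  # correct after seeing advice
gain = 0.131           # switched incorrect -> correct (CDSS benefit)
ab_loss = 0.052        # switched correct -> incorrect (automation bias)

net = gain - ab_loss
print(round(net * 100, 1))                              # 7.9 percentage points
print(round((post_accuracy - pre_accuracy) * 100, 1))   # 7.9, consistent
```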
However, this by-case analysis may not enable generalisation of results (though it illustrates rates in this specific situation); individual participant differences must be taken into account. By participant (N = 26), when advice was correct, decisions were more likely to be switched to a correct prescription; when advice was incorrect, decisions were more likely to be switched to an incorrect prescription.
There was a significant correlation between decision switching and AB error.
By participant, more immediate factors such as trust in the specific CDSS, decision confidence, and task difficulty influenced the rate of decision switching. Lower clinical experience was associated with more decision switching (but not a higher AB rate). The rate of AB was somewhat problematic to analyse due to the low number of instances – the effect could potentially have been greater. The between-subjects effect of time pressure could not be investigated due to the low response rate.
Age, DSS experience and trust in CDSS generally were not significantly associated with decision switching.
Conclusion: There is a gap in the current literature investigating inappropriate CDSS use, but the general literature supports an interactive multi-factorial aetiology for automation misuse. Automation bias is a consistent effect with various potential direct and indirect causal factors. It may be mitigated by altering advice characteristics to aid clinicians' awareness of advice correctness and support their own informed judgement – this needs further empirical investigation. Users' own clinical judgement must always be maintained, and systems should not be followed unquestioningly.