
    Predicting Misuse and Disuse of Combat Identification Systems

    Two combat identification systems have been designed to reduce fratricide by providing soldiers with the ability to "interrogate" a potential target by sending a microwave or laser signal that, if returned, identifies the target as a "friend." Ideally, gunners will appropriately rely on these automated aids, which will reduce fratricide rates. However, past research has found that human operators underutilize (disuse) and overly rely on (misuse) automated systems (cf. Parasuraman & Riley, 1997). The purpose of this laboratory study was to simultaneously examine misuse and disuse of an automated decision-making aid at varying levels of reliability. With or without the aid of an automated system that was correct about 90%, 75%, or 60% of the time, 91 college students viewed 226 slides of Fort Sill terrain and indicated the presence or absence of camouflaged soldiers. Regardless of the reliability of the automated aid, misuse was more prevalent than disuse.
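    The misuse/disuse distinction lends itself to a simple trial-level scoring rule. The sketch below is only an illustration of that idea, not the authors' scoring procedure: it counts misuse as agreeing with the aid on trials where the aid is wrong, and disuse as rejecting the aid on trials where it is right. The trial tuples are invented for illustration.

```python
# Hypothetical scoring of misuse and disuse from trial-level data.
# misuse = operator agrees with the aid when the aid is wrong
# disuse = operator rejects the aid's recommendation when the aid is right
trials = [
    # (aid_says_target, target_present, operator_says_target)
    (True,  False, True),    # aid wrong, operator agrees  -> misuse
    (True,  True,  False),   # aid right, operator rejects -> disuse
    (False, False, False),   # aid right, operator agrees  -> appropriate reliance
    (True,  True,  True),    # aid right, operator agrees  -> appropriate reliance
]

misuse_opportunities = [t for t in trials if t[0] != t[1]]   # aid is incorrect
disuse_opportunities = [t for t in trials if t[0] == t[1]]   # aid is correct

misuse_rate = sum(t[2] == t[0] for t in misuse_opportunities) / len(misuse_opportunities)
disuse_rate = sum(t[2] != t[0] for t in disuse_opportunities) / len(disuse_opportunities)

print(f"misuse rate: {misuse_rate:.2f}, disuse rate: {disuse_rate:.2f}")
```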

    Effects of Human–Machine Competition on Intent Errors in a Target Detection Task

    Objective: This investigation examined the impact of human–machine competition (John Henry effects) on intent errors. John Henry effects, expressed as an unwillingness to use automation, were hypothesized to increase as a function of operators’ personal investment in unaided performance. Background: Misuse and disuse often occur because operators (a) cannot determine if automation or a nonautomated alternative maximizes the likelihood of task success (appraisal errors) or (b) know the utilities of the options but disregard this information when deciding to use or not to use automation (intent errors). Although appraisal errors have been extensively studied, there is a paucity of information regarding the causes and prevention of intent errors. Methods: Operators were told how many errors they and an automated device made on a target detection task. Self-reliant operators (high personal investment) could depend on their performance or automation to identify a target. Other-reliant operators (low personal investment) could rely on another person or automation. Results: As predicted, self-reliance increased disuse and decreased misuse. Conclusion: When the disuse and misuse data are viewed together, they strongly support the supposition that personal investment in unaided performance affects the likelihood of John Henry effects and intent errors. Application: These results demonstrate the need for a model of operator decision making that takes into account intent as well as appraisal errors. Potential applications include developing interventions to counter the deleterious effects of human–machine competition and intent errors on automation usage decisions.

    Agent Transparency for Intelligent Target Identification in the Maritime Domain, and its impact on Operator Performance, Workload and Trust

    Objective: To examine how increasing the transparency of an intelligent maritime target identification system impacts operator performance, workload and trust in the intelligent agent. Background: Previous research has shown that operator accuracy improves with increased transparency of an intelligent agent’s decisions and recommendations. This can come at the cost of increased workload and response time, although this has not been found by all studies. Prior studies have predominantly focused on route planning and navigation, and it is unclear if the benefits of agent transparency would apply to other tasks such as target identification. Method: Twenty-seven participants were required to identify a number of tracks based on a set of identification criteria and the recommendation of an intelligent agent at three transparency levels in a repeated-measures design. The intelligent agent generated an identification recommendation for each track with different levels of transparency information displayed, and participants were required to determine the identity of the track. For each transparency level, 70% of the recommendations made by the intelligent agent were correct, with incorrect recommendations due to additional information that the agent was not aware of, such as information from the ship’s radar. Participants’ identification accuracy and identification time were measured, and surveys on operator subjective workload and subjective trust in the intelligent agent were collected for each transparency level. Results: The results indicated that increased transparency information improved the operators’ sensitivity to the accuracy of the agent’s decisions and produced a greater tendency to accept the agent’s decision. Increased agent transparency facilitated human-agent teaming without increasing workload or response time when correctly accepting the intelligent agent’s decision, but increased the response time when rejecting the intelligent agent’s incorrect decisions. Participants also reported a higher level of trust when the intelligent agent was more transparent. Conclusion: This study shows the ability of agent transparency to improve performance without increasing workload. Greater agent transparency is also beneficial in building operator trust in the agent. Application: The current study can inform the design and use of uninhabited vehicles and intelligent agents in the maritime context for target identification. It also demonstrates that providing greater transparency of intelligent agents can improve human-agent teaming performance for a previously unstudied task and domain, and hence suggests broader applicability for the design of intelligent agents. Thesis (M.Psych(Organisational & Human Factors)) -- University of Adelaide, School of Psychology, 201

    Enhancing driving safety and user experience through unobtrusive and function-specific feedback

    Inappropriate trust in the capabilities of automated driving systems can result in misuse and insufficient monitoring behaviour that impedes safe manual driving performance following takeovers. Previous studies indicate that the communication of system uncertainty can promote appropriate use and monitoring by calibrating trust. However, existing approaches require the driver to regularly glance at the instrument cluster to perceive changes in uncertainty. This may lead to missed uncertainty changes and user disruptions. Furthermore, the benefits of conveying the uncertainty of different vehicle functions, such as lateral and longitudinal control, have yet to be explored. This research addresses these gaps by investigating the impact of unobtrusive and function-specific feedback on driving safety and user experience. Transferring knowledge from other disciplines, several different techniques will be assessed in terms of their suitability for conveying uncertainty in a driving context.

    A generalizable method and case application for development and use of the Aviation Systems – Trust Survey (AS-TS).

    Automated systems are integral to the development of modern aircraft, especially complex military aircraft. Pilot Trust in Automation (TIA) in these systems is vital for optimizing the pilot-vehicle interface and ensuring pilots use the systems appropriately to complete required tasks. The objective of this research was to develop and validate a TIA scale and survey methodology to identify and mitigate trust deficiencies with automated systems for use in Army Aviation testing. There is currently no standard TIA assessment methodology for U.S. Army aviation pilots that identifies trust deficiencies and potential mitigations. A comprehensive literature review was conducted to identify prominent TIA factors present in similar studies. The compiled list of factors and associated definitions was used in a validation study that employed the Analytic Hierarchy Process (AHP) as a pair-wise comparison tool to identify the TIA factors most relevant to Army pilots. A notional survey, the Aviation Systems – Trust Survey (AS-TS), was developed from the identified factors, and pilots served as subjects in scenario-based testing to establish construct validity for the survey. Exploratory factor analysis was conducted after data collection, and a validated survey was produced. A follow-on study interviewed Army test and evaluation experts to refine the survey methodology and ensure appropriate context for the recommended mitigations. A final packet was developed that included instructions for the rating scale, associated item definitions, and recommended mitigations for trust deficiencies. Future research will focus on other Army demographics to determine the generalizability of the AS-TS.
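    For readers unfamiliar with AHP, the sketch below shows one common way to turn a pairwise comparison matrix into priority weights (a geometric-mean approximation of the principal eigenvector, plus a basic consistency check). The three candidate factors and the judgment values are hypothetical placeholders, not taken from the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three candidate TIA factors
# (e.g., "reliability", "predictability", "transparency" -- placeholder names).
# Entry [i][j] = how much more important factor i is judged to be than factor j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Geometric-mean approximation of the principal eigenvector -> priority weights
geo_means = np.prod(A, axis=1) ** (1.0 / A.shape[0])
weights = geo_means / geo_means.sum()

# Simple consistency check via the principal eigenvalue
lambda_max = max(np.linalg.eigvals(A).real)
n = A.shape[0]
consistency_index = (lambda_max - n) / (n - 1)

print("priority weights:", np.round(weights, 3))
print("consistency index:", round(consistency_index, 3))
```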

    Quantum surveillance and 'shared secrets'. A biometric step too far? CEPS Liberty and Security in Europe, July 2010

    It is no longer sensible to regard biometrics as having neutral socio-economic, legal and political impacts. Newer-generation biometrics are fluid and include behavioural and emotional data that can be combined with other data. Therefore, a range of issues needs to be reviewed in light of the increasing privatisation of ‘security’ that escapes effective, democratic parliamentary and regulatory control and oversight at national, international and EU levels, argues Juliet Lodge, Professor and co-Director of the Jean Monnet European Centre of Excellence at the University of Leeds, UK.

    The Effects of Automation Expertise, System Confidence, and Image Quality on Trust, Compliance, and Performance

    This study examined the effects of automation expertise, system confidence, and image quality on automation trust, compliance, and detection performance. One hundred and fifteen participants completed a simulated military target detection task while receiving advice from an imperfect diagnostic aid that varied in expertise (expert vs. novice) and confidence (75% vs. 50% vs. 25% vs. no aid). The task required participants to detect covert enemy targets in simulated synthetic aperture radar (SAR) images. Participants reported whether a target was present or absent, their decision-confidence, and their trust in the diagnostic system’s advice. Results indicated that system confidence and automation expertise influenced automation trust, compliance, and measures of detection performance, particularly when image quality was poor. Results also highlighted several incurred costs of system confidence and automation expertise. Participants were more apt to generate false alarms as system confidence increased and when receiving diagnostic advice from the expert system. Data also suggest participants adopted an analogical trust tuning strategy rather than an analytical strategy when evaluating system confidence ratings. This resulted in inappropriate trust when system confidence was low. Theoretical and practical implications regarding the effects of system confidence and automation expertise on automation trust and the design of diagnostic automation are discussed.
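    Detection performance and false-alarm behaviour in tasks like this are often summarized with signal detection measures. The sketch below computes sensitivity (d') and response criterion (c) from hit and false-alarm counts; the counts are invented for illustration and are not data from this study.

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and response criterion (c) from raw counts,
    with a log-linear correction to avoid hit/false-alarm rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Hypothetical counts for one participant in one image-quality condition
d, c = dprime(hits=40, misses=10, false_alarms=15, correct_rejections=35)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```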

    The Impact of Trajectory Prediction Uncertainty on Reliance Strategy and Trust Attitude in an Automated Air Traffic Management Environment.

    Future air traffic environments have the potential to exceed human operator capabilities. In response, air traffic control systems are being modernized to provide automated tools to overcome current-day workload limits. Highly accurate aircraft trajectory predictions are a critical element of the automated tools envisioned as part of the evolution of today’s air traffic management system in the United States, known as NextGen. However, automation accuracy is limited by errors due to external variables, such as wind forecast uncertainties. The focus of the Trajectory Prediction Uncertainty simulation at NASA Ames Research Center was the effect of varied levels of accuracy on operators’ tool use during a time-based metering task. The simulation’s environment also provided a means to examine the relationship between an operator’s reliance strategy and underlying trust attitude. Operators were found to exhibit an underlying trust attitude distinct from their reliance strategies, supporting the strategic use of the Human-Automation trust scale in an air traffic control environment.

    The Role of Trust as a Mediator Between System Characteristics and Response Behaviors

    There have been several theoretical frameworks that acknowledge trust as a prime mediator between system characteristics and automation reliance. Some researchers have operationally defined trust as the behavior exhibited. Other researchers have suggested that although trust may guide operator response behaviors, trust does not completely determine the behavior, and they advocate the use of subjective measures of trust. Recently, several studies accounting for temporal precedence failed to confirm that trust mediated the relationship between system characteristics and response behavior. The purpose of the current work was to clarify the roles that trust plays in response behavior when interacting with a signaling system. Forty-four participants interacted with a primary flight simulation task and a secondary signaling system task. The signaling system varied in reliability (90% and 60%) within subjects and error bias (false alarm prone and miss prone) between subjects. Analyses indicated that trust partially mediated the relationship between reliability and agreement rate. Trust did not, however, mediate the relationship between reliability and reaction time. Trust also did not mediate the relationships between error bias and reaction time or agreement rate. Analyses of variance generally supported specific behavioral and trust hypotheses, indicating that the paradigm employed produced effects on response behaviors and subjective estimates of trust similar to those observed in other studies. The results of this study indicate that other mediating variables may offer more predictive power in determining response behaviors. Additionally, strong assumptions that trust acts as the prime mediator, and operational definitions of trust as a type of behavior, should be viewed with caution.
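    As a rough sketch of the kind of mediation test described here (trust mediating the reliability to agreement-rate link), the code below estimates an indirect effect from two ordinary least-squares regressions and a percentile bootstrap. All variable names and the simulated data are hypothetical; this is not the authors' analysis pipeline.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 44  # same order of magnitude as the study's sample; data are simulated

reliability = rng.choice([0.6, 0.9], size=n)                         # predictor (X)
trust = 2.0 * reliability + rng.normal(0, 0.5, n)                     # mediator (M)
agreement = 1.0 * trust + 0.5 * reliability + rng.normal(0, 0.5, n)   # outcome (Y)

def indirect_effect(x, m, y):
    # a-path: X -> M; b-path: M -> Y controlling for X; indirect effect = a * b
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

point = indirect_effect(reliability, trust, agreement)

# Percentile bootstrap for the indirect (mediated) effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(reliability[idx], trust[idx], agreement[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```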