27 research outputs found

    Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management

    Objective: We investigated the effects of level of agent transparency on operator performance, trust, and workload in the context of human–agent teaming for multirobot management. Background: Participants played the role of a heterogeneous unmanned vehicle (UxV) operator and were instructed to complete various missions by giving orders to UxVs through a computer interface. An intelligent agent (IA) assisted the participant by recommending two plans (a top recommendation and a secondary recommendation) for every mission. Method: A within-subjects design with three levels of agent transparency was employed. There were eight missions in each of three experimental blocks, grouped by level of transparency. During each block, the IA was incorrect on three of the eight missions owing to external information (e.g., the commander's intent and intelligence). Operator performance, trust, workload, and usability data were collected. Results: Operator performance, trust, and perceived usability increased as a function of transparency level. Subjective and objective workload data indicate that participants' workload did not increase as a function of transparency, and neither did response time. Conclusion: Unlike previous research, in which increased transparency improved performance and trust calibration at the cost of greater workload and longer response time, our results support the benefits of transparency for performance effectiveness without additional costs. Application: The current results will facilitate the implementation of IAs in military settings and will provide useful data for the design of heterogeneous UxV teams.

    Standardized remission criteria in schizophrenia.

    OBJECTIVE: Recent work has focussed on schizophrenia as a 'deficit' state, but little attention has been paid to defining illness plasticity in terms of symptomatic remission. METHOD: A qualitative review of a recently proposed concept of remission [N.C. Andreasen, W.T. Carpenter Jr, J.M. Kane, R.A. Lasser, S.R. Marder, D.R. Weinberger (2005) Am J Psychiatry 162: 441] is presented. RESULTS: The proposed definition of remission is conceptually viable and can be easily implemented in clinical trials and clinical practice. Its increasing acceptance may reset expectations of treatment to a higher level, improve documentation of clinical status, and facilitate dialogue on treatment expectations. The availability of validated outcome measures based on remission will enhance the conduct and reporting of clinical investigations, and could facilitate the design and interpretation of new studies on cognition and functional outcomes. While useful as a concept, it is important to note that remission is distinct from recovery. CONCLUSION: The introduction of standardized remission criteria may offer significant opportunities for clinical practice, health services research, and clinical trials.

    A Meta-Analysis of Factors Influencing the Development of Trust in Automation

    Objective: We used meta-analysis to assess research concerning human trust in automation to understand the foundation upon which future autonomous systems can be built. Background: Trust is increasingly important given the growing need for synergistic human–machine teaming. Thus, we expand on our previous meta-analytic foundation in the field of human–robot interaction to include all of automation interaction. Method: We used meta-analysis to assess trust in automation. Thirty studies provided 164 pairwise effect sizes, and 16 studies provided 63 correlational effect sizes. Results: The overall effect size of all factors on trust development was g = +0.48, and the correlational effect was r = +0.34, each of which represented a medium effect. Moderator effects were observed for the human-related (g = +0.49; r = +0.16) and automation-related (g = +0.53; r = +0.41) factors. Effect sizes specific to environmental factors were too few in number to permit moderator analysis at this time. Conclusion: Findings provide a quantitative representation of the factors influencing the development of trust in automation and identify additional areas where empirical research is needed. Application: This work has important implications for the enhancement of current and future human–automation interaction, especially in high-risk or extreme performance environments.
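
    The g and r values above are meta-analytic aggregates of standardized effect sizes. For readers unfamiliar with the metric, the following minimal Python sketch shows how a single pairwise Hedges' g is conventionally computed from two-group summary statistics (a bias-corrected standardized mean difference); it is not code or data from the study, and the function name and example values are purely illustrative.

    import math

    def hedges_g(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
        """Bias-corrected standardized mean difference (Hedges' g) for two groups."""
        # Pooled standard deviation across the two groups
        pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                              / (n_a + n_b - 2))
        d = (mean_a - mean_b) / pooled_sd        # Cohen's d
        j = 1 - 3 / (4 * (n_a + n_b) - 9)        # small-sample correction factor
        return d * j

    # Hypothetical trust ratings (illustrative values only, not study data):
    print(round(hedges_g(4.2, 3.6, 1.1, 1.2, 30, 30), 2))

    A g near +0.5, as reported above, corresponds to a medium effect: a difference between conditions of roughly half a pooled standard deviation.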