
    Trust in Queer Human-Robot Interaction

    Full text link
    Human-robot interaction (HRI) systems need to build trust with people of diverse identities. This position paper argues that queer (LGBTQIA+) people must be included in the design and evaluation of HRI systems to ensure their trust in and acceptance of robots. Queer people have faced discrimination and harm from artificial intelligence and robotic systems. Despite calls for increased diversity and inclusion, HRI has not systematically addressed queer issues. This paper suggests three approaches to addressing trust in queer HRI: diversifying human-subject pools, centering queer people in HRI studies, and contextualizing measures of trust. (Comment: In SCRITA 2023 Workshop Proceedings (arXiv:2311.05401), held in conjunction with the 32nd IEEE International Conference on Robot & Human Interactive Communication, 28/08-31/08 2023, Busan, Korea.)

    The Impact of Trajectory Prediction Uncertainty on Reliance Strategy and Trust Attitude in an Automated Air Traffic Management Environment.

    Get PDF
    Future air traffic environments have the potential to exceed human operator capabilities. In response, air traffic control systems are being modernized to provide automated tools that overcome current-day workload limits. Highly accurate aircraft trajectory predictions are a critical element of the automated tools envisioned as part of the evolution of today's air traffic management system in the United States, known as NextGen. However, automation accuracy is limited by external variables such as wind forecast uncertainty. The focus of the Trajectory Prediction Uncertainty simulation at NASA Ames Research Center was the effect of varied levels of automation accuracy on operators' tool use during a time-based metering task. The simulation's environment also provided a means to examine the relationship between an operator's reliance strategy and underlying trust attitude. Operators were found to exhibit an underlying trust attitude distinct from their reliance strategies, supporting the strategic use of the Human-Automation Trust scale in an air traffic control environment.
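
    As an illustration of the distinction this study draws, here is a minimal sketch (in Python, with invented placeholder data) of how one might compare a behavioral reliance measure against a self-reported trust score; the variable names and the 7-point scale scoring are assumptions, not the study's actual analysis.

        # Hypothetical sketch: does behavioral reliance track attitudinal trust?
        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        n_operators = 12

        # Behavioral reliance: fraction of metering decisions delegated to the tool.
        reliance = rng.uniform(0.4, 0.9, n_operators)

        # Attitudinal trust: mean rating on a 7-point human-automation trust scale,
        # generated independently of reliance here for illustration.
        trust = np.clip(4 + rng.normal(0, 1.5, n_operators), 1, 7)

        r, p = pearsonr(reliance, trust)
        print(f"reliance-trust correlation: r={r:.2f}, p={p:.3f}")
        # A weak correlation is consistent with the study's finding that the
        # underlying trust attitude is distinct from the reliance strategy.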

    Augmented Reality for Maintenance Tasks with ChatGPT for Automated Text-to-Action

    Full text link
    Advancements in sensor technology, artificial intelligence (AI), and augmented reality (AR) have unlocked opportunities across various domains. AR and large language models like GPT have made substantial progress and are increasingly being employed in diverse fields. One promising application is in operations and maintenance (O&M). O&M tasks often involve complex procedures and sequences that can be challenging to memorize and execute correctly, particularly for novices or under high-stress situations. By marrying the ability to superimpose virtual objects onto the physical world with GPT's capacity to generate human-like text, we can revolutionize O&M operations. This study introduces a system that combines AR, Optical Character Recognition (OCR), and the GPT language model to optimize user performance while offering trustworthy interactions and alleviating workload in O&M tasks. The system provides an interactive virtual environment controlled by the Unity game engine, facilitating seamless interaction between virtual and physical realities. A case study (N=15) is conducted to illustrate the findings and answer the research questions. The results indicate that users can complete similarly challenging tasks in less time using the proposed AR and AI system. Moreover, the collected data suggest a reduction in cognitive load and an increase in trust when executing the same operations with the AR and AI system. (Comment: 36 pages.)
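
    A minimal sketch of the text-to-action idea described above, assuming pytesseract for OCR and the OpenAI Python client (v1+) for the language model; the prompt, model name, and function are illustrative, not the study's implementation.

        # Sketch: OCR a manual page, then ask a GPT model to turn it into
        # step-by-step actions that an AR front end (e.g. Unity) could render.
        import pytesseract
        from PIL import Image
        from openai import OpenAI

        client = OpenAI()  # expects OPENAI_API_KEY in the environment

        def text_to_actions(image_path: str) -> str:
            # 1. Extract the printed procedure text from the camera frame.
            ocr_text = pytesseract.image_to_string(Image.open(image_path))

            # 2. Ask the language model to convert it into ordered, atomic actions.
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model choice
                messages=[
                    {"role": "system",
                     "content": "Rewrite maintenance instructions as a numbered "
                                "list of short, single-step actions."},
                    {"role": "user", "content": ocr_text},
                ],
            )
            # 3. The AR layer would overlay each returned step on the equipment.
            return response.choices[0].message.content

        print(text_to_actions("manual_page.jpg"))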

    The Importance of Risk Management for the Introduction of Modern Warehouse Technologies

    Get PDF
    The purpose of the study is to determine whether the introduction of modern warehouse technology requires the presence of risk management in the warehouse. On the basis of the literature analysis, it was possible to determine that there is a correlation between the presence of the highest level of risk management and the use of modern warehouse technology in individual warehousing processes. For this purpose, a statistical analysis was carried out on a sample of companies operating in the Slovenian automotive industry. The results did not reveal a tangible correlation between the presence of risk management and the use of individual modern warehouse technologies, the motivation for their use, or errors in their use. The results of the study therefore highlight problems in the warehousing systems of Slovenian companies in the automotive industry, which relate to substandard technological equipment in the warehouses and to the discrepancy between the level of manufacturing automation and the level of warehousing automation. The results are important for the Slovenian automotive industry in terms of implementing modern warehouse technology in the high-tech automotive industry.
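
    To make the kind of analysis concrete, here is a small sketch of a rank correlation test of the sort the study describes; the data and variable names are invented placeholders, not the study's sample.

        # Sketch: does risk-management maturity relate to modern warehouse tech use?
        from scipy.stats import spearmanr

        # Ordinal risk-management maturity per company (1 = none, 5 = highest).
        risk_mgmt_level = [1, 3, 2, 5, 4, 2, 3, 1, 5, 4]

        # Whether the company uses, e.g., automated storage/retrieval (0/1).
        uses_modern_tech = [0, 1, 0, 1, 0, 1, 0, 0, 1, 1]

        rho, p = spearmanr(risk_mgmt_level, uses_modern_tech)
        print(f"Spearman rho={rho:.2f}, p={p:.3f}")
        # A non-significant result, as the study reports, would mean no tangible
        # correlation between risk management and technology adoption.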

    Trust in vehicle technology

    Get PDF
    Driver trust has potentially important implications for how vehicle technology is used and interacted with. This paper shows how driver trust functions and how it can be understood and shaped through insightful vehicle design. It reviews the theoretical literature to define steps that can be taken to establish trust in vehicle technology in the first place, maintain trust in the long term, and even re-establish trust that has been lost along the way. The paper presents a synthesis of the wider trust literature, integrates key trust parameters, describes practically how to measure trust, and presents a set of principles that vehicle designers can use to assess existing design decisions and justify new ones.

    Trust in AutoML: Exploring Information Needs for Establishing Trust in Automated Machine Learning Systems

    Full text link
    We explore trust in a relatively new area of data science: Automated Machine Learning (AutoML). In AutoML, AI methods are used to generate and optimize machine learning models by automatically engineering features, selecting models, and optimizing hyperparameters. In this paper, we seek to understand what kinds of information influence data scientists' trust in the models produced by AutoML. We operationalize trust as a willingness to deploy a model produced using automated methods. We report results from three studies -- qualitative interviews, a controlled experiment, and a card-sorting task -- to understand the information needs of data scientists for establishing trust in AutoML systems. We find that including transparency features in an AutoML tool increased user trust in the tool and made the tool more understandable, and that, of all the proposed features, model performance metrics and visualizations are the most important information for data scientists when establishing their trust in an AutoML tool. (Comment: IUI 2020.)
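
    As a concrete illustration of the transparency feature this study found most important (surfacing per-candidate performance metrics rather than only a winning model), here is a minimal sketch using scikit-learn; the candidate set and metric are assumptions, not the tool evaluated in the paper.

        # Sketch: report every candidate's cross-validated score, not just the
        # winner, so the data scientist can see why a model was chosen.
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_val_score
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_breast_cancer(return_X_y=True)

        candidates = {
            "logistic_regression": LogisticRegression(max_iter=5000),
            "decision_tree": DecisionTreeClassifier(),
            "random_forest": RandomForestClassifier(),
        }

        for name, model in candidates.items():
            scores = cross_val_score(model, X, y, cv=5)
            print(f"{name}: mean accuracy = {scores.mean():.3f} "
                  f"(+/- {scores.std():.3f})")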

    Trustworthy Transparency by Design

    Full text link
    Individuals lack oversight over systems that process their data. This can lead to discrimination and hidden biases that are hard to uncover. Recent data protection legislation tries to tackle these issues, but it is inadequate: it does not prevent data misuse while it stifles sensible use cases for data. We think the conflict between data protection and increasingly data-based systems should be solved differently. When access to data is given, all usages should be made transparent to the data subjects. This enables their data sovereignty, allowing individuals to benefit from sensible data usage while addressing the potentially harmful consequences of data misuse. We contribute to this with a technical concept and an empirical evaluation. First, we conceptualize a transparency framework for software design, incorporating research on user trust and experience. Second, we instantiate and empirically evaluate the framework in a focus group study conducted over three months, centering on the user perspective. Our transparency framework enables developing software that incorporates transparency in its design. The evaluation shows that it satisfies usability and trustworthiness requirements. The provided transparency is experienced as beneficial, and participants feel empowered by it. This shows that our framework enables Trustworthy Transparency by Design.
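
    A minimal sketch of the core idea, assuming a simple Python decorator and an in-memory log; the paper's actual framework is more elaborate, so treat the names and log format here as illustrative.

        # Sketch: record every usage of a subject's data so it can be made
        # transparent to that data subject.
        import functools
        from datetime import datetime, timezone

        usage_log: list[dict] = []  # in practice, a store the subject can query

        def transparent(purpose: str):
            """Record which subject's data was used, when, and for what purpose."""
            def decorator(func):
                @functools.wraps(func)
                def wrapper(subject_id: str, *args, **kwargs):
                    usage_log.append({
                        "subject": subject_id,
                        "operation": func.__name__,
                        "purpose": purpose,
                        "at": datetime.now(timezone.utc).isoformat(),
                    })
                    return func(subject_id, *args, **kwargs)
                return wrapper
            return decorator

        @transparent(purpose="compute personalized recommendations")
        def load_purchase_history(subject_id: str) -> list:
            return []  # stand-in for a real data access

        load_purchase_history("user-42")
        print(usage_log)  # the data subject can inspect every recorded usage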

    To Trust or Distrust Trust Measures: Validating Questionnaires for Trust in AI

    Full text link
    Despite the importance of trust in human-AI interactions, researchers must adopt questionnaires from other disciplines that lack validation in the AI context. Motivated by the need for reliable and valid measures, we investigated the psychometric quality of two trust questionnaires, the Trust between People and Automation scale (TPA) by Jian et al. (2000) and the Trust Scale for the AI Context (TAI) by Hoffman et al. (2023). In a pre-registered online experiment (N = 1485), participants observed interactions with trustworthy and untrustworthy AI (an autonomous vehicle and a chatbot). The results support the psychometric quality of the TAI while revealing opportunities to improve the TPA, which we outline in our recommendations for using the two questionnaires. Furthermore, our findings provide additional empirical evidence that trust and distrust are two distinct constructs that may coexist independently. Building on our findings, we highlight the opportunities and added value of measuring both trust and distrust in human-AI research and advocate for further work on both constructs.
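
    For readers unfamiliar with the psychometric checks involved, here is a worked sketch of one basic step in validating such questionnaires, computing Cronbach's alpha for internal consistency; the response matrix is invented for illustration and is not the study's data.

        # Sketch: Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / total variance)
        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """items: respondents x questionnaire-items matrix of ratings."""
            k = items.shape[1]
            item_variances = items.var(axis=0, ddof=1).sum()
            total_variance = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_variances / total_variance)

        # 6 respondents answering 4 Likert items (1-7), invented data.
        responses = np.array([
            [6, 7, 6, 5],
            [2, 3, 2, 2],
            [5, 5, 6, 6],
            [7, 6, 7, 7],
            [3, 2, 3, 4],
            [4, 4, 5, 4],
        ])
        print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")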

    Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance

    Get PDF
    If a user is presented with an AI system that purports to explain how it works, how do we know whether the explanation works and the user has achieved a pragmatic understanding of the AI? This question entails some key concepts of measurement, such as explanation goodness and trust. We present methods for enabling developers and researchers to: (1) assess the a priori goodness of explanations, (2) assess users' satisfaction with explanations, (3) reveal users' mental models of an AI system, (4) assess users' curiosity or need for explanations, (5) assess whether users' trust in and reliance on the AI are appropriate, and finally, (6) assess how the human-XAI work system performs. The methods we present derive from our integration of extensive research literatures and our own psychometric evaluations. We point to the previous research that led to the measurement scales, which we aggregated and tailored specifically for the XAI context. Scales are presented in sufficient detail to enable their use by XAI researchers. For mental model assessment and work system performance, XAI researchers have choices. We point to a number of methods, expressed in terms of their strengths and weaknesses, together with pertinent measurement issues.
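
    As a small worked example of how such Likert-style scales are typically scored (including reverse-coded items), here is a sketch in Python; the scale length, reverse-coded item, and data are assumptions, not the published scales.

        # Sketch: score a Likert scale, flipping reverse-coded items first.
        import numpy as np

        def score_scale(responses: np.ndarray, reverse_items: list[int],
                        scale_max: int = 5) -> np.ndarray:
            """Return each respondent's mean score across items."""
            scored = responses.astype(float).copy()
            # A rating r on a reversed item becomes (scale_max + 1 - r).
            scored[:, reverse_items] = scale_max + 1 - scored[:, reverse_items]
            return scored.mean(axis=1)

        # 4 respondents x 5 items on a 1-5 scale; item index 4 is reverse-coded.
        responses = np.array([
            [4, 5, 4, 4, 2],
            [2, 2, 3, 2, 4],
            [5, 4, 5, 5, 1],
            [3, 3, 3, 4, 3],
        ])
        print(score_scale(responses, reverse_items=[4]))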