10 research outputs found

    A Classification Model for Sensing Human Trust in Machines Using EEG and GSR

    Full text link
    Today, intelligent machines interact and collaborate with humans in a way that demands a greater level of trust between human and machine. A first step towards building intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor that will enable machines to estimate human trust level in real-time. In this paper, two approaches for developing classifier-based empirical trust sensor models are presented that specifically use electroencephalography (EEG) and galvanic skin response (GSR) measurements. Human subject data collected from 45 participants is used for feature extraction, feature selection, classifier training, and model validation. The first approach considers a general set of psychophysiological features across all participants as the input variables and trains a classifier-based model for each participant, resulting in a trust sensor model based on the general feature set (i.e., a "general trust sensor model"). The second approach considers a customized feature set for each individual and trains a classifier-based model using that feature set, resulting in improved mean accuracy but at the expense of an increase in training time. This work represents the first use of real-time psychophysiological measurements for the development of a human trust sensor. Implications of the work, in the context of trust management algorithm design for intelligent machines, are also discussed. (Comment: 20 pages)
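    A minimal sketch of the two approaches in scikit-learn, using synthetic stand-ins for the EEG/GSR features; the feature counts, classifier choice (SVM), and selection method (SelectKBest) are illustrative assumptions, not the paper's exact pipeline:

```python
# Sketch of the two trust-sensor approaches described above.
# Synthetic data stands in for EEG/GSR features; all names are illustrative.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
participants = {p: (rng.normal(size=(120, 40)),        # 40 psychophysiological features
                    rng.integers(0, 2, size=120))      # binary trust/distrust labels
                for p in range(5)}

# Approach 1: one general feature set (here: all features) per participant.
for p, (X, y) in participants.items():
    model = make_pipeline(StandardScaler(), SVC())
    print(f"P{p} general:    {cross_val_score(model, X, y, cv=5).mean():.2f}")

# Approach 2: a customized feature subset selected per participant, which
# costs extra training time but can raise mean accuracy.
for p, (X, y) in participants.items():
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=10),  # per-individual selection
                          SVC())
    print(f"P{p} customized: {cross_val_score(model, X, y, cv=5).mean():.2f}")
```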

    Context-Adaptive Management of Drivers’ Trust in Automated Vehicles

    Full text link
    Automated vehicles (AVs) that intelligently interact with drivers must build a trustworthy relationship with them. A calibrated level of trust is fundamental for the AV and the driver to collaborate as a team. Techniques that allow AVs to perceive drivers' trust from drivers' behaviors and react accordingly are, therefore, needed for context-aware systems designed to avoid trust miscalibrations. This letter proposes a framework for the management of drivers' trust in AVs. The framework is based on the identification of trust miscalibrations (when drivers undertrust or overtrust the AV) and on the activation of different communication styles to encourage or warn the driver when deemed necessary. Our results show that the management framework is effective, increasing (decreasing) trust of undertrusting (overtrusting) drivers, and reducing the average trust miscalibration time periods by approximately 40%. The framework is applicable for the design of SAE Level 3 automated driving systems and has the potential to improve the performance and safety of driver–AV teams. (Funding: U.S. Army CCDC/GVSC, Automotive Research Center, National Science Foundation. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/162571/1/Azevedo-Sa et al. 2020 with doi.pdf)
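    The core loop of such a framework might look like the following sketch, where the trust estimate, capability score, tolerance band, and communication styles are assumed placeholders rather than the letter's actual design:

```python
# Illustrative sketch of a miscalibration-triggered messaging policy;
# thresholds and style names are assumptions for illustration only.
from enum import Enum

class Style(Enum):
    ENCOURAGE = "encourage"   # raise an undertrusting driver's trust
    WARN = "warn"             # lower an overtrusting driver's trust
    NEUTRAL = "neutral"

def manage_trust(estimated_trust: float, av_capability: float,
                 tolerance: float = 0.1) -> Style:
    """Compare the driver's estimated trust with the AV's capability and
    pick a communication style that pushes trust back toward calibration."""
    if estimated_trust < av_capability - tolerance:    # undertrust
        return Style.ENCOURAGE
    if estimated_trust > av_capability + tolerance:    # overtrust
        return Style.WARN
    return Style.NEUTRAL

print(manage_trust(0.4, 0.8))  # Style.ENCOURAGE
```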

    Toward Adaptive Trust Calibration for Level 2 Driving Automation

    Full text link
    Properly calibrated human trust is essential for successful interaction between humans and automation. However, while human trust calibration can be improved by increased automation transparency, too much transparency can overwhelm human workload. To address this tradeoff, we present a probabilistic framework using a partially observable Markov decision process (POMDP) for modeling the coupled trust-workload dynamics of human behavior in an action-automation context. We specifically consider hands-off Level 2 driving automation in a city environment involving multiple intersections where the human chooses whether or not to rely on the automation. We consider automation reliability, automation transparency, and scene complexity, along with human reliance and eye-gaze behavior, to model the dynamics of human trust and workload. We demonstrate that our model framework can appropriately vary automation transparency based on real-time human trust and workload belief estimates to achieve trust calibration. (Comment: 10 pages, 8 figures)
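    The belief estimation at the heart of a POMDP framework like this is a Bayes filter over the hidden trust-workload states; the sketch below uses toy transition and observation matrices, not the paper's identified models:

```python
# Discrete Bayes belief update over coupled trust-workload states, the core
# filtering step behind the POMDP framework above; all matrices are toy
# assumptions, not the paper's identified parameters.
import numpy as np

states = ["lowT_lowW", "lowT_highW", "highT_lowW", "highT_highW"]
T = np.full((4, 4), 0.25)        # transition model P(s'|s), fixed transparency
O = np.array([[0.7, 0.3],        # observation model P(o|s'), where o encodes
              [0.6, 0.4],        # the human's reliance decision (and could be
              [0.2, 0.8],        # extended with eye-gaze features)
              [0.3, 0.7]])

def belief_update(b: np.ndarray, o: int) -> np.ndarray:
    """One POMDP filtering step: predict through T, correct with O, normalize."""
    b_pred = T.T @ b
    b_new = O[:, o] * b_pred
    return b_new / b_new.sum()

b = np.full(4, 0.25)             # uniform prior belief
for obs in [1, 1, 0, 1]:         # observed reliance decisions at intersections
    b = belief_update(b, obs)
print(dict(zip(states, b.round(3))))
```

    A transparency-selection policy would then act on this belief, increasing transparency when the estimated trust is low and backing off when estimated workload is high.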

    Modeling Dispositional and Initial Learned Trust in Automated Vehicles with Predictability and Explainability

    Get PDF
    Technological advances in the automotive industry are bringing automated driving closer to road use. However, one of the most important factors affecting public acceptance of automated vehicles (AVs) is the public's trust in AVs. Many factors can influence people's trust, including perception of risks and benefits, feelings, and knowledge of AVs. This study aims to use these factors to predict people's dispositional and initial learned trust in AVs using a survey study conducted with 1175 participants. For each participant, 23 features were extracted from the survey questions to capture his or her knowledge, perception, experience, behavioral assessment, and feelings about AVs. These features were then used as input to train an eXtreme Gradient Boosting (XGBoost) model to predict trust in AVs. With the help of SHapley Additive exPlanations (SHAP), we were able to interpret the trust predictions of XGBoost and further improve the explainability of the XGBoost model. Compared to traditional regression models and black-box machine learning models, this approach provided a high level of explainability and predictability of trust in AVs simultaneously.
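    The modeling workflow can be sketched with the xgboost and shap libraries on synthetic data shaped like the study's inputs (1175 respondents, 23 features); the labels and hyperparameters here are assumptions:

```python
# Hedged sketch of the XGBoost + SHAP workflow described above, on synthetic
# stand-ins for the 23 survey-derived features; feature values are invented.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(42)
X = rng.normal(size=(1175, 23))                  # 23 features per respondent
y = (X[:, 0] + 0.5 * X[:, 1]                     # toy "trust" labels
     + rng.normal(scale=0.5, size=1175) > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

# SHAP attributes each prediction to individual features, which is what
# makes the boosted-tree model's trust predictions interpretable.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(np.shape(shap_values))                     # per-respondent, per-feature attributions
```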

    Comparison of Machine Learning Techniques on Trust Detection Using EEG

    Get PDF
    Trust is a pillar of society and a fundamental aspect of every relationship. With the use of automated agents in today's workforce growing exponentially, being able to actively monitor the trust level of an individual working with the automation is becoming increasingly important. Humans often have miscalibrated trust in automation and are therefore prone to making costly mistakes. Since deciding to trust or distrust has been shown to correlate with specific brain activity, it is thought that there are EEG signals associated with this decision. Using both a human-human trust and a human-machine trust EEG dataset from past research, within-participant, cross-participant, and cross-task cross-participant trust detection was attempted. Six machine learning models (logistic regression, LDA, QDA, SVM, RFC, and an ANN) were used for each experiment. Multiple within-participant models achieved balanced accuracies greater than 70.00%, but no cross-participant or cross-task cross-participant models did.
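    The six-model comparison maps directly onto scikit-learn; this sketch uses random placeholder features and near-default hyperparameters, so the printed scores are not the thesis results:

```python
# Sketch of the six-model comparison on synthetic stand-in EEG features;
# data shapes and hyperparameters are assumptions, not the thesis settings.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 64))                   # per-epoch EEG feature vectors
y = rng.integers(0, 2, size=400)                 # trust vs. distrust labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

models = {
    "LogReg": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "SVM": SVC(),
    "RFC": RandomForestClassifier(),
    "ANN": MLPClassifier(max_iter=500),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    # Balanced accuracy guards against class-imbalance inflating the score.
    print(name, balanced_accuracy_score(y_te, clf.predict(X_te)))
```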

    Unveiling AI Aversion: Understanding Antecedents and Task Complexity Effects

    Get PDF
    Artificial Intelligence (AI) has generated significant interest due to its potential to augment human intelligence. However, user attitudes towards AI are diverse, with some individuals embracing it enthusiastically while others harbor concerns and actively avoid its use. This two-essay dissertation explores the reasons behind user aversion to AI. In the first essay, I develop a concise research model to explain users' AI aversion based on the theory of effective use and adaptive structuration theory. I then employ an online experiment to test my hypotheses empirically. The multigroup analysis by structural equation modeling shows that users' perceptions of human dissimilarity, AI bias, and social influence strongly drive AI aversion. Moreover, I find a significant difference between the simple and the complex task groups. This study reveals why users avoid using AI by systematically examining the factors related to technology, user, task, and environment, thus making a significant contribution to the emerging field of AI aversion research. Next, while trust and distrust have been recognized as influential factors shaping users' attitudes towards IT artifacts, their intricate relationship with task characteristics and their impact on AI aversion remain largely unexplored. In my second essay, I conduct an online randomized controlled experiment on Amazon Mechanical Turk to bridge this critical research gap. My comprehensive analytic approach, including structural equation modeling (SEM), ANOVA, and PROCESS conditional analysis, allowed me to shed light on the intricate web of factors influencing users' AI aversion. I discovered that distrust and trust mediate between task complexity and AI aversion. Moreover, this study unveiled intriguing differences in these mediated relationships between subjective and objective task groups. Specifically, my findings demonstrate that, for objective tasks, task complexity can significantly increase aversion by reducing trust and significantly decrease aversion by reducing distrust. In contrast, for subjective tasks, task complexity only significantly increases aversion by enhancing distrust. By considering various task characteristics and recognizing trust and distrust as vital mediators, my research not only pushes the boundaries of the human-AI literature but also contributes significantly to the field of AI aversion research.
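    The mediation claim (task complexity acting on aversion through trust or distrust) is typically quantified as an indirect effect a*b with a bootstrap confidence interval; the following is a toy sketch on synthetic data, not the dissertation's PROCESS analysis:

```python
# Toy bootstrap of the indirect (mediated) effect tested in the second essay:
# task complexity -> distrust -> AI aversion. Data and coefficients are
# synthetic illustrations, not the dissertation's estimates.
import numpy as np

rng = np.random.default_rng(1)
n = 500
complexity = rng.integers(0, 2, size=n).astype(float)   # simple vs. complex task
distrust = 0.5 * complexity + rng.normal(size=n)        # a-path
aversion = (0.6 * distrust + 0.1 * complexity           # b-path and direct c'-path
            + rng.normal(size=n))

def indirect_effect(idx: np.ndarray) -> float:
    # a-path: distrust regressed on complexity
    a = np.polyfit(complexity[idx], distrust[idx], 1)[0]
    # b-path: aversion regressed on distrust, controlling for complexity
    Xd = np.column_stack([np.ones(len(idx)), distrust[idx], complexity[idx]])
    b = np.linalg.lstsq(Xd, aversion[idx], rcond=None)[0][1]
    return a * b

boots = [indirect_effect(rng.integers(0, n, n)) for _ in range(2000)]
print(np.percentile(boots, [2.5, 97.5]))                # bootstrap CI for a*b
```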

    Helping people see their place in community immunity: a dynamic web-based visualization

    Get PDF
    Community immunity, sometimes referred to as herd immunity, is an important and complex concept in public health that is not always well understood by members of the general public. This lack of understanding is particularly pronounced among people who are vaccine hesitant. Previous research has suggested that decisions about whether or not to vaccinate oneself or one's child are primarily driven by benefits and risks to the individual, with community-level benefits being less compelling. However, little research has identified ways to help people understand how community immunity works, and there has also been relatively little research investigating the role of emotion in risk perceptions, knowledge, and behavior relevant to community immunity. Visualization is a powerful mechanism for communicating information and data, including information and data about risk, because it enables rapid presentation of complex concepts in understandable, compelling ways. Visualization may also influence emotions. The first part of this work aimed to systematically review interventions designed to communicate to members of the general public what community immunity is and how it works. This systematic review demonstrated that there is relatively little evidence about the effects of communicating about community immunity. A number of interventions for conveying the concept are available online, but very few have been evaluated for their effects, and no studies evaluated the effects of interventions on emotions. The second part aimed to design a web application about community immunity and optimize it based on users' cognitive and emotional responses. In our application, people build their own community by creating an avatar representing themselves and eight other avatars representing people around them, for example, their family or coworkers. The application integrates these avatars in a two-minute animated visualization showing how different parameters (e.g., vaccine coverage and contact within communities) influence community immunity. This study found that applications with personalized avatars may help people understand their individual role in population health. Our application showed promise as a method of communicating the relationship between individual behaviour and community health, and it offers a potential roadmap for designing health communication materials for complex topics such as community immunity. The third and final part of this work aimed to evaluate the effects of our online application on risk perception, emotions, trust in information, knowledge, and intentions regarding vaccination. In a large, factorial, online randomized controlled trial, our application influenced all outcomes in the desired directions, particularly among people with more collectivist worldviews. This work is increasingly relevant as countries around the world carry out COVID-19 vaccination campaigns. 
    Accordingly, our application is currently being used in an online decision aid to support people making evidence-informed decisions about COVID-19 vaccines for themselves or their children
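    The mechanism the visualization teaches can be captured in a few lines: in a toy SIR model (an assumption here, not the application's actual animation logic), raising vaccine coverage past roughly 1 - 1/R0 collapses the outbreak:

```python
# Minimal SIR-with-vaccination sketch of the idea the application visualizes:
# how vaccine coverage changes whether an outbreak can spread through a
# community. Parameters are illustrative, not the application's model.
def epidemic_size(coverage: float, r0: float = 3.0, n: int = 10_000,
                  days: int = 300) -> float:
    """Fraction of the community eventually infected, given vaccine coverage."""
    gamma = 1 / 10                        # daily recovery rate
    beta = r0 * gamma                     # daily effective contact rate
    s = n * (1 - coverage) - 1            # susceptible (unvaccinated) people
    i, r = 1.0, 0.0                       # one initial infection
    for _ in range(days):
        new_inf = beta * s * i / n
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
    return r / n

for cov in [0.0, 0.5, 0.67, 0.9]:         # herd threshold ~ 1 - 1/R0 = 0.67
    print(f"coverage {cov:.0%}: {epidemic_size(cov):.1%} infected")
```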

    Improving Collaboration Between Drivers and Automated Vehicles with Trust Processing Methods

    Full text link
    Trust has gained attention in the Human-Robot Interaction (HRI) field, as it is considered an antecedent of people's reliance on machines. In general, people are likely to rely on and use machines they trust, and to refrain from using machines they do not trust. Recent advances in robotic perception technologies open paths for the development of machines that can be aware of people's trust by observing their behaviors. This dissertation explores the role of trust in the interactions between humans and robots, particularly Automated Vehicles (AVs). Novel methods and models are proposed for perceiving and processing drivers' trust in AVs and for determining both humans' natural trust and robots' artificial trust. Two high-level problems are addressed: (1) avoiding or reducing miscalibrations of drivers' trust in AVs, and (2) using trust to dynamically allocate tasks between a human and a robot that collaborate. A complete solution is proposed for the problem of avoiding or reducing trust miscalibrations. This solution combines methods for estimating and influencing drivers' trust through interactions with the AV. Three main contributions stem from that solution: (i) the characterization of risk factors that affect drivers' trust in AVs, which provided theoretical evidence for the development of a linear model of driver trust in AVs; (ii) the development of a new method for real-time trust estimation, which leveraged that linear model in a Kalman-filter-based approach able to provide numerical trust estimates from the processing of drivers' behavioral measurements; and (iii) the development of a new method for trust calibration, which identifies trust miscalibration instances by comparing drivers' trust in the AV with the AV's capabilities, and triggers messages from the AV to the driver. As the results show, these messages are effective for encouraging drivers who undertrust the AV and warning drivers who overtrust its capabilities. Although the development of a trust-based solution for dynamically allocating tasks between a human and a robot (i.e., the second high-level problem addressed in this dissertation) remains an open problem, we take a step forward in that direction. The fourth contribution of this dissertation is the development of a unified bi-directional model for predicting natural and artificial trust. This trust model is based on mathematical representations of both the trustee agent's capabilities and the capabilities required for the execution of a task. Trust emerges from comparisons between agent capabilities and task requirements, roughly following this logic: if a trustee agent's capabilities exceed the requirements for executing a certain task, then the agent can be highly trusted to execute that task; conversely, if the trustee agent's capabilities fall short of the task's requirements, trust should be low. In this model, the agent's capabilities are represented by random variables that are dynamically updated over interactions between the trustor and the trustee, whenever the trustee succeeds or fails in the execution of a task. These capability representations allow for the numerical computation of a human's trust or a robot's trust, represented by the probability that a given trustee agent will execute a given task successfully. (PhD dissertation, Robotics, University of Michigan, Horace H. 
    Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169615/1/azevedo_1.pd)
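    A hedged sketch of the bi-directional trust computation the abstract outlines, with the capability belief modeled as a Beta random variable updated on task outcomes; the distributional choice and parameters are illustrative assumptions, not the dissertation's formulation:

```python
# Capability-vs-requirement trust sketch: the trustee's capability is a
# random variable updated on observed task outcomes, and trust is the
# probability that capability meets the task's requirement. The Beta
# parameterization here is an assumption for illustration.
from scipy import stats

class CapabilityBelief:
    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha, self.beta = alpha, beta   # Beta prior over capability in [0, 1]

    def update(self, success: bool) -> None:
        """Shift the capability belief after observing a task outcome."""
        if success:
            self.alpha += 1
        else:
            self.beta += 1

    def trust(self, requirement: float) -> float:
        """P(capability >= requirement): high when capabilities exceed the
        task's requirement, low when they fall short."""
        return 1 - stats.beta.cdf(requirement, self.alpha, self.beta)

belief = CapabilityBelief()
for outcome in [True, True, False, True]:     # observed task executions
    belief.update(outcome)
print(belief.trust(requirement=0.6))
```

    Because the same comparison works whichever agent is the trustor, the sketch applies to a human's natural trust in a robot and to a robot's artificial trust in a human alike.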

    Internet and Biometric Web Based Business Management Decision Support

    Get PDF
    Internet and Biometric Web Based Business Management Decision Support. MICROBE MOOC material prepared under IO1/A5, "Development of the MICROBE personalized MOOCs content and teaching materials." Prepared by A. Kaklauskas, A. Banaitis, and I. Ubarte, Vilnius Gediminas Technical University, Lithuania. Project No. 2020-1-LT01-KA203-07810

    Applied and laboratory-based autonomic and neurophysiological monitoring during sustained attention tasks

    Get PDF
    Fluctuations during sustained attention can cause momentary lapses in performance, which can have a significant impact on safety and wellbeing. However, it is less clear how unrelated tasks impact current task processes, and whether potential disturbances can be detected by autonomic and central nervous system measures in naturalistic settings. In a series of five experiments, I sought to investigate how prior attentional load impacts semi-naturalistic tasks of sustained attention, and whether neurophysiological and psychophysiological monitoring of continuous task processes and performance could capture attentional lapses. The first experiment explored various non-invasive electrophysiological and subjective methods during multitasking. The second experiment employed a manipulation of multitasking, task switching, to attempt to unravel the negative lasting impacts of multitasking on neural oscillatory activity, while the third experiment employed a similar paradigm in a semi-naturalistic environment of simulated driving. The fourth experiment explored the feasibility of measuring changes in autonomic processing during a naturalistic sustained monitoring task, autonomous driving, while the fifth experiment investigated the visual demands and acceptability of a biologically based monitoring system. The results revealed several findings. The first experiment demonstrated that only self-report ratings were able to successfully disentangle attentional load during multitasking, while the second and third experiments revealed deficits in parieto-occipital alpha activity and continuous performance depending on the attentional load of a previous unrelated task. The fourth experiment demonstrated increased sympathetic activity and a smaller distribution of fixations during an unexpected event in autonomous driving, and the fifth experiment confirmed the acceptability of a biologically based monitoring system, although further research is needed to unpick its effects on attention. Overall, the results of this thesis provide insight into how autonomic and central processes manifest during semi-naturalistic sustained attention tasks, and they support the use of a neuro- or biofeedback system to improve safety and wellbeing