16 research outputs found

    Impact of agent reliability and predictability on trust in real-time human-agent collaboration

    Trust is a prerequisite for effective human-agent collaboration. While past work has studied how trust relates to an agent's reliability, it has mainly been carried out in turn-based scenarios rather than real-time ones. Previous research identified the performance of an agent as a key factor influencing trust. In this work, we posit that an agent's predictability also plays an important role in the trust relationship, and that it may be observed through users' interactions. We designed a 2x2 within-groups experiment with two baseline conditions: (1) no agent (users' individual performance), and (2) a near-flawless agent (upper bound). Participants took part in an interactive aiming task in which they collaborated with agents that varied in predictability while being controlled for performance. Our results show that agents whose behaviours are easier to predict have a more positive impact on task performance, reliance, and trust, while reducing cognitive workload. In addition, we modelled the human-agent trust relationship and demonstrated that users' trust ratings can be reliably predicted from real-time interaction data. This work seeks to pave the way for the development of trust-aware agents capable of adapting and responding more appropriately to users.
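    The trust-prediction step described above can be pictured as supervised regression from interaction features to trust ratings. The sketch below is illustrative only: the feature (reliance rate), the data, and the single-predictor least-squares model are assumptions for demonstration, not the study's actual model or data.

```python
# Illustrative sketch: predicting a self-reported trust rating from a
# real-time interaction feature via ordinary least squares.
# The feature and data are hypothetical, not taken from the paper.

def fit_ols(xs, ys):
    """Fit y = a*x + b by least squares for a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical data: fraction of trials on which a user relied on the
# agent, paired with that user's trust rating (1-7 scale).
reliance = [0.2, 0.4, 0.5, 0.7, 0.9]
trust    = [2.0, 3.1, 3.9, 5.2, 6.4]

a, b = fit_ols(reliance, trust)
predicted = a * 0.6 + b  # predicted trust for a user with 60% reliance
```

    In practice a model like this would use many interaction features and be validated on held-out participants; the single-feature fit just shows the shape of the mapping from behaviour to trust.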

    Group trust dynamics during a risky driving experience in a Tesla Model X

    The growing concern about the risk and safety of autonomous vehicles (AVs) has made it vital to understand driver trust and behavior when operating AVs. While research has uncovered human factors and design issues based on individual driver performance, there remains a lack of insight into how trust in automation evolves in groups of people who face risk and uncertainty while traveling in AVs. To this end, we conducted a naturalistic experiment with groups of participants who were encouraged to engage in conversation while riding a Tesla Model X on campus roads. Our methodology was uniquely suited to uncover these issues through naturalistic interaction by groups in the face of a risky driving context. Conversations were analyzed, revealing several themes pertaining to trust in automation: (1) collective risk perception, (2) experimenting with automation, (3) group sense-making, (4) human-automation interaction issues, and (5) benefits of automation. Our findings highlight the untested and experimental nature of AVs and confirm serious concerns about the safety and readiness of this technology for on-road use. The process of determining appropriate trust and reliance in AVs will therefore be essential for drivers and passengers to ensure the safe use of this experimental and continuously changing technology. Revealing insights into social group–vehicle interaction, our results speak to the potential dangers and ethical challenges with AVs as well as provide theoretical insights on group trust processes with advanced technology.

    Lessons Learned About Designing and Conducting Studies From HRI Experts

    The field of human-robot interaction (HRI) research is multidisciplinary and requires researchers to understand diverse fields, including computer science, engineering, informatics, philosophy, psychology, and other disciplines. However, it is hard to be an expert in everything. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors who are experts in areas such as real-world studies, empirical lab studies, questionnaire design, interviews, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present 1) workshop attendees' feedback about the workshop and 2) lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing their suggestions. The main contribution of this paper is the lessons learned section, which the workshop participants helped shape based on what they discovered during the workshop. We organize the lessons learned into four themes, reflecting the areas of the papers submitted to the workshop: 1) improving study design for HRI, 2) how to work with participants, especially children, 3) making the most of the study's and robot's limitations, and 4) how to collaborate well across fields. These themes include practical tips and guidelines to help researchers learn about fields of HRI research with which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid common mistakes and pitfalls in their research.

    Dopamine Beta Hydroxylase Genotype Identifies Individuals Less Susceptible to Bias in Computer-Assisted Decision Making

    Computerized aiding systems can assist human decision makers in complex tasks but can impair performance when they provide incorrect advice that humans erroneously follow, a phenomenon known as "automation bias." The extent to which people exhibit automation bias varies significantly and may reflect inter-individual variation in the capacity of working memory and the efficiency of executive function, both of which are highly heritable and under dopaminergic and noradrenergic control in prefrontal cortex. The dopamine beta hydroxylase (DBH) gene is thought to regulate the differential availability of dopamine and norepinephrine in prefrontal cortex. We therefore examined decision-making performance under imperfect computer aiding in 100 participants performing a simulated command and control task. Based on two single nucleotide polymorphisms (SNPs) of the DBH gene, −1041 C/T (rs1611115) and 444 G/A (rs1108580), participants were divided into groups of low and high DBH enzyme activity, where low enzyme activity is associated with greater dopamine relative to norepinephrine levels in cortex. Compared to those in the high DBH enzyme activity group, individuals in the low DBH enzyme activity group were more accurate and faster in their decisions when incorrect advice was given, and they verified automation recommendations more frequently. These results indicate that a gene that regulates relative prefrontal cortex dopamine availability, DBH, can identify those individuals who are less susceptible to bias in using computerized decision-aiding systems.

    Transforming growth factor-β in breast cancer: too much, too late

    The contribution of transforming growth factor (TGF)β to breast cancer has been studied from a myriad of perspectives since seminal studies more than two decades ago. Although TGFβ's action as a canonical tumor suppressor in the breast is not in doubt, there is compelling evidence that TGFβ is frequently subverted in a malignant plexus that drives breast cancer. New knowledge that TGFβ regulates the DNA damage response, which underlies cancer therapy, reveals another facet of TGFβ biology that impedes cancer control. Too much TGFβ, too late in cancer progression, is the fundamental motivation for pharmaceutical inhibition.

    Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents

    With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are not aware of the exact performance levels of automated algorithms often experience a mismatch in expectations. Consequently, they will often place either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, or trust calibration, remains a fundamental challenge in research investigating the use of automation. Due to the context-dependent nature of trust, universal measures of trust have not been established. Trust is a difficult construct to investigate because even the act of reflecting on how much a person trusts a certain agent can change the perception of that agent. We hypothesized that electroencephalograms (EEGs) would be able to provide such a universal index of trust without the need for self-report. In this work, EEGs were recorded for 21 participants (mean age = 22.1; 13 females) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm's credibility and reliability were manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and observational error positivity (oPe), are potential candidates for monitoring computer algorithm performance. Our findings demonstrate that (1) it is possible to reliably elicit both the oERN and oPe while participants monitored these computer algorithms, (2) the oPe, as opposed to the oERN, significantly distinguished between high and low reliability algorithms, and (3) the oPe significantly correlated with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring for examining trust in computer algorithms.
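    Components such as the oERN and oPe are conventionally extracted by averaging EEG epochs time-locked to the observed errors, so that zero-mean noise cancels and the event-locked deflection remains. The sketch below illustrates that averaging step on synthetic data; the sampling rate, epoch count, component latency, and measurement window are all illustrative assumptions, not values from the study.

```python
import numpy as np

# Minimal sketch of event-related potential (ERP) extraction: average EEG
# epochs time-locked to observed algorithm errors. Shapes, sampling rate,
# and analysis window are illustrative assumptions.

rng = np.random.default_rng(0)
fs = 250                       # sampling rate in Hz (assumed)
n_epochs, n_samples = 40, fs   # 40 error trials, 1 s per epoch

# Synthetic data: zero-mean noise plus a positive deflection peaking
# ~300 ms post-event, standing in for an oPe-like component.
t = np.arange(n_samples) / fs
component = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
epochs = rng.normal(0.0, 2.0, (n_epochs, n_samples)) + component

erp = epochs.mean(axis=0)      # averaging attenuates the random noise

# Mean amplitude in a 250-350 ms window, a common ERP amplitude measure
win = (t >= 0.25) & (t <= 0.35)
pe_amplitude = erp[win].mean()
```

    With 40 epochs the noise standard deviation in the average drops by a factor of sqrt(40), which is why the windowed mean recovers the embedded deflection despite per-trial noise comparable to the signal.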

    Politeness In Machine-Human And Human-Human Interaction

    Computers communicate with humans in ways that increasingly resemble interactions between humans. Nuances in expression and responses to human behavior are becoming more sophisticated, approaching those of human-human interaction. The question arises whether we want systems eventually to behave like humans, or whether systems should, even when much more developed, still adhere to rules that differ from those governing interpersonal communication. The panel addresses this issue from various perspectives, aiming to gain insight into the direction in which machine-human communication, and the etiquette implemented in such systems, should develop.

    A Meta-Analysis Of Factors Affecting Trust In Human-Robot Interaction

    Objective: We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI).
    Background: To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice.
    Method: Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes.
    Results: The overall correlational effect size for trust was r̄ = +0.26, with an experimental effect size of d̄ = +0.71. The effects of human, robot, and environmental characteristics were examined, with a special evaluation of the robot dimensions of performance- and attribute-based factors. Robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role.
    Conclusion: Factors related to the robot itself, specifically its performance, had the greatest current association with trust, and environmental factors were moderately associated. There was little evidence for effects of human-related factors.
    Application: The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified. © 2011, Human Factors and Ergonomics Society.
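    As a point of reference for reading the two summary statistics above, a correlation effect size r can be converted to Cohen's d with the standard formula d = 2r / sqrt(1 − r²). Note this conversion assumes equal group sizes, so it need not reproduce the meta-analytic d̄ reported above, which comes from a different set of studies.

```python
import math

def r_to_d(r):
    """Convert a correlation effect size r to Cohen's d using the
    standard conversion d = 2r / sqrt(1 - r^2) (assumes equal group
    sizes)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# The correlational summary r = +0.26 corresponds to d ~= 0.54, smaller
# than the experimental summary d = +0.71 reported in the abstract.
d_from_r = r_to_d(0.26)
```

    The gap between the converted value and the reported experimental effect size is expected: the correlational and experimental estimates pool different studies and designs.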