
    Improving Human-Machine Collaboration Through Transparency-based Feedback – Part I: Human Trust and Workload Model

    In this paper, we establish a partially observable Markov decision process (POMDP) model framework that captures dynamic changes in human trust and workload in contexts involving interactions between humans and intelligent decision-aid systems. We use a reconnaissance mission study to elicit dynamic changes in human trust and workload with respect to the system's reliability and user interface transparency, as well as the presence or absence of danger. We use human subject data to estimate the transition and observation probabilities of the POMDP model and analyze the trust-workload behavior of humans. Our results indicate that higher transparency is more likely to increase human trust when existing trust is low, but is also more likely to decrease trust when it is already high. Furthermore, we show that high transparency is always likely to increase the human's workload. In our companion paper, we use this estimated model to develop an optimal control policy that varies system transparency to affect human trust-workload behavior towards improving human-machine collaboration.
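    Below is a minimal illustrative sketch (not the authors' implementation) of the discrete POMDP structure this abstract describes: joint trust-workload states, transparency as the control action, and an observed human response. The state labels and uniform probabilities are placeholder assumptions standing in for the quantities the paper estimates from human subject data.

    import numpy as np

    # Hypothetical joint trust-workload states, transparency actions, and
    # observable human responses.
    STATES = ["trust_lo/load_lo", "trust_lo/load_hi",
              "trust_hi/load_lo", "trust_hi/load_hi"]
    ACTIONS = ["low_transparency", "high_transparency"]
    OBS = ["comply", "override"]

    n_s, n_o = len(STATES), len(OBS)

    # T[a][s, s'] and O[a][s', o] would be estimated from subject data;
    # the uniform values here are stand-ins.
    T = {a: np.full((n_s, n_s), 1.0 / n_s) for a in ACTIONS}
    O = {a: np.full((n_s, n_o), 1.0 / n_o) for a in ACTIONS}

    def belief_update(b, action, obs):
        """Standard Bayes filter over the trust-workload belief state."""
        o = OBS.index(obs)
        b_pred = T[action].T @ b          # predict: sum_s T[s, s'] * b[s]
        b_new = O[action][:, o] * b_pred  # weight by observation likelihood
        return b_new / b_new.sum()        # renormalize

    b0 = np.full(n_s, 1.0 / n_s)          # uniform prior over states
    b1 = belief_update(b0, "high_transparency", "comply")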

    Reward Shaping for Building Trustworthy Robots in Sequential Human-Robot Interaction

    Trust-aware human-robot interaction (HRI) has received increasing research attention, as trust has been shown to be a crucial factor for effective HRI. Research in trust-aware HRI has uncovered a dilemma: maximizing task rewards often decreases human trust, while maximizing human trust compromises task performance. In this work, we address this dilemma by formulating the HRI process as a two-player Markov game and using the reward-shaping technique to improve human trust while limiting the performance loss. Specifically, we show that when the shaping reward is potential-based, the performance loss is bounded by the potential functions evaluated at the final states of the Markov game. We apply the proposed framework to the experience-based trust model, resulting in a linear program that can be efficiently solved and deployed in real-world applications. We evaluate the framework in a simulation scenario where a human-robot team performs a search-and-rescue mission. The results demonstrate that the framework successfully modifies the robot's optimal policy, enabling it to increase human trust at minimal task performance cost.
    Comment: In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
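    The potential-based shaping this abstract refers to is the classic construction of Ng et al. (1999). A minimal sketch follows, assuming a hypothetical potential function phi that scores states by estimated human trust; it is an illustration of the general technique, not the paper's specific formulation.

    GAMMA = 0.95

    def phi(state):
        """Hypothetical potential function, e.g. an estimate of human trust."""
        return state.get("trust", 0.0)

    def shaped_reward(task_reward, state, next_state, gamma=GAMMA):
        # Potential-based shaping term F(s, s') = gamma * phi(s') - phi(s).
        # Along a trajectory these terms telescope, so the total shaping added
        # depends only on the potentials at the initial and final states,
        # which is what bounds the task performance loss.
        return task_reward + gamma * phi(next_state) - phi(state)

    r = shaped_reward(1.0, {"trust": 0.4}, {"trust": 0.6})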

    Optimising Outcomes of Human-Agent Collaboration using Trust Calibration

    As collaborative agents are deployed in everyday environments and the workforce, user trust in these agents becomes critical to consider. Trust affects user decision making, rendering it an essential component when designing for successful Human-Agent Collaboration (HAC). The purpose of this work is to investigate the relationship between user trust and decision making, with the overall aim of providing a trust calibration methodology to achieve the goals and optimise the outcomes of HAC. Recommender systems are used as a testbed for investigation, offering insight into collaboration in dyadic decision domains. Four studies are conducted, spanning in-person, online, and simulation experiments. The first study provides evidence of a relationship between user perception of a collaborative agent and trust. Outcomes of the second study demonstrate that initial trust can be used to predict task outcome during HAC, with Signal Detection Theory (SDT) introduced as a method to interpret user decision making in-task. The third study provides evidence that implementing different features within a single agent's interface influences user perception and trust, subsequently impacting HAC outcomes. Finally, a computational trust calibration methodology harnessing a Partially Observable Markov Decision Process (POMDP) model and SDT is presented and assessed, providing an improved understanding of the mechanisms governing user trust and its relationship with decision making and collaborative task performance during HAC. The contributions of this work address important gaps in the HAC literature. The implications of the proposed methodology and its application to alternative domains are identified and discussed.
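    The SDT measures used to interpret in-task decisions are standard; here is a minimal sketch of computing sensitivity (d') and criterion (c) from hit and false-alarm counts. The function name and the counts in the usage line are made-up examples, not values from the thesis.

    from statistics import NormalDist

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        """Sensitivity d' and criterion c from hit and false-alarm counts."""
        z = NormalDist().inv_cdf
        # Clip rates away from 0 and 1 (a common 1/2N correction) so the
        # inverse normal CDF stays finite.
        eps = 0.5 / max(hits + misses, false_alarms + correct_rejections)
        hit_rate = min(max(hits / (hits + misses), eps), 1 - eps)
        fa_rate = min(max(false_alarms / (false_alarms + correct_rejections),
                          eps), 1 - eps)
        d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
        return d_prime, criterion

    d, c = sdt_measures(hits=40, misses=10, false_alarms=5,
                        correct_rejections=45)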

    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Human-swarm interaction (HSI) involves a number of human factors that affect human behaviour throughout the interaction. As the technologies used in HSI advance, it becomes tempting to increase the level of swarm autonomy to reduce the workload on humans. Yet the potential negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims to trade off these effects by changing the level of autonomy within the interaction when required, with mixed initiatives combining human preferences and the automation's recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions: how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the cognitive states of the human are. We explore open challenges that hamper the development of effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they give HSI designers an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics.
    Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia.
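    To make the combination question concrete, here is a deliberately simple arbitration sketch: a convex combination of the human's preferred autonomy level and the automation's recommendation, weighted by the automation's confidence. The rule, function name, and parameters are hypothetical illustrations, not a scheme proposed in the paper.

    def select_autonomy_level(human_pref, auto_rec, auto_confidence,
                              levels=(0, 1, 2, 3)):
        """Blend a human-preferred autonomy level (an index into `levels`,
        0 = manual, 3 = full autonomy) with the automation's recommendation,
        weighting the automation more as its confidence grows."""
        w = max(0.0, min(1.0, auto_confidence))        # clamp to [0, 1]
        blended = (1 - w) * human_pref + w * auto_rec  # convex combination
        return levels[round(blended)]

    level = select_autonomy_level(human_pref=1, auto_rec=3,
                                  auto_confidence=0.7)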