    Formalising trust as a computational concept

    Trust is a judgement of unquestionable utility - as humans we use it every day of our lives. However, trust has suffered from an imperfect understanding, a plethora of definitions, and informal use in the literature and in everyday life. It is common to say "I trust you," but what does that mean? This thesis provides a clarification of trust. We present a formalism for trust which provides us with a tool for precise discussion. The formalism is implementable: it can be embedded in an artificial agent, enabling the agent to make trust-based decisions. Its applicability to the domain of Distributed Artificial Intelligence (DAI) is also discussed. The thesis presents a testbed populated by simple trusting agents which substantiates the utility of the formalism. The formalism provides a step towards a proper understanding and definition of human trust. A further contribution of the thesis is its detailed exploration of possibilities for future work in the area.
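    The abstract notes that the formalism is implementable and can be embedded in an artificial agent. As a rough illustration of what such an embedding might look like, the sketch below computes a situational trust value from utility, importance, and general trust, and compares it against a cooperation threshold. The class, attribute names, and exact formulas are assumptions made for this example, not necessarily the thesis's own formalism.

```python
from dataclasses import dataclass

@dataclass
class TrustingAgent:
    """Illustrative trusting agent. The names and formulas below are
    assumptions for this sketch, not the thesis's exact formalism."""
    general_trust: dict[str, float]  # baseline trust in each known agent

    def situational_trust(self, other: str, utility: float, importance: float) -> float:
        # Situational trust grows with the utility and importance of the
        # situation, scaled by general trust in the other agent.
        return utility * importance * self.general_trust.get(other, 0.0)

    def cooperation_threshold(self, risk: float, competence: float) -> float:
        # Higher perceived risk raises the threshold; higher perceived
        # competence of the other agent lowers it.
        return risk / max(competence, 1e-6)

    def decide_to_cooperate(self, other: str, utility: float, importance: float,
                            risk: float, competence: float) -> bool:
        # Cooperate only if situational trust clears the threshold.
        return (self.situational_trust(other, utility, importance)
                >= self.cooperation_threshold(risk, competence))

agent = TrustingAgent(general_trust={"b": 0.6})
print(agent.decide_to_cooperate("b", utility=0.8, importance=0.9,
                                risk=0.3, competence=0.7))
```

    Embedding the decision rule this way lets an agent's willingness to cooperate vary with both the stakes of the situation and its accumulated trust in a particular partner.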

    Examining Trust and Reliance in Collaborations between Humans and Automated Agents

    Human trust and reliance in artificial agents is critical to effective collaboration in mixed human-computer teams. Understanding the conditions under which humans trust and rely upon automated agent recommendations is important, as trust is one of the mechanisms that allow people to interact effectively with a variety of teammates. We conducted exploratory research to investigate how personality characteristics and uncertainty conditions affect human-machine interactions. Participants were asked to determine whether two images depicted the same or different people, while simultaneously considering the recommendation of an automated agent. Results of this effort demonstrated a correlation between judgements of agent expertise and user trust. In addition, we found that in conditions of both high and low uncertainty, the decision outcomes of participants moved significantly in the direction of the agent's recommendation. Differences in reported trust in the agent were observed between individuals with low and high levels of extraversion.

    Social Relationships and Trust

    While social relationships play an important role for individuals to cope with missing market institutions, they also limit individuals' range of trading partners. This paper aims at understanding the determinants of trust at various social distances when information asymmetries are present. Among participants from an informal housing area in Cairo we find that the increase in trust following a reduction in social distance comes from the fact that trustors are much more inclined to follow their beliefs when interacting with their friend. When interacting with an ex-ante unknown agent instead, the decision to trust is mainly driven by social preferences. Nevertheless, trustors underestimate their friend's intrinsic motivation to cooperate, leading to a loss in social welfare. We relate this to the agents' inability to signal their trustworthiness in an environment characterized by strong social norms. Keywords: trust, hidden action, social distance, solidarity, reciprocity, economic development.

    Quantifying Uncertainty

    Many of us interact with automated agents every day (e.g., Microsoft's Cortana, Apple's Siri, Amazon's Alexa), and decision-makers at all levels of organizations utilize automated systems that are designed to enable better, faster, and more effective decisions. Understanding the conditions under which humans trust and rely upon automated agents' recommendations is important, as trust is one of the mechanisms that allow humans to interact effectively with a variety of teammates. Reliance and trust in automated systems is changing the way we process information, make decisions, and perform tasks. We conducted an experiment to determine the conditions and personality characteristics that affect human-machine interactions. Our analysis focused on the use of an automated decision aid in conditions of uncertainty. We also examined how perceptions of an automated decision aid's ability related to human trust. Last, we explored how extraversion, a broad factor that encompasses the tendency to be energetic, affiliative, and dominant, related to perceptions of trust in the automated agent. We observed that in conditions of uncertainty, human decision outcomes moved in accordance with the recommendation of the agent. In addition, we found a correlation between perceptions of ability and user trust in the automated agent.

    Ethical trust and social moral norms simulation : a bio-inspired agent-based modelling approach

    The understanding of the micro-macro link is an urgent need in the study of social systems. The complex adaptive nature of social systems adds to the challenges of understanding social interactions and system feedback, and presents substantial scope and potential for extending the frontiers of computer-based research tools such as simulations and agent-based technologies. In this project, we seek to address key research questions concerning the interplay of ethical trust at the individual level and the development of collective social moral norms, as a representative sample of the larger micro-macro link in social systems. We outline our computational model of ethical trust (CMET), informed by research findings from trust, machine ethics, and neuroscience. Guided by the CMET architecture, we discuss key implementation ideas for simulations of ethical trust and social moral norms.
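    Since the abstract describes an agent-based model linking individual-level ethical trust to collective moral norms, a minimal simulation sketch may help make the micro-macro feedback loop concrete. The agent attributes, update rules, and parameter values below are illustrative assumptions and do not reproduce the CMET architecture itself.

```python
import random

class EthicalAgent:
    """Minimal agent for a norm-emergence sketch; attributes and update
    rules are illustrative assumptions, not the CMET architecture."""
    def __init__(self):
        self.ethical_trust = random.uniform(0.0, 1.0)  # propensity to cooperate

    def act(self, norm: float) -> bool:
        # An agent cooperates when its own ethical trust, nudged toward the
        # prevailing social norm, exceeds a random draw.
        return random.random() < 0.5 * self.ethical_trust + 0.5 * norm

    def update(self, partner_cooperated: bool) -> None:
        # Reinforce ethical trust after cooperative encounters, erode it otherwise.
        delta = 0.05 if partner_cooperated else -0.05
        self.ethical_trust = min(1.0, max(0.0, self.ethical_trust + delta))

def simulate(n_agents: int = 100, rounds: int = 200) -> float:
    agents = [EthicalAgent() for _ in range(n_agents)]
    norm = 0.5  # macro-level moral norm, seeded at indifference
    for _ in range(rounds):
        random.shuffle(agents)
        acts = []
        for a, b in zip(agents[::2], agents[1::2]):
            a_coop, b_coop = a.act(norm), b.act(norm)
            a.update(b_coop)
            b.update(a_coop)
            acts += [a_coop, b_coop]
        # The collective norm tracks the observed frequency of cooperation,
        # closing the micro-macro feedback loop described in the abstract.
        norm = 0.9 * norm + 0.1 * (sum(acts) / len(acts))
    return norm

print(f"Emergent cooperation norm after simulation: {simulate():.2f}")
```

    The point of the sketch is the feedback structure: individual trust updates shape aggregate behaviour, and the aggregate norm in turn feeds back into each agent's next decision.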

    Analysis of Human and Agent Characteristics on Human-Agent Team Performance and Trust

    The human-agent team represents a new construct in how the United States Department of Defense is orchestrating mission planning and mission accomplishment. In order for mission planning and accomplishment to be successful, several requirements must be met: a firm understanding of human trust in automated agents, how human and automated agent characteristics influence human-agent team performance, and how humans behave. This thesis applies a combination of modeling techniques and human experimentation to understand the aforementioned concepts. The modeling techniques used include static modeling in SysML activity diagrams and dynamic modeling of both human and agent behavior in IMPRINT. Additionally, this research included human experimentation in a dynamic, event-driven teaming environment known as Space Navigator. Both the modeling and the experimentation show that the agent's reliability has a significant effect on human-agent team performance. Additionally, this research found that the age, gender, and education level of the human user are related to the perceived trust the user has in the agent. Finally, it was found that patterns of compliant human behavior, or archetypes, can be created to classify human users.

    Moderators Of Trust And Reliance Across Multiple Decision Aids

    The present work examines whether users' trust in and reliance on automation were affected by manipulations of the users' perception of the responding agent. These manipulations included agent reliability, agent type, and failure salience. Previous work has shown that automation is not uniformly beneficial; problems can occur because operators fail to rely upon automation appropriately, through either misuse (overreliance) or disuse (underreliance). This is because operators often face difficulties in understanding how to combine their judgment with that of an automated aid. This difficulty is especially prevalent in complex tasks in which users rely heavily on automation to reduce their workload and improve task performance. However, when users rely on automation heavily, they often fail to monitor the system effectively (i.e., they lose situation awareness, a form of misuse). Conversely, if an operator realizes a system is imperfect and fails, they may subsequently lose trust in the system, leading to underreliance. In the present studies, it was hypothesized that in a dual-aid environment poor reliability in one aid would impact trust and reliance levels in a companion, better aid, but that this relationship is dependent upon the perceived aid type and the noticeability of the errors made. Simulations of a computer-based search-and-rescue scenario, employing uninhabited/unmanned ground vehicles (UGVs) searching a commercial office building for critical signals, were used to investigate these hypotheses. Results demonstrated that participants were able to adjust their reliance on and trust in automated teammates depending on the teammates' actual reliability levels. However, as hypothesized, there was a biasing effect among mixed-reliability aids for trust and reliance: when operators worked with two agents of mixed reliability, their perception of how reliable each aid was, and the degree to which they relied on it, was affected by the reliability of the companion aid. Additionally, the magnitude and direction of this bias in trust and reliance were contingent upon agent type (i.e., what the agents were: two humans, two similar robotic agents, or two dissimilar robotic agents). Finally, the type of agent an operator believed they were working with significantly affected their temporal reliance (i.e., reliance following an automation failure): operators were less likely to agree with a recommendation from a human teammate after that teammate had made an obvious error than with a robotic agent that had made the same obvious error. These results demonstrate that people are able to distinguish when an agent is performing well, but that there are genuine differences in how operators respond to agents of mixed or same abilities and to errors by fellow human observers or robotic teammates. The overall goal of this research was to develop a better understanding of how the aforementioned factors affect users' trust in automation, so that system interfaces can be designed to facilitate users' calibration of their trust in automated aids, thus leading to improved coordination of human-automation performance. These findings have significant implications for many real-world systems in which human operators monitor the recommendations of multiple other human and/or machine systems.

    Trust is Not Enough: Examining the Role of Distrust in Human-Autonomy Teams

    As automation solutions in manufacturing grow more accessible, there are consistent calls to augment the capabilities of humans through the use of autonomous agents, leading to human-autonomy teams (HATs). Many constructs from the human-human teaming literature are being studied in the context of HATs, such as affective emergent states. Among these, trust has been demonstrated to play a critical role in both human teams and HATs, particularly when considering the reliability of agent performance. However, the HAT literature fails to account for the distinction between trust and distrust. Consequently, this study investigates the effects of both trust and distrust in HATs in order to broaden the current understanding of trust dynamics in HATs and improve team functioning. Findings were inconclusive, but a path forward is discussed regarding self-report and unobtrusive measures of trust and distrust in HATs.