64,564 research outputs found

    To Trust or to Monitor: A Dynamic Analysis

    In a principal-agent framework, principals can mitigate moral hazard problems not only through extrinsic incentives such as monitoring, but also through agents' intrinsic trustworthiness. Their relative usage, however, changes over time and varies across societies. This paper attempts to explain this phenomenon by endogenizing agent trustworthiness as a response to potential returns. When monitoring becomes relatively cheaper over time, agents acquire lower trustworthiness, which may actually drive up the overall governance cost in society. Across societies, those giving employees lower weights in choosing governance methods tend to have higher monitoring intensities and lower trust. These results are consistent with the empirical evidence.
    Keywords: Monitoring, Trustworthiness, Trust, Screening, Economic Governance

    Supply chain coordination with information sharing in the presence of trust and trustworthiness: a behavioral model

    The strategic use of private information causes efficiency losses in traditional principal-agent settings. One stream of research states that these efficiency losses cannot be overcome if all agents use their private information strategically. Yet another stream of research highlights the importance of communication, trust and trustworthiness in supply chain management. The present work links the concepts of communication, trust and trustworthiness to a traditional principal-agent setting in a supply chain environment. Surprisingly, it can be shown that communication and trust can actually lead to increased efficiency losses even when there is a substantial level of trustworthiness.

    Defining trustworthiness in service oriented environment

    We note that the existing literature has made no effort to propose a definition of trustworthiness. In this paper, we propose a definition of trustworthiness with a focus on service-oriented environments. In addition, we propose and discuss in detail the various factors that can affect the trustworthiness assigned by the trusting agent to the trusted agent.

    THE HIDDEN COSTS AND RETURNS OF INCENTIVES — TRUST AND TRUSTWORTHINESS AMONG CEOs

    We examine experimentally how Chief Executive Officers (CEOs) respond to incentives and how they provide incentives in situations requiring trust and trustworthiness. As a control we compare the behavior of CEOs with the behavior of students. We find that CEOs are considerably more trusting and exhibit more trustworthiness than students, thus reaching substantially higher efficiency levels than students. Moreover, we find that, for CEOs as well as for students, incentives based on explicit threats to penalize shirking backfire by inducing less trustworthy behavior, giving rise to hidden costs of incentives. However, the availability of penalizing incentives also creates hidden returns: if a principal expresses trust by voluntarily refraining from implementing the punishment threat, the agent exhibits significantly more trustworthiness than if the punishment threat is not available. Thus trust seems to reinforce trustworthy behavior. Overall, trustworthiness is highest if the threat to punish is available but not used, while it is lowest if the threat to punish is used. Paradoxically, however, most CEOs and students use the punishment threat, although CEOs use it significantly less.
    Keywords: Hidden Costs, Returns of Incentives, Trust, Trustworthiness, CEO

    Local and Global Trust Based on the Concept of Promises

    We use the notion of a promise to define local trust between agents with autonomous decision-making. An agent is trustworthy if it is expected that it will keep a promise. This definition satisfies most commonplace meanings of trust. Reputation is then an estimation of this expectation value that is passed on from agent to agent. Our definition distinguishes types of trust for different behaviours, and decouples the concept of agent reliability from the behaviour on which the judgement is based. We show, however, that trust is fundamentally heuristic, as it provides insufficient information for agents to make a rational judgement. A global trustworthiness, or community trust, can be defined by a proportional, self-consistent voting process, as a weighted eigenvector-centrality function of the promise-theoretical graph.
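    The self-consistent voting process described in the abstract can be illustrated with a small sketch: community trust emerges as the principal eigenvector of a weighted graph of promises, computable by power iteration. The weight matrix and agent count below are hypothetical examples, not data from the paper, and the code is an illustrative sketch of eigenvector centrality, not the authors' implementation.

    ```python
    def eigenvector_centrality(weights, iters=200, tol=1e-10):
        """Power iteration on a non-negative weight matrix.

        weights[i][j] is the strength with which agent i vouches for
        agent j's promises (hypothetical values). For a strongly
        connected graph this converges to the principal eigenvector,
        normalized here to sum to 1.
        """
        n = len(weights)
        v = [1.0 / n] * n
        for _ in range(iters):
            # Each agent's score is the weighted sum of the scores of
            # the agents vouching for it: w = W^T v
            w = [sum(weights[i][j] * v[i] for i in range(n)) for j in range(n)]
            s = sum(w)
            w = [x / s for x in w]
            if max(abs(a - b) for a, b in zip(w, v)) < tol:
                return w
            v = w
        return v

    # Three agents: 0 and 1 vouch strongly for 2, weakly for each other.
    W = [
        [0.0, 0.2, 0.8],
        [0.2, 0.0, 0.8],
        [0.5, 0.5, 0.0],
    ]
    trust = eigenvector_centrality(W)
    ```

    In this toy graph the voting is "self-consistent" in the abstract's sense: an agent's community trust is high exactly when highly trusted agents vouch for it, so agent 2, backed by both others, ends up with the largest score.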

    Facing the Artificial: Understanding Affinity, Trustworthiness, and Preference for More Realistic Digital Humans

    In recent years, companies have been developing more realistic-looking human faces for digital, virtual agents controlled by artificial intelligence (AI). But how do users feel about interacting with such virtual agents? We used a controlled lab experiment to examine users' perceived trustworthiness, affinity, and preference towards a real human travel agent appearing via video (i.e., Skype) as well as in the form of a very human-realistic avatar; half of the participants were (deceptively) told the avatar was a virtual agent controlled by AI, while the other half were told the avatar was controlled by the same human travel agent. Results show that participants rated the video human agent as more trustworthy, had more affinity for him, and preferred him to both avatar versions. Users who believed the avatar was a virtual agent controlled by AI reported the same level of affinity, trustworthiness, and preference towards the agent as those who believed it was controlled by a human. Thus, use of a realistic digital avatar lowered affinity, trustworthiness, and preference, but how the avatar was controlled (by human or machine) had no effect. The conclusion is that improved visual fidelity alone makes a significant positive difference and that users are not averse to advanced AI simulating human presence; some may even be anticipating such an advanced technology.