
    Fuzzy argumentation for trust

    In an open Multi-Agent System, the goals of agents acting on behalf of their owners often conflict, so a personal agent protecting the interests of a single user cannot always rely on other agents. Such a personal agent therefore needs to be able to reason about trusting (information or services provided by) other agents. Existing algorithms that perform such reasoning mainly focus on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind trusting decisions. Our solution separates opponent modeling from decision making: it uses possibilistic logic to model the behavior of opponents, and we propose an extension of the argumentation framework of Amgoud and Prade that uses the fuzzy rules within these models to reach well-supported decisions.
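
    To make the idea concrete, here is a minimal Python sketch of an argumentation-style trust decision over possibilistic rules; the rule names, necessity degrees and decision criterion are illustrative assumptions, not taken from the paper.

        # Each opponent-model rule: (precondition, conclusion, necessity degree).
        # All rules and weights below are hypothetical.
        RULES = [
            ("kept_past_promises", "will_cooperate", 0.8),
            ("reported_as_liar", "will_defect", 0.9),
        ]

        def arguments_for(conclusion, observations):
            """Certainty degrees of rules for `conclusion` whose preconditions hold."""
            return [n for pre, concl, n in RULES
                    if concl == conclusion and pre in observations]

        def trust_decision(observations):
            """Trust iff the strongest argument for cooperating outweighs the
            strongest argument for defecting; return the rationale as well."""
            pro = max(arguments_for("will_cooperate", observations), default=0.0)
            con = max(arguments_for("will_defect", observations), default=0.0)
            return pro > con, {"pro": pro, "con": con}

        print(trust_decision({"kept_past_promises"}))
        # (True, {'pro': 0.8, 'con': 0.0}): the decision plus its rationale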

    A logic of negative trust

    We present a logic to model the behaviour of an agent trusting or not trusting messages sent by another agent. The logic formalises trust as a consistency-checking function with respect to currently available information. Negative trust is modelled in two forms: distrust, the rejection of incoming information that is inconsistent with currently held information; and mistrust, the revision of previously held information that becomes undesirable in view of new, inconsistent incoming information which the agent wishes to accept. We provide a natural deduction calculus and a relational semantics, and we prove soundness and completeness results. We also survey a number of applications that have been investigated for the proof-theoretical formulation of the logic.
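
    As a rough illustration of the consistency-checking view (the representation and function names are mine, not the paper's calculus), the following Python sketch treats a belief state as a set of literals:

        def negate(lit):
            return lit[1:] if lit.startswith("~") else "~" + lit

        def consistent(beliefs, incoming):
            return negate(incoming) not in beliefs

        def distrust(beliefs, incoming):
            """Distrust: reject incoming information that contradicts current beliefs."""
            return beliefs if not consistent(beliefs, incoming) else beliefs | {incoming}

        def mistrust(beliefs, incoming):
            """Mistrust: retract the contradicted belief, then accept the new one."""
            return (beliefs - {negate(incoming)}) | {incoming}

        b = {"reliable(alice)"}
        print(distrust(b, "~reliable(alice)"))  # {'reliable(alice)'}: rejected
        print(mistrust(b, "~reliable(alice)"))  # {'~reliable(alice)'}: revised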

    The Effect of Biased Communications on Both Trusting and Suspicious Voters

    In recent studies of political decision-making, apparently anomalous behavior has been observed on the part of voters, in which negative information about a candidate strengthens, rather than weakens, a prior positive opinion of that candidate. This behavior appears to run counter to rational models of decision making, and it is sometimes interpreted as evidence of non-rational "motivated reasoning". We consider scenarios in which this effect arises in a model of rational decision making that includes the possibility of deceptive information. In particular, we consider a model with two classes of voters, which we call trusting voters and suspicious voters, and two types of information sources, which we call unbiased sources and biased sources. In our model, new data about a candidate can be efficiently incorporated by a trusting voter, and anomalous updates are impossible; anomalous updates can, however, be made by suspicious voters, if the information source mistakenly plans for an audience of trusting voters and the suspicious voter knows the partisan goals of the source to be "opposite" to his own. Our model is based on a formalism from the artificial intelligence community called "multi-agent influence diagrams", which generalizes Bayesian networks to settings involving multiple agents with distinct goals.
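
    A toy Bayesian calculation (with made-up likelihoods, not the paper's influence-diagram model) shows how the same negative report can move the two voter types in opposite directions:

        # G = "candidate is good"; the message observed is a negative report.
        def posterior_good(prior, p_neg_given_good, p_neg_given_bad):
            """P(good | negative report) by Bayes' rule."""
            num = p_neg_given_good * prior
            return num / (num + p_neg_given_bad * (1.0 - prior))

        prior = 0.6

        # Trusting voter: the source is unbiased, so bad news is evidence of badness.
        print(posterior_good(prior, 0.1, 0.9))   # ~0.14: the positive opinion weakens

        # Suspicious voter: the source is a known opposing partisan that mainly
        # attacks candidates it sees as strong, so an attack is evidence of goodness.
        print(posterior_good(prior, 0.9, 0.2))   # ~0.87: the positive opinion strengthens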

    From Manifesta to Krypta: The Relevance of Categories for Trusting Others

    In this paper we consider the special abilities agents need to assess trust through inference and reasoning. We analyze the case in which trust towards unknown counterparts can be inferred by reasoning on abstract classes, or categories, of agents shaped in a concrete application domain. We present a scenario of interacting agents and a computational model implementing different strategies to assess trust. Assuming a medical domain, categories covering both the competencies and the dispositions of possible trustees are exploited to infer trust towards possibly unknown counterparts. The proposed approach to the cognitive assessment of trust relies on agents' abilities to analyze heterogeneous information sources along different dimensions. Trust is inferred from specific observable properties (Manifesta): explicitly readable signals indicating the internal features (Krypta) that regulate agents' behavior and effectiveness on specific tasks. Simulation experiments evaluate the performance of trusting agents adopting different strategies to delegate tasks to possibly unknown trustees, and the results show the relevance of this kind of cognitive ability in open Multi-Agent Systems.
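
    The following Python sketch illustrates the category mechanism under assumed signals and competence scores (all category names, signals and numbers are hypothetical):

        # Observable signals (manifesta) identify a category, whose profile stands
        # in for the unobservable internal features (krypta) of an unknown trustee.
        CATEGORIES = {
            # category: (identifying manifesta, expected competence per task)
            "cardiologist": ({"wears_badge", "cardiology_ward"},
                             {"heart_diagnosis": 0.9, "surgery": 0.4}),
            "nurse":        ({"wears_badge", "triage_desk"},
                             {"heart_diagnosis": 0.3, "surgery": 0.1}),
        }

        def infer_category(observed_signals):
            """Pick the category whose manifesta best match the observed signals."""
            return max(CATEGORIES, key=lambda c: len(CATEGORIES[c][0] & observed_signals))

        def trust_for_task(observed_signals, task):
            category = infer_category(observed_signals)
            return category, CATEGORIES[category][1].get(task, 0.0)

        print(trust_for_task({"wears_badge", "cardiology_ward"}, "heart_diagnosis"))
        # ('cardiologist', 0.9): trust an unknown trustee via its inferred category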

    Local and Global Trust Based on the Concept of Promises

    We use the notion of a promise to define local trust between agents capable of autonomous decision-making. An agent is trustworthy if it is expected to keep a promise. This definition satisfies most commonplace meanings of trust. Reputation is then an estimate of this expectation value that is passed on from agent to agent. Our definition distinguishes types of trust for different behaviours, and decouples the concept of agent reliability from the behaviour on which the judgement is based. We show, however, that trust is fundamentally heuristic, as it provides insufficient information for agents to make a rational judgement. A global trustworthiness, or community trust, can be defined by a proportional, self-consistent voting process, as a weighted eigenvector-centrality function of the promise-theoretic graph.
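
    A small Python sketch of the global computation, with made-up promise weights (not data from the paper): each agent's community trust is the weighted sum of the scores of the agents vouching for it, iterated to a fixed point by power iteration.

        # trust_graph[i][j] = strength of j's kept promises as judged by agent i
        trust_graph = [
            [0.0, 0.8, 0.2],
            [0.6, 0.0, 0.9],
            [0.5, 0.7, 0.0],
        ]

        def community_trust(w, iterations=100):
            """Power iteration towards the principal eigenvector of the
            weighted promise graph, normalized each round."""
            n = len(w)
            score = [1.0 / n] * n
            for _ in range(iterations):
                new = [sum(w[i][j] * score[i] for i in range(n)) for j in range(n)]
                total = sum(new)
                score = [s / total for s in new]
            return score

        print(community_trust(trust_graph))  # global trustworthiness per agent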

    Trust and corruption: escalating social practices?

    Escalating social practices spread dynamically as they take hold: they are self-fulfilling and contagious. This article examines two central social practices, trust and corruption, which may be characterized as alternative economic lubricants. Corruption can be a considerable instrument of flexibility, while trust may be an alternative to vigilance (or to a collective regime of sanctions). Rational equilibrium explanations and psychological accounts of trust and corruption are rejected in favour of a model open to multiple feedbacks. Although there can be too much trust and too little corruption, and (unsurprisingly) too little trust and too much corruption, a state in which these forces are in balance is unattainable. Practices of trust alone can form stable equilibria, but it is claimed that such states are undesirable for economic and moral reasons. By contrast, practices of corruption are inherently unstable. Implications for strategies of control in organizational relations are drawn.
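
    Purely as an illustration of what "escalating" means here (a generic contagion toy, not the article's model), a practice whose adoption accelerates its own spread behaves like a logistic process:

        def escalate(share, rate=0.5, steps=10):
            """Self-fulfilling feedback: the more widespread the practice,
            the faster it spreads, until it saturates."""
            trajectory = [round(share, 3)]
            for _ in range(steps):
                share = share + rate * share * (1.0 - share)
                trajectory.append(round(share, 3))
            return trajectory

        print(escalate(0.05))  # a small initial practice takes hold and saturates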

    Micro-bias and macro-performance

    We use agent-based modeling to investigate the effect of conservatism and partisanship on the efficiency with which large populations solve the density classification task, a paradigmatic problem for information aggregation and consensus building. We find that conservative agents enhance the population's ability to solve the task efficiently despite large levels of noise in the system. In contrast, we find that the presence of even a small fraction of partisans holding the minority position will result in deadlock or in a consensus on an incorrect answer. Our results provide a possible explanation for the emergence of conservatism and suggest that even low levels of partisanship can lead to significant social costs.
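
    A toy Python version of such a simulation (the update rule, sample size and thresholds are my own simplifications, not the paper's exact model):

        import random

        def simulate(n=100, p_ones=0.6, conservative=0.3, partisans=0, steps=20000):
            """Agents copy the majority of a random sample; 'conservative' agents
            switch only on a clear supermajority; 'partisan' agents never abandon
            their fixed minority opinion."""
            state = [1 if random.random() < p_ones else 0 for _ in range(n)]
            kinds = (["partisan"] * partisans +
                     ["conservative"] * int(conservative * n))
            kinds += ["plain"] * (n - len(kinds))
            for _ in range(steps):
                i = random.randrange(n)
                if kinds[i] == "partisan":
                    state[i] = 0                  # fixed minority position
                    continue
                sample = [state[random.randrange(n)] for _ in range(5)]
                frac = sum(sample) / 5
                threshold = 0.8 if kinds[i] == "conservative" else 0.5
                if frac > threshold:
                    state[i] = 1
                elif frac < 1 - threshold:
                    state[i] = 0
            return sum(state) / n                 # fraction holding opinion 1

        print(simulate())                 # tends towards 1.0: the initial majority wins
        print(simulate(partisans=10))     # a few fixed partisans prevent full consensus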

    Climate Change, Cooperation, and Moral Bioenhancement

    The human faculty of moral judgment is not well suited to address problems, like climate change, that are global in scope and remote in time. Advocates of ‘moral bioenhancement’ have proposed that we should investigate the use of medical technologies to make human beings more trusting and altruistic, and hence more willing to cooperate in efforts to mitigate the impacts of climate change. We survey recent accounts of the proximate and ultimate causes of human cooperation in order to assess the prospects for bioenhancement. We identify a number of issues that are likely to be significant obstacles to effective bioenhancement, as well as areas for future research.

    Privacy, security, and trust issues in smart environments

    Recent advances in networking, handheld computing and sensor technologies have driven research towards the realisation of Mark Weiser's dream of calm and ubiquitous computing (variously called pervasive computing, ambient computing, active spaces, the disappearing computer or context-aware computing). In turn, this has led to the emergence of smart environments as one significant facet of research in this domain. A smart environment, or space, is a region of the real world that is extensively equipped with sensors, actuators and computing components [1]. In effect, the smart space becomes part of a larger information system, with all actions within the space potentially affecting the underlying computer applications, which may themselves affect the space through the actuators. Such smart environments have tremendous potential within many application areas to improve the utility of a space. Consider the potential offered by a smart environment that prolongs the time an elderly or infirm person can live independently, or by one that supports vicarious learning.
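
    The sensing/actuation loop described above can be pictured with a short Python sketch (all class and device names are hypothetical):

        class SmartSpace:
            def __init__(self, sensors, actuators):
                self.sensors = sensors        # name -> callable returning a reading
                self.actuators = actuators    # name -> callable accepting a command

            def step(self, application):
                """One cycle: actions in the space reach the applications via the
                sensors; the applications affect the space via the actuators."""
                readings = {name: read() for name, read in self.sensors.items()}
                for actuator, command in application(readings).items():
                    self.actuators[actuator](command)

        # Example: turn on the hall light when presence is detected.
        space = SmartSpace(
            sensors={"presence": lambda: True},
            actuators={"hall_light": lambda cmd: print("hall_light ->", cmd)},
        )
        space.step(lambda r: {"hall_light": "on"} if r["presence"] else {})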

    Promissory Estoppel and the Protection of Interpersonal Trust

    This paper examines the role of trust in promissory estoppel and the extent to which the law should protect trust when a promise is made. Part II of this Article summarizes some of the scholarship discussing the nature and role of trust; in particular, it discusses the role of trust in a market economy and the related role of trust in Contracts law. Part III examines whether there is a difference between trust and reliance, and whether it matters; it further asserts that a separate discussion of trust is beneficial because it has the potential to guide and inform internal decision-making in a way that is not possible by simply focusing on outward reliance. Part IV discusses the role of trust in the doctrine of promissory estoppel. Part V sets forth why the law should promote an optimal level of trust, as opposed to the maximum protection of trust no matter what, and discusses the need for promisees to exercise self-reliance and self-protection in order to avoid overreliance. Part VI identifies the types of cases where trust should be protected: cases where the promisee is engaged in a transaction that she cannot avoid, where she has no control over the structure of the transaction, and where she has no choice but to trust the promisor (or, more accurately, to trust the legal system to enforce the promise). Part VII presents the polar end of the spectrum, where trust should not be protected. Part VIII concludes the Article.