
    Fuzzy argumentation for trust

    In an open Multi-Agent System, the goals of agents acting on behalf of their owners often conflict with each other; therefore, a personal agent protecting the interests of a single user cannot always rely on other agents. Consequently, such a personal agent needs to be able to reason about trusting (information or services provided by) other agents. Existing algorithms that perform such reasoning mainly focus on the immediate utility of a trusting decision but do not explain their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind such trusting decisions. Our solution separates opponent modeling from decision making: it uses possibilistic logic to model the behavior of opponents, and we propose an extension of the argumentation framework of Amgoud and Prade that uses the fuzzy rules within these models to reach well-supported decisions.
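    A minimal sketch of the idea, not the paper's implementation: opponent behavior is captured as certainty-weighted (possibilistic) rules, and a trust decision is taken by comparing the strongest argument for trusting with the strongest argument against it, in the Amgoud/Prade style, while keeping the supporting rules as the rationale shown to the user. All names and numbers below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    conclusion: str   # "reliable" or "unreliable" (illustrative labels)
    certainty: float  # possibilistic weight in [0, 1]
    description: str  # human-readable justification

def decide_trust(rules, threshold=0.5):
    """Trust the opponent if the best argument for 'reliable' is at least as
    strong as the best argument against it and clears the threshold."""
    pro = max((r.certainty for r in rules if r.conclusion == "reliable"), default=0.0)
    con = max((r.certainty for r in rules if r.conclusion == "unreliable"), default=0.0)
    decision = pro >= con and pro >= threshold
    # the rules backing the strongest arguments double as the explanation
    rationale = [r.description for r in rules if r.certainty in (pro, con)]
    return decision, rationale

opponent_model = [
    Rule("reliable", 0.8, "delivered correct data in 8 of 10 past interactions"),
    Rule("unreliable", 0.3, "one recommendation came from an unverified source"),
]
print(decide_trust(opponent_model))
```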

    Repage: REPutation and ImAGE Among Limited Autonomous Partners

    This paper introduces Repage, a computational system that adopts a cognitive theory of reputation. We propose a fundamental difference between image and reputation, which suggests a way out of the paradox of sociality, i.e. the trade-off between agents' autonomy and their need to adapt to the social environment. On one hand, agents are autonomous if they select partners based on their own social evaluations (images). On the other, they need to update those evaluations by taking others' evaluations into account. Hence, social evaluations must circulate and be represented as "reported evaluations" (reputation) before, and in order for, agents to decide whether to accept them or not. To represent this level of cognitive detail in the design of artificial agents, a specialised subsystem is needed, which we are in the course of developing for the public domain. In the paper, after a short presentation of the cognitive theory of reputation and its motivations, we describe the implementation of Repage.
    Keywords: Reputation, Agent Systems, Cognitive Design, Fuzzy Evaluation
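    An illustrative sketch of the image/reputation separation described above (not the Repage code base; the blending rule and all names are assumptions): the agent stores its own evaluations and the circulating reported evaluations separately, and only folds reputation into its image to the degree it chooses.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class SocialEvaluation:
    target: str
    value: float  # evaluation in [0, 1]

@dataclass
class AgentMemory:
    images: dict = field(default_factory=dict)       # own evaluations (image)
    reputations: dict = field(default_factory=dict)  # reported evaluations (reputation)

    def report(self, ev: SocialEvaluation):
        """Store a circulating evaluation without committing to it."""
        self.reputations.setdefault(ev.target, []).append(ev.value)

    def maybe_accept(self, target: str, confidence_in_gossip: float = 0.3):
        """Blend reputation into the image only to the degree the agent
        trusts the reporters; image and reputation remain distinct."""
        reported = self.reputations.get(target)
        if not reported:
            return
        own = self.images.get(target, 0.5)
        self.images[target] = (1 - confidence_in_gossip) * own + confidence_in_gossip * mean(reported)

m = AgentMemory()
m.images["seller42"] = 0.9                   # good direct experience
m.report(SocialEvaluation("seller42", 0.2))  # bad reputation circulating
m.maybe_accept("seller42")
print(m.images["seller42"], m.reputations["seller42"])
```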

    From Manifesta to Krypta: The Relevance of Categories for Trusting Others

    In this paper we consider the special abilities needed by agents for assessing trust based on inference and reasoning. We analyze the case in which it is possible to infer trust towards unknown counterparts by reasoning on abstract classes or categories of agents shaped in a concrete application domain. We present a scenario of interacting agents providing a computational model that implements different strategies to assess trust. Assuming a medical domain, categories, including both competencies and dispositions of possible trustees, are exploited to infer trust towards possibly unknown counterparts. The proposed approach to the cognitive assessment of trust relies on agents' abilities to analyze heterogeneous information sources along different dimensions. Trust is inferred from specific observable properties (Manifesta), namely explicitly readable signals indicating the internal features (Krypta) that regulate agents' behavior and effectiveness on specific tasks. Simulation experiments evaluate the performance of trusting agents adopting different strategies to delegate tasks to possibly unknown trustees, and the experimental results show the relevance of this kind of cognitive ability in the case of open Multi Agent Systems.
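    A hedged sketch of the inference step (categories, signals, and numbers are invented for illustration; this is not the paper's model): observable signals place an unknown trustee in a category, the category carries an assumed expected competence per task, and direct experience, when available, tempers that prior.

```python
# category -> assumed expected competence per task (illustrative priors)
CATEGORY_COMPETENCE = {
    "cardiologist":         {"diagnose_heart": 0.9, "prescribe_generic": 0.7},
    "general_practitioner": {"diagnose_heart": 0.5, "prescribe_generic": 0.8},
}

def categorize(manifesta):
    """Map readable signals (e.g. displayed certificates) to a category."""
    if "cardiology_board_certificate" in manifesta.get("certificates", []):
        return "cardiologist"
    return "general_practitioner"

def infer_trust(manifesta, task, direct_experience=None):
    """Category prior, blended with direct experience when available."""
    prior = CATEGORY_COMPETENCE[categorize(manifesta)].get(task, 0.5)
    if direct_experience is None:
        return prior
    return 0.5 * prior + 0.5 * direct_experience

stranger = {"certificates": ["cardiology_board_certificate"]}
print(infer_trust(stranger, "diagnose_heart"))       # prior only: 0.9
print(infer_trust(stranger, "diagnose_heart", 0.4))  # tempered by experience: 0.65
```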

    Asymptotically idempotent aggregation operators for trust management in multi-agent systems

    The study of trust management in multi-agent systems, especially distributed ones, has grown over recent years. Trust is a complex subject on which there is no general consensus in the literature, but the importance of reasoning about it computationally has emerged. Reputation systems take the history of an entity's actions/behavior into consideration in order to compute trust, collecting and aggregating ratings from members of a community. In this scenario the aggregation problem becomes fundamental, and its appropriate solution depends on the environment. In this paper we describe a technique based on a class of asymptotically idempotent aggregation operators, particularly suitable for distributed anonymous environments.
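    The paper's operator class is not reproduced here, but the following sketch shows one simple member of an asymptotically idempotent family for illustration (the neutral default and the constant k are assumptions): with few ratings the aggregate stays close to a neutral default, and as the number of identical ratings n grows, A_n(x, ..., x) tends to x, i.e. idempotence holds only in the limit.

```python
def aggregate(ratings, default=0.5, k=5.0):
    """Asymptotically idempotent aggregation of ratings in [0, 1].

    k controls how many ratings are needed before the community's
    evidence outweighs the neutral prior."""
    n = len(ratings)
    if n == 0:
        return default
    mean = sum(ratings) / n
    weight = n / (n + k)  # tends to 1 as n grows
    return weight * mean + (1 - weight) * default

print(aggregate([1.0]))        # ~0.58: a single perfect rating barely moves the prior
print(aggregate([1.0] * 100))  # ~0.98: A_n(1, ..., 1) approaches 1
```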

    Reasoning with Categories for Trusting Strangers: a Cognitive Architecture

    A crucial issue for agents in open systems is the ability to filter information sources in order to build an image of their counterparts, upon which a subjective evaluation of trust as a promoter of interactions can be based. While typical solutions discern relevant information sources by relying on previous experiences or reputational images, this work presents an alternative approach based on the cognitive ability to: (i) analyze heterogeneous information sources along different dimensions; (ii) ascribe qualities to unknown counterparts based on reasoning over abstract classes or categories; and (iii) learn a series of emergent relationships between particular properties observable on other agents and their effective abilities to fulfill tasks. A computational architecture is presented that allows cognitive agents to dynamically assess trust based on a limited set of observable properties, namely explicitly readable signals (Manifesta) through which it is possible to infer hidden properties and capabilities (Krypta), which ultimately regulate agents' behavior in concrete work environments. An experimental evaluation discusses the effectiveness of trustor agents adopting different strategies to delegate tasks based on categorization.
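    Point (iii) above, learning which observable properties actually predict ability, can be sketched with a simple online update (the signals, hidden effects, and learning rate here are invented assumptions rather than the paper's architecture): after each delegation the trustor nudges the learned weight of every signal the trustee exhibited towards the observed outcome.

```python
import random

random.seed(1)

signals = ["certified", "senior", "fast_responder"]
effect  = {"certified": 0.6, "senior": 0.15, "fast_responder": 0.0}  # hidden contribution to success
weights = {s: 0.5 for s in signals}  # trustor's learned relevance of each signal
alpha = 0.05                         # learning rate

def delegate_and_learn():
    trustee = {s: random.random() < 0.5 for s in signals}     # random observable profile
    p_success = 0.15 + sum(effect[s] for s in signals if trustee[s])
    outcome = 1.0 if random.random() < p_success else 0.0
    for s in signals:
        if trustee[s]:
            weights[s] += alpha * (outcome - weights[s])       # simple delta rule

for _ in range(2000):
    delegate_and_learn()
print(weights)  # "certified" should end up with the highest learned weight
```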

    Introducing fuzzy trust for managing belief conflict over semantic web data

    When different human experts interpret Semantic Web data, they can end up in scenarios where each expert comes up with different and conflicting ideas about what a concept means and how it relates to other concepts. Software agents that operate on the Semantic Web have to deal with similar scenarios, where the interpretations of the Semantic Web data that describes heterogeneous sources become contradictory. One such application area of the Semantic Web is ontology mapping, where different similarity measures have to be combined into a more reliable and coherent view, which can easily become unreliable if the conflicting beliefs about similarities are not managed effectively between the different agents. In this paper we propose a solution for managing this conflict by introducing trust between the mapping agents based on the fuzzy voting model.
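    A hedged illustration of the voting step (the fuzzy voting model itself is richer than this, and the triangular membership functions below are assumptions): each mapping agent reports a similarity for the same concept pair, the similarities are fuzzified into linguistic trust levels, the memberships are tallied as votes, and the winning level decides how much the conflicting mapping is trusted.

```python
def fuzzify(similarity):
    """Triangular memberships of a similarity value in three trust levels."""
    low = max(0.0, 1.0 - 2.0 * similarity)
    high = max(0.0, 2.0 * similarity - 1.0)
    medium = 1.0 - low - high
    return {"low": low, "medium": medium, "high": high}

def vote(similarities):
    """Sum the agents' fuzzy memberships per level and pick the winner."""
    tally = {"low": 0.0, "medium": 0.0, "high": 0.0}
    for s in similarities:
        for level, membership in fuzzify(s).items():
            tally[level] += membership
    return max(tally, key=tally.get), tally

# three mapping agents with conflicting beliefs about the same concept pair
winner, tally = vote([0.9, 0.55, 0.2])
print(winner, tally)  # the aggregated fuzzy vote resolves the conflict
```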

    Trust and Risk Relationship Analysis on a Workflow Basis: A Use Case

    Trust and risk are often seen as proportional to each other; as such, high trust may imply low risk and vice versa. However, recent research argues that the relationship between trust and risk is implicit rather than proportional. Considering that trust and risk are implicitly related, this paper proposes, for the first time, a novel approach that views trust and risk on the basis of the W3C PROV provenance data model applied in a healthcare domain. We argue that in the healthcare domain high trust can be placed in data despite its high risk, and that low-trust data can have low risk, depending on data quality attributes and the data's provenance. This is demonstrated by our trust and risk models applied to the BII case study data. The proposed theoretical approach first calculates risk values at each workflow step, considering PROV concepts, and second aggregates a final risk score for the whole provenance chain. Unlike the risk model, the trust of a workflow is derived by applying the DS/AHP method. The results confirm our assumption that the trust and risk relationship is implicit.
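    To make the two separate calculations concrete, here is a sketch with invented numbers and formulas (the paper's actual per-step risk model, DS/AHP weights, and the BII data are not reproduced): per-step risks attached to PROV workflow steps are aggregated along the provenance chain, while trust is derived independently from weighted data-quality attributes, so a fairly risky chain can still carry high trust.

```python
workflow_risk = {            # hypothetical risk in [0, 1] per provenance step
    "collect_sample": 0.10,
    "lab_analysis":   0.05,
    "data_entry":     0.20,
}

def chain_risk(step_risks):
    """Probability that at least one step in the chain fails (independence assumed)."""
    ok = 1.0
    for r in step_risks.values():
        ok *= 1.0 - r
    return 1.0 - ok

# AHP-style priority weights for quality attributes (already normalised here)
attribute_weights = {"completeness": 0.5, "timeliness": 0.3, "source_authority": 0.2}
attribute_scores  = {"completeness": 0.9, "timeliness": 0.6, "source_authority": 0.95}

trust = sum(attribute_weights[a] * attribute_scores[a] for a in attribute_weights)
print(f"chain risk  = {chain_risk(workflow_risk):.2f}")  # ~0.32
print(f"trust score = {trust:.2f}")                      # 0.82: high trust despite the risk
```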

    On the Simulation of Global Reputation Systems

    Reputation systems have evolved as a mechanism to build trust in virtual communities. In this paper we evaluate different metrics for computing reputation in multi-agent systems. We present a formal model for describing metrics in reputation systems and show how different well-known global reputation metrics are expressed in it. Based on the model, a generic simulation framework for reputation metrics was implemented. We used our simulation framework to compare different global reputation systems to find their strengths and weaknesses. The strength of a metric is measured by its resistance against different threat models, i.e. different types of hostile agents. Based on our results we propose a new metric for reputation systems.
    Keywords: Reputation System, Trust, Formalization, Simulation
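    A small sketch in the spirit of such a simulation (the rater behaviors and candidate metrics here are invented, not the paper's framework or proposed metric): a good provider is rated by honest and badmouthing agents, two global metrics score the ratings, and a metric's strength is read off as how little the threat model distorts its score.

```python
import random
from statistics import median

random.seed(0)
TRUE_QUALITY = 0.9

def simulate_ratings(n_honest=50, n_hostile=30):
    """Honest raters report noisy observations; hostile raters badmouth with 0."""
    honest = [min(1.0, max(0.0, random.gauss(TRUE_QUALITY, 0.1))) for _ in range(n_honest)]
    hostile = [0.0] * n_hostile
    return honest + hostile

METRICS = {
    "arithmetic mean": lambda r: sum(r) / len(r),
    "median": median,
}

ratings = simulate_ratings()
for name, metric in METRICS.items():
    score = metric(ratings)
    # strength of the metric = how little the badmouthing threat model distorts the score
    print(f"{name:16s} score={score:.2f} distortion={abs(score - TRUE_QUALITY):.2f}")
```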