
    Reputation-based decisions for logic-based cognitive agents

    Computational trust and reputation models have been recognized as one of the key technologies required to design and implement agent systems. These models manage and aggregate the information that agents need to efficiently select partners in uncertain situations. For simple applications, a game-theoretical approach similar to that used in most models can suffice. However, to tackle the problems found in socially complex virtual societies, we need more sophisticated trust and reputation systems. In this context, the reputation-based decisions that agents make take on special relevance and can be as important as the reputation model itself. In this paper, we propose a possible integration of a cognitive reputation model, Repage, into a cognitive BDI agent. First, we specify a belief logic capable of capturing the semantics of Repage information, which encodes probabilities. This logic is defined by means of a hierarchy of two first-order languages, allowing axioms to be specified as first-order theories. The belief logic integrates the information coming from Repage in terms of image and reputation and combines them, defining a typology of agents depending on how the combination is made. We use this logic to build a complete graded BDI model, specified as a multi-context system in which beliefs, desires, intentions and plans interact with each other to perform BDI reasoning. We conclude the paper with an example and a related-work section that compares our approach with current state-of-the-art models. © 2010 The Author(s). This work was supported by the projects AEI (TIN2006-15662-C02-01), AT (CONSOLIDER CSD2007-0022, INGENIO 2010), LiquidPub (STREP FP7-213360), RepBDI (Intramural 200850I136) and by the Generalitat de Catalunya under the grant 2005-SGR-00093. Peer Reviewed
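As a loose illustration (not the paper's formalism), the combination of image and reputation into a single graded belief can be sketched as a weighted blend, where the weight encodes the agent typology the abstract mentions. The function name and the linear form are assumptions for illustration only:

```python
def combined_belief(image, reputation, w_image):
    """Blend an agent's own image (direct evaluation) with third-party
    reputation into one graded belief in [0, 1].

    w_image encodes the agent "typology": 1.0 relies only on its own
    image, 0.0 relies only on reputation. The linear blend is an
    assumption for illustration, not the paper's belief logic.
    """
    assert 0.0 <= w_image <= 1.0
    return w_image * image + (1.0 - w_image) * reputation
```

An image-driven agent (`w_image` near 1) largely ignores gossip, while a reputation-driven agent (`w_image` near 0) defers to third-party opinion.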

    Bagging in unsupervised settings: implementation in GESCONDA for clustering algorithms

    Clustering algorithms for unsupervised settings that rely on a random initialization (e.g., the initial choice of seeds in the K-means algorithm) have trouble producing reliable solutions. One way to remove this random factor would be to use other initialization techniques. However, as shown later in the article, these techniques raise another problem: they can yield locally optimal or biased solutions. The solution proposed here is to use the bagging technique from supervised learning, which, by merging several classification results over the same data, makes it possible to obtain optimal partitions. Three ways of carrying out the bagging were implemented, differing in how the reference classification, onto which the remaining classifications are merged, is selected: taking the first classification, choosing the one with the highest inertia (the ratio of between-class to within-class variance), or choosing the one that provides the most information (computed via Shannon mutual information). Finally, the inertia and mutual-information techniques were tested on real environmental data from a wastewater treatment plant in order to check the effectiveness of the results against the traditional method. All implementations and tests were carried out on the GESCONDA Intelligent Data Analysis System, which is described in the next section. The study ends with a brief discussion of the results obtained and conclusions about the work carried out. Postprint (published version)
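The bagging scheme the abstract describes (several random-restart clusterings merged onto a reference partition) can be illustrated with a small sketch. The code below is a hypothetical reconstruction of the inertia-based variant, not GESCONDA's implementation; function names and the label-alignment step are assumptions:

```python
import random
from collections import Counter

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, seed, iters=20):
    """Plain Lloyd's K-means with a random choice of initial seeds."""
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(xs) / len(members)
                                   for xs in zip(*members))
    return labels

def inertia_ratio(points, labels, k):
    """Between-class over within-class variance (higher is better)."""
    mean = tuple(sum(xs) / len(points) for xs in zip(*points))
    between = within = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if not members:
            continue
        cmean = tuple(sum(xs) / len(members) for xs in zip(*members))
        between += len(members) * dist2(cmean, mean)
        within += sum(dist2(p, cmean) for p in members)
    return between / within if within > 0 else float("inf")

def align(ref, labels, k):
    """Relabel `labels` so each cluster maps onto the reference cluster
    it overlaps most (cluster ids are arbitrary across runs)."""
    mapping = {}
    for c in range(k):
        overlap = Counter(r for r, l in zip(ref, labels) if l == c)
        mapping[c] = overlap.most_common(1)[0][0] if overlap else c
    return [mapping[l] for l in labels]

def bagged_clustering(points, k, runs=5):
    partitions = [kmeans(points, k, seed=s) for s in range(runs)]
    # reference = the run with the highest inertia ratio
    ref = max(partitions, key=lambda ls: inertia_ratio(points, ls, k))
    aligned = [align(ref, ls, k) for ls in partitions]
    # majority vote per point over the aligned partitions
    return [Counter(ls[i] for ls in aligned).most_common(1)[0][0]
            for i in range(len(points))]
```

The "first classification" variant would simply take `partitions[0]` as the reference, and the mutual-information variant would replace `inertia_ratio` with Shannon mutual information between the partition and the others.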

    Reputation and trust, modeled

    We all know that reputation and trust are essential for our society to work. They are just as crucial for the new virtual societies. UAB researchers have developed a computational model of cognitive reputation in which units called agents, endowed with beliefs, desires and intentions, make decisions based on the reputation of and trust in other agents. Thus, reputation and trust can be analyzed in depth.

    Computational trust and reputation models for open multi-agent systems: a review

    In open environments, agents depend on reputation and trust mechanisms to evaluate the behavior of potential partners. Scientific research in this field has increased considerably, and reputation and trust mechanisms are already considered key elements in the design of multi-agent systems. In this paper we provide a survey that, far from being exhaustive, intends to show the most representative models that currently exist in the literature. For this purpose we consider several dimensions of analysis that appeared in three existing surveys, and we provide new, complementary dimensions that have not been treated directly. Moreover, besides showing the original classification that each of the surveys provides, we also classify models that were not taken into account by the original surveys. The paper illustrates the proliferation in the past few years of models that follow a more cognitive approach, in which the representation of trust and reputation as mental attitudes is as important as the final values of trust and reputation. Furthermore, we provide an objective definition of trust, based on Castelfranchi's idea that trust implies a decision to rely on someone. © 2011 Springer Science+Business Media B.V. This work was supported by the EC through the project LiquidPub (STREP FP7-213360), by the Spanish Education and Science Ministry through the projects AEI (TIN2006-15662-C02-01), AT (CONSOLIDER CSD2007-0022, INGENIO 2010) and RepBDI (Intramural 200850I136), and by the Generalitat de Catalunya under the grants 2009-SGR-1433 and 2009-SGR-1434. Peer Reviewed
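Castelfranchi's reading of trust as a decision to rely on someone can be phrased as an expected-utility check. The sketch below is a minimal illustration under assumed linear payoffs, not a model from the survey; the names and parameters are hypothetical:

```python
def decide_to_rely(p_success, gain, loss, fallback_utility=0.0):
    """Trust as a decision: rely on a partner only when the expected
    utility of delegating beats the fallback of not delegating.

    p_success: the trust value, read as the probability the partner
    behaves well; gain/loss: payoffs of a good/bad interaction. All
    assumed for illustration.
    """
    expected = p_success * gain - (1.0 - p_success) * loss
    return expected > fallback_utility
```

The same trust value can thus lead to different decisions depending on the stakes: a 0.6 trust may suffice for a cheap interaction but not for a costly one.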

    Arguing about social evaluations: From theory to experimentation

    In open multiagent systems, agents depend on reputation and trust mechanisms to evaluate the behavior of potential partners. Often these evaluations are associated with a measure of reliability computed by the source agent. However, because reputation-related information is subjective, this can lead to serious problems when considering communicated social evaluations. In this paper, instead of considering only reliability measures computed by the sources, we provide a mechanism that allows the recipient to decide, according to its own knowledge, whether a piece of information is reliable. We do this by letting agents engage in an argumentation-based dialog specifically designed for the exchange of social evaluations. We evaluate our framework through simulations. The results show that, in most of the conditions checked, agents that use our dialog framework improve the accuracy of their evaluations with statistical significance over agents that do not use it. In particular, the simulations reveal that when the set of agents is heterogeneous (not all agents have the same goals) and agents base part of their inferences on third-party information, it is worth using our dialog protocol. © 2013 Elsevier Inc. All rights reserved. Peer Reviewed
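The recipient-side check the abstract argues for (judging a communicated evaluation against one's own knowledge rather than the sender's self-reported reliability) can be caricatured as a thresholded consistency test. The paper's actual mechanism is an argumentation dialog; everything below, including the names and thresholds, is an illustrative assumption:

```python
def accept_communicated_evaluation(own_value, own_evidence,
                                   reported_value,
                                   min_evidence=3, tolerance=0.3):
    """Decide on the recipient's side whether a communicated social
    evaluation is reliable.

    own_value: recipient's evaluation of the target from direct
    experience, in [0, 1]; own_evidence: number of direct interactions
    backing it. With too little own evidence the report is accepted by
    default; otherwise it must be consistent with what the recipient
    already knows.
    """
    if own_evidence < min_evidence:
        return True
    return abs(own_value - reported_value) <= tolerance
```

The point of the contrast: a source-computed reliability score travels with the message and cannot account for the recipient's private knowledge, whereas this check (and, in the paper, the dialog) is grounded in it.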

    Arguing about reputation. The LRep language

    In the field of multiagent systems (MAS), computational models of trust and reputation have attracted increasing interest since electronic and open environments became a reality. In virtual societies of human actors, well-known mechanisms are already used to control non-normative agents, for instance the eBay scoring system. In virtual societies of artificial and autonomous agents the same necessity arises, and several computational trust and reputation models have appeared in the literature to cover it. Typically, these models provide evaluations of agents' performance in a specific context, taking into account direct experiences and third-party information. This last source of information is the communication of agents' own opinions. When dealing with cognitive agents endowed with complex reasoning mechanisms, we would like these opinions to be justified, so that the resulting information is more complete and reliable. In this paper we present LRep, a language based on an existing ontology of reputation that allows building justifications of communicated social evaluations.

    An argumentation-based protocol for social evaluations exchange

    In open multiagent systems, agents depend on reputation and trust mechanisms to evaluate the behavior of potential partners. Often these evaluations (social evaluations) are associated with a measure of reliability computed by the source agent. When considering communicated social evaluations, this may lead to serious problems due to the subjectivity of reputation-related information. In this paper, instead of considering only reliability measures computed by the sources, we provide a mechanism that allows the recipient to decide, according to its own knowledge, whether a piece of information is reliable. We do this by letting agents engage in an argumentation-based dialog. This work was supported by the projects AT (CONSOLIDER CSD2007-0022, INGENIO 2010), LiquidPub (STREP FP7-213360), RepBDI (Intramural 200850I136) and partially supported by the Generalitat de Catalunya under the grant 2009-SGR-1434.