14 research outputs found

    Revisiting MAB based approaches to recursive delegation

    In this paper we examine the effectiveness of several multi-armed bandit (MAB) algorithms when used as a trust system to select agents to delegate tasks to. In contrast to existing work, we allow for recursive delegation to occur: a task delegated to one agent can be delegated onwards by that agent, with further delegation possible until some agent finally executes the task. We show that modifications to the standard multi-armed bandit algorithms can improve performance in such recursive delegation settings.
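    The abstract does not give the modified algorithms themselves, but the basic setting can be illustrated with a standard UCB1 bandit. The sketch below, with hypothetical class and parameter names, lets each agent run its own bandit over possible delegatees so that delegation chains form recursively; it is an assumption-laden illustration, not the paper's method.

```python
import math
import random

class DelegatingAgent:
    """A minimal sketch (not the paper's algorithm): each agent runs a UCB1
    bandit over its possible delegatees and may either execute a task itself
    or delegate it onward, so delegation chains can form recursively."""

    def __init__(self, name, skill, delegatees=None):
        self.name = name
        self.skill = skill                    # probability of success if it executes itself
        self.delegatees = delegatees or []    # agents it can delegate to
        self.counts = {}                      # pulls per arm (delegatee name or "self")
        self.values = {}                      # mean observed reward per arm

    def _ucb(self, arm, total):
        n = self.counts.get(arm, 0)
        if n == 0:
            return float("inf")               # try every arm at least once
        return self.values[arm] + math.sqrt(2 * math.log(total) / n)

    def handle(self, depth=0, max_depth=3):
        """Execute or delegate the task; return 1.0 on success, 0.0 on failure."""
        arms = (["self"] + self.delegatees) if depth < max_depth else ["self"]
        total = sum(self.counts.get(a if a == "self" else a.name, 0) for a in arms) + 1
        arm = max(arms, key=lambda a: self._ucb(a if a == "self" else a.name, total))
        if arm == "self":
            reward = 1.0 if random.random() < self.skill else 0.0
            key = "self"
        else:
            reward = arm.handle(depth + 1, max_depth)   # recursive delegation
            key = arm.name
        n = self.counts.get(key, 0)
        self.counts[key] = n + 1
        self.values[key] = (self.values.get(key, 0.0) * n + reward) / (n + 1)
        return reward

# toy run: a poorly skilled delegator learns which worker to trust
workers = [DelegatingAgent(f"w{i}", skill=s) for i, s in enumerate([0.3, 0.9])]
root = DelegatingAgent("root", skill=0.1, delegatees=workers)
for _ in range(500):
    root.handle()
print({k: round(v, 2) for k, v in root.values.items()})
```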

    Trust-based negotiation in multiagent systems: a systematic review

    In this work, we conducted a systematic review of trust-based negotiation in Multiagent Systems (MAS) through a bibliometric analysis of the past 25 years of research publications on three of the most popular scientific databases (Google Scholar, Scopus, and Web of Science). Our analysis reveals that this research topic is regaining interest after several years of fluctuating attention, and that the impact of its contributions is comparable to other equally important research variants such as ontology and argumentation (in a negotiation scenario). Setting aside human-to-agent trust challenges, we focus only on agent-to-agent trust concepts and analyse the different types of trust dimensions, using the findings and concerns of past review works to identify and select the dimensions that, in our opinion, have the most potential to drive research advances on the topic of trust in MAS. Furthermore, we discuss the current challenges and open issues associated with those trust dimensions, and how current advancements in the literature could provide insights for solving those challenges or even open new research paths.

    This work has been supported by national funds through FCT – Fundação para a Ciência e Tecnologia (Portuguese Foundation for Science and Technology) through the Projects UIDB/04728/2020, UIDP/04728/2020, and the Ricardo Barbosa doctoral Grant with the reference UI/BD/154187/202

    A Sign-Based Approach to the Role Distribution Problem in a Coalition of Cognitive Agents

    In this paper we consider the problem of role distribution during the construction of a joint plan of actions in a coalition of cognitive agents. Cognitive agents realize the basic functions of an intelligent agent using models of human cognitive functions; in this work these are the functions of learning conceptual knowledge and planning collective behavior. Activity theory and the formalization of the sign-based world model were used as the psychological basis for constructing these models. The paper presents an original role-distribution method, the MultiMAP algorithm, based on the sign-based method of agent behavior planning. The main features of the described approach are presented, including ways of representing the agent's knowledge of itself and of other agents, methods of sign communication, and the preservation of the experience of cooperation with other agents. Model experiments are described that demonstrate the main advantages of the presented approach and some of the shortcomings to be eliminated in future work.
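    The MultiMAP algorithm itself is not reproduced in the abstract; as a rough illustration of the role-distribution problem it addresses, the hypothetical sketch below assigns plan roles greedily from scores remembered from past cooperation. All names and the scoring rule are assumptions, not the sign-based method.

```python
# A hypothetical, simplified illustration of coalition role assignment (not the
# MultiMAP algorithm): each agent keeps scores of how well coalition members
# performed each role in past cooperation, and roles in the shared plan are
# assigned greedily to the best-scored still-available agent.
def assign_roles(plan_roles, agents, experience):
    """plan_roles: list of role names in the joint plan.
    agents: list of agent names.
    experience: dict (agent, role) -> score from past cooperation (0..1)."""
    assignment, free = {}, set(agents)
    for role in plan_roles:
        if not free:
            break
        best = max(free, key=lambda a: experience.get((a, role), 0.0))
        assignment[role] = best
        free.remove(best)
    return assignment

experience = {("a1", "scout"): 0.9, ("a2", "scout"): 0.4,
              ("a2", "carrier"): 0.8, ("a3", "carrier"): 0.6}
print(assign_roles(["scout", "carrier"], ["a1", "a2", "a3"], experience))
```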

    Cooperation and Reputation Dynamics with Reinforcement Learning

    Creating incentives for cooperation is a challenge in natural and artificial systems. One potential answer is reputation, whereby agents trade the immediate cost of cooperation for the future benefits of having a good reputation. Game-theoretical models have shown that specific social norms can make cooperation stable, but how agents can independently learn to establish effective reputation mechanisms on their own is less understood. We use a simple model of reinforcement learning to show that reputation mechanisms generate two coordination problems: agents need to learn how to coordinate on the meaning of existing reputations and collectively agree on a social norm to assign reputations to others based on their behavior. These coordination problems exhibit multiple equilibria, some of which effectively establish cooperation. When we train agents with a standard Q-learning algorithm in an environment with reputation mechanisms, convergence to undesirable equilibria is widespread. We propose two mechanisms to alleviate this: (i) seeding a proportion of the system with fixed agents that steer others towards good equilibria; and (ii) intrinsic rewards based on the idea of introspection, i.e., augmenting agents' rewards by an amount proportional to the performance of their own strategy against themselves. A combination of these simple mechanisms is successful in stabilizing cooperation, even in a fully decentralized version of the problem where agents learn to use and assign reputations simultaneously. We show how our results relate to the literature in Evolutionary Game Theory, and discuss implications for artificial, human, and hybrid systems, where reputations can be used as a way to establish trust and cooperation. Published in AAMAS'21.
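    As a rough illustration of the setup described above (not the paper's exact environment), the sketch below trains Q-learning agents in a donation game whose state is the partner's reputation and adds an introspection bonus equal to the payoff of an agent's own policy played against itself. The parameters and the simple image-scoring norm are assumptions.

```python
import random
from collections import defaultdict

# Toy simplification: state = partner's reputation, actions = cooperate/defect,
# reward = donation-game payoff + an "introspection" bonus (payoff of the
# agent's own policy against itself), as described in the abstract.
BENEFIT, COST, ALPHA, EPS, LAMBDA = 2.0, 1.0, 0.1, 0.1, 0.5

class Agent:
    def __init__(self):
        self.q = defaultdict(float)      # (partner_reputation, action) -> value
        self.reputation = 1              # 1 = good, 0 = bad

    def act(self, partner_rep):
        if random.random() < EPS:
            return random.choice(["C", "D"])
        return max(["C", "D"], key=lambda a: self.q[(partner_rep, a)])

    def update(self, partner_rep, action, reward):
        key = (partner_rep, action)
        self.q[key] += ALPHA * (reward - self.q[key])

def payoff(my_action, their_action):
    r = 0.0
    if my_action == "C":
        r -= COST
    if their_action == "C":
        r += BENEFIT
    return r

def introspection(agent):
    # payoff the agent's own (mostly greedy) policy earns against itself
    a = agent.act(agent.reputation)
    return payoff(a, a)

agents = [Agent() for _ in range(20)]
for _ in range(5000):
    x, y = random.sample(agents, 2)
    ax, ay = x.act(y.reputation), y.act(x.reputation)
    x.update(y.reputation, ax, payoff(ax, ay) + LAMBDA * introspection(x))
    y.update(x.reputation, ay, payoff(ay, ax) + LAMBDA * introspection(y))
    # a simple image-scoring norm: cooperating earns a good reputation
    x.reputation, y.reputation = int(ax == "C"), int(ay == "C")

coop = sum(a.act(1) == "C" for a in agents)
print(f"{coop}/{len(agents)} agents cooperate with good-reputation partners")
```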

    Towards a Model of Open and Reliable Cognitive Multiagent Systems: Dealing with Trust and Emotions

    Open multiagent systems are those in which agents can enter or leave the system freely, so any entity with unknown intentions can occupy the environment. In this scenario, trust and reputation mechanisms should be used to choose partners when requesting services or delegating tasks. Trust and reputation models have been proposed in the Multiagent Systems area as a way to help agents select good partners and thereby improve interactions between them. Most of the trust and reputation models proposed in the literature address their functional aspects, but not how they affect the reasoning cycle of the agent: from the agent's perspective, a trust model is usually just a "black box", and agents usually do not take their emotional state into account when making decisions, as humans often do. Like trust, agents' emotions have also been studied with the aim of making the actions and reactions of agents more like those of human beings, in order to imitate human reasoning and decision-making mechanisms. In this paper we analyse some models proposed in the literature and propose a BDI and multi-context based agent model which includes emotional reasoning to guide trust and reputation in open multiagent systems.
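    As a hypothetical illustration of opening the trust "black box" to emotional state (not the authors' BDI multi-context model), the sketch below lets an agent's partner choice combine a learned trust estimate with its current emotions; the weighting scheme is an assumption.

```python
# Hypothetical sketch: a fearful agent weights trust more cautiously, while joy
# slightly offsets that caution when ranking potential partners.
def partner_score(trust, emotion):
    """trust: 0..1 estimate from past interactions.
    emotion: dict with 'fear' and 'joy' intensities in 0..1."""
    caution = 1.0 + emotion.get("fear", 0.0)        # fear raises the bar
    optimism = 0.1 * emotion.get("joy", 0.0)        # joy slightly offsets it
    return trust ** caution + optimism

def choose_partner(candidates, emotion):
    # candidates: dict partner_name -> trust estimate
    return max(candidates, key=lambda p: partner_score(candidates[p], emotion))

print(choose_partner({"seller_a": 0.7, "seller_b": 0.55},
                     {"fear": 0.8, "joy": 0.1}))
```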

    Tracking Uncertainty Propagation from Model to Formalization: Illustration on Trust Assessment

    This paper investigates the use of the URREF ontology to characterize and track uncertainties arising within the modeling and formalization phases. Estimation of trust in reported information, a real-world problem of interest to practitioners in the field of security, was adopted for illustration purposes. A functional model of trust was developed to describe the analysis of reported information, and it was implemented with belief functions. When assessing trust in reported information, the uncertainty arises not only from the quality of sources or information content, but also from the inability of models to capture the complex chain of interactions leading to the final outcome and from constraints imposed by the representation formalism. A primary goal of this work is to separate known approximations, imperfections and inaccuracies from potential errors, while explicitly tracking the uncertainty from the modeling to the formalization phases. A secondary goal is to illustrate how criteria of the URREF ontology can offer a basis for analyzing the performance of fusion systems at early stages, ahead of implementation. Ideally, since uncertainty analysis runs dynamically, it can use the existence or absence of observed states and processes inducing uncertainty to adjust the tradeoff between precision and performance of systems on the fly.
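    As a toy illustration of the belief-function machinery mentioned above (my own example, not the paper's model), the sketch below fuses a mass function derived from source quality with one derived from content checks using Dempster's rule over the frame {trustworthy, not trustworthy}.

```python
from itertools import product

# Toy example of belief-function combination for trust in a reported item:
# combine two mass functions over the frame {"T", "NT"} with Dempster's rule.
FRAME = frozenset({"T", "NT"})

def dempster(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                     # mass assigned to contradictory pairs
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

source_quality = {frozenset({"T"}): 0.6, FRAME: 0.4}                       # partly reliable source
content_check  = {frozenset({"T"}): 0.3, frozenset({"NT"}): 0.2, FRAME: 0.5}
fused = dempster(source_quality, content_check)
belief_T = fused.get(frozenset({"T"}), 0.0)       # mass committed exactly to "trustworthy"
print(f"belief(trustworthy) = {belief_T:.2f}")
```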

    Toward Secure Trust and Reputation Systems for Electronic Marketplaces

    In electronic marketplaces, buying and selling agents may be used to represent buyers and sellers respectively. When these marketplaces are large, repeated transactions between traders may be rare. This makes it difficult for buying agents to judge the reliability of selling agents, discouraging participation in the market. A variety of trust and reputation systems have been proposed to help traders to find trustworthy partners. Unfortunately, as our investigations reveal, there are a number of common vulnerabilities present in such models: security problems that may be exploited by 'attackers' to cheat without detection or repercussions.

    Inspired by these findings, we set out to develop a model of trust with more robust security properties than existing proposals. Our Trunits model represents a fundamental re-conception of the notion of trust. Instead of viewing trust as a measure of predictability, Trunits considers trust to be a quality that one possesses. Trust is represented using abstract trust units, or 'trunits', in much the same way that money represents quantities of value. Trunits flow in the course of transactions (again, similar to money); a trader's trunit balance determines if he is trustworthy for a given transaction. Faithful execution of a transaction results in a larger trunit balance, permitting the trader to engage in more transactions in the future: a built-in economic incentive for honesty. We present two mechanisms (sets of rules that govern the operation of the marketplace) based on this model: Basic Trunits, and an extension known as Commodity Trunits, in which trunits may be bought and sold.

    Seeking to precisely characterize the protection provided to market participants by our models, we develop a framework for security analysis of trust and reputation systems. Inspired by work in cryptography, our framework allows security guarantees to be developed for trust/reputation models: provable claims of the degree of protection provided, and the conditions under which such protection holds. We focus in particular on characterizing buyer security: the properties that must hold for buyers to feel secure from cheating sellers. Beyond developing security guarantees, this framework is an important research tool, helping to highlight limitations and deficiencies in models so that they may be targeted for future investigation. Application of this framework to Basic Trunits and Commodity Trunits reveals that both are able to deliver provable security to buyers.
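    The thesis's actual rules are not given in the abstract; the sketch below is a toy rendering of the general trunit idea, where a seller must stake trunits to transact and faithful execution returns the stake with a small gain. The threshold, escrow, and growth factor are my assumptions, not the Basic Trunits mechanism.

```python
# Toy sketch of a trunit ledger: balances gate participation, stakes are held
# in escrow during a transaction, and honest completion grows the balance.
class TrunitLedger:
    def __init__(self):
        self.balances = {}
        self.escrow = {}

    def register(self, seller, initial=10.0):
        self.balances[seller] = initial

    def begin_transaction(self, tx_id, seller, required):
        """Seller must stake 'required' trunits to be eligible for this sale."""
        if self.balances.get(seller, 0.0) < required:
            return False                          # not enough trust for this transaction
        self.balances[seller] -= required
        self.escrow[tx_id] = (seller, required)
        return True

    def settle(self, tx_id, faithful, gain=0.2):
        """Faithful execution returns the stake with a bonus; cheating forfeits it."""
        seller, staked = self.escrow.pop(tx_id)
        if faithful:
            self.balances[seller] += staked * (1.0 + gain)

ledger = TrunitLedger()
ledger.register("seller_a")
if ledger.begin_transaction("tx1", "seller_a", required=5.0):
    ledger.settle("tx1", faithful=True)
print(ledger.balances)   # honest behaviour grows the trunit balance
```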

    Trust management techniques for the Internet of Things: A survey

    A vision of the future Internet is one in which various computing devices are connected together to form a network called the Internet of Things (IoT). This network will generate massive amounts of data that may be leveraged for entertainment, security, and, most importantly, user trust. Yet trust remains an imperative obstacle that may hinder IoT growth and even delay the adoption of a number of applications. In this survey, an extensive analysis of trust management techniques, along with their pros and cons, is presented in different contexts. In comparison with other surveys, the goal is to provide a systematic description of the most relevant trust management techniques, helping researchers understand how various systems fit together to bring preferred functionalities without having to examine different standards. In addition, the lessons learned are presented, and views are argued regarding the primary role trust is likely to play in the future Internet. © 2018 IEEE.

    This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-01411, A Micro-Service IoTWare Framework Technology Development for Ultra small IoT Device).