
    Selfishness Level Induces Cooperation in Sequential Social Dilemmas

    A key contributor to the success of modern societies is humanity's innate ability to cooperate meaningfully. Modern game-theoretic reasoning shows, however, that an individual's amenability to cooperation is directly linked to the mechanics of the scenario at hand. Social dilemmas constitute a particularly thorny subset of such scenarios, typically modelled as normal-form or sequential games, in which players face a dichotomy between cooperating with teammates and defecting to further their own goals. In this work, we study such social dilemmas through the lens of the 'selfishness level', a standard game-theoretic metric which quantifies the extent to which a game's payoffs incentivize defective behaviours. The selfishness level is significant in this context because it doubles as a prescriptive notion, describing the exact payoff modifications necessary to induce prosocial preferences in players. Using this framework, we derive conditions, and means, under which normal-form social dilemmas can be resolved. We also take a first step towards extending this metric to Markov-game, or sequential, social dilemmas, with the aim of quantitatively measuring the degree to which such environments incentivize selfish behaviours. Finally, we present an exploratory empirical analysis showing the positive effects of a selfishness-level-directed reward-shaping scheme in such environments.
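
    For readers unfamiliar with the metric, the sketch below illustrates the standard definition due to Apt and Schäfer: in the modified game G(alpha), each player's payoff is increased by alpha times the social welfare, and the selfishness level is the least alpha at which a socially optimal outcome becomes a Nash equilibrium. The grid search and the Prisoner's Dilemma payoffs are illustrative choices, not the paper's implementation.

        # Minimal sketch of the 'selfishness level' (Apt & Schäfer): in G(alpha),
        # each player's payoff is p_i(s) + alpha * SW(s), where SW is the social
        # welfare (sum of payoffs). The selfishness level is the least alpha >= 0
        # for which some socially optimal outcome is a Nash equilibrium of G(alpha).
        import itertools

        def selfishness_level(payoffs, strategies, step=0.01, max_alpha=10.0):
            """payoffs: dict mapping a joint strategy profile to a payoff tuple."""
            profiles = list(itertools.product(*strategies))
            sw = {s: sum(payoffs[s]) for s in profiles}
            optima = [s for s in profiles if sw[s] == max(sw.values())]

            def is_nash(profile, alpha):
                for i, strat_set in enumerate(strategies):
                    base = payoffs[profile][i] + alpha * sw[profile]
                    for dev in strat_set:  # check every unilateral deviation
                        alt = profile[:i] + (dev,) + profile[i + 1:]
                        if payoffs[alt][i] + alpha * sw[alt] > base + 1e-9:
                            return False
                return True

            alpha = 0.0
            while alpha <= max_alpha:  # coarse grid search over alpha
                if any(is_nash(s, alpha) for s in optima):
                    return round(alpha, 2)
                alpha += step
            return None  # the selfishness level may be infinite

        # Prisoner's Dilemma: (C,C)=(2,2), (C,D)=(0,3), (D,C)=(3,0), (D,D)=(1,1)
        pd = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
              ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}
        print(selfishness_level(pd, [['C', 'D'], ['C', 'D']]))  # -> 1.0 here

    For these payoffs, mutual cooperation becomes a Nash equilibrium exactly when alpha reaches 1, which is the payoff modification a reward-shaping scheme would prescribe.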

    Contextual and Possibilistic Reasoning for Coalition Formation

    In multiagent systems, agents often have to rely on other agents to reach their goals, for example when they lack a needed resource or do not have the capability to perform a required action. Agents therefore need to cooperate. This raises several questions: Which agent(s) should one cooperate with? What are the potential coalitions in which agents can achieve their goals? As the number of possibilities is potentially quite large, how can the process be automated? And how should the most appropriate coalition be selected, taking into account the uncertainty in the agents' abilities to carry out certain tasks? In this article, we address the question of how to find and evaluate coalitions among agents in multiagent systems using Multi-Context Systems (MCS) tools, while taking into consideration the uncertainty around the agents' actions. Our methodology is the following: first, we compute the solution space for the formation of coalitions using a contextual reasoning approach. Second, we model agents as contexts in an MCS, and dependence relations among agents seeking to achieve their goals as bridge rules. Third, we systematically compute all potential coalitions using algorithms for MCS equilibria and, given a set of functional and non-functional requirements, we propose ways to select the best solutions. Finally, in order to handle the uncertainty in the agents' actions, we extend our approach with features of possibilistic reasoning. We illustrate our approach with an example from robotics.
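
    To make the coalition-search step concrete, here is a toy sketch in which agents declare goals and capabilities, dependence relations stand in for bridge rules, and subsets of agents are enumerated to find coalitions in which every member's goal is jointly achievable. The agents, goals, and capability model are invented for illustration and do not come from the paper's MCS machinery.

        # Toy coalition search: an agent depends on others whenever its goal
        # needs capabilities it does not have. A coalition is 'potential' if
        # its pooled capabilities cover every member's goal. All data invented.
        from itertools import combinations

        agents = {
            'r1': {'goal': 'move_crate', 'caps': {'grip'}},
            'r2': {'goal': 'scout_area', 'caps': {'drive', 'lift'}},
            'r3': {'goal': 'idle',       'caps': {'camera'}},
        }
        # goal -> capabilities the coalition must jointly provide
        requires = {'move_crate': {'grip', 'lift'},
                    'scout_area': {'drive', 'camera'},
                    'idle': set()}

        def potential_coalitions(agents):
            names = sorted(agents)
            for size in range(1, len(names) + 1):
                for coalition in combinations(names, size):
                    caps = set().union(*(agents[a]['caps'] for a in coalition))
                    if all(requires[agents[a]['goal']] <= caps for a in coalition):
                        yield coalition

        print(list(potential_coalitions(agents)))
        # -> [('r3',), ('r2', 'r3'), ('r1', 'r2', 'r3')]

    Selecting among the candidates (e.g. by coalition size or by possibilistic confidence in each agent's actions) would then correspond to the requirements-driven selection step the abstract describes.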

    Trust-Based Mechanisms for Robust and Efficient Task Allocation in the Presence of Execution Uncertainty

    Vickrey-Clarke-Groves (VCG) mechanisms are often used to allocate tasks to selfish and rational agents. VCG mechanisms are incentive-compatible, direct mechanisms that are efficient (i.e. maximise social utility) and individually rational (i.e. agents prefer to join rather than opt out). However, an important assumption of these mechanisms is that the agents will always successfully complete their allocated tasks. Clearly, this assumption is unrealistic in many real-world applications where agents can, and often do, fail in their endeavours. Moreover, whether an agent is deemed to have failed may be perceived differently by different agents. Such subjective perceptions about an agent's probability of succeeding at a given task are often captured and reasoned about using the notion of trust. Given this background, in this paper, we investigate the design of novel mechanisms that take into account the trust between agents when allocating tasks. Specifically, we develop a new class of mechanisms, called trust-based mechanisms, that can take into account multiple subjective measures of the probability of an agent succeeding at a given task and produce allocations that maximise social utility, whilst ensuring that no agent obtains a negative utility. We then show that such mechanisms pose a challenging new combinatorial optimisation problem (that is NP-complete), devise a novel representation for solving the problem, and develop an effective integer programming solution (that can solve instances with about 2×10⁵ possible allocations in 40 seconds).
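
    As a back-of-the-envelope illustration of the underlying optimisation problem (not the paper's mechanism), the sketch below fuses several subjective trust reports into a success probability per agent-task pair, then brute-forces the assignment with maximum expected social utility. All values are made up, and the paper solves this at scale with integer programming rather than enumeration.

        # Trust-aware allocation, toy version: average the subjective reports
        # of P(success), then pick the one-task-per-agent assignment that
        # maximises expected social utility. All numbers are illustrative.
        from itertools import permutations

        tasks = {'t1': 10.0, 't2': 6.0}               # value if completed
        costs = {('a1', 't1'): 2.0, ('a1', 't2'): 1.0,  # cost of attempting
                 ('a2', 't1'): 3.0, ('a2', 't2'): 0.5}
        # several agents' subjective reports of P(success), fused by averaging
        reports = {('a1', 't1'): [0.9, 0.8], ('a1', 't2'): [0.6, 0.7],
                   ('a2', 't1'): [0.5, 0.4], ('a2', 't2'): [0.95, 0.9]}
        p = {k: sum(v) / len(v) for k, v in reports.items()}

        def expected_social_utility(assignment):
            return sum(p[(a, t)] * tasks[t] - costs[(a, t)] for a, t in assignment)

        agents = ['a1', 'a2']
        best = max((tuple(zip(perm, tasks)) for perm in permutations(agents)),
                   key=expected_social_utility)
        print(best, expected_social_utility(best))
        # -> (('a1', 't1'), ('a2', 't2')) 11.55

    Enumerating permutations is exponential, which is why the NP-completeness result and the integer-programming formulation matter for realistic instance sizes.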

    A Multi-Agent Approach for Designing Next Generation of Air Traffic Systems

    This work was funded by the Spanish Ministry of Economy and Competitiveness under grant TEC2011-28626 C01-C02, and by the Government of Madrid under grant S2009/TIC-1485 (CONTEXTS).

    Cooperative transportation scheduling: an application domain for DAI

    A multiagent approach to the transportation domain is presented. We describe the MARS system, which models cooperative order scheduling within a society of shipping companies, and argue why Distributed Artificial Intelligence (DAI) offers suitable tools for the hard problems in this domain. We present three important instances of DAI techniques that proved useful in the transportation application: cooperation among the agents, task decomposition and task allocation, and decentralised planning. An extension of the contract net protocol for task decomposition and task allocation is presented; we show that it can be used to obtain good initial solutions for complex resource allocation problems. By introducing global information based upon auction protocols, this initial solution can be improved significantly. We demonstrate that the auction mechanism used for schedule optimisation can also be used to implement dynamic replanning. Experimental results evaluating the performance of different scheduling strategies are provided.
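
    The announce-bid-award cycle at the heart of the contract net protocol is easy to sketch; the version below is a minimal illustration with an invented fleet and a toy marginal-cost bidding rule, not the extended protocol or the auction-based optimisation developed in the paper.

        # Contract net in miniature: (1) announce a task to all contractors,
        # (2) collect cost bids (None = decline), (3) award to the cheapest.
        # Fleet, costs, and orders are illustrative placeholders.

        def contract_net(task, companies):
            bids = {}
            for company in companies:                 # 1. announce
                cost = company.estimate_cost(task)    # 2. bid
                if cost is not None:
                    bids[company] = cost
            if not bids:
                return None                           # no contractor available
            winner = min(bids, key=bids.get)          # 3. award
            winner.schedule(task)
            return winner

        class Truck:
            def __init__(self, name, capacity):
                self.name, self.capacity, self.load = name, capacity, []
            def estimate_cost(self, task):
                if len(self.load) >= self.capacity:
                    return None                       # full trucks decline
                return 1.0 + 0.5 * len(self.load)     # toy marginal cost
            def schedule(self, task):
                self.load.append(task)

        fleet = [Truck('T1', 2), Truck('T2', 1)]
        for order in ['o1', 'o2', 'o3']:
            winner = contract_net(order, fleet)
            print(order, '->', winner.name if winner else 'unallocated')

    Because each award is locally greedy, the resulting schedule is only an initial solution; this is exactly where the paper's auction-based exchange of global information comes in to improve it.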

    Towards Construction of Creative Collaborative Teams Using Multiagent Systems

    Group creativity and innovation are of chief importance for both collaborative learning and collaborative working, as increasing the efficiency and effectiveness of groups of individuals performing specific activities together to achieve common goals, in given contexts, is crucial nowadays. Nevertheless, constructing “the most” creative and innovative groups, given a cohort of people and a set of common goals and tasks to perform, is challenging. We present here our method for the semi-automatic construction of “the most” creative and innovative teams given a group of persons and a particular goal; it is based on unsupervised learning and supported by a multiagent system. Individual creativity and motivation, both factors influencing group creativity, are used in the experiments performed with our Computer Science students. The method is general, however, and can be used for building the most creative and innovative groups in any collaborative situation.
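
    One plausible reading of the unsupervised-learning step is sketched below: cluster individuals by their creativity and motivation scores, then compose teams by drawing one member from each cluster so that every team mixes different profiles. The scores, the choice of k-means, and the team-building rule are illustrative assumptions, not the paper's actual procedure.

        # Heterogeneous team formation via clustering: group people by
        # (creativity, motivation), then take one person per cluster per team.
        # All names and scores are invented for illustration.
        import numpy as np
        from sklearn.cluster import KMeans

        people = ['Ann', 'Bo', 'Cy', 'Dee', 'Ed', 'Flo']
        scores = np.array([[8, 3], [7, 4], [2, 9],      # (creativity, motivation)
                           [3, 8], [5, 5], [4, 6]], float)

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

        # one member from each cluster per team -> mixed-profile teams
        # (zip truncates to the smallest cluster if sizes are unequal)
        clusters = {c: [p for p, l in zip(people, labels) if l == c]
                    for c in set(labels)}
        teams = list(zip(*clusters.values()))
        print(teams)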

    Grounding Artificial Intelligence in the Origins of Human Behavior

    Recent advances in Artificial Intelligence (AI) have revived the quest for agents able to acquire an open-ended repertoire of skills. Although this ability is fundamentally related to the characteristics of human intelligence, research in this field rarely considers the processes that may have guided the emergence of complex cognitive capacities during the evolution of our species. Research in Human Behavioral Ecology (HBE) seeks to understand how the behaviors characterizing human nature can be conceived as adaptive responses to major changes in the structure of our ecological niche. In this paper, we propose a framework highlighting the role of environmental complexity in open-ended skill acquisition, grounded in major hypotheses from HBE and recent contributions in Reinforcement Learning (RL). We use this framework to highlight fundamental links between the two disciplines, identify feedback loops that bootstrap ecological complexity, and outline promising research directions for AI researchers.