Distributed Learning Dynamics for Coalitional Games

Abstract

In the framework of transferable utility coalitional games, a scoring (characteristic) function determines the value of any subset/coalition of agents. Agents decide both which coalitions to form and how to allocate the value of each formed coalition among its members. An important concept in coalitional games is that of a core solution: a partitioning of the agents into coalitions, together with an allocation to each agent, under which no group of agents can obtain a higher allocation by forming an alternative coalition. We present distributed learning dynamics for coalitional games that converge to a core solution whenever one exists. In these dynamics, each agent maintains a state consisting of (i) an aspiration level for its allocation and (ii) the coalition, if any, to which it belongs. In each stage, a randomly activated agent proposes to form a new coalition and adjusts its aspiration based on the success or failure of its proposal. The coalition membership structure changes accordingly whenever the proposal succeeds. The required communications are: (i) agents in the proposed new coalition reveal their current aspirations to the proposing agent, and (ii) agents are informed whether they are joining the proposed coalition or whether their existing coalition is broken. The proposing agent computes the feasibility of forming the coalition. We show that the dynamics hit an absorbing state whenever a core solution is reached. We further illustrate the distributed learning dynamics on a multi-agent task allocation setting.

Comment: 8 pages, 4 figures; accepted for CDC 202
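The verbal core definition above corresponds to the standard coalition-structure core condition. As a hedged formalization (not quoted from the paper), a partition \pi of the agent set N together with an allocation x is a core solution when:

```latex
\sum_{i \in S} x_i = v(S) \quad \text{for every coalition } S \in \pi,
\qquad
\sum_{i \in S} x_i \ge v(S) \quad \text{for every } S \subseteq N .
```

The abstract describes the dynamics only at a high level, so the following is a minimal Python sketch under explicit assumptions: the proposal rule (a uniformly random candidate coalition containing the proposer), the aspiration updates (the proposer absorbs any surplus on success and lowers its aspiration by a step eps on failure), and the singleton reset of broken coalitions are illustrative choices, not the paper's specified rules; the names learning_dynamics, v, and eps are hypothetical.

```python
import random


def learning_dynamics(agents, v, n_steps=100_000, eps=0.01, seed=0):
    """Hypothetical sketch of the dynamics described in the abstract.

    agents : list of hashable agent identifiers
    v      : characteristic function, v(frozenset of agents) -> float
    eps    : aspiration step size on failure (an assumed update rule;
             the abstract does not specify the exact adjustment)
    """
    rng = random.Random(seed)
    # Each agent's state: an aspiration level and its current coalition.
    aspiration = {i: v(frozenset([i])) for i in agents}
    coalition = {i: frozenset([i]) for i in agents}

    for _ in range(n_steps):
        proposer = rng.choice(agents)  # a randomly activated agent
        # Assumed proposal rule: a uniformly random subset containing the proposer.
        size = rng.randint(1, len(agents))
        members = frozenset(rng.sample(agents, size)) | {proposer}

        # Members reveal their aspirations; the proposer checks feasibility:
        # does v(S) cover the others' demands and at least its own aspiration?
        others = sum(aspiration[j] for j in members if j != proposer)
        slack = v(members) - others

        if slack >= aspiration[proposer]:
            # Success: break the existing coalitions of all new members
            # (remaining members revert to singletons -- an assumption),
            # then record the new coalition.
            for j in members:
                for m in coalition[j]:
                    coalition[m] = frozenset([m])
            for j in members:
                coalition[j] = members
            aspiration[proposer] = slack  # assumed: proposer absorbs the surplus
        else:
            # Failure: proposer lowers its aspiration (assumed update rule).
            aspiration[proposer] = max(v(frozenset([proposer])),
                                       aspiration[proposer] - eps)

    return coalition, aspiration
```

As a quick check, with three agents, a grand coalition worth 3, and every smaller coalition worth 1, one would expect such dynamics to settle on the grand coalition with aspirations near (1, 1, 1), which is a core allocation for that game.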
