
    Complexity of Determining Nonemptiness of the Core

    Coalition formation is a key problem in automated negotiation among self-interested agents, and in other multiagent applications. A coalition of agents can sometimes accomplish things that the individual agents cannot, or can accomplish them more efficiently. However, motivating the agents to abide by a solution requires careful analysis: only some of the solutions are stable in the sense that no group of agents is motivated to break off and form a new coalition. This constraint has been studied extensively in cooperative game theory, but the computational questions around it have received less attention. When it comes to coalition formation among software agents (that represent real-world parties), these questions become increasingly explicit. In this paper we define a concise general representation for games in characteristic form that relies on superadditivity, and show that it allows for efficient checking of whether a given outcome is in the core. We then show that determining whether the core is nonempty is NP-complete both with and without transferable utility. We demonstrate that what makes the problem hard in both cases is determining the collaborative possibilities (the set of outcomes possible for the grand coalition), by showing that if these are given, the problem becomes tractable in both cases. However, we then demonstrate that for a hybrid version of the problem, where utility transfer is possible only within the grand coalition, the problem remains NP-complete even when the collaborative possibilities are given.
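    The membership check the abstract mentions is simple to state once coalitional values are available: in the transferable-utility case, an outcome is in the core iff it is efficient and no coalition can profitably deviate. The sketch below illustrates that condition with an explicitly enumerated characteristic function; the names `values` and `payoff` are illustrative assumptions, and the paper's own representation is more concise, exploiting superadditivity rather than listing every coalition.

```python
from itertools import combinations

def in_core(values, payoff):
    """Check whether `payoff` is a core outcome of a TU game.

    values: dict mapping frozenset coalitions S to their worth v(S)
    payoff: dict mapping each agent i to its payoff x_i
    Core condition: x(N) = v(N) (efficiency) and x(S) >= v(S) for
    every coalition S (no blocking coalition).
    """
    agents = frozenset(payoff)
    # Efficiency: the grand coalition's worth is fully distributed.
    if abs(sum(payoff.values()) - values[agents]) > 1e-9:
        return False
    # Coalitional rationality: no coalition gains by deviating.
    for r in range(1, len(agents) + 1):
        for coalition in combinations(sorted(agents), r):
            s = frozenset(coalition)
            if sum(payoff[i] for i in s) < values.get(s, 0.0) - 1e-9:
                return False
    return True

# Toy 3-agent game: any pair earns 1, the grand coalition earns 1.5.
v = {frozenset(s): 1.0 for s in combinations("abc", 2)}
v[frozenset("abc")] = 1.5
print(in_core(v, {"a": 0.5, "b": 0.5, "c": 0.5}))  # True
```

    Enumerating all 2^n coalitions is of course exponential; the paper's hardness results concern the harder question of whether any such payoff vector exists at all.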

    Definition and Complexity of Some Basic Metareasoning Problems

    In most real-world settings, due to limited time or other resources, an agent cannot perform all potentially useful deliberation and information-gathering actions. This leads to the metareasoning problem of selecting such actions. Decision-theoretic methods for metareasoning have been studied in AI, but there are few theoretical results on the complexity of metareasoning. We derive hardness results for three settings that most real metareasoning systems would have to encompass as special cases. In the first, the agent has to decide how to allocate its deliberation time across anytime algorithms running on different problem instances. We show this to be NP-complete. In the second, the agent has to (dynamically) allocate its deliberation or information-gathering resources across multiple actions that it has to choose among. We show this to be NP-hard even when evaluating each individual action is extremely simple. In the third, the agent has to (dynamically) choose a limited number of deliberation or information-gathering actions to disambiguate the state of the world. We show that this is NP-hard under a natural restriction, and PSPACE-hard in general.
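    As a rough illustration of the first setting, the sketch below splits a discrete deliberation budget across anytime algorithms described by performance profiles, using exhaustive dynamic programming. The profiles and the unit-time discretization are illustrative assumptions; the paper's NP-completeness result is for a more general version of this allocation problem, so no efficient algorithm is expected in general.

```python
def allocate_deliberation(profiles, budget):
    """Split a discrete deliberation budget across anytime algorithms.

    profiles[i][t] = solution quality on instance i after t time units
    (profiles[i][0] is the quality with no deliberation).
    Returns (best total quality, time given to each instance).
    """
    best = {0: (0.0, [])}          # budget used -> (quality, allocation)
    for profile in profiles:
        nxt = {}
        for used, (q, alloc) in best.items():
            for t in range(len(profile)):
                if used + t > budget:
                    break
                cand = (q + profile[t], alloc + [t])
                if used + t not in nxt or cand[0] > nxt[used + t][0]:
                    nxt[used + t] = cand
        best = nxt
    return max(best.values())

# Two instances, 3 time units, diminishing returns on each profile.
profiles = [[0.0, 0.6, 0.8, 0.9], [0.0, 0.5, 0.9, 1.0]]
print(allocate_deliberation(profiles, 3))  # (1.5, [1, 2])
```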

    AWESOME: A General Multiagent Learning Algorithm that Converges in Self-Play and Learns a Best Response Against Stationary Opponents

    A satisfactory multiagent learning algorithm should, at a minimum, learn to play optimally against stationary opponents and converge to a Nash equilibrium in self-play. The algorithm that has come closest, WoLF-IGA, has been proven to have these two properties in 2-player 2-action repeated games, assuming that the opponent's (mixed) strategy is observable. In this paper we present AWESOME, the first algorithm that is guaranteed to have these two properties in all repeated (finite) games. It requires only that the other players' actual actions (not their strategies) can be observed at each step. It also learns to play optimally against opponents that eventually become stationary. The basic idea behind AWESOME (Adapt When Everybody is Stationary, Otherwise Move to Equilibrium) is to try to adapt to the others' strategies when they appear stationary, but otherwise to retreat to a precomputed equilibrium strategy. The techniques used to prove the properties of AWESOME are fundamentally different from those used for previous algorithms, and may also help in analyzing other multiagent learning algorithms.
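    AWESOME's two-mode control flow can be sketched as follows. This is only an illustrative skeleton under simplifying assumptions: a single opponent, a fixed epoch length and stationarity tolerance, and user-supplied `equilibrium` and `best_response` inputs. The real algorithm's carefully scheduled epochs and shrinking tolerances are exactly what make its convergence guarantees go through, and are not captured here.

```python
import random
from collections import Counter

class AwesomeSketch:
    """Illustrative skeleton of AWESOME's two modes, not the full algorithm."""

    def __init__(self, actions, equilibrium, best_response, epoch=100, tol=0.1):
        self.actions = actions              # this agent's action set
        self.eq = equilibrium               # precomputed equilibrium strategy (weights)
        self.best_response = best_response  # maps an empirical distribution to an action
        self.epoch, self.tol = epoch, tol
        self.history = []                   # opponent's actions in the current epoch
        self.prev_dist = None
        self.playing_eq = True              # start by playing the equilibrium

    def act(self):
        if self.playing_eq:
            # "Otherwise Move to Equilibrium": sample the precomputed strategy.
            return random.choices(self.actions, weights=self.eq)[0]
        # "Adapt When Everybody is Stationary": best-respond to observed play.
        return self.best_response(self._empirical())

    def observe(self, opponent_action):
        self.history.append(opponent_action)
        if len(self.history) < self.epoch:
            return
        dist = self._empirical()
        if self.prev_dist is not None:
            keys = set(dist) | set(self.prev_dist)
            drift = max(abs(dist.get(a, 0.0) - self.prev_dist.get(a, 0.0))
                        for a in keys)
            # Small drift suggests a stationary opponent: adapt. Large
            # drift suggests the opponent moved: retreat to equilibrium.
            self.playing_eq = drift > self.tol
        self.prev_dist, self.history = dist, []

    def _empirical(self):
        counts = Counter(self.history)
        n = max(len(self.history), 1)
        return {a: c / n for a, c in counts.items()}
```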

    Imperfect-Recall Abstractions with Bounds in Games

    Imperfect-recall abstraction has emerged as the leading paradigm for practical large-scale equilibrium computation in incomplete-information games. However, imperfect-recall abstractions are poorly understood, and only weak algorithm-specific guarantees on solution quality are known. In this paper, we show the first general, algorithm-agnostic, solution quality guarantees for Nash equilibria and approximate self-trembling equilibria computed in imperfect-recall abstractions, when implemented in the original (perfect-recall) game. Our results are for a class of games that generalizes the only previously known class of imperfect-recall abstractions where any results had been obtained. Further, our analysis is tighter in two ways, each of which can lead to an exponential reduction in the solution quality error bound. We then show that for extensive-form games that satisfy certain properties, the problem of computing a bound-minimizing abstraction for a single level of the game reduces to a clustering problem, where the increase in our bound is the distance function. This reduction leads to the first imperfect-recall abstraction algorithm with solution quality bounds. We proceed to show a divide in the class of abstraction problems: if payoffs are at the same scale at all information sets considered for abstraction, the input forms a metric space; conversely, if this condition is not satisfied, we show that the input does not form a metric space. Finally, we use these results to experimentally investigate the quality of our bound for single-level abstraction.
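    The metric-space condition in that divide is concrete: when payoffs share a scale across the information sets being abstracted, the bound-increase distance satisfies the metric axioms, which is what makes standard approximation algorithms for clustering applicable. A minimal checker for those axioms over a hypothetical precomputed distance matrix `d` (not the paper's actual distance computation) might look like:

```python
def is_metric(d, tol=1e-9):
    """Check the metric axioms on an n x n distance matrix:
    zero self-distance, non-negativity, symmetry, and the
    triangle inequality d[i][j] <= d[i][k] + d[k][j]."""
    n = len(d)
    for i in range(n):
        if abs(d[i][i]) > tol:
            return False
        for j in range(n):
            if d[i][j] < -tol or abs(d[i][j] - d[j][i]) > tol:
                return False
            for k in range(n):
                if d[i][j] > d[i][k] + d[k][j] + tol:
                    return False
    return True
```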

    BL-WoLF: A Framework For Loss-Bounded Learnability In Zero-Sum Games

    We present BL-WoLF, a framework for learnability in repeated zero-sum games where the cost of learning is measured by the losses the learning agent accrues (rather than the number of rounds). The game is adversarially chosen from some family that the learner knows. The opponent knows the game and the learner's learning strategy. The learner tries either not to accrue losses, or to quickly learn about the game so as to avoid future losses (this is consistent with the Win or Learn Fast (WoLF) principle; BL stands for "bounded loss"). Our framework allows for both probabilistic and approximate learning. The resulting notion of BL-WoLF-learnability can be applied to any class of games, and allows us to measure the inherent disadvantage to a player that does not know which game in the class it is in. We present guaranteed BL-WoLF-learnability results for families of games with deterministic payoffs and families of games with stochastic payoffs. We demonstrate that these families are guaranteed approximately BL-WoLF-learnable with lower cost. We then demonstrate families of games (both stochastic and deterministic) that are not guaranteed BL-WoLF-learnable, but show that those families are, nevertheless, BL-WoLF-learnable. To prove these results, we derive and use a key lemma.
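    To make the loss-based cost measure concrete, consider a toy family of deterministic-payoff games (a hypothetical example, not one from the paper): exactly one of n actions is safe, every other action loses 1 per round, and the adversary picks which action is safe. A learner that eliminates one action per losing round accrues at most n - 1 total loss, so this family would be guaranteed BL-WoLF-learnable with bound n - 1:

```python
def worst_case_loss(n_actions):
    """Worst-case cumulative loss of a trivial elimination learner on a
    hypothetical family of deterministic zero-sum games: one hidden
    safe action (loss 0 per round), all others lose 1 per round."""
    def learner_loss(safe_action):
        loss = 0
        for a in range(n_actions):   # try actions in a fixed order
            if a == safe_action:
                return loss          # game learned; no further loss accrues
            loss += 1                # one losing round reveals a is bad
        return loss
    # The adversary chooses the game (the safe action) to maximize loss.
    return max(learner_loss(s) for s in range(n_actions))

print(worst_case_loss(5))  # 4
```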