
    The Complexity of Planning Revisited - A Parameterized Analysis

    The early classifications of the computational complexity of planning under various restrictions in STRIPS (Bylander) and SAS+ (Baeckstroem and Nebel) have influenced subsequent research in planning in many ways. We go back and reanalyse their subclasses, but this time using the more modern tool of parameterized complexity analysis. This provides new results that, together with the old ones, give a more detailed picture of the complexity landscape. We demonstrate separation results that are not possible with standard complexity theory, which helps explain why certain cases of planning have seemed simpler in practice than theory predicted. In particular, we show that certain restrictions of practical interest are tractable in the parameterized sense, and that a simple heuristic suffices to make a well-known partial-order planner exploit this fact. (Comment: author's self-archived copy.)
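
    As background for the abstract above (a standard definition, not a claim from the paper): a problem with input size n and parameter k is fixed-parameter tractable (FPT), i.e. tractable "in the parameterized sense", when it admits an algorithm running in time

        \[
          f(k) \cdot n^{O(1)}
        \]

    for some computable function f that depends only on the parameter k, so the combinatorial explosion is confined to the parameter rather than the whole input.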

    The power of Sherali-Adams relaxations for general-valued CSPs

    We give a precise algebraic characterisation of the power of Sherali-Adams relaxations for solvability of valued constraint satisfaction problems (VCSPs) to optimality. The condition is that of bounded width, which has already been shown to capture the power of local consistency methods for decision CSPs and the power of semidefinite programming for robust approximation of CSPs. Our characterisation has several algorithmic and complexity consequences. On the algorithmic side, we show that several novel and many known valued constraint languages are tractable via the third level of the Sherali-Adams relaxation. For the known languages, this is a significantly simpler algorithm than the previously obtained ones. On the complexity side, we obtain a dichotomy theorem for valued constraint languages that can express an injective unary function. This implies a simple proof of the dichotomy theorem for conservative valued constraint languages established by Kolmogorov and Zivny [JACM'13], and also a dichotomy theorem for the exact solvability of Minimum-Solution problems, which generalise Minimum-Ones problems to arbitrary finite domains. Our result improves on several previous classifications by Khanna et al. [SICOMP'00], Jonsson et al. [SICOMP'08], and Uppman [ICALP'13]. (Comment: full version of an ICALP'15 paper, arXiv:1502.05301.)
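
    As background (the notation here is mine, not the paper's): for a VCSP instance with variables V, finite domain D, and valued constraints φ with scopes X_φ of arity at most k, the level-k Sherali-Adams relaxation is the linear program over local distributions λ_X, one for each X ⊆ V with |X| ≤ k:

        \begin{align*}
          \min\ & \sum_{\phi} \sum_{s \in D^{X_\phi}} \lambda_{X_\phi}(s)\,\phi(s) \\
          \text{s.t.}\ & \textstyle\sum_{t \in D^{Y},\; t|_{X} = s} \lambda_Y(t) = \lambda_X(s)
            & \text{for all } X \subseteq Y \subseteq V,\ |Y| \le k,\ s \in D^{X}, \\
          & \textstyle\sum_{s \in D^{X}} \lambda_X(s) = 1, \quad \lambda_X(s) \ge 0
            & \text{for all } X \subseteq V,\ |X| \le k.
        \end{align*}

    The algorithmic results in the abstract concern the third level (k = 3) of this hierarchy.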

    Consistent Probabilistic Social Choice

    Two fundamental axioms in social choice theory are consistency with respect to a variable electorate and consistency with respect to components of similar alternatives. In the context of traditional non-probabilistic social choice, these axioms are incompatible with each other. We show that in the context of probabilistic social choice, these axioms uniquely characterize a function proposed by Fishburn (Rev. Econ. Stud., 51(4), 683--692, 1984). Fishburn's function returns so-called maximal lotteries, i.e., lotteries that correspond to optimal mixed strategies of the underlying plurality game. Maximal lotteries are guaranteed to exist due to von Neumann's Minimax Theorem, are almost always unique, and can be efficiently computed using linear programming.
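
    To make the last sentence concrete, here is a minimal sketch of computing a maximal lottery by linear programming. The margin-matrix encoding and the scipy-based feasibility formulation are my illustrative choices, not code from the paper:

        import numpy as np
        from scipy.optimize import linprog

        def maximal_lottery(margins: np.ndarray) -> np.ndarray:
            """Compute a maximal lottery for a skew-symmetric margin matrix.

            margins[i, j] = (#voters preferring i to j) - (#preferring j to i).
            A maximal lottery is a probability vector p with p @ margins >= 0
            componentwise: an optimal mixed strategy of the symmetric zero-sum
            "plurality game" whose payoff matrix is `margins`.
            """
            m = len(margins)
            # Feasibility LP (zero objective). By skew-symmetry,
            # p @ margins >= 0 is equivalent to margins @ p <= 0.
            res = linprog(
                c=np.zeros(m),
                A_ub=margins,          # margins @ p <= 0
                b_ub=np.zeros(m),
                A_eq=np.ones((1, m)),  # p is a probability distribution
                b_eq=np.array([1.0]),
                bounds=[(0, None)] * m,
            )
            return res.x

        # Example: Condorcet cycle a > b > c > a with unit margins; the
        # unique maximal lottery is the uniform lottery (1/3, 1/3, 1/3).
        M = np.array([[ 0.0,  1.0, -1.0],
                      [-1.0,  0.0,  1.0],
                      [ 1.0, -1.0,  0.0]])
        print(maximal_lottery(M))  # ~ [0.333, 0.333, 0.333]

    Existence of a feasible p here is exactly von Neumann's Minimax Theorem applied to the symmetric game, matching the existence claim in the abstract.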

    Complexity Theory, Game Theory, and Economics: The Barbados Lectures

    This document collects the lecture notes from my mini-course "Complexity Theory, Game Theory, and Economics," taught at the Bellairs Research Institute of McGill University, Holetown, Barbados, February 19--23, 2017, as the 29th McGill Invitational Workshop on Computational Complexity. The goal of this mini-course is twofold: (i) to explain how complexity theory has helped illuminate several barriers in economics and game theory; and (ii) to illustrate how game-theoretic questions have led to new and interesting complexity theory, including several recent breakthroughs. It consists of two five-lecture sequences: the Solar Lectures, focusing on the communication and computational complexity of computing equilibria; and the Lunar Lectures, focusing on applications of complexity theory in game theory and economics. No background in game theory is assumed. (Comment: revised v2 from December 2019 corrects some errors in, and adds some recent citations to, v1; revised v3 corrects a few typos in v2.)

    Analysis of Linkage-Friendly Genetic Algorithms

    Evolutionary algorithms (EAs) are stochastic population-based algorithms inspired by the natural processes of selection, mutation, and recombination. EAs are often employed as optimum-seeking techniques. A formal framework for EAs is proposed, in which evolutionary operators are viewed as mappings from parameter spaces to spaces of random functions. Formal definitions within this framework capture the distinguishing characteristics of the classes of recombination, mutation, and selection operators. EAs which use strictly invariant selection operators and order invariant representation schemes comprise the class of linkage-friendly genetic algorithms (lfGAs). Fast messy genetic algorithms (fmGAs) are lfGAs which use binary tournament selection (BTS) with thresholding, periodic filtering of a fixed number of randomly selected genes from each individual, and generalized single-point crossover. Probabilistic variants of thresholding and filtering are proposed; EAs using the probabilistic operators are generalized fmGAs (gfmGAs). A dynamical systems model of lfGAs is developed which permits prediction of expected effectiveness. BTS with probabilistic thresholding is modeled at various levels of abstraction as a Markov chain. Transitions at the most detailed level involve decisions between classes of individuals. The probability of correct decision making is related to appropriate maximal order statistics, the distributions of which are obtained. Existing filtering models are extended to include probabilistic individual lengths. Sensitivity of lfGA effectiveness to exogenous parameters limits practical applications. The lfGA parameter selection problem is formally posed as a constrained optimization problem in which the cost functional is related to expected effectiveness. Kuhn-Tucker conditions for the optimality of gfmGA parameters are derived.
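
    The selection operator named above can be sketched briefly. This is a minimal illustration of binary tournament selection with a sharing threshold in the messy-GA style; the partial-string representation, parameter names, and fallback rule are my assumptions, not the dissertation's code:

        import random

        def binary_tournament_with_thresholding(population, fitness, theta,
                                                max_tries=10):
            """One selection event under binary tournament selection (BTS)
            with thresholding.

            Each individual is a dict mapping locus -> allele (a partial
            string, messy-GA style). Two individuals compete only if they
            specify at least `theta` loci in common; otherwise we keep
            sampling, and after `max_tries` the first pick wins by default
            (an illustrative fallback, not prescribed by the model).
            """
            first = random.choice(population)
            for _ in range(max_tries):
                second = random.choice(population)
                shared = len(first.keys() & second.keys())
                if shared >= theta:
                    # Compete: the fitter of the two is selected.
                    return first if fitness(first) >= fitness(second) else second
            return first

    Restricting competition to pairs with sufficient overlap avoids comparing partial strings that share too little information, which is the role thresholding plays in the fmGA.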