
    Fast Algorithms for Energy Games in Special Cases

    In this paper, we study algorithms for special cases of energy games, a class of turn-based games on graphs that show up in the quantitative analysis of reactive systems. In an energy game, the vertices of a weighted directed graph belong either to Alice or to Bob. A token is moved to a next vertex by the player controlling its current location, and its energy is changed by the weight of the edge. Given a fixed starting vertex and initial energy, Alice wins the game if the energy of the token remains nonnegative at every moment; if the energy goes below zero at some point, then Bob wins. The problem of determining the winner in an energy game lies in $\mathsf{NP} \cap \mathsf{coNP}$. It is a long-standing open problem whether a polynomial-time algorithm for this problem exists. We devise new algorithms for three special cases of the problem. The first two results focus on the single-player version, where either Alice or Bob controls the whole game graph. We develop an $\tilde{O}(n^\omega W^\omega)$ time algorithm for a game graph controlled by Alice, by providing a reduction to the All-Pairs Nonnegative Prefix Paths problem (APNP), where $W$ is the maximum weight and $\omega$ is the best exponent for matrix multiplication. We therefore study the APNP problem separately, for which we develop an $\tilde{O}(n^\omega W^\omega)$ time algorithm. For both problems, we improve over the state of the art of $\tilde{O}(mn)$ for small $W$. For the APNP problem, we also provide a conditional lower bound: there is no $O(n^{3-\epsilon})$ time algorithm for any $\epsilon > 0$ unless the APSP Hypothesis fails. For a game graph controlled by Bob, we obtain a near-linear time algorithm. As our third result, we present a variant of the value iteration algorithm, and we prove that it gives an $O(mn)$ time algorithm for game graphs without negative cycles.
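
    To make the rules above concrete, here is a minimal Python sketch of the classic value iteration (progress measure) algorithm for energy games; it computes, for every vertex, the least initial energy with which Alice wins. The graph encoding is hypothetical, and this naive fixed-point computation is the standard pseudo-polynomial baseline, not the paper's improved variants.

        def minimal_credits(vertices, edges, owner, W):
            """vertices: list of vertex ids
            edges:  dict v -> nonempty list of (successor, weight) pairs
            owner:  dict v -> 'Alice' or 'Bob'
            W:      maximum absolute edge weight
            Returns a dict v -> least sufficient initial energy, where the
            sentinel n*W + 1 means Alice loses from v for every finite
            initial energy."""
            n = len(vertices)
            TOP = n * W + 1  # no finite initial credit suffices

            def need(f, u, w):
                # Credit needed before taking an edge of weight w into u,
                # given that credit f[u] is needed at u.
                if f[u] >= TOP:
                    return TOP
                c = f[u] - w
                return 0 if c <= 0 else (c if c <= n * W else TOP)

            f = {v: 0 for v in vertices}
            changed = True
            while changed:  # Kleene iteration to the least fixed point
                changed = False
                for v in vertices:
                    options = [need(f, u, w) for (u, w) in edges[v]]
                    # Alice minimizes the required credit, Bob maximizes it.
                    best = min(options) if owner[v] == 'Alice' else max(options)
                    if best > f[v]:
                        f[v] = best
                        changed = True
            return f

        # Tiny example: Alice can loop forever on a +1 self-loop at vertex 0,
        # while Bob's vertex 1 forces a -2 edge, so 2 units of energy are
        # needed there: {0: 0, 1: 2}.
        print(minimal_credits(
            vertices=[0, 1],
            edges={0: [(0, 1), (1, 0)], 1: [(0, -2)]},
            owner={0: 'Alice', 1: 'Bob'},
            W=2,
        ))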

    Synchronization and Control of Quantitative Systems


    Novel Hedonic Games and Stability Notions

    We present work on matching problems, namely hedonic games, also known as coalition formation games. We introduce two classes of hedonic games, Super Altruistic Hedonic Games (SAHGs) and Anchored Team Formation Games (ATFGs), and investigate the computational complexity of finding optimal partitions of agents into coalitions, and of finding, or determining the existence of, stable coalition structures. We introduce a new stability notion for hedonic games and examine its relation to core and Nash stability for several classes of hedonic games.
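
    As a hedged illustration of one stability notion discussed above, the sketch below checks Nash stability for an additively separable hedonic game; the valuation scheme is a generic stand-in (the SAHG and ATFG utilities are not reproduced here), and all names are illustrative.

        def utility(agent, coalition, value):
            """Additively separable utility: the sum of the agent's values
            for the other members of its coalition."""
            return sum(value[agent][other] for other in coalition if other != agent)

        def is_nash_stable(partition, agents, value):
            """A partition is Nash stable iff no agent strictly prefers
            joining another coalition of the partition or going alone."""
            home = {a: coalition for coalition in partition for a in coalition}
            for a in agents:
                current = utility(a, home[a], value)
                if current < 0:  # deviating to a singleton yields utility 0
                    return False
                for coalition in partition:
                    if coalition is home[a]:
                        continue
                    if utility(a, coalition | {a}, value) > current:
                        return False
            return True

        # Two agents who value each other positively are Nash stable together.
        agents = ['a', 'b']
        value = {'a': {'b': 1}, 'b': {'a': 1}}
        print(is_nash_stable([{'a', 'b'}], agents, value))  # True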

    The Impatient May Use Limited Optimism to Minimize Regret

    Discounted-sum games provide a formal model for the study of reinforcement learning, where the agent is enticed to get rewards early, since later rewards are discounted. When the agent interacts with the environment, she may regret her actions, realizing that a previous choice was suboptimal given the behavior of the environment. The main contribution of this paper is a PSPACE algorithm for computing the minimum possible regret of a given game. To this end, several results of independent interest are shown. (1) We identify a class of regret-minimizing and admissible strategies that first assume that the environment is collaborating, then assume it is adversarial; the precise timing of the switch is key here. (2) Disregarding the computational cost of numerical analysis, we provide an NP algorithm that checks that the regret entailed by a given time-switching strategy exceeds a given value. (3) We show that determining whether a strategy minimizes regret is decidable in PSPACE.
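
    To pin the regret measure down on a concrete object, here is a brute-force illustration on a finite discounted game tree. The tree encoding is hypothetical and the exhaustive enumeration purely didactic: the paper handles infinite-duration games with a PSPACE algorithm, not this exponential search.

        from itertools import product

        # A node is ('A' or 'E', [(reward, subtree), ...]) with None as a
        # leaf; 'A' nodes belong to the agent, 'E' to the environment.

        def choice_points(tree, kind):
            if tree is None:
                return []
            who, succs = tree
            found = [(id(tree), len(succs))] if who == kind else []
            for _, sub in succs:
                found += choice_points(sub, kind)
            return found

        def strategies(tree, kind):
            """All strategies of one player, as maps node -> chosen branch
            (in a tree, per-node choices capture all strategies)."""
            points = choice_points(tree, kind)
            ids = [i for i, _ in points]
            for picks in product(*(range(k) for _, k in points)):
                yield dict(zip(ids, picks))

        def value(tree, sigma, tau, lam, step=0):
            """Discounted sum of rewards on the unique play of (sigma, tau)."""
            if tree is None:
                return 0.0
            who, succs = tree
            reward, sub = succs[(sigma if who == 'A' else tau)[id(tree)]]
            return lam ** step * reward + value(sub, sigma, tau, lam, step + 1)

        def regret(tree, sigma, lam):
            """Worst case, over environment behaviors, of how much better
            the best response to that behavior does than sigma."""
            return max(
                max(value(tree, alt, tau, lam) for alt in strategies(tree, 'A'))
                - value(tree, sigma, tau, lam)
                for tau in strategies(tree, 'E'))

        def min_regret(tree, lam):
            return min(regret(tree, s, lam) for s in strategies(tree, 'A'))

        # Safe branch: reward 1 now. Risky branch: reward 0 now, then the
        # environment grants 3 or 0. With lam = 0.5 the safe choice has
        # regret 0.5 and the risky one regret 1, so the minimum is 0.5.
        risky = ('E', [(3, None), (0, None)])
        game = ('A', [(1, None), (0, risky)])
        print(min_regret(game, lam=0.5))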

    PPP-Completeness with Connections to Cryptography

    The Polynomial Pigeonhole Principle class (PPP) is an important subclass of TFNP with profound connections to the complexity of fundamental cryptographic primitives: collision-resistant hash functions and one-way permutations. In contrast to most of the other subclasses of TFNP, no complete problem was known for PPP. Our work identifies the first PPP-complete problem without any circuit or Turing machine given explicitly in the input, and thus we answer a longstanding open question from [Papadimitriou1994]. Specifically, we show that constrained-SIS (cSIS), a generalized version of the well-known Short Integer Solution problem (SIS) from lattice-based cryptography, is PPP-complete. In order to give intuition behind our reduction for constrained-SIS, we identify another PPP-complete problem with a circuit in the input but closely related to lattice problems. We call this problem BLICHFELDT; it is the computational problem associated with Blichfeldt's fundamental theorem in the theory of lattices. Building on the inherent connection of PPP with collision-resistant hash functions, we use our completeness result to construct the first natural hash function family that captures the hardness of all collision-resistant hash functions in a worst-case sense, i.e. it is natural and universal in the worst case. The close resemblance of our hash function family to SIS leads us to the first candidate collision-resistant hash function that is both natural and universal in an average-case sense. Finally, our results enrich our understanding of the connections between PPP, lattice problems and other concrete cryptographic assumptions, such as the discrete logarithm problem over general groups.
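
    For intuition about PPP itself, here is a small sketch of PIGEON, the canonical complete problem defining the class, with a plain Python function standing in for the input circuit (the paper's point is precisely to find complete problems that need no circuit in the input). The brute-force search is exponential in n.

        def pigeon(f, n):
            """PIGEON: given f from {0, ..., 2**n - 1} to itself, return an
            x with f(x) = 0 or a collision x != y with f(x) = f(y). The
            pigeonhole principle guarantees a solution exists, so this is a
            total search problem; brute force takes time 2**n."""
            seen = {}
            for x in range(2 ** n):
                fx = f(x)
                if fx == 0:
                    return ('zero-preimage', x)
                if fx in seen:
                    return ('collision', seen[fx], x)
                seen[fx] = x
            raise AssertionError("unreachable, by the pigeonhole principle")

        # Toy instance: x -> x + 1 (mod 8) has 7 as a preimage of zero.
        print(pigeon(lambda x: (x + 1) % 8, 3))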

    Optimality and resilience in parity games

    Modeling reactive systems as infinite games has yielded a multitude of results in the fields of program verification and program synthesis. The canonical parity condition, however, suffices neither to express non-functional requirements on the modeled system nor to capture malfunctions of the deployed system. We address these issues by investigating quantitative games in which the above characteristics can be expressed. Parity games with costs are a variant of parity games in which traversing an edge incurs some nonnegative cost. The cost of a play is the limit superior of the cost incurred between answering odd colors by larger even ones. We extend this model by allowing integer costs, obtaining parity games with weights, and show that the problem of solving such games is in the intersection of NP and coNP, and that it is PTIME-equivalent to the problem of solving energy parity games. We moreover show that Player 0 requires exponential memory to implement a winning strategy in parity games with weights. Further, we show that the problem of determining whether Player 0 can keep the cost of a play below a given bound is EXPTIME-complete for parity games with weights and PSPACE-complete for the special cases of parity games with costs and finitary parity games, i.e., it is harder than solving the game. Thus, optimality comes at a price even in finitary parity games. We further determine the complexity of computing strategies in parity games that are resilient against malfunctions. We show that such strategies can be effectively computed and that this is as hard as solving the game without disturbances. Finally, we combine all these aspects and show that Player 0 can trade memory, cost, and resilience for one another. Furthermore, we show how to compute the possible tradeoffs for a given game.
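
    As a hedged illustration of the cost-of-a-play definition above, the helper below evaluates a finite play prefix of a parity game with costs (colors of the visited vertices plus nonnegative edge costs); on infinite plays, the value of interest is the limit superior of these request costs.

        def request_costs(colors, costs):
            """colors[i] is the color of the i-th vertex of the prefix;
            costs[i] is the cost of the edge from position i to i + 1,
            so len(costs) == len(colors) - 1. Returns, for each odd color
            (a 'request'), the cost paid until the first larger even color
            (its 'answer'), or None if the request stays unanswered."""
            result = []
            for i, c in enumerate(colors):
                if c % 2 == 1:
                    paid = 0
                    for j in range(i, len(costs)):
                        paid += costs[j]
                        if colors[j + 1] % 2 == 0 and colors[j + 1] > c:
                            result.append(paid)
                            break
                    else:
                        result.append(None)
            return result

        # The request '1' at position 0 is answered by the '2' two edges
        # later at cost 1 + 2 = 3; the final request '3' stays open.
        print(request_costs([1, 0, 2, 3], [1, 2, 4]))  # [3, None]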

    The complexity of joint computation

    Thesis (Ph.D.) by Andrew Donald Drucker, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 253-266).

    Joint computation is the ubiquitous scenario in which a computer is presented with not one, but many computational tasks to perform. A fundamental question arises: when can we cleverly combine computations, to perform them with greater efficiency or reliability than by tackling them separately? This thesis investigates the power and, especially, the limits of efficient joint computation in several computational models: query algorithms, circuits, and Turing machines. We significantly improve and extend past results on limits to efficient joint computation for multiple independent tasks; identify barriers to progress towards better circuit lower bounds for multiple-output operators; and begin an original line of inquiry into the complexity of joint computation. In more detail, we make contributions in the following areas.

    Improved direct product theorems for randomized query complexity: The "direct product problem" seeks to understand how the difficulty of computing a function on each of $k$ independent inputs scales with $k$. We prove the following direct product theorem (DPT) for query complexity: if every $T$-query algorithm has success probability at most $1 - \epsilon$ in computing the Boolean function $f$ on input distribution $\mu$, then for sufficiently small $\alpha > 0$, the worst-case success probability of any $\alpha R_2(f) k$-query randomized algorithm for $f^{\otimes k}$ falls exponentially with $k$. The best previous statement of this type, due to Klauck, Spalek, and de Wolf, required a query bound of $O(\mathrm{bs}(f) k)$. Our proof technique involves defining and analyzing a collection of martingales associated with an algorithm attempting to solve $f^{\otimes k}$. Our method is quite general and yields a new XOR lemma and threshold DPT for the query model, as well as DPTs for the query complexity of learning tasks, search problems, and tasks involving interaction with dynamic entities. We also give a version of our DPT in which decision tree size is the resource of interest.
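
    As a baseline for the shape of the statement above (not the thesis's technique), independent repetition already shows exponential decay: an algorithm that succeeds with probability $1 - \epsilon$ on one instance solves all $k$ instances independently with probability $(1 - \epsilon)^k$. The DPT is much stronger, since it bounds every algorithm whose total query budget is a constant fraction of $R_2(f)$ per instance, ruling out clever sharing of queries across the $k$ inputs.

        # Numeric illustration: exponential decay of the success probability
        # of independent repetition, the baseline that a DPT strengthens.
        eps = 0.1
        for k in (1, 10, 50, 100):
            print(k, (1 - eps) ** k)
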
    Joint complexity in the Decision Tree Model: We study the diversity of possible behaviors of the joint computational complexity of a collection $f_1, \ldots, f_k$ of Boolean functions over a shared input. We focus on the deterministic decision tree model, with depth as the complexity measure; in this model, we prove a result to the effect that the "obvious" constraints on joint computational complexity are essentially the only ones. The proof uses an intriguing new type of cryptographic data structure called a "mystery bin," which we construct using a polynomial separation between deterministic and unambiguous query complexity shown by Savický. We also pose a conjecture in the communication model which, if proved, would extend our result to that model.

    Limitations of Lower-Bound Methods for the Wire Complexity of Boolean Operators: We study the circuit complexity of Boolean operators, i.e., collections of Boolean functions defined over a common input. Our focus is the well-studied model in which arbitrary Boolean functions are allowed as gates, and in which a circuit's complexity is measured by its depth and number of wires. We show sharp limitations of several existing lower-bound methods for this model. First, we study an information-theoretic lower-bound method due to Cherukhin, which gave the first improvement over the lower bounds provided by the well-known superconcentrator technique for constant depths. (The lower bounds are still barely superlinear, however.) Cherukhin's method was formalized by Jukna as a general lower-bound criterion for Boolean operators, the "Strong Multiscale Entropy" (SME) property. It seemed plausible that this property could imply significantly better lower bounds through an improved analysis. However, we show that this is not the case, by exhibiting an explicit operator with the SME property that is computable in constant depth and whose wire complexity essentially matches the Cherukhin-Jukna lower bound (to within a constant multiplicative factor, for depths $d = 2, 3$ and for even depths $d \geq 6$). Next, we show limitations of two simpler lower-bound criteria given by Jukna: the "entropy method" for general operators, and the "pairwise-distance method" for linear operators. We show that neither method gives super-linear lower bounds for depth 3. In the process, we obtain the first known polynomial separation between the depth-2 and depth-3 wire complexities of an explicit operator. We also continue the study (initiated by Jukna) of the complexity of "representing" a linear operator by bounded-depth circuits, a weaker notion than computing the operator.

    New limits to classical and quantum instance compression: Given an instance of a decision problem that is too difficult to solve outright, we may aim for the more limited goal of compressing that instance into a smaller, equivalent instance of the same or a different problem. As a representative problem, say we are given Boolean formulas $\psi_1, \ldots, \psi_t$, each of length $n \ll t$, and we want to determine whether at least one $\psi_j$ is satisfiable. Can we efficiently reduce this "OR-SAT" question to an equivalent problem instance (of SAT or another problem) of size $\mathrm{poly}(n)$, independent of $t$? We call any such reduction a "strong compression" reduction for OR-SAT. This would amount to a major gain from compressing $\psi_1, \ldots, \psi_t$ jointly, since we know of no way to reliably compress an individual SAT instance. Harnik and Naor (FOCS '06/SICOMP '10) and Bodlaender, Downey, Fellows, and Hermelin (ICALP '08/JCSS '09) showed that the infeasibility of strong compression for OR-SAT would also imply limits to instance compression schemes for a large number of other natural problems; this is significant because instance compression is a central technique in the design of so-called fixed-parameter tractable algorithms. Bodlaender et al. also showed that the infeasibility of strong compression for the analogous "AND-SAT" problem would establish limits to instance compression for another family of problems. Fortnow and Santhanam (STOC '08) showed that deterministic (or one-sided-error randomized) strong compression for OR-SAT is not possible unless $\mathsf{NP} \subseteq \mathsf{coNP}/\mathsf{poly}$; the case of AND-SAT remained mysterious. We give new and improved evidence against strong compression schemes for both OR-SAT and AND-SAT; our method applies to probabilistic compression schemes with two-sided error. We also give versions of these results for an analogous task of quantum instance compression, in which a polynomial-time quantum reduction must output a quantum state that, in an appropriate sense, "preserves the answer" to the input instance. We give quantitatively similar evidence against strong compression for AND- and OR-SAT in this setting, albeit under less well-studied hypotheses about the relationship between NP and quantum complexity classes. To prove all of these results, we exploit the information bottleneck of an instance compression scheme, using a new method to "disguise" information being fed into a compressive mapping.
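
    To fix notation for the compression question above, here is a small sketch of the OR-SAT task, with a CNF encoding chosen for concreteness. The naive decider reads the whole input of size roughly $t \cdot \mathrm{poly}(n)$; a "strong compression" scheme would instead produce an equivalent instance of size $\mathrm{poly}(n)$, independent of $t$, which the thesis gives evidence against.

        from itertools import product

        def satisfiable(cnf, n):
            """Brute-force SAT for a CNF over variables 1..n; a clause is a
            list of nonzero ints, where -v means 'not v'."""
            for assignment in product([False, True], repeat=n):
                if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                       for clause in cnf):
                    return True
            return False

        def or_sat(formulas, n):
            """OR-SAT: is at least one of the given formulas satisfiable?"""
            return any(satisfiable(cnf, n) for cnf in formulas)

        # Two tiny formulas over n = 2 variables: the first (x1 and not x1)
        # is unsatisfiable, the second (x1 or x2) is satisfiable.
        print(or_sat([[[1], [-1]], [[1, 2]]], n=2))  # True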