36 research outputs found

    Computing Exact Solutions of Consensus Halving and the Borsuk-Ulam Theorem

    We study the problem of finding an exact solution to the consensus halving problem. While recent work has shown that the approximate version of this problem is PPA-complete, we show that the exact version is much harder. Specifically, finding a solution with n cuts is FIXP-hard, and deciding whether there exists a solution with fewer than n cuts is ETR-complete. We also give a QPTAS for the case where each agent's valuation is a polynomial. Along the way, we define a new complexity class BU, which captures all problems that can be reduced to solving an instance of the Borsuk-Ulam problem exactly. We show that FIXP ⊆ BU ⊆ TFETR and that LinearBU = PPA, where LinearBU is the subclass of BU in which the Borsuk-Ulam instance is specified by a linear arithmetic circuit.
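
    A solution to consensus halving is easy to check once given; the hardness lies in finding one. The following sketch verifies a candidate set of cuts against piecewise-uniform agent valuations (the valuation model, function names, and tolerance are illustrative assumptions, not taken from the paper).

```python
# Toy verifier for a proposed consensus-halving solution (illustrative only).

def piece_value(intervals, a, b):
    """Value an agent with piecewise-uniform density on `intervals` assigns to [a, b]."""
    total = 0.0
    for (lo, hi, density) in intervals:
        overlap = max(0.0, min(b, hi) - max(a, lo))
        total += density * overlap
    return total

def is_consensus_halving(agents, cuts, tol=1e-9):
    """Check that the pieces induced by `cuts`, labelled +/- alternately,
    split every agent's valuation of [0, 1] exactly in half."""
    points = [0.0] + sorted(cuts) + [1.0]
    for intervals in agents:
        plus = sum(piece_value(intervals, points[i], points[i + 1])
                   for i in range(len(points) - 1) if i % 2 == 0)
        total = piece_value(intervals, 0.0, 1.0)
        if abs(plus - total / 2.0) > tol:
            return False
    return True

# One agent uniform on [0, 1]: a single cut at 0.5 halves its valuation.
agent = [(0.0, 1.0, 1.0)]
print(is_consensus_halving([agent], [0.5]))  # True
```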

    ∃ℝ-Completeness of Stationary Nash Equilibria in Perfect Information Stochastic Games

    We show that the problem of deciding whether in a multi-player perfect information recursive game (i.e. a stochastic game with terminal rewards) there exists a stationary Nash equilibrium ensuring each player a certain payoff is ∃ℝ-complete. Our result holds for acyclic games, where a Nash equilibrium may be computed efficiently by backward induction, and even for deterministic acyclic games with non-negative terminal rewards. We further extend our results to the existence of Nash equilibria where a single player is surely winning. Combining our result with known gadget games without any stationary Nash equilibrium, we obtain that for cyclic games, just deciding existence of any stationary Nash equilibrium is ∃ℝ-complete. This holds for reach-a-set games, stay-in-a-set games, and for deterministic recursive games.
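
    For acyclic games, the abstract notes that a stationary Nash equilibrium can be computed efficiently by backward induction. A minimal sketch for the deterministic case with terminal rewards (the data layout and tie-breaking rule are assumptions):

```python
# Backward induction on a deterministic acyclic game with terminal rewards.
# Each internal node is owned by one player, who picks the child maximizing
# their own payoff; ties are broken by first-listed child.

def backward_induction(node, owner, children, reward):
    """Return the payoff vector reached under the strategy profile
    computed bottom-up from the terminal rewards."""
    if node in reward:                      # terminal node
        return reward[node]
    options = [backward_induction(c, owner, children, reward)
               for c in children[node]]
    return max(options, key=lambda payoffs: payoffs[owner[node]])

# Player 0 moves at the root 'a', player 1 at node 'b'.
owner = {'a': 0, 'b': 1}
children = {'a': ['b', 't1'], 'b': ['t2', 't3']}
reward = {'t1': (1, 1), 't2': (0, 2), 't3': (2, 0)}
print(backward_induction('a', owner, children, reward))  # (1, 1)
```

    At 'b', player 1 prefers (0, 2); anticipating this, player 0 prefers the terminal (1, 1) at the root.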

    Recognising Multidimensional Euclidean Preferences

    Euclidean preferences are a widely studied preference model, in which decision makers and alternatives are embedded in d-dimensional Euclidean space. Decision makers prefer the alternatives that are closer to them. This model, also known as multidimensional unfolding, has applications in economics, psychometrics, marketing, and many other fields. We study the problem of deciding whether a given preference profile is d-Euclidean. For the one-dimensional case, polynomial-time algorithms are known. We show that, in contrast, for every other fixed dimension d > 1, the recognition problem is equivalent to the existential theory of the reals (ETR), and so in particular NP-hard. We further show that some Euclidean preference profiles require exponentially many bits in order to specify any Euclidean embedding, and prove that the domain of d-Euclidean preferences does not admit a finite forbidden minor characterisation for any d > 1. We also study dichotomous preferences and the behaviour of other metrics, and survey a variety of related work.
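
    Recognition asks whether any embedding exists; checking a proposed one-dimensional embedding against a profile is straightforward. A small verifier sketch (names and the strict-preference convention are assumptions):

```python
# Verify that a candidate 1-D embedding realises a preference profile:
# each voter must rank alternatives by strictly increasing distance.

def realises(profile, voter_pos, alt_pos):
    """profile[v] lists alternatives from most to least preferred."""
    for voter, ranking in profile.items():
        dists = [abs(voter_pos[voter] - alt_pos[a]) for a in ranking]
        if any(dists[i] >= dists[i + 1] for i in range(len(dists) - 1)):
            return False  # a stated preference contradicts the distances
    return True

profile = {'v1': ['x', 'y', 'z'], 'v2': ['z', 'y', 'x']}
voter_pos = {'v1': 0.0, 'v2': 3.0}
alt_pos = {'x': 0.5, 'y': 1.5, 'z': 2.5}
print(realises(profile, voter_pos, alt_pos))  # True
```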

    The Computational Complexity of Genetic Diversity

    A key question in biological systems is whether genetic diversity persists in the long run under evolutionary competition, or whether a single dominant genotype emerges. Classic work by [Kalmus, J. of Genetics, 1945] established that even in simple diploid species (species with chromosome pairs), diversity can be guaranteed as long as the heterozygous (having different alleles for a gene on the two chromosomes) individuals enjoy a selective advantage. Despite the classic nature of the problem, as we move towards increasingly polymorphic traits (e.g., human blood types), predicting diversity (and its implications) is still not fully understood. Our key contribution is to establish complexity-theoretic hardness results implying that even in the textbook case of single-locus (gene) diploid models, predicting whether diversity survives or not, given its fitness landscape, is algorithmically intractable. Our hardness results are structurally robust along several dimensions, e.g., choice of parameter distribution, different definitions of stability/persistence, and restriction to typical subclasses of fitness landscapes. Technically, our results exploit connections between game theory, nonlinear dynamical systems, and complexity theory, and establish hardness results for predicting the evolution of a deterministic variant of the well-known multiplicative weights update algorithm in symmetric coordination games; finding one Nash equilibrium is easy in these games. In the process we characterize stable fixed points of these dynamics using the notions of Nash equilibrium and negative semidefiniteness. This, as well as our hardness results for decision problems in coordination games, may be of independent interest. Finally, we complement our results by establishing that under randomly chosen fitness landscapes, diversity survives with significant probability. The full version of this paper is available at http://arxiv.org/abs/1411.6322
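
    The dynamic in question can be illustrated by iterating a discrete replicator / multiplicative-weights-style update on a small symmetric coordination game (the payoff matrix, starting point, and step count below are illustrative choices, not from the paper):

```python
# Discrete replicator update on a symmetric coordination game: each strategy's
# weight is multiplied by its payoff, then renormalized.

def replicator_step(x, A):
    """One update of the mixed strategy x under payoff matrix A."""
    payoffs = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    avg = sum(x[i] * payoffs[i] for i in range(len(x)))
    return [x[i] * payoffs[i] / avg for i in range(len(x))]

A = [[2.0, 1.0], [1.0, 2.0]]   # 2x2 symmetric coordination game
x = [0.6, 0.4]                 # initial mixed strategy, slightly off-centre
for _ in range(200):
    x = replicator_step(x, A)
print(x)  # drifts to the pure equilibrium (1, 0): "diversity" dies out
```

    Here the fully mixed fixed point (0.5, 0.5) is unstable: any asymmetric start is driven to a monomorphic state, illustrating the kind of long-run question the hardness results concern.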

    The Complexity of Recognizing Geometric Hypergraphs

    As set systems, hypergraphs are omnipresent and have various representations, ranging from Euler and Venn diagrams to contact representations. In a geometric representation of a hypergraph H=(V,E), each vertex v∈V is associated with a point p_v∈ℝ^d and each hyperedge e∈E is associated with a connected set s_e⊂ℝ^d such that {p_v | v∈V} ∩ s_e = {p_v | v∈e} for all e∈E. We say that a given hypergraph H is representable by some (infinite) family F of sets in ℝ^d if there exist P⊂ℝ^d and S⊆F such that (P,S) is a geometric representation of H. For a family F, we define RECOGNITION(F) as the problem of determining whether a given hypergraph is representable by F. It is known that the RECOGNITION problem is ∃ℝ-hard for halfspaces in ℝ^d. We study the families of translates of balls and ellipsoids in ℝ^d, as well as other convex sets, and show that their RECOGNITION problems are also ∃ℝ-complete. This means that these recognition problems are equivalent to deciding whether a multivariate system of polynomial equations with integer coefficients has a real solution. Appears in the Proceedings of the 31st International Symposium on Graph Drawing and Network Visualization (GD 2023).
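
    Verifying a given representation is easy; the hardness concerns whether one exists. A sketch that computes the hypergraph induced by points and disks in the plane (names and the disk family are illustrative):

```python
# Compute the hypergraph induced by a point set and a family of disks:
# each disk contributes the hyperedge of vertices whose point it contains.

def induced_hypergraph(points, disks):
    """points: {vertex: (x, y)}; disks: list of (cx, cy, r)."""
    edges = []
    for (cx, cy, r) in disks:
        edge = {v for v, (x, y) in points.items()
                if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2}
        edges.append(edge)
    return edges

points = {'a': (0, 0), 'b': (2, 0), 'c': (4, 0)}
disks = [(1, 0, 1.2), (3, 0, 1.2)]
print(induced_hypergraph(points, disks))  # [{'a', 'b'}, {'b', 'c'}]
```

    The representation condition in the abstract says exactly that this induced hypergraph must equal the input hypergraph.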

    Training Fully Connected Neural Networks is ∃ℝ-Complete

    We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully connected neural network to fit a given set of data points. This problem is known as empirical risk minimization in the machine learning community. We show that the problem is ∃ℝ-complete. This complexity class can be defined as the set of algorithmic problems that are polynomial-time equivalent to finding real roots of a polynomial with integer coefficients. Our results hold even if the following restrictions are all added simultaneously.
    • There are exactly two output neurons.
    • There are exactly two input neurons.
    • The data has only 13 different labels.
    • The number of hidden neurons is a constant fraction of the number of data points.
    • The target training error is zero.
    • The ReLU activation function is used.
    This shows that even very simple networks are difficult to train. The result offers an explanation (though far from a complete understanding) of why only gradient descent is widely successful in training neural networks in practice. We generalize a recent result by Abrahamsen, Kleist and Miltzow [NeurIPS 2021]. This result falls into a recent line of research showing that a series of central algorithmic problems from widely different areas of computer science and mathematics are ∃ℝ-complete: this includes the art gallery problem [JACM/STOC 2018], geometric packing [FOCS 2020], covering polygons with convex polygons [FOCS 2021], and continuous constraint satisfaction problems [FOCS 2021].
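
    In practice, as the abstract remarks, such networks are trained with gradient descent rather than solved exactly. A dependency-free sketch of empirical risk minimization by (finite-difference) gradient descent on a tiny one-hidden-neuron ReLU network; the architecture, data, and hyperparameters are illustrative assumptions, not the paper's setting:

```python
import random

def relu(z):
    return z if z > 0 else 0.0

def predict(params, x):
    # One hidden ReLU neuron: w2 * relu(w1 * x + b1) + b2
    w1, b1, w2, b2 = params
    return w2 * relu(w1 * x + b1) + b2

def mse(params, data):
    return sum((predict(params, x) - y) ** 2 for x, y in data) / len(data)

def grad_step(params, data, lr=0.05, eps=1e-6):
    # Finite-difference gradient: crude but dependency-free.
    base = mse(params, data)
    grads = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        grads.append((mse(bumped, data) - base) / eps)
    return [p - lr * g for p, g in zip(params, grads)]

random.seed(0)
data = [(x / 10.0, max(0.0, x / 10.0)) for x in range(-10, 11)]  # y = ReLU(x)
params = [random.uniform(-1, 1) for _ in range(4)]
initial = mse(params, data)
for _ in range(2000):
    params = grad_step(params, data)
final = mse(params, data)
print(initial, final)  # the loss decreases, but nothing guarantees the exact optimum
```

    The gap between this heuristic and exact zero-error minimization is precisely where the ∃ℝ-completeness result bites.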