    On the complexity of solving linear congruences and computing nullspaces modulo a constant

    We consider the problems of determining the feasibility of a linear congruence, producing a solution to a linear congruence, and finding a spanning set for the nullspace of an integer matrix, where each problem is considered modulo an arbitrary constant k>1. These problems are known to be complete for the logspace modular counting classes {Mod_k L} = {coMod_k L} in the special case that k is prime (Buntrock et al., 1992). By considering variants of standard logspace function classes (related to #L and to functions computable by UL machines, but which only characterize the number of accepting paths modulo k), we show that these problems of linear algebra are also complete for {coMod_k L} for any constant k>1. Our results are obtained by defining a class of functions FUL_k which is low for {Mod_k L} and {coMod_k L} for k>1, using ideas similar to those used in the case of prime k in (Buntrock et al., 1992) to show closure of Mod_k L under NC^1 reductions (including {Mod_k L} oracle reductions). In addition to the results above, we briefly consider the relationship of the class FUL_k for arbitrary moduli k to the class {F.coMod_k L} of functions whose output symbols are verifiable by {coMod_k L} algorithms, and consider what consequences such a comparison may have for oracle closure results of the form {Mod_k L}^{Mod_k L} = {Mod_k L} for composite k. Comment: 17 pages, one appendix; minor corrections and revisions to presentation, new observations regarding the prospect of oracle closures. Comments welcome.
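
    As a concrete illustration of the scalar version of the first two problems (a minimal sketch, not taken from the paper; the helper solve_congruence is hypothetical), the following Python snippet decides feasibility of a single linear congruence a*x ≡ b (mod k) and produces a solution when one exists. The paper studies the logspace complexity of the matrix versions; this only illustrates the underlying computational problem.

        # Minimal sketch: feasibility and one solution of a*x = b (mod k).
        from math import gcd

        def solve_congruence(a: int, b: int, k: int):
            """Return some x with a*x = b (mod k), or None if infeasible."""
            g = gcd(a, k)
            if b % g != 0:
                return None                     # feasible iff gcd(a, k) divides b
            a, b, k = a // g, b // g, k // g    # reduce to the coprime case
            return (b * pow(a, -1, k)) % k      # pow(a, -1, k): modular inverse (Python 3.8+)

        print(solve_congruence(6, 9, 15))       # prints 4, since 6*4 = 24 = 9 (mod 15)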

    The complexity and distribution of computationally useful problems

    The solutions of certain natural decision problems, such as the halting problem and the Boolean satisfiability problem, contain large amounts of useful information about computation that is highly organized and readily available to efficient computational processes. Such problems are computationally useful. This dissertation investigates the complexity and distribution of these computationally useful problems. The main results of this dissertation are of the following three general types: (1) useful problems contain highly organized information; (2) very useful problems are so highly organized that they are unusually simple and hence rare; (3) useful problems are, as a whole, not rare and thus are not necessarily simple. A result of type (1) is proven in Chapter 3. Bennett recently extended algorithmic information theory to include a notion of computational depth that appears to quantify the level of organization in binary strings and sequences. The main result of Chapter 3 states that every weakly useful sequence is strongly deep. (A sequence x is weakly useful if a non-negligible set of recursive problems is decidable within a fixed recursive time bound when given access to x.) Results of type (2) are presented in Chapters 4 and 5. These results say that the ≤^P_m-complete problems for E = DTIME(2^linear) and the ≤^{P/poly}_m-complete problems for ESPACE = DSPACE(2^linear) are unusually simple and hence rare. Complete problems are very useful because every problem in E or ESPACE is efficiently decidable when given access to one of these problems. Chapter 6 develops a result of type (3). This result says that the weakly ≤^P_m-complete problems for E and ESPACE are not rare and hence are not necessarily simple. Weakly complete problems are useful because every problem in a non-negligible subset of E or ESPACE is efficiently decidable when given access to one of these problems. The above results (and others along the way) are obtained through a systematic investigation of the measure-theoretic structure of complexity classes.

    Characterization and Lower Bounds for Branching Program Size using Projective Dimension

    We study projective dimension, a graph parameter (denoted by pd(G) for a graph G), introduced by (Pudlák, Rödl 1992), who showed that lower bounds on pd(G_f) for bipartite graphs G_f associated with a Boolean function f imply size lower bounds for branching programs computing f. Despite several attempts (Pudlák, Rödl 1992; Babai, Rónyai, Ganapathy 2000), proving super-linear lower bounds on the projective dimension of explicit families of graphs has remained elusive. We show that there exists a Boolean function f (on n bits) for which the gap between the projective dimension and the size of the optimal branching program computing f (denoted by bpsize(f)) is 2^{\Omega(n)}. Motivated by the argument in (Pudlák, Rödl 1992), we define two variants of projective dimension: projective dimension with intersection dimension 1 (denoted by upd(G)) and bitwise decomposable projective dimension (denoted by bitpdim(G)). As our main result, we show that there is an explicit family of graphs on N = 2^n vertices such that the projective dimension is O(\sqrt{n}), the projective dimension with intersection dimension 1 is \Omega(n), and the bitwise decomposable projective dimension is \Omega(n^{1.5}/\log n). We also show that there exists a Boolean function f (on n bits) for which the gap between upd(G_f) and bpsize(f) is 2^{\Omega(n)}. In contrast, we show that the bitwise decomposable projective dimension characterizes the size of the branching program up to a polynomial factor: there exists a constant c>0 such that, for any function f, bitpdim(G_f)/6 \le bpsize(f) \le (bitpdim(G_f))^c. We also study two other variants of projective dimension and show that they are exactly equal to well-studied graph parameters: the bipartite clique cover number and the bipartite partition number, respectively. Comment: 24 pages, 3 figures.
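
    For reference, the notion of projective dimension due to Pudlák and Rödl can be stated as follows (a paraphrase of the standard definition, not quoted from this paper): for a bipartite graph G with parts U and V and a field F,

        % Least dimension d admitting an assignment of subspaces of F^d to the vertices
        % such that adjacency coincides with nontrivial intersection of the assigned subspaces.
        \mathrm{pd}_F(G) = \min\Bigl\{ d : \exists\, \phi \ \text{mapping } U \cup V \text{ to subspaces of } F^d
            \text{ with } (u,v) \in E(G) \iff \phi(u) \cap \phi(v) \neq \{0\} \Bigr\}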

    PPP-Completeness with Connections to Cryptography

    The Polynomial Pigeonhole Principle class (PPP) is an important subclass of TFNP with profound connections to the complexity of fundamental cryptographic primitives: collision-resistant hash functions and one-way permutations. In contrast to most of the other subclasses of TFNP, no complete problem is known for PPP. Our work identifies the first PPP-complete problem without any circuit or Turing machine given explicitly in the input, thus answering a longstanding open question from [Papadimitriou1994]. Specifically, we show that constrained-SIS (cSIS), a generalized version of the well-known Short Integer Solution problem (SIS) from lattice-based cryptography, is PPP-complete. To give intuition behind our reduction for constrained-SIS, we identify another PPP-complete problem with a circuit in the input, but one closely related to lattice problems. We call this problem BLICHFELDT; it is the computational problem associated with Blichfeldt's fundamental theorem in the theory of lattices. Building on the inherent connection of PPP with collision-resistant hash functions, we use our completeness result to construct the first natural hash function family that captures the hardness of all collision-resistant hash functions in a worst-case sense, i.e. it is natural and universal in the worst case. The close resemblance of our hash function family to SIS leads us to the first candidate collision-resistant hash function that is both natural and universal in an average-case sense. Finally, our results enrich our understanding of the connections between PPP, lattice problems, and other concrete cryptographic assumptions, such as the discrete logarithm problem over general groups.
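
    For context, the Short Integer Solution problem referred to above is usually stated as follows (a paraphrase of the standard formulation with parameters n, m, q and a norm bound \beta; the paper's constrained variant cSIS adds further constraints not reproduced here):

        % SIS(n, m, q, \beta): given a uniformly random matrix A,
        % find a short nonzero integer vector in its mod-q kernel.
        \text{Given } A \in \mathbb{Z}_q^{n \times m},\ \text{find } z \in \mathbb{Z}^m \setminus \{0\}
        \ \text{such that } A z \equiv 0 \pmod{q} \ \text{and } \|z\| \le \beta.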

    The Quantitative Structure of Exponential Time

    Recent results on the internal, measure-theoretic structure of the exponential time complexity classes E = DTIME(2^linear) and E_2 = DTIME(2^polynomial) are surveyed. The measure structure of these classes is seen to interact in informative ways with bi-immunity, complexity cores, polynomial-time many-one reducibility, circuit-size complexity, Kolmogorov complexity, and the density of hard languages. Possible implications for the structure of NP are also discussed.

    Computational Difficulty of Computing the Density of States

    We study the computational difficulty of computing the ground-state degeneracy and the density of states for local Hamiltonians. We show that the difficulty of both problems is exactly captured by a class which we call #BQP, the counting version of the quantum complexity class QMA. We show that #BQP is no harder than its classical counting counterpart #P, which in turn implies that computing the ground-state degeneracy or the density of states for classical Hamiltonians is just as hard as it is for quantum Hamiltonians. Comment: v2: accepted version. 9 pages, 1 figure.
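
    For context, the two quantities can be stated as follows (standard definitions, paraphrased rather than quoted from the paper), for a Hamiltonian H with eigenvalues E_1 \le E_2 \le \dots listed with multiplicity and a chosen energy resolution \delta:

        % Ground-state degeneracy: dimension of the lowest-energy eigenspace.
        \text{degeneracy}(H) = \dim \ker\bigl(H - E_1 I\bigr)
        % Density of states at energy E: number of eigenvalues in a window of width \delta.
        n(E) = \#\{\, i : E_i \in [E, E + \delta] \,\}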