
    Tight bounds and conjectures for the isolation lemma

    Given a hypergraph $H$ and a weight function $w: V \rightarrow \{1, \dots, M\}$ on its vertices, we say that $w$ is isolating if there is exactly one edge of minimum weight $w(e) = \sum_{i \in e} w(i)$. The Isolation Lemma is a combinatorial principle introduced by Mulmuley et al. (1987) which gives a lower bound on the number of isolating weight functions. Mulmuley used this as the basis of a parallel algorithm for finding perfect graph matchings. It has a number of other applications to parallel algorithms and to reductions of general search problems to unique search problems (in which there are one or zero solutions). The original bound given by Mulmuley et al. was recently improved by Ta-Shma (2015). In this paper, we show improved lower bounds on the number of isolating weight functions, and we conjecture that the extremal case is when $H$ consists of $n$ singleton edges. When $M \gg n$, our improved bound matches this extremal case asymptotically. We are able to show that this conjecture holds in a number of special cases: when $H$ is a linear hypergraph or is 1-degenerate, or when $M = 2$. We also show that it holds asymptotically when $M \gg n \gg 1$.
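
    To make the definition concrete, here is a minimal Python sketch (our own illustration, not from the paper) that exhaustively enumerates all weight functions on a small toy hypergraph and compares the fraction of isolating ones against the classical $1 - |V|/M$ lower bound of Mulmuley et al.; the example hypergraph, the choice of $M$, and all identifiers are arbitrary.

        import itertools

        def min_weight_edges(edges, w):
            # Edges achieving the minimum weight w(e) = sum_{i in e} w(i).
            weights = [sum(w[i] for i in e) for e in edges]
            lo = min(weights)
            return [e for e, we in zip(edges, weights) if we == lo]

        def isolating_fraction(vertices, edges, M):
            # Enumerate every weight function w : V -> {1, ..., M} and count
            # those for which the minimum-weight edge is unique.
            total = isolating = 0
            for values in itertools.product(range(1, M + 1), repeat=len(vertices)):
                w = dict(zip(vertices, values))
                total += 1
                if len(min_weight_edges(edges, w)) == 1:
                    isolating += 1
            return isolating / total

        V = [0, 1, 2, 3]
        H = [(0, 1), (1, 2), (2, 3), (0, 3)]  # toy hypergraph: the edges of a 4-cycle
        M = 8
        print(f"isolating fraction: {isolating_fraction(V, H, M):.4f}")
        print(f"classical lower bound 1 - |V|/M: {1 - len(V) / M:.4f}")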

    Space-Efficient Interior Point Method, with Applications to Linear Programming and Maximum Weight Bipartite Matching


    Isolation Schemes for Problems on Decomposable Graphs

    The Isolation Lemma of Mulmuley, Vazirani and Vazirani [Combinatorica'87] provides a self-reduction scheme that allows one to assume that a given instance of a problem has a unique solution, provided a solution exists at all. Since its introduction, much effort has been dedicated towards derandomization of the Isolation Lemma for specific classes of problems. So far, the focus was mainly on problems solvable in polynomial time. In this paper, we study a setting that is more typical for $\mathsf{NP}$-complete problems, and obtain partial derandomizations in the form of significantly decreasing the number of required random bits. In particular, motivated by the advances in parameterized algorithms, we focus on problems on decomposable graphs. For example, for the problem of detecting a Hamiltonian cycle, we build upon the rank-based approach from [Bodlaender et al., Inf. Comput.'15] and design isolation schemes that use
    - $O(t\log n + \log^2 n)$ random bits on graphs of treewidth at most $t$;
    - $O(\sqrt{n})$ random bits on planar or $H$-minor-free graphs; and
    - $O(n)$ random bits on general graphs.
    In all these schemes, the weights are bounded exponentially in the number of random bits used. As a corollary, for every fixed $H$ we obtain an algorithm for detecting a Hamiltonian cycle in an $H$-minor-free graph that runs in deterministic time $2^{O(\sqrt{n})}$ and uses polynomial space; this is the first algorithm to achieve such complexity guarantees. For problems of a more local nature, such as finding an independent set of maximum size, we obtain isolation schemes on graphs of treedepth at most $d$ that use $O(d)$ random bits and assign polynomially-bounded weights. We also complement our findings with several unconditional and conditional lower bounds, which show that many of the results cannot be significantly improved.
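
    As a brute-force illustration of what an isolation scheme for Hamiltonian cycles must achieve (this sketch is our own toy example, not the paper's construction), the following Python code assigns independent random polynomially-bounded edge weights to a small complete graph and estimates how often the minimum-weight Hamiltonian cycle is unique; the graph and weight range are arbitrary choices.

        import itertools, random

        def hamiltonian_cycles(n, edges):
            # Enumerate Hamiltonian cycles of a graph on vertices 0..n-1,
            # each cycle represented as a frozenset of undirected edges.
            es = {frozenset(e) for e in edges}
            cycles = set()
            for perm in itertools.permutations(range(1, n)):
                order = (0,) + perm
                cyc = [frozenset((order[i], order[(i + 1) % n])) for i in range(n)]
                if all(e in es for e in cyc):
                    cycles.add(frozenset(cyc))
            return list(cycles)

        n = 6
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete graph K_6
        cycles = hamiltonian_cycles(n, edges)
        trials, hits = 2000, 0
        for _ in range(trials):
            w = {frozenset(e): random.randint(1, n ** 3) for e in edges}  # poly-bounded weights
            costs = sorted(sum(w[e] for e in c) for c in cycles)
            hits += costs[0] < costs[1]
        print(f"{len(cycles)} Hamiltonian cycles; minimum unique in {hits / trials:.3f} of trials")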

    On the Lattice Isomorphism Problem

    We study the Lattice Isomorphism Problem (LIP), in which given two lattices $L_1$ and $L_2$ the goal is to decide whether there exists an orthogonal linear transformation mapping $L_1$ to $L_2$. Our main result is an algorithm for this problem running in time $n^{O(n)}$ times a polynomial in the input size, where $n$ is the rank of the input lattices. A crucial component is a new generalized isolation lemma, which can isolate $n$ linearly independent vectors in a given subset of $\mathbb{Z}^n$ and might be useful elsewhere. We also prove that LIP lies in the complexity class SZK.

    Derandomizing Isolation in Space-Bounded Settings

    We study the possibility of deterministic and randomness-efficient isolation in space-bounded models of computation: Can one efficiently reduce instances of computational problems to equivalent instances that have at most one solution? We present results for the NL-complete problem of reachability on digraphs, and for the LogCFL-complete problem of certifying acceptance on shallow semi-unbounded circuits. A common approach employs small weight assignments that make the solution of minimum weight unique. The Isolation Lemma and other known procedures use $\Omega(n)$ random bits to generate weights of individual bitlength $O(\log n)$. We develop a derandomized version for both settings that uses $O(\log^{3/2} n)$ random bits and produces weights of bitlength $O(\log^{3/2} n)$ in logarithmic space. The construction allows us to show that every language in NL can be accepted by a nondeterministic machine that runs in polynomial time and $O(\log^{3/2} n)$ space, and has at most one accepting computation path on every input. Similarly, every language in LogCFL can be accepted by a nondeterministic machine equipped with a stack that does not count towards the space bound, that runs in polynomial time and $O(\log^{3/2} n)$ space, and has at most one accepting computation path on every input. We also show that the existence of somewhat more restricted isolations for reachability on digraphs implies that NL can be decided in logspace with polynomial advice. A similar result holds for certifying acceptance on shallow semi-unbounded circuits and LogCFL.
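
    For intuition about the randomness budget being derandomized here, the following Python sketch (illustrative only, not the paper's construction) applies the standard Isolation Lemma recipe to digraph reachability: each edge independently receives a weight of $O(\log n)$ bits, costing $\Theta(m \log n)$ random bits in total, after which the minimum-weight s-t path is unique with probability at least 1/2. The digraph is an arbitrary toy example.

        import random

        def simple_paths(adj, s, t):
            # Enumerate simple s-t paths (as tuples of edges) in a small digraph by DFS.
            paths, stack = [], [(s, [s])]
            while stack:
                v, p = stack.pop()
                if v == t:
                    paths.append(tuple(zip(p, p[1:])))
                    continue
                for u in adj.get(v, []):
                    if u not in p:
                        stack.append((u, p + [u]))
            return paths

        adj = {0: [1, 2], 1: [2, 3], 2: [3, 4], 3: [4, 5], 4: [5]}
        edges = [(v, u) for v, us in adj.items() for u in us]
        m = len(edges)
        W = 2 * m  # weight range {1, ..., 2m}: O(log n) bits per edge, m * O(log n) in total
        paths = simple_paths(adj, 0, 5)
        trials, hits = 5000, 0
        for _ in range(trials):
            w = {e: random.randint(1, W) for e in edges}
            costs = sorted(sum(w[e] for e in p) for p in paths)
            hits += len(costs) == 1 or costs[0] < costs[1]
        print(f"{len(paths)} s-t paths; minimum unique in {hits / trials:.3f} of trials "
              f"(Isolation Lemma guarantees >= 1/2)")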

    Matroid Intersection: A Pseudo-Deterministic Parallel Reduction from Search to Weighted-Decision

    We study the matroid intersection problem from the parallel complexity perspective. Given two matroids over the same ground set, the problem asks to decide whether they have a common base, and its search version asks to find a common base, if one exists. Another widely studied variant is the weighted decision version, where along with the two matroids we are given small weights on the ground set elements and a target weight W, and the question is to decide whether there is a common base of weight at least W. From the perspective of parallel complexity, the relation between the search and the decision versions is not well understood. We make significant progress on this question by giving a pseudo-deterministic parallel (NC) algorithm for the search version that uses oracle access to the weighted decision version. The notion of pseudo-deterministic NC was recently introduced by Goldwasser and Grossman [Shafi Goldwasser and Ofer Grossman, 2017]; it is a relaxation of NC. A pseudo-deterministic NC algorithm for a search problem is a randomized NC algorithm that, for a given input, outputs a fixed solution with high probability. In case the given matroids are linearly representable, our result implies a pseudo-deterministic NC algorithm (without the weighted decision oracle). This resolves an open question posed by Anari and Vazirani [Nima Anari and Vijay V. Vazirani, 2020].
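
    The following Python sketch (a toy sequential illustration under our own simplifying assumptions, not the paper's NC algorithm) shows the core idea of such a reduction: once random weights isolate a unique maximum-weight solution, a weighted-decision oracle suffices to read the solution off element by element. The feasible solutions are hard-coded stand-ins for the common bases of two matroids.

        import random

        def make_oracle(solutions, w):
            # Weighted-decision oracle: is there a feasible solution, avoiding the
            # forbidden elements, whose total weight is at least the target?
            def oracle(target, forbidden=frozenset()):
                return any(s.isdisjoint(forbidden) and sum(w[i] for i in s) >= target
                           for s in solutions)
            return oracle

        def search_via_decision(ground_set, solutions):
            # Isolating weights: random values in {1, ..., 2|E|} make the
            # maximum-weight solution unique with probability >= 1/2 (Isolation Lemma).
            w = {i: random.randint(1, 2 * len(ground_set)) for i in ground_set}
            oracle = make_oracle(solutions, w)
            # Find the optimum value W* by binary search using only the oracle.
            lo, hi = 0, sum(w.values())
            while lo < hi:
                mid = (lo + hi + 1) // 2
                lo, hi = (mid, hi) if oracle(mid) else (lo, mid - 1)
            # Element i lies in the unique optimum iff forbidding i kills all
            # solutions of weight W*; each test is an independent oracle call,
            # so all tests can run in parallel. If the weights failed to isolate,
            # the output may be infeasible; verify and retry with fresh weights.
            return frozenset(i for i in ground_set if not oracle(lo, frozenset([i])))

        ground = range(6)
        bases = [frozenset(b) for b in [(0, 1, 2), (0, 2, 4), (1, 3, 5), (2, 3, 4)]]
        print("recovered solution:", sorted(search_via_decision(ground, bases)))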

    Breaking the $n$-Pass Barrier: A Streaming Algorithm for Maximum Weight Bipartite Matching

    Given a weighted bipartite graph with $n$ vertices and $m$ edges, the \emph{maximum weight bipartite matching} problem is to find a set of vertex-disjoint edges with the maximum weight. This classic problem has been extensively studied for over a century. In this paper, we present a new streaming algorithm for the maximum weight bipartite matching problem that uses $\widetilde{O}(n)$ space and $\widetilde{O}(\sqrt{m})$ passes, which breaks the $n$-pass barrier. All the previous streaming algorithms either require $\Omega(n \log n)$ passes or only find an approximate solution. Our streaming algorithm constructs a subgraph with $n$ edges of the input graph in $\widetilde{O}(\sqrt{m})$ passes, such that the subgraph admits the optimal matching with good probability. Our method combines various ideas from different fields, most notably the construction of a \emph{space-efficient} interior point method (IPM), SDD system solvers, the isolation lemma, and LP duality. To the best of our knowledge, this is the first work that implements SDD solvers and IPMs in the streaming model in $\widetilde{O}(n)$ space for graph matrices; previous IPM algorithms only focus on optimizing the running time, regardless of the space usage.
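
    To illustrate the role the isolation lemma plays in shrinking the graph, here is a brute-force Python sketch (our own toy example, not the paper's streaming construction): scaling the integer weights and adding small random tie-breakers keeps every original optimal matching ahead of every non-optimal one, while isolating a single original optimum with probability at least 1/2, so its at most $n$ edges already form a subgraph admitting an optimal matching.

        import itertools, random

        def matchings(edges):
            # All matchings (vertex-disjoint edge sets) of a small graph, by brute force.
            for r in range(len(edges) + 1):
                for sub in itertools.combinations(edges, r):
                    verts = [v for e in sub for v in e]
                    if len(verts) == len(set(verts)):
                        yield sub

        # Bipartite edges (left, right) with integer weights; two matchings tie at weight 7.
        w = {("a", "x"): 3, ("a", "y"): 3, ("b", "x"): 3, ("b", "y"): 3,
             ("b", "z"): 2, ("c", "y"): 1, ("c", "z"): 1}
        edges = list(w)
        m = len(edges)
        # Scale so no perturbation can reorder matchings of different original weight,
        # then add random tie-breakers in {1, ..., 2m}; by the Isolation Lemma a unique
        # original optimum attains the maximum perturbed weight with probability >= 1/2.
        S = 2 * m * m + 1
        pw = {e: w[e] * S + random.randint(1, 2 * m) for e in edges}
        best = max(matchings(edges), key=lambda M: sum(pw[e] for e in M))
        print("isolated matching:", best)
        print("its original weight:", sum(w[e] for e in best),
              "| true optimum:", max(sum(w[e] for e in M) for M in matchings(edges)))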

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.