
    Bipartite Perfect Matching in Pseudo-Deterministic NC

    We present a pseudo-deterministic NC algorithm for finding perfect matchings in bipartite graphs. Specifically, our algorithm is a randomized parallel algorithm that uses poly(n) processors, poly(log n) depth, and poly(log n) random bits, and outputs for each bipartite input graph a unique perfect matching with high probability. That is, on the same graph it returns the same matching for almost all choices of randomness. As an immediate consequence we also obtain a pseudo-deterministic NC algorithm for constructing a depth first search (DFS) tree. We introduce a method for computing the union of all min-weight perfect matchings of a weighted graph in RNC, and a novel set of weight assignments which, in combination, enable isolating a unique matching in a graph. We then show how pseudo-deterministic algorithms can be used to reduce the number of random bits used by general randomized algorithms. The main idea is that random bits can be reused by successive invocations of pseudo-deterministic randomized algorithms. We use this technique to obtain an RNC algorithm for constructing a DFS tree using only O(log^2 n) random bits, whereas the previous best randomized algorithm used O(log^7 n), and a new sequential randomized algorithm for the set-maxima problem which uses fewer random bits than the previous state of the art. Furthermore, we prove that resolving the decision question NC = RNC would imply an NC algorithm for finding a bipartite perfect matching and for finding a DFS tree. This is not implied by previous randomized NC search algorithms for finding a bipartite perfect matching, but it is implied by the existence of a pseudo-deterministic NC search algorithm.
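
    The bit-reuse idea can be made concrete with a small sketch. Below is a minimal Python illustration (not the paper's algorithm; `pseudo_det_subroutine` is a hypothetical stand-in for any pseudo-deterministic routine): since each invocation returns its canonical answer for almost all random strings, one shared random string can, by a union bound, serve many successive invocations with high probability.

```python
import random

def pseudo_det_subroutine(instance, rand_bits):
    """Hypothetical stand-in for a pseudo-deterministic routine: for almost
    all values of rand_bits it returns the SAME canonical answer on a given
    instance. Here rand_bits is simply ignored and a canonical value is
    returned, mimicking the behavior a real routine exhibits w.h.p."""
    return sorted(instance)

def run_with_shared_randomness(instances, num_bits=128):
    """Draw one random string and reuse it across successive calls. Each
    pseudo-deterministic call returns a non-canonical answer only on a small
    fraction of random strings, so a union bound over the calls keeps the
    overall failure probability small -- no fresh bits are needed."""
    shared_bits = random.getrandbits(num_bits)  # drawn once, reused below
    return [pseudo_det_subroutine(inst, shared_bits) for inst in instances]

if __name__ == "__main__":
    print(run_with_shared_randomness([[3, 1, 2], [5, 4]]))  # [[1, 2, 3], [4, 5]]
```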

    Range Avoidance for Constant-Depth Circuits: Hardness and Algorithms

    Range Avoidance (AVOID) is a total search problem where, given a Boolean circuit $C\colon\{0,1\}^n\to\{0,1\}^m$, $m>n$, the task is to find a $y\in\{0,1\}^m$ outside the range of $C$. For an integer $k\geq 2$, $\mathrm{NC}^0_k$-AVOID is a special case of AVOID where each output bit of $C$ depends on at most $k$ input bits. While there is a very natural randomized algorithm for AVOID, a deterministic algorithm for the problem would have many interesting consequences. Ren, Santhanam, and Wang (FOCS 2022) and Guruswami, Lyu, and Wang (RANDOM 2022) proved that explicit constructions of functions of high formula complexity, rigid matrices, and optimal linear codes reduce to $\mathrm{NC}^0_4$-AVOID, thus establishing conditional hardness of the $\mathrm{NC}^0_4$-AVOID problem. On the other hand, $\mathrm{NC}^0_2$-AVOID admits polynomial-time algorithms, leaving the complexity of $\mathrm{NC}^0_3$-AVOID open. We give the first reduction of an explicit construction question to $\mathrm{NC}^0_3$-AVOID. Specifically, we prove that a polynomial-time algorithm (with an $\mathrm{NP}$ oracle) for $\mathrm{NC}^0_3$-AVOID in the case $m=n+n^{2/3}$ would imply an explicit construction of a rigid matrix, and thus a super-linear lower bound on the size of log-depth circuits. We also give deterministic polynomial-time algorithms for all $\mathrm{NC}^0_k$-AVOID problems for $m\geq n^{k-1}/\log(n)$. Prior work required an $\mathrm{NP}$ oracle, and required larger stretch, $m \geq n^{k-1}$.
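
    The "very natural randomized algorithm" mentioned above is easy to spell out. The following Python sketch (a toy illustration, with the circuit represented as a plain function over bit tuples and its range computed by brute force, feasible only for tiny n) samples y uniformly from {0,1}^m; because the range contains at most 2^n of the 2^m strings, each sample falls outside it with probability at least 1 - 2^(n-m).

```python
import itertools
import random

def avoid_randomized(circuit, n, m, trials=16):
    """Natural randomized algorithm for Range Avoidance, at toy sizes:
    sample y uniformly from {0,1}^m and return it if it lies outside the
    range of the circuit. Since m > n, the range has at most 2^n of the
    2^m strings, so each sample succeeds with probability >= 1 - 2^(n-m).
    The range is enumerated by brute force here purely for illustration."""
    image = {tuple(circuit(x)) for x in itertools.product((0, 1), repeat=n)}
    for _ in range(trials):
        y = tuple(random.randint(0, 1) for _ in range(m))
        if y not in image:
            return y
    return None  # overwhelmingly unlikely when m > n

def toy_nc0_2_circuit(x):
    """A toy NC^0_2 circuit: every output bit reads at most 2 input bits."""
    n = len(x)
    return [x[i] ^ x[(i + 1) % n] for i in range(n)] + [x[0] & x[1]]

if __name__ == "__main__":
    n = 4
    y = avoid_randomized(toy_nc0_2_circuit, n, m=n + 1)
    print("a string outside the range:", y)
```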

    Parallel Batch-Dynamic Graph Connectivity

    In this paper, we study batch-parallel algorithms for the dynamic connectivity problem, a fundamental problem that has received considerable attention in the sequential setting. The most well-known sequential algorithm for dynamic connectivity is the elegant level-set algorithm of Holm, de Lichtenberg, and Thorup (HDT), which achieves $O(\log^2 n)$ amortized time per edge insertion or deletion and $O(\log n / \log\log n)$ time per query. We design a parallel batch-dynamic connectivity algorithm that is work-efficient with respect to the HDT algorithm for small batch sizes and asymptotically faster when the average batch size is sufficiently large. Given a sequence of batched updates, where $\Delta$ is the average batch size of all deletions, our algorithm achieves $O(\log n \log(1 + n/\Delta))$ expected amortized work per edge insertion and deletion and $O(\log^3 n)$ depth w.h.p. Our algorithm answers a batch of $k$ connectivity queries in $O(k \log(1 + n/k))$ expected work and $O(\log n)$ depth w.h.p. To the best of our knowledge, this is the first parallel batch-dynamic algorithm for connectivity. (This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures, SPAA.)
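
    For readers unfamiliar with the batch-dynamic interface, the sketch below only fixes the shape of the operations (a batch of edge insertions followed by a batch of connectivity queries) using a plain sequential union-find. This is a hypothetical baseline, not the paper's parallel structure, and it does not support the edge deletions that make the problem hard.

```python
class UnionFind:
    """Sequential union-find baseline illustrating the batch interface:
    apply a batch of edge insertions, then answer a batch of connectivity
    queries. No parallelism and no deletions -- illustration only."""

    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def insert_batch(self, edges):
        """Union the endpoints of every edge in the batch."""
        for u, v in edges:
            ru, rv = self.find(u), self.find(v)
            if ru != rv:
                self.parent[ru] = rv

    def query_batch(self, pairs):
        """Return, for each pair, whether the two vertices are connected."""
        return [self.find(u) == self.find(v) for u, v in pairs]

if __name__ == "__main__":
    uf = UnionFind(6)
    uf.insert_batch([(0, 1), (1, 2), (3, 4)])
    print(uf.query_batch([(0, 2), (2, 3), (4, 3)]))  # [True, False, True]
```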

    The Adaptive Sampling Revisited

    The problem of estimating the number $n$ of distinct keys in a large collection of $N$ data items is well known in computer science. A classical algorithm is adaptive sampling (AS). The count $n$ can be estimated by $R\cdot 2^D$, where $R$ is the final bucket (cache) size and $D$ is the final depth at the end of the process. Several new interesting questions can be asked about AS (some of them were suggested by P. Flajolet and popularized by J. Lumbroso). The distribution of $W=\log(R 2^D/n)$ is known; we rederive this distribution in a simpler way. We provide new results on the moments of $D$ and $W$. We also analyze the distribution of the final cache size $R$. We consider colored keys: assume that among the $n$ distinct keys, $n_C$ have color $C$. We show how to estimate $p=\frac{n_C}{n}$. We also study colored keys whose multiplicities are given by some distribution function; we want to estimate the mean and variance of this distribution. Finally, we consider the case where neither colors nor multiplicities are known, and we want to estimate the related parameters. An appendix is devoted to the case where the hashing function provides bits with probability different from $1/2$.
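
    For reference, here is a minimal Python sketch of the classical adaptive sampling procedure behind the abstract's R*2^D estimator (hash function, cache capacity, and other constants are illustrative choices, not taken from the paper): keep a cache of keys whose hash begins with D zero bits, increment D and re-filter whenever the cache overflows, and report R*2^D at the end.

```python
import hashlib
import random

HASH_BITS = 64

def survives(key, depth):
    """True iff the first `depth` bits of the key's hash are all zero,
    which happens with probability 2**-depth for an idealized hash."""
    h = int.from_bytes(hashlib.sha256(str(key).encode()).digest()[:8], "big")
    return (h >> (HASH_BITS - depth)) == 0 if depth > 0 else True

def adaptive_sampling(stream, bucket_capacity=64):
    """Simplified adaptive sampling: cache the distinct keys whose hash
    starts with `depth` zero bits; on overflow, increment `depth` and
    re-filter the cache. The number of distinct keys is estimated as
    R * 2**depth, where R is the final cache size."""
    depth, cache = 0, set()
    for key in stream:
        if survives(key, depth):
            cache.add(key)
            while len(cache) > bucket_capacity:
                depth += 1
                cache = {k for k in cache if survives(k, depth)}
    return len(cache) * 2 ** depth

if __name__ == "__main__":
    data = [random.randrange(10_000) for _ in range(100_000)]  # many duplicates
    print("estimate:", adaptive_sampling(data), "exact:", len(set(data)))
```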