Bipartite Perfect Matching in Pseudo-Deterministic NC
We present a pseudo-deterministic NC algorithm for finding perfect matchings in bipartite graphs. Specifically, our algorithm is a randomized parallel algorithm which uses poly(n) processors, poly(log n) depth, poly(log n) random bits, and outputs for each bipartite input graph a unique perfect matching with high probability. That is, on the same graph it returns the same matching for almost all choices of randomness. As an immediate consequence, we also obtain a pseudo-deterministic NC algorithm for constructing a depth-first search (DFS) tree. We introduce a method for computing the union of all min-weight perfect matchings of a weighted graph in RNC, together with a novel set of weight assignments which, in combination, enable isolating a unique matching in a graph.
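The isolation idea can be illustrated with the classical Mulmuley–Vazirani–Vazirani random-weight scheme (the paper's contribution is a derandomized, small-seed variant; the toy graph and helper names below are illustrative, and the brute-force enumeration is only for tiny instances):

```python
import itertools
import random

def perfect_matchings(n, edges):
    """Brute-force enumeration of perfect matchings of a bipartite
    graph on parts {0..n-1} x {0..n-1} (exponential; toy sizes only)."""
    for perm in itertools.permutations(range(n)):
        m = [(u, perm[u]) for u in range(n)]
        if all(e in edges for e in m):
            yield m

def isolate(n, edges, rng):
    """Mulmuley-Vazirani-Vazirani isolation: independent uniform weights
    in {1..2|E|} make the min-weight perfect matching unique with
    probability >= 1/2."""
    w = {e: rng.randint(1, 2 * len(edges)) for e in edges}
    cost = lambda m: sum(w[e] for e in m)
    best = min(perfect_matchings(n, edges), key=cost)
    ties = [m for m in perfect_matchings(n, edges) if cost(m) == cost(best)]
    return best, len(ties) == 1

# Toy graph with two perfect matchings that share the edge (2, 2).
edges = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)}
matching, unique = isolate(3, edges, random.Random(0))
```

Repeating `isolate` over fresh randomness makes the min-weight matching unique in most trials; a pseudo-deterministic algorithm additionally needs that unique matching to be the *same* one for almost all seeds, which is what the paper's special weight assignments provide.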
We then show a way to use pseudo-deterministic algorithms to reduce the number of random bits used by general randomized algorithms. The main idea is that random bits can be reused by successive invocations of pseudo-deterministic randomized algorithms. We use this technique to obtain an RNC algorithm for constructing a depth-first search (DFS) tree using only O(log^2 n) random bits, whereas the previous best randomized algorithm used O(log^7 n), as well as a new sequential randomized algorithm for the set-maxima problem which uses fewer random bits than the previous state of the art.
Furthermore, we prove that resolving the decision question NC = RNC in the affirmative would imply an NC algorithm for finding a bipartite perfect matching and an NC algorithm for constructing a DFS tree. This is not implied by previous randomized NC search algorithms for bipartite perfect matching, but it does follow from the existence of a pseudo-deterministic NC search algorithm.
Range Avoidance for Constant Depth Circuits: Hardness and Algorithms
Range Avoidance (Avoid) is a total search problem where, given a Boolean circuit C : {0,1}^n → {0,1}^m, m > n, the task is to find a y ∈ {0,1}^m outside the range of C. For an integer k ≥ 2, NC^0_k-Avoid is a special case of Avoid where each output bit of C depends on at most k input bits. While there is a very natural randomized algorithm for Avoid, a deterministic algorithm for the problem would have many interesting consequences. Ren, Santhanam, and Wang (FOCS 2022) and Guruswami, Lyu, and Wang (RANDOM 2022) proved that explicit constructions of functions of high formula complexity, rigid matrices, and optimal linear codes reduce to NC^0_4-Avoid, thus establishing conditional hardness of the NC^0_4-Avoid problem. On the other hand, NC^0_2-Avoid admits polynomial-time algorithms, leaving open the complexity of NC^0_3-Avoid.
We give the first reduction of an explicit construction question to NC^0_3-Avoid. Specifically, we prove that a polynomial-time algorithm (with an NP oracle) for NC^0_3-Avoid for the case of m = n + n^{2/3} would imply an explicit construction of a rigid matrix, and, thus, a super-linear lower bound on the size of log-depth circuits.
We also give deterministic polynomial-time algorithms for all NC^0_k-Avoid problems for m ≥ n^{k-1}/log(n). Prior work required an NP oracle, and required a larger stretch, m ≥ n^{k-1}.
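The "very natural randomized algorithm" mentioned above is simply guessing: since m > n, a uniformly random y ∈ {0,1}^m lies outside the range with probability at least 1 − 2^(n−m) ≥ 1/2. A brute-force sketch for toy sizes (the circuit below is an illustrative NC^0_2 example, not one from the paper):

```python
import random

def avoid_by_guessing(circuit, n, m, rng, tries=64):
    """Randomized Avoid: since m > n, a uniform y in {0,1}^m is outside
    the range of the circuit with probability >= 1 - 2^(n-m) >= 1/2.
    Range membership is checked by brute force here, so keep n tiny."""
    image = {circuit(tuple((x >> i) & 1 for i in range(n)))
             for x in range(2 ** n)}
    for _ in range(tries):
        y = tuple(rng.randint(0, 1) for _ in range(m))
        if y not in image:
            return y
    return None  # fails with probability at most 2^-tries

# Illustrative NC^0_2 circuit: each of 4 output bits reads <= 2 of 3 inputs.
def toy_circuit(x):
    return (x[0] ^ x[1], x[1] ^ x[2], x[0] & x[2], x[1])

y = avoid_by_guessing(toy_circuit, n=3, m=4, rng=random.Random(1))
```

The whole difficulty of Avoid as a complexity question is removing the randomness from this guess: a deterministic counterpart would yield the explicit constructions listed above.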
Parallel Batch-Dynamic Graph Connectivity
In this paper, we study batch parallel algorithms for the dynamic connectivity problem, a fundamental problem that has received considerable attention in the sequential setting. The most well-known sequential algorithm for dynamic connectivity is the elegant level-set algorithm of Holm, de Lichtenberg and Thorup (HDT), which achieves O(log^2 n) amortized time per edge insertion or deletion, and O(log n / log log n) time per query. We design a parallel batch-dynamic connectivity algorithm that is work-efficient with respect to the HDT algorithm for small batch sizes, and is asymptotically faster when the average batch size is sufficiently large. Given a sequence of batched updates, where Δ is the average batch size of all deletions, our algorithm achieves O(log n log(1 + n/Δ)) expected amortized work per edge insertion and deletion and O(log^3 n) depth w.h.p. Our algorithm answers a batch of k connectivity queries in O(k log(1 + n/k)) expected work and O(log n) depth w.h.p. To the best of our knowledge, our algorithm is the first parallel batch-dynamic algorithm for connectivity.
Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2019
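The batch interface can be made concrete with a sequential baseline: insert a batch of edges, then answer a batch of connectivity queries against a union-find structure. This sketch illustrates only the batch-dynamic semantics, not the paper's parallel algorithm; in particular, plain union-find cannot handle the edge deletions that the HDT level structure supports:

```python
class BatchConnectivity:
    """Sequential baseline for the batch interface: insert a batch of edges,
    then answer a batch of connectivity queries (union-find with path
    halving; no deletions -- those need the HDT level structure)."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, u):
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]  # path halving
            u = self.parent[u]
        return u

    def insert_batch(self, edges):
        for u, v in edges:
            self.parent[self.find(u)] = self.find(v)

    def query_batch(self, pairs):
        return [self.find(u) == self.find(v) for u, v in pairs]

cc = BatchConnectivity(6)
cc.insert_batch([(0, 1), (1, 2), (3, 4)])
answers = cc.query_batch([(0, 2), (0, 3), (4, 3)])  # [True, False, True]
```

In the parallel setting each batch is processed with polylogarithmic depth, which is where the O(log^3 n) update depth and O(log n) query depth bounds come in.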
The Adaptive Sampling Revisited
The problem of estimating the number n of distinct keys of a large collection of N data is well known in computer science. A classical algorithm is adaptive sampling (AS). n can be estimated by R · 2^D, where R is the final bucket (cache) size and D is the final depth at the end of the process. Several new interesting questions can be asked about AS (some of them were suggested by P. Flajolet and popularized by J. Lumbroso). The distribution of D is known; we rederive this distribution in a simpler way. We provide new results on the moments of D and of the estimate R · 2^D. We also analyze the final cache size distribution. We consider colored keys: assume that, among the n distinct keys, n_C have color C. We show how to estimate the proportion n_C / n. We also study colored keys with some multiplicity given by some distribution function. We want to estimate the mean and variance of this distribution. Finally, we consider the case where neither colors nor multiplicities are known. There we want to estimate the related parameters. An appendix is devoted to the case where the hashing function provides bits with probability different from 1/2.
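The estimator R · 2^D can be simulated directly: hash each key, keep only keys whose hash has D leading zero bits, and increment the depth whenever the cache overflows. A minimal sketch (the hash function and cache size are illustrative choices, not parameters from the paper):

```python
import hashlib

def adaptive_sampling(keys, cache_limit=64):
    """Adaptive sampling: keep distinct keys whose hash has d leading zero
    bits; when the cache overflows, increment d and refilter, halving the
    expected sample. Returns the estimate R * 2^D of the number of
    distinct keys (sha256 and the cache limit are illustrative)."""
    def hbits(key):
        return int.from_bytes(hashlib.sha256(str(key).encode()).digest(), "big")

    def passes(key, d):
        return d == 0 or (hbits(key) >> (256 - d)) == 0

    d, cache = 0, set()
    for key in keys:
        if passes(key, d):
            cache.add(key)
            while len(cache) > cache_limit:   # overflow: deepen the filter
                d += 1
                cache = {k for k in cache if passes(k, d)}
    return len(cache) * 2 ** d

# 10,000 distinct keys; the estimate R * 2^D should land near 10,000.
estimate = adaptive_sampling(range(10_000))
```

Duplicates are absorbed by the set, so only distinct keys are counted; the relative error shrinks as the cache limit grows.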