Gossip vs. Markov Chains, and Randomness-Efficient Rumor Spreading
We study gossip algorithms for the rumor spreading problem, which asks one
node to deliver a rumor to all nodes in an unknown network. We present the
first protocol for any expander graph with nodes such that the
protocol informs every node in rounds with high probability, and
uses random bits in total. The runtime of our protocol is
tight, and the randomness requirement of random bits almost
matches the lower bound of random bits for dense graphs. We
further show that, for many graph families, a polylogarithmic number of random
bits in total suffices to spread the rumor in rounds.
These results together give us an almost complete understanding of the
randomness requirement of this fundamental gossip process.
Our analysis relies on unexpectedly tight connections among gossip processes,
Markov chains, and branching programs. First, we establish a connection between
rumor spreading processes and Markov chains, which is used to approximate the
rumor spreading time by the mixing time of Markov chains. Second, we show a
reduction from rumor spreading processes to branching programs, and this
reduction provides a general framework to derandomize gossip processes. In
addition to designing rumor spreading protocols, these novel techniques may
have applications in studying parallel and multiple random walks, and
randomness complexity of distributed algorithms.
Comment: 41 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1304.135
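The rumor-spreading problem being studied can be made concrete with the classic synchronous PUSH gossip process: in each round, every informed node forwards the rumor to a uniformly random neighbor. The sketch below simulates that baseline process (not the paper's randomness-efficient protocol; the point of the paper is that this process normally consumes fresh random bits at every node in every round):

```python
import random

def push_rumor_spreading(adj, source, rng=None):
    """Simulate the classic synchronous PUSH gossip process.

    adj: adjacency list {node: [neighbors]}; source: initially informed node.
    Returns the number of rounds until every node is informed.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        rounds += 1
        newly = set()
        for u in informed:
            newly.add(rng.choice(adj[u]))  # push to a uniform random neighbor
        informed |= newly
    return rounds

# Complete graph on 16 nodes: PUSH informs all nodes in O(log n) rounds w.h.p.
n = 16
adj = {u: [v for v in range(n) if v != u] for u in range(n)}
print(push_rumor_spreading(adj, 0))
```

Since the informed set can at most double per round, at least log2(n) rounds are always needed; the abstract's contribution is matching this runtime while using far fewer random bits in total than this naive simulation.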
Deterministic Approximation of Random Walks in Small Space
We give a deterministic, nearly logarithmic-space algorithm that given an undirected graph G, a positive integer r, and a set S of vertices, approximates the conductance of S in the r-step random walk on G to within a factor of 1+epsilon, where epsilon>0 is an arbitrarily small constant. More generally, our algorithm computes an epsilon-spectral approximation to the normalized Laplacian of the r-step walk.
Our algorithm combines the derandomized square graph operation [Eyal Rozenman and Salil Vadhan, 2005], which we recently used for solving Laplacian systems in nearly logarithmic space [Murtagh et al., 2017], with ideas from [Cheng et al., 2015], which gave an algorithm that is time-efficient (while ours is space-efficient) and randomized (while ours is deterministic) for the case of even r (while ours works for all r). Along the way, we provide some new results that generalize technical machinery and yield improvements over previous work. First, we obtain a nearly linear-time randomized algorithm for computing a spectral approximation to the normalized Laplacian for odd r. Second, we define and analyze a generalization of the derandomized square for irregular graphs and for sparsifying the product of two distinct graphs. As part of this generalization, we also give a strongly explicit construction of expander graphs of every size.
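The quantity being approximated can be pinned down with a naive exact computation: the conductance of S in the r-step walk is the probability that a walk started from the stationary distribution restricted to S has left S after r steps. A minimal sketch by dense matrix powering (the paper's contribution is computing a (1+epsilon)-approximation of this deterministically in nearly logarithmic *space*, which this exact method does not do):

```python
def walk_matrix(adj):
    """Row-stochastic transition matrix of the simple random walk on G."""
    n = len(adj)
    W = [[0.0] * n for _ in range(n)]
    for u, nbrs in adj.items():
        for v in nbrs:
            W[u][v] = 1.0 / len(nbrs)
    return W

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def conductance_r_step(adj, S, r):
    """Conductance of S in the r-step walk: probability that a walk started
    from the stationary distribution restricted to S leaves S after r steps."""
    n = len(adj)
    W = walk_matrix(adj)
    Wr = W
    for _ in range(r - 1):
        Wr = mat_mul(Wr, W)
    deg = {u: len(adj[u]) for u in adj}
    vol_S = sum(deg[u] for u in S)  # stationary mass is proportional to degree
    escape = sum(deg[u] * Wr[u][v] for u in S for v in range(n) if v not in S)
    return escape / vol_S

# 6-cycle, S = {0,1,2}, one step: only boundary nodes 0 and 2 can leave S.
adj = {u: [(u - 1) % 6, (u + 1) % 6] for u in range(6)}
print(conductance_r_step(adj, {0, 1, 2}, 1))  # -> 0.333...
```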
07411 Abstracts Collection -- Algebraic Methods in Computational Complexity
From 07.10. to 12.10., the Dagstuhl Seminar 07411 ``Algebraic Methods in Computational Complexity'' was held in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Small-Bias Sets for Nonabelian Groups: Derandomizing the Alon-Roichman Theorem
In analogy with epsilon-biased sets over Z_2^n, we construct explicit
epsilon-biased sets over nonabelian finite groups G. That is, we find sets S
subset G such that | Exp_{x in S} rho(x)| <= epsilon for any nontrivial
irreducible representation rho. Equivalently, such sets make G's Cayley graph
an expander with eigenvalue |lambda| <= epsilon. The Alon-Roichman theorem
shows that random sets of size O(log |G| / epsilon^2) suffice. For groups of
the form G = G_1 x ... x G_n, our construction has size poly(max_i |G_i|, n,
epsilon^{-1}), and we show that a set S \subset G^n considered by Meka and
Zuckerman that fools read-once branching programs over G is also epsilon-biased
in this sense. For solvable groups whose abelian quotients have constant
exponent, we obtain epsilon-biased sets of size (log |G|)^{1+o(1)}
poly(epsilon^{-1}). Our techniques include derandomized squaring (in both the
matrix product and tensor product senses) and a Chernoff-like bound on the
expected norm of the product of independently random operators that may be of
independent interest.
Comment: Our results on solvable groups have been significantly improved,
giving eps-biased sets of polynomial (as opposed to quasipolynomial) size.
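The nonabelian definition above specializes, over Z_2^n, to the familiar notion the paper takes as its starting point: every nontrivial irreducible representation is a +/-1-valued character, so the bias of S is the largest character-sum correlation. A minimal sketch of that abelian special case, by brute force over all characters:

```python
from itertools import product

def bias(S, n):
    """Bias of the multiset S over Z_2^n:
    max over nonzero a of | E_{x in S} (-1)^{<a, x>} |."""
    best = 0.0
    for a in product((0, 1), repeat=n):
        if not any(a):
            continue  # skip the trivial character
        # (-1)^<a,x> depends only on the parity of the inner product
        corr = sum((-1) ** sum(ai & xi for ai, xi in zip(a, x)) for x in S)
        best = max(best, abs(corr) / len(S))
    return best

# All of Z_2^3 is perfectly balanced (0-biased); a single point has bias 1.
full = list(product((0, 1), repeat=3))
print(bias(full, 3))          # -> 0.0
print(bias([(0, 0, 0)], 3))   # -> 1.0
```

For nonabelian G one would instead bound the operator norm of E_{x in S} rho(x) over all nontrivial irreducible rho, which is what the abstract's construction controls.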
Arithmetic circuits: the chasm at depth four gets wider
In their paper on the "chasm at depth four", Agrawal and Vinay have shown
that polynomials in m variables of degree O(m) which admit arithmetic circuits
of size 2^o(m) also admit arithmetic circuits of depth four and size 2^o(m).
This theorem shows that for problems such as arithmetic circuit lower bounds or
black-box derandomization of identity testing, the case of depth four circuits
is in a certain sense the general case. In this paper we show that smaller
depth four circuits can be obtained if we start from polynomial size arithmetic
circuits. For instance, we show that if the permanent of n*n matrices has
circuits of size polynomial in n, then it also has depth 4 circuits of size
n^O(sqrt(n)*log(n)). Our depth four circuits use integer constants of
polynomial size. These results have potential applications to lower bounds and
deterministic identity testing, in particular for sums of products of sparse
univariate polynomials. We also give an application to Boolean circuit
complexity, and a simple (but suboptimal) reduction to polylogarithmic depth
for arithmetic circuits of polynomial size and polynomially bounded degree.
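To ground the statement, the permanent in question is the polynomial sum over permutations sigma of prod_i M[i][sigma(i)]. Its defining expansion is already a depth-2 formula, but with n! monomials; the theorem says poly(n)-size circuits (of any depth) would yield depth-4 circuits of the much smaller size n^O(sqrt(n)*log(n)). A minimal sketch evaluating that defining expansion:

```python
from itertools import permutations

def permanent(M):
    """Permanent via its defining depth-2 expansion:
    sum over permutations sigma of prod_i M[i][sigma[i]] (n! monomials)."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][sigma[i]]
        total += prod
    return total

# The permanent of the all-ones n x n matrix is n!.
print(permanent([[1] * 3 for _ in range(3)]))  # -> 6
```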
Complexity Theory
Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.
Derandomization with Minimal Memory Footprint
Existing proofs that deduce BPL = L from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large constant overhead in space. We study space-bounded derandomization with minimal footprint, and ask what is the minimal possible space overhead for derandomization. We show that BPSPACE[S] ⊆ DSPACE[c · S] for c ≈ 2, assuming space-efficient cryptographic PRGs, and either: (1) lower bounds against bounded-space algorithms with advice, or (2) lower bounds against certain uniform compression algorithms. Under additional assumptions regarding the power of catalytic computation, in a new setting of parameters that was not studied before, we are even able to get c ≈ 1.
Our results are constructive: given a candidate hard function (and a candidate cryptographic PRG), we show how to transform the randomized algorithm into an efficient deterministic one. This follows from new PRGs and targeted PRGs for space-bounded algorithms, which we combine with novel space-efficient evaluation methods. A central ingredient in all our constructions is hardness amplification reductions in logspace-uniform TC^0, which were not known before.
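The generic route both results refine is derandomization by seed enumeration: replace true randomness with a PRG, run the algorithm on every seed, and take a majority vote. The deterministic simulation's space overhead is dominated by storing the current seed plus the PRG's workspace, which is exactly the overhead the paper drives toward c ≈ 2 and c ≈ 1. A toy sketch of that template (the names `toy_alg` and `toy_prg` are hypothetical stand-ins, not a real cryptographic PRG):

```python
def derandomize_by_seed_enumeration(randomized_alg, prg, seed_len, x):
    """Generic derandomization template: run the algorithm on every PRG
    output and take a strict majority vote.  Only `seed` and `votes`
    persist across iterations, so the extra space is ~seed_len bits
    plus the PRG's workspace."""
    votes = 0
    for seed in range(2 ** seed_len):  # enumerate all seeds
        if randomized_alg(x, prg(seed)):
            votes += 1
    return 2 * votes > 2 ** seed_len   # strict majority

# Hypothetical stand-in components, for illustration only: an "algorithm"
# whose two uses of the same random bit cancel, so every seed answers
# correctly whether the input bit x equals 1.
def toy_alg(x, bits):
    return (x ^ bits[0] ^ bits[0]) == 1

def toy_prg(seed):
    return [(seed >> i) & 1 for i in range(4)]  # 4 "pseudorandom" bits

print(derandomize_by_seed_enumeration(toy_alg, toy_prg, 4, 1))  # -> True
print(derandomize_by_seed_enumeration(toy_alg, toy_prg, 4, 0))  # -> False
```

For BPSPACE[S] algorithms the seed has length O(S) under the paper's assumptions, so the template above gives DSPACE[c · S]; the work lies in building PRGs and evaluation methods that keep c small.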