On the Computational Power of Radio Channels
Radio networks can be a challenging platform for which to develop distributed algorithms, because the network nodes must contend for a shared channel. In some cases, though, the shared medium is an advantage rather than a disadvantage: for example, many radio network algorithms cleverly use the shared channel to approximate the degree of a node, or estimate the contention. In this paper we ask how far the inherent power of a shared radio channel goes, and whether it can efficiently compute "classically hard" functions such as Majority, Approximate Sum, and Parity.
Using techniques from circuit complexity, we show that in many cases, the answer is "no". We show that simple radio channels, such as the beeping model or the channel with collision-detection, can be approximated by a low-degree polynomial, which makes them subject to known lower bounds on functions such as Parity and Majority; we obtain round lower bounds of the form Omega(n^{delta}) on these functions, for delta in (0,1). Next, we use the technique of random restrictions, used to prove AC^0 lower bounds, to prove a tight lower bound of Omega(1/epsilon^2) on computing a (1 +/- epsilon)-approximation to the sum of the nodes' inputs. Our techniques are general and apply to many types of radio channels studied in the literature.
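The contention-estimation trick alluded to above can be made concrete with a toy single-hop beeping channel (a generic illustration, not the construction from this paper): if every active node beeps with probability 2^{-i} in phase i, the first silent phase identifies the number of active nodes up to a constant factor.

    import random

    def beeping_slot(active, p):
        # One slot of a single-hop beeping channel: a beep is heard iff at
        # least one of the `active` nodes chooses to beep (each with prob. p).
        return any(random.random() < p for _ in range(active))

    def estimate_active(active, max_phase=40):
        # Phase i: every active node beeps with probability 2**-i.  The first
        # silent phase i gives 2**i as a constant-factor estimate of `active`.
        for i in range(max_phase):
            if not beeping_slot(active, 2.0 ** -i):
                return 2 ** i
        return 2 ** max_phase

    if __name__ == "__main__":
        for n in (4, 64, 1024):
            runs = sorted(estimate_active(n) for _ in range(201))
            print(n, runs[len(runs) // 2])  # median estimate per network size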
Quantified Derandomization of Linear Threshold Circuits
One of the prominent current challenges in complexity theory is the attempt to prove lower bounds for TC^0, the class of constant-depth, polynomial-size circuits with majority gates. Relying on the results of Williams (2013), an appealing approach to prove such lower bounds is to construct a non-trivial derandomization algorithm for TC^0. In this work we take a first step towards the latter goal, by proving the first positive results regarding the derandomization of TC^0 circuits of depth d > 2.
Our first main result is a quantified derandomization algorithm for TC^0 circuits with a super-linear number of wires. Specifically, we construct an algorithm that gets as input a TC^0 circuit C over n input bits with depth d and n^{1+exp(-d)} wires, runs in almost-polynomial time, and distinguishes between the case that C rejects at most 2^{n^{1-exp(-d)}} inputs and the case that C accepts at most 2^{n^{1-exp(-d)}} inputs. In fact, our algorithm works even when the circuit C is a linear threshold circuit, rather than just a TC^0 circuit (i.e., C is a circuit with linear threshold gates, which are stronger than majority gates).
Our second main result is that even a modest improvement of our quantified derandomization algorithm would yield a non-trivial algorithm for standard derandomization of all of TC^0, and would consequently imply that NEXP is not contained in TC^0. Specifically, if there exists a quantified derandomization algorithm that gets as input a TC^0 circuit with depth d and n^{1+O(1/d)} wires (rather than n^{1+exp(-d)} wires), runs in sufficiently small sub-exponential time, and distinguishes between the case that the circuit rejects at most B(n) inputs and the case that it accepts at most B(n) inputs, for a suitable bound B(n), then there exists a non-trivial algorithm for standard derandomization of TC^0.
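For intuition about the promise problem being solved (the paper's point is to solve it deterministically; the sketch below is only the trivial randomized baseline, with toy circuits standing in for TC^0 circuits): when a circuit either accepts or rejects all but a tiny fraction of its inputs, a handful of uniformly random evaluations reveals which case holds.

    import random

    def sample_test(circuit, n, samples=50):
        # Randomized (not yet derandomized) test for the promise problem: the
        # circuit either accepts all but very few inputs or rejects all but
        # very few, so a majority vote over random inputs identifies the case
        # with high probability.
        votes = sum(1 for _ in range(samples)
                    if circuit([random.randint(0, 1) for _ in range(n)]))
        return "accepts almost all inputs" if 2 * votes > samples else "rejects almost all inputs"

    if __name__ == "__main__":
        n = 64
        or_circuit = lambda x: any(x)   # rejects exactly one input (all zeros)
        and_circuit = lambda x: all(x)  # accepts exactly one input (all ones)
        print(sample_test(or_circuit, n))   # -> accepts almost all inputs
        print(sample_test(and_circuit, n))  # -> rejects almost all inputs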
Improved Pseudorandom Generators from Pseudorandom Multi-Switching Lemmas
We give the best known pseudorandom generators for two touchstone classes in unconditional derandomization: an epsilon-PRG for the class of size-M depth-d AC^0 circuits with seed length log(M)^{d+O(1)} * log(1/epsilon), and an epsilon-PRG for the class of S-sparse F_2 polynomials with seed length 2^{O(sqrt(log S))} * log(1/epsilon). These results bring the state of the art for
unconditional derandomization of these classes into sharp alignment with the
state of the art for computational hardness for all parameter settings:
improving on the seed lengths of either PRG would require breakthrough progress
on longstanding and notorious circuit lower bounds.
The key enabling ingredient in our approach is a new pseudorandom multi-switching lemma. We derandomize recently developed multi-switching lemmas, which are powerful generalizations of Håstad's switching lemma that deal with families of depth-two circuits. Our pseudorandom multi-switching lemma, a randomness-efficient algorithm for sampling restrictions that simultaneously simplify all circuits in a family, achieves the parameters obtained by the (full-randomness) multi-switching lemmas of Impagliazzo, Matthews, and Paturi [IMP12] and Håstad [Hås14]. This optimality of our derandomization translates into the optimality (given current circuit lower bounds) of our PRGs for AC^0 and sparse F_2 polynomials.
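For readers unfamiliar with the objects being derandomized: a truly random restriction leaves each variable free with probability p and fixes the rest to random bits, and Håstad's switching lemma says such a restriction typically collapses a small-width DNF. The sketch below samples such a fully random restriction and applies it to a DNF; it only illustrates the setting, since the paper's contribution is generating such restrictions from a short pseudorandom seed, simultaneously for a whole family of circuits.

    import random

    def random_restriction(n, p):
        # Each variable stays free ('*') with probability p and is otherwise
        # fixed to a uniformly random bit -- the distribution the switching
        # lemma randomizes over (and which the paper samples pseudorandomly).
        return ['*' if random.random() < p else random.randint(0, 1) for _ in range(n)]

    def restrict_dnf(dnf, rho):
        # A DNF is a list of terms; a term is a list of (variable, required bit).
        # Returns the simplified DNF under rho, or True if it became constant 1.
        survivors = []
        for term in dnf:
            live, killed = [], False
            for var, bit in term:
                if rho[var] == '*':
                    live.append((var, bit))
                elif rho[var] != bit:
                    killed = True      # a literal is falsified: the term vanishes
                    break
            if killed:
                continue
            if not live:
                return True            # a term is fully satisfied: DNF is constant 1
            survivors.append(live)
        return survivors

    if __name__ == "__main__":
        n = 30
        dnf = [[(random.randrange(n), random.randint(0, 1)) for _ in range(3)]
               for _ in range(40)]     # a random width-3 DNF
        print(restrict_dnf(dnf, random_restriction(n, p=0.1)))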
Distributed Computing with Adaptive Heuristics
We use ideas from distributed computing to study dynamic environments in
which computational nodes, or decision makers, follow adaptive heuristics (Hart
2005), i.e., simple and unsophisticated rules of behavior, e.g., repeatedly
"best replying" to others' actions, and minimizing "regret", that have been
extensively studied in game theory and economics. We explore when convergence
of such simple dynamics to an equilibrium is guaranteed in asynchronous
computational environments, where nodes can act at any time. Our research
agenda, distributed computing with adaptive heuristics, lies on the borderline
of computer science (including distributed computing and learning) and game
theory (including game dynamics and adaptive heuristics). We exhibit a general
non-termination result for a broad class of heuristics with bounded
recall---that is, simple rules of behavior that depend only on recent history
of interaction between nodes. We consider implications of our result across a
wide variety of interesting and timely applications: game theory, circuit
design, social networks, routing and congestion control. We also study the
computational and communication complexity of asynchronous dynamics and present
some basic observations regarding the effects of asynchrony on no-regret
dynamics. We believe that our work opens a new avenue for research in both
distributed computing and game theory.
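As an elementary illustration of best-reply dynamics failing to converge (a textbook example, not the paper's bounded-recall construction): in Matching Pennies there is no pure equilibrium, so pure best responses cycle under any activation schedule, synchronous or asynchronous.

    import itertools

    # Matching Pennies: the row player wants the two actions to match, the
    # column player wants them to differ, so no pure action profile is stable.

    def row_best_reply(col_action):
        return col_action          # match the column player

    def col_best_reply(row_action):
        return 1 - row_action      # mismatch the row player

    def run_dynamics(schedule, steps=12):
        state = [0, 0]             # (row action, column action)
        history = [tuple(state)]
        for player in itertools.islice(itertools.cycle(schedule), steps):
            if player == 0:
                state[0] = row_best_reply(state[1])
            else:
                state[1] = col_best_reply(state[0])
            history.append(tuple(state))
        return history

    if __name__ == "__main__":
        print(run_dynamics([0, 1]))      # alternating activations: a 4-cycle
        print(run_dynamics([0, 0, 1]))   # another asynchronous schedule, still no fixed point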
Near-optimal small-depth lower bounds for small distance connectivity
We show that any depth-d circuit for determining whether an n-node graph has an s-to-t path of length at most k must have size n^{Omega(k^{1/d}/d)}. The previous best circuit size lower bounds for this problem were n^{k^{exp(-O(d))}} (due to Beame, Impagliazzo, and Pitassi [BIP98]) and n^{Omega((log k)/d)} (following from a recent formula size lower bound of Rossman [Ros14]). Our lower bound is quite close to optimal, since a simple construction gives depth-d circuits of size n^{O(k^{2/d})} for this problem (and strengthening our bound even to n^{k^{Omega(1/d)}} would require proving that undirected connectivity is not in NC^1).
Our proof is by reduction to a new lower bound on the size of small-depth circuits computing a skewed variant of the "Sipser functions" that have played an important role in classical circuit lower bounds [Sip83, Yao85, Hås86]. A key ingredient in our proof of the required lower bound for these Sipser-like functions is the use of random projections, an extension of random restrictions which were recently employed in [RST15]. Random projections allow us to obtain sharper quantitative bounds while employing simpler arguments, both conceptually and technically, than in the previous works [Ajt89, BPU92, BIP98, Ros14].
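The "simple construction" mentioned above is, in spirit, the usual midpoint recursion: an s-to-t path of length at most k exists iff some vertex v is reachable from s and reaches t within about k/2 steps each, and unrolling this recursion for a few levels yields the small-depth circuits. A sketch of the recursion as a procedure (illustrative code, not the paper's circuit):

    def reach_within(adj, s, t, k):
        # True iff the graph (dict: vertex -> set of neighbours) has an s-to-t
        # path of length at most k.  The midpoint recursion mirrors the
        # OR-over-midpoints / AND-of-two-halves structure of the small-depth
        # circuits for distance-k connectivity.
        if s == t:
            return True
        if k <= 1:
            return t in adj[s]
        half = k // 2
        return any(reach_within(adj, s, v, half) and reach_within(adj, v, t, k - half)
                   for v in adj)

    if __name__ == "__main__":
        path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}  # a path on 5 nodes
        print(reach_within(path, 0, 4, 4))   # True: the s-to-t path has length 4
        print(reach_within(path, 0, 4, 3))   # False: no shorter s-to-t path exists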
Understanding the complexity of #SAT using knowledge compilation
Two main techniques have been used so far to solve the #P-hard problem #SAT.
The first one, used in practice, is based on an extension of DPLL for model
counting called exhaustive DPLL. The second approach, more theoretical,
exploits the structure of the input to compute the number of satisfying
assignments, usually by using a dynamic programming scheme on a decomposition of
the formula. In this paper, we make a first step toward the separation of these
two techniques by exhibiting a family of formulas that can be solved in
polynomial time with the first technique but needs exponential time with the second one. We show this by observing that both techniques implicitly construct a very specific Boolean circuit equivalent to the input formula. We
then show that every beta-acyclic formula can be represented by a polynomial
size circuit corresponding to the first method and exhibit a family of
beta-acyclic formulas which cannot be represented by polynomial size circuits
corresponding to the second method. This result sheds new light on the complexity of #SAT and related problems on beta-acyclic formulas. As a byproduct, we give new, handy tools for designing algorithms on beta-acyclic hypergraphs.
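To make the first technique concrete: exhaustive DPLL branches on a variable and sums the model counts of the two branches rather than stopping at the first satisfying assignment. A minimal sketch, without the caching and connected-component decomposition that practical #SAT solvers layer on top:

    def count_models(clauses, variables):
        # Exhaustive-DPLL-style model counting for a CNF given as a list of
        # clauses, each clause a list of signed literals (3 means x3, -3 means
        # "not x3").  No caching or decomposition, just branch and sum.
        def simplify(cls, lit):
            out = []
            for c in cls:
                if lit in c:
                    continue                       # clause satisfied by lit
                reduced = [l for l in c if l != -lit]
                if not reduced:
                    return None                    # empty clause: contradiction
                out.append(reduced)
            return out

        def rec(cls, free):
            if cls is None:
                return 0
            if not cls:
                return 2 ** len(free)              # every completion is a model
            var = abs(cls[0][0])                   # branch on a variable still occurring
            rest = [v for v in free if v != var]
            return rec(simplify(cls, var), rest) + rec(simplify(cls, -var), rest)

        return rec(clauses, list(variables))

    if __name__ == "__main__":
        # (x1 or x2) and (not x1 or x3) over {x1, x2, x3} has exactly 4 models
        print(count_models([[1, 2], [-1, 3]], {1, 2, 3}))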
Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy
We consider quantum computations comprising only commuting gates, known as
IQP computations, and provide compelling evidence that the task of sampling
their output probability distributions is unlikely to be achievable by any
efficient classical means. More specifically we introduce the class post-IQP of
languages decided with bounded error by uniform families of IQP circuits with
post-selection, and prove first that post-IQP equals the classical class PP.
Using this result we show that if the output distributions of uniform IQP
circuit families could be classically efficiently sampled, even up to 41%
multiplicative error in the probabilities, then the infinite tower of classical complexity classes known as the polynomial hierarchy would collapse to its
third level. We mention some further results on the classical simulation
properties of IQP circuit families, in particular showing that if the output
distribution results from measurements on only O(log n) lines then it may in
fact be classically efficiently sampled.
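To make "commuting gates" concrete (an illustrative toy, not the construction in the paper): an IQP circuit sandwiches a layer of gates that are diagonal in the computational basis, and hence mutually commute, between two layers of Hadamards. The sketch below computes the exact output distribution of a small instance by brute force.

    import cmath, math

    def iqp_distribution(n, phase):
        # Output distribution of the IQP circuit (H tensor n) . D . (H tensor n)
        # applied to |0...0>, where D multiplies basis state |y> by
        # exp(i * phase(y)).  Brute force over all 2^n basis states, tiny n only.
        N = 2 ** n
        state = [cmath.exp(1j * phase(y)) / math.sqrt(N) for y in range(N)]  # after first H layer and D
        probs = []
        for x in range(N):
            # second Hadamard layer: <x| H tensor n |y> = (-1)^{x.y} / sqrt(N)
            amp = sum((-1) ** bin(x & y).count("1") * state[y] for y in range(N)) / math.sqrt(N)
            probs.append(abs(amp) ** 2)
        return probs

    if __name__ == "__main__":
        n = 3
        def phase(y):
            bits = [(y >> i) & 1 for i in range(n)]
            # a T-like phase on every qubit plus a CZ-like phase on qubits 0 and 1
            return (math.pi / 4) * sum(bits) + math.pi * bits[0] * bits[1]
        dist = iqp_distribution(n, phase)
        print([round(p, 4) for p in dist], "sum =", round(sum(dist), 6))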