Stochastic Block Model and Community Detection in Sparse Graphs: A spectral algorithm with optimal rate of recovery
In this paper, we present and analyze a simple and robust spectral algorithm
for the stochastic block model with k blocks, for any fixed k. Our
algorithm works with graphs having constant edge density, under an optimal
condition on the gap between the density inside a block and the density between
the blocks. As a by-product, we settle an open question posed by Abbe et al.
concerning censored block models.
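As a concrete illustration of the spectral approach, here is a minimal sketch for the two-block case: the sign pattern of the second eigenvector of the adjacency matrix recovers the communities. The parameters are chosen for readability; this is not the paper's algorithm or its sparse-regime analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-block SBM: n nodes, within-block edge prob p, between-block prob q.
n, p, q = 200, 0.5, 0.1
labels = np.array([0] * (n // 2) + [1] * (n // 2))

# Sample a symmetric adjacency matrix with no self-loops.
probs = np.where(labels[:, None] == labels[None, :], p, q)
upper = rng.random((n, n)) < probs
A = np.triu(upper, 1)
A = (A + A.T).astype(float)

# Spectral step: the eigenvector of the second-largest eigenvalue splits the
# two blocks by sign (up to relabeling) when the gap p - q is large enough.
eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
v = eigvecs[:, -2]                     # eigenvector of the 2nd-largest eigenvalue
guess = (v > 0).astype(int)

# Accuracy up to swapping the two community labels.
acc = max(np.mean(guess == labels), np.mean(guess != labels))
```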
Elastic Cash
Elastic Cash is a new decentralized mechanism for regulating the money
supply. The mechanism operates by modifying the supply so that an interest rate
determined by a public market is kept approximately fixed. It can be
incorporated into the conventional monetary system to improve the elasticity of
the US Dollar, and it can be used to design new elastic cryptocurrencies that
remain decentralized.
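A toy feedback sketch of the idea: expand the money supply when the market interest rate runs above a target and contract it when the rate runs below, so the rate stays approximately fixed. The rate-supply relationship, target, and gain below are assumptions for the simulation, not the paper's mechanism.

```python
TARGET_RATE = 0.02  # hypothetical target interest rate
GAIN = 0.5          # hypothetical adjustment gain

def market_rate(supply, demand=100.0):
    # Assumed stylized market: the rate falls as supply grows relative to demand.
    return 0.02 * demand / supply

def regulate(supply, steps=50):
    """Iteratively adjust the supply until market_rate(supply) ~= TARGET_RATE."""
    for _ in range(steps):
        rate = market_rate(supply)
        # Expand supply when the rate is above target, contract when below.
        supply *= 1 + GAIN * (rate - TARGET_RATE) / TARGET_RATE
    return supply

final_supply = regulate(60.0)
final_rate = market_rate(final_supply)
```

Under these assumptions the update is a contraction toward the supply level at which the market rate equals the target.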
Exponential Separation of Quantum Communication and Classical Information
We exhibit a Boolean function for which the quantum communication complexity
is exponentially larger than the classical information complexity. An
exponential separation in the other direction was already known from the work
of Kerenidis et al. [SICOMP 44, pp. 1550-1572], hence our work implies that
these two complexity measures are incomparable. As classical information
complexity is an upper bound on quantum information complexity, which in turn
is equal to amortized quantum communication complexity, our work implies that a
tight direct sum result for distributional quantum communication complexity
cannot hold. The function we use to present such a separation is the Symmetric
k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057],
whose classical communication complexity is exponentially larger than its
classical information complexity. In this paper, we show that the quantum
communication complexity of this function is polynomially equivalent to its
classical communication complexity. The high-level idea behind our proof is
arguably the simplest so far for such an exponential separation between
information and communication, driven by a sequence of round-elimination
arguments, allowing us to simplify further the approach of Rao and Sinha.
As another application of the techniques that we develop, we give a simple
proof for an optimal trade-off between Alice's and Bob's communication while
computing the related Greater-Than function on n bits: say Bob communicates at
most b bits, then Alice must send n/exp(O(b)) bits to Bob. This holds even when
allowing pre-shared entanglement. We also present a classical protocol
achieving this bound.
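To make the Greater-Than setting concrete, here is a naive interactive protocol simulation that counts the bits each party sends. It only illustrates the communication model; it does not achieve the n/exp(O(b)) trade-off described above.

```python
# Naive protocol for Greater-Than on n-bit inputs (illustrative only):
# Alice streams her bits from most significant to least; after each bit Bob
# replies with one bit saying whether the answer is already decided.

def greater_than_protocol(x_bits, y_bits):
    """Return (x > y, bits Alice sent, bits Bob sent)."""
    alice_sent = bob_sent = 0
    for xb, yb in zip(x_bits, y_bits):
        alice_sent += 1          # Alice sends her next bit
        bob_sent += 1            # Bob replies: continue / decided
        if xb != yb:
            # First differing bit (most significant) decides the comparison.
            return xb > yb, alice_sent, bob_sent
    return False, alice_sent, bob_sent

x = [1, 0, 1, 1]   # 11 in binary
y = [1, 0, 0, 1]   # 9 in binary
result, a_bits, b_bits = greater_than_protocol(x, y)
```

In the worst case both parties send n bits here; the trade-off in the abstract concerns how unevenly that cost can be shifted onto Alice when Bob is limited to b bits.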
Circuits with Medium Fan-In
We consider Boolean circuits in which every gate may compute an arbitrary Boolean function of k other gates, for a parameter k. We give an explicit function f : {0,1}^n -> {0,1} that requires at least Omega(log^2(n)) non-input gates when k = 2n/3. When the circuit is restricted to being layered and of depth 2, we prove a lower bound of n^(Omega(1)) on the number of non-input gates. When the circuit is a formula with gates of fan-in k, we give a lower bound of Omega(n^2/(k log n)) on the total number of gates.
Our model is connected to some well-known approaches to proving lower bounds in complexity theory. Optimal lower bounds for the number-on-forehead model in communication complexity, for bounded-depth circuits in AC_0, or for extractors for varieties over small fields would imply strong lower bounds in our model. On the other hand, new lower bounds for our model would prove new time-space tradeoffs for branching programs and impossibility results for (fan-in 2) circuits with linear size and logarithmic depth. In particular, our lower bound gives a different proof for a known time-space tradeoff for oblivious branching programs.
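A minimal evaluator makes the model concrete: each gate computes an arbitrary Boolean function of k earlier wires, represented here by a truth table indexed by the fan-in bits. The gates and values below are illustrative, not from the paper.

```python
# Evaluate a circuit in the medium fan-in model. Each gate is a pair
# (fanin, table): 'fanin' lists the indices of the wires it reads, and
# 'table' is its truth table, indexed by those wire values read as a
# binary number (first wire = most significant bit).

def eval_circuit(inputs, gates):
    wires = list(inputs)                 # wires 0..n-1 are the inputs
    for fanin, table in gates:
        idx = 0
        for w in fanin:
            idx = (idx << 1) | wires[w]  # pack fan-in bits into an index
        wires.append(table[idx])         # gate output becomes a new wire
    return wires[-1]                     # last gate is the circuit output

# k = 2 example: gate 3 = AND(w0, w1); gate 4 = XOR(gate 3, w2).
gates = [((0, 1), [0, 0, 0, 1]), ((3, 2), [0, 1, 1, 0])]
out = eval_circuit([1, 1, 0], gates)
```

Since each table has 2^k entries, gates with large k are individually very powerful, which is what makes lower bounds in this model demanding.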
Simplified Lower Bounds on the Multiparty Communication Complexity of Disjointness
We show that the deterministic number-on-forehead communication complexity of set disjointness for k parties on a universe of size n is Omega(n/4^k). This gives the first lower bound that is linear in n, nearly matching Grolmusz's upper bound of O(log^2(n) + k^2 n/2^k). We also simplify the proof of Sherstov's Omega(sqrt(n)/(k 2^k)) lower bound for the randomized communication complexity of set disjointness.
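For concreteness, here is the k-party set-disjointness function itself, a minimal sketch: in the number-on-forehead model the k sets are distributed so that each party sees every set except its own, but the function computed is simply this one.

```python
# k-party set disjointness over a universe: output 1 iff no element
# belongs to every party's set (illustrative definition, not a protocol).

def disjointness(sets):
    common = set.intersection(*map(set, sets))
    return int(len(common) == 0)

out_disjoint = disjointness([{1, 2}, {2, 3}, {3, 4}])   # empty intersection
out_intersecting = disjointness([{1, 2}, {2, 3}, {2, 4}])  # 2 is common to all
```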
A Direct-Sum Theorem for Read-Once Branching Programs
We study a direct-sum question for read-once branching programs. If M(f) denotes the minimum average memory required to compute a function f(x_1, x_2, ..., x_n), how much memory is required to compute f on k independent inputs that arrive in parallel? We show that when the inputs are sampled independently from some domain X and M(f) = Omega(n), then computing the value of f on k streams requires average memory at least Omega(k * M(f)/n).
Our results are obtained by defining new ways to measure the information complexity of read-once branching programs. We define two such measures: the transitional and cumulative information content. We prove that any read-once branching program with transitional information content I can be simulated using average memory O(n(I+1)). On the other hand, if every read-once branching program with cumulative information content I can be simulated with average memory O(I+1), then computing f on k inputs requires average memory at least Omega(k * (M(f)-1)).
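A toy instance of the direct-sum setting: one read-once pass over k input streams arriving in parallel, with the carried state playing the role of memory. Here f is parity, which needs one state bit per copy, so k copies carry k bits, the kind of k-fold scaling in M(f) that the theorem quantifies for general f. The example is illustrative, not from the paper.

```python
# One-pass (read-once) computation of k independent copies of parity on
# streams that arrive in parallel. The 'state' list is the memory carried
# between input symbols; each symbol of each stream is read exactly once.

def parity_streams(streams):
    """streams: list of k bit-lists of equal length, consumed in parallel."""
    k = len(streams)
    state = [0] * k                  # k bits of memory, one per copy
    n = len(streams[0])
    for i in range(n):               # single left-to-right pass
        for j in range(k):
            state[j] ^= streams[j][i]
    return state                     # state[j] = parity of stream j

out = parity_streams([[1, 0, 1], [1, 1, 1]])
```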