Derandomized Construction of Combinatorial Batch Codes
Combinatorial Batch Codes (CBCs), a replication-based variant of the batch codes
introduced by Ishai et al. in STOC 2004, abstract the following data
distribution problem: n data items are to be replicated among m servers in
such a way that any k of the data items can be retrieved by reading at
most one item from each server, with the total amount of storage over all
servers restricted to N. Given parameters m, k, and r, where k and r
are constants, one of the challenging problems is to construct r-uniform CBCs
(CBCs where each data item is replicated among exactly r servers) which
maximize the value of n. In this work, we present an explicit construction of
r-uniform CBCs. The
construction has the property that the servers are almost regular, i.e., the number
of data items stored in each server lies within a narrow range. The
construction is obtained through a better analysis and derandomization of the
randomized construction presented by Ishai et al. The analysis reveals the almost-regularity of the servers, an aspect that so far has not been addressed in the
literature. The derandomization leads to explicit constructions for a wide range
of values of n (for given k and r) for which no other explicit construction
with similar parameters is
known. Finally, we discuss the possibility of a parallel derandomization of the
construction.
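The retrievability requirement above is equivalent to asking that every k-subset of items admits a system of distinct representatives among the servers storing them (Hall's condition). A minimal sketch in Python checking this for a toy 2-uniform instance; the parameter names follow the standard convention of n items, m servers, k queries, and r copies, and the instance itself is made up for illustration:

```python
from itertools import combinations

def max_matching(items, stores):
    """Bipartite matching of items to servers via augmenting paths (Kuhn's algorithm)."""
    match = {}  # server -> item currently assigned to it

    def augment(it, seen):
        for srv in stores[it]:
            if srv in seen:
                continue
            seen.add(srv)
            if srv not in match or augment(match[srv], seen):
                match[srv] = it
                return True
        return False

    return sum(augment(it, set()) for it in items)

def is_cbc(stores, k):
    """Every k-subset of items must be readable from k distinct servers."""
    return all(max_matching(sub, stores) == k
               for sub in combinations(range(len(stores)), k))

# 2-uniform toy instance: 4 items, 4 servers, each item on exactly 2 servers
stores = [{0, 1}, {1, 2}, {2, 3}, {3, 0}]
print(is_cbc(stores, 2))  # → True: any 2 items can be read from distinct servers
```

The check is exponential in k and only meant to make the definition concrete; the point of the construction in the paper is to guarantee this property explicitly at scale.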
Quantified Derandomization of Linear Threshold Circuits
One of the prominent current challenges in complexity theory is the attempt
to prove lower bounds for TC^0, the class of constant-depth, polynomial-size
circuits with majority gates. Relying on the results of Williams (2013), an
appealing approach to prove such lower bounds is to construct a non-trivial
derandomization algorithm for TC^0. In this work we take a first step towards
the latter goal, by proving the first positive results regarding the
derandomization of TC^0 circuits of depth greater than two.
Our first main result is a quantified derandomization algorithm for TC^0
circuits with a super-linear number of wires. Specifically, we construct an
algorithm that gets as input a TC^0 circuit C over n input bits with
depth d and a slightly super-linear number of wires, runs in almost-polynomial time, and
distinguishes between the case that C rejects only a small number of inputs
and the case that C accepts only a small number of inputs. In fact, our
algorithm works even when the circuit C is a linear threshold circuit, rather
than just a TC^0 circuit (i.e., C is a circuit with linear threshold gates,
which are stronger than majority gates).
Our second main result is that even a modest improvement of our quantified
derandomization algorithm would yield a non-trivial algorithm for standard
derandomization of all of TC^0, and would consequently imply that NEXP is not
contained in TC^0. Specifically, if there exists a quantified
derandomization algorithm that gets as input a TC^0 circuit with depth d
and a somewhat larger (but still almost-linear) number of wires, runs in
sub-exponential time, and distinguishes between the case that the circuit rejects
only a small number of inputs and the case that it accepts only a small number of
inputs, then there exists a non-trivially fast algorithm
for standard derandomization of TC^0.
Comment: Changes in this revision: an additional result (a PRG for quantified
derandomization of depth-2 LTF circuits); rewrite of some of the exposition;
minor corrections.
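As a concrete illustration of the circuit class in question, here is a tiny depth-2 linear threshold circuit in Python; the gate weights and thresholds are arbitrary examples, and a majority gate is the special case of all-ones weights:

```python
def ltf(weights, theta):
    """A linear threshold gate: outputs 1 iff <w, x> >= theta."""
    return lambda x: 1 if sum(w * xi for w, xi in zip(weights, x)) >= theta else 0

# Depth-2 LTF circuit: a threshold of thresholds over 3 input bits.
layer1 = [ltf([2, -1, 1], 1), ltf([1, 1, 1], 2), ltf([-1, 3, 1], 0)]
top = ltf([1, 1, 1], 2)  # majority of the three first-layer gates

def circuit(x):
    return top([g(x) for g in layer1])

print(circuit([1, 0, 1]))  # → 1
```

A quantified derandomization algorithm for such circuits must decide, given the circuit, whether it rejects very few inputs or accepts very few inputs; the promise that one of the two holds is what makes the task weaker than standard derandomization.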
Derandomization with Minimal Memory Footprint
Existing proofs that deduce BPL = L from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large constant overhead in space. We study space-bounded derandomization with minimal footprint, and ask what is the minimal possible space overhead for derandomization. We show that BPSPACE[S] ⊆ DSPACE[c · S] for c ≈ 2, assuming space-efficient cryptographic PRGs and either: (1) lower bounds against bounded-space algorithms with advice, or (2) lower bounds against certain uniform compression algorithms. Under additional assumptions regarding the power of catalytic computation, in a new setting of parameters that was not studied before, we are even able to get c ≈ 1.
Our results are constructive: given a candidate hard function (and a candidate cryptographic PRG), we show how to transform the randomized algorithm into an efficient deterministic one. This follows from new PRGs and targeted PRGs for space-bounded algorithms, which we combine with novel space-efficient evaluation methods. A central ingredient in all our constructions is hardness-amplification reductions in logspace-uniform TC^0, which were not known before.
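The generic way a PRG yields a deterministic algorithm is to enumerate all seeds and take a majority vote, which is why the space overhead is governed by the seed length plus the space needed to evaluate the generator and the algorithm. A toy sketch, in which the "PRG" is a hash-based stand-in rather than a hardness-based generator as in the paper:

```python
import hashlib

def toy_prg(seed: int, n_bits: int) -> list:
    """Stretch a short seed into n_bits pseudorandom bits (illustrative only)."""
    stream = hashlib.sha256(seed.to_bytes(8, "big")).digest()
    bits = [(byte >> i) & 1 for byte in stream for i in range(8)]
    return bits[:n_bits]

def randomized_alg(x, coins):
    # A stand-in bounded-space randomized procedure: masked parity of x.
    return sum(xi ^ c for xi, c in zip(x, coins)) % 2

def derandomized(x, seed_bits=8):
    # Majority vote over all 2^seed_bits pseudorandom coin sequences.
    # Only the current seed and a running counter are stored, never all outputs,
    # which is the source of the small space footprint.
    votes = sum(randomized_alg(x, toy_prg(s, len(x)))
                for s in range(2 ** seed_bits))
    return int(votes * 2 > 2 ** seed_bits)

print(derandomized([1, 0, 1, 1]))
```

The paper's contribution is in constructing PRGs and targeted PRGs whose seed length and evaluation space make this enumeration cost only c · S space, not in the enumeration itself.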
Explicit near-Ramanujan graphs of every degree
For every constant d ≥ 3 and ε > 0, we give a deterministic
poly(n)-time algorithm that outputs a d-regular graph on Θ(n)
vertices that is ε-near-Ramanujan; i.e., its eigenvalues
are bounded in magnitude by 2√(d−1) + ε (excluding the single
trivial eigenvalue of d).
Comment: 26 pages
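The ε-near-Ramanujan condition is a simple check on the spectrum of the graph; a minimal sketch, using the complete graph K_4 (3-regular, with spectrum {3, −1, −1, −1}) as the example:

```python
import math

def near_ramanujan(eigs, d, eps):
    """Check that all eigenvalues except the single trivial one (d) are
    bounded in magnitude by 2*sqrt(d-1) + eps."""
    nontrivial = sorted(eigs)[:-1]  # drop the one trivial eigenvalue d
    bound = 2 * math.sqrt(d - 1) + eps
    return all(abs(lam) <= bound for lam in nontrivial)

# K_4: bound is 2*sqrt(2) + 0.1 ≈ 2.93, and all nontrivial eigenvalues are -1
print(near_ramanujan([3, -1, -1, -1], d=3, eps=0.1))  # → True
```

By contrast, a bipartite d-regular graph has −d in its spectrum and fails the check, which is why the bound excludes only the single trivial eigenvalue.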
Deterministic Replacement Path Covering
In this article, we provide a unified and simplified approach to derandomize
central results in the area of fault-tolerant graph algorithms. Given a graph
G, a vertex pair (s, t), and a set of edge faults F, a replacement path P(s, t, F) is an s-t shortest path in
G \ F. For integer parameters L and f, a replacement path covering
(RPC) is a collection of subgraphs of G, denoted by
G_{L,f}, such that for every set F of at
most f faults (i.e., |F| ≤ f) and every replacement path P of at
most L edges, there exists a subgraph G_i in G_{L,f} that
contains all the edges of P and does not contain any of the edges of F. The
covering value of the RPC is then defined to be the number
of subgraphs in G_{L,f}.
We present efficient deterministic constructions of such RPCs whose
covering values almost match the randomized ones, for a wide range of
parameters. Our time and value bounds improve considerably over the previous
construction of Parter (DISC 2019). We also provide an almost matching lower
bound for the value of these coverings. A key application of our
deterministic constructions is the derandomization of the algebraic
construction of the distance sensitivity oracle by Weimann and Yuster (FOCS
2010). The preprocessing and query times of our deterministic algorithm
nearly match the randomized bounds. This resolves the open problem of Alon,
Chechik and Cohen (ICALP 2019).
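A natural randomized RPC construction, of the kind the deterministic constructions here are measured against, samples each subgraph by keeping every edge independently with some probability: a fixed path of at most L edges then survives with all of the at most f faults excluded in some subgraph, with high probability over enough samples. An illustrative sketch, with arbitrary parameter choices rather than the paper's:

```python
import random

def random_rpc(edges, num_subgraphs, keep_prob, seed=0):
    """Sample subgraphs by keeping each edge independently with prob keep_prob."""
    rng = random.Random(seed)
    return [frozenset(e for e in edges if rng.random() < keep_prob)
            for _ in range(num_subgraphs)]

def covers(subgraphs, path_edges, fault_edges):
    """Is there a subgraph with all of P's edges and none of F's edges?"""
    P, F = set(path_edges), set(fault_edges)
    return any(P <= G and not (F & G) for G in subgraphs)

# Complete graph on 6 vertices; a 2-edge replacement path avoiding 1 fault.
edges = [(u, v) for u in range(6) for v in range(u + 1, 6)]
rpc = random_rpc(edges, num_subgraphs=200, keep_prob=0.5)
print(covers(rpc, [(0, 1), (1, 2)], [(0, 2)]))
```

For a single (P, F) pair with |P| = 2 and |F| = 1, each sampled subgraph works with probability 0.5^3, so 200 samples suffice overwhelmingly; the hard part, which the paper derandomizes, is covering all pairs simultaneously with few subgraphs.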
Explicit Correlation Amplifiers for Finding Outlier Correlations in Deterministic Subquadratic Time
We derandomize G. Valiant's [J. ACM 62 (2015), Art. 13] subquadratic-time algorithm for finding outlier correlations in binary data. Our derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant's randomized algorithm, but the precise constants we save over quadratic scaling are more modest. Our main technical tool for derandomization is an explicit family of correlation amplifiers built via a family of zigzag-product expanders of Reingold, Vadhan, and Wigderson [Ann. of Math. 155 (2002), 157-187]. We say that a function f : {-1,1}^d -> {-1,1}^D is a correlation amplifier with threshold 0 < tau <= 1, error gamma >= 1, and strength p (an even positive integer) if for all pairs of vectors x, y in {-1,1}^d it holds that (i) |<x,y>| < tau*d implies |<f(x),f(y)>| <= (tau*gamma)^p * D, and (ii) |<x,y>| >= tau*d implies (<x,y>/(gamma*d))^p * D <= <f(x),f(y)> <= (gamma*<x,y>/d)^p * D.
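The idealized correlation amplifier with no error (gamma = 1) is the p-fold tensor power x -> x^{(x)p}, since <x^{(x)p}, y^{(x)p}> = <x,y>^p exactly; its drawback is the output dimension D = d^p, which the explicit expander-based construction is designed to avoid. A quick numeric check of the identity:

```python
from itertools import product

def tensor_power(x, p):
    """p-fold tensor power of a vector, flattened to length len(x)**p."""
    out = []
    for idx in product(range(len(x)), repeat=p):
        entry = 1
        for i in idx:
            entry *= x[i]
        out.append(entry)
    return out

def inner(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x = [1, -1, 1, 1]
y = [1, 1, 1, 1]
p = 4  # the "strength": an even positive integer
# <x^{(x)p}, y^{(x)p}> equals <x,y>^p, so correlation is amplified without error
print(inner(x, y), inner(tensor_power(x, p), tensor_power(y, p)))  # → 2 16
```

Raising correlations to the p-th power separates outlier pairs (correlation above tau) from background pairs, which is what makes subquadratic detection possible.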
SoftSpokenOT: Communication--Computation Tradeoffs in OT Extension
Given a small number of base oblivious transfers (OTs), how does one generate a large number of extended OTs as efficiently as possible? The answer has long been the seminal work of IKNP (Ishai et al., Crypto 2003) and the family of protocols it inspired, which only use Minicrypt assumptions. Recently, Boyle et al. (Crypto 2019) proposed the Silent-OT technique that improves on IKNP, but at the cost of a much stronger, non-Minicrypt assumption: the learning parity with noise (LPN) assumption. We present SoftSpokenOT, the first OT extension to improve on IKNP's communication cost in the Minicrypt model. While IKNP requires λ bits of communication for each OT (for security parameter λ), SoftSpokenOT only needs λ/k bits, for any k, at the expense of requiring 2^{k−1}/k times the computation. For small values of k, this tradeoff is favorable since IKNP-style protocols are network-bound. We implemented SoftSpokenOT and found that our protocol gives almost a 5× speedup over IKNP in the LAN setting.
Our technique is based on a novel silent protocol for vector oblivious linear evaluation (VOLE) over polynomial-sized fields. We created a framework to build maliciously secure 1-of-N OT extension from this VOLE, revisiting the existing work for each step. Along the way, we found several flaws in the existing work, including a practical attack against the consistency check of Patra et al. (NDSS 2017), while also making some improvements.
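The VOLE correlation at the heart of the construction gives the receiver vectors u, v and the sender a scalar delta together with w = u·delta + v, so each party's share looks random given only its own values. A toy dealer-based sketch over a prime field; SoftSpokenOT actually realizes this over small binary fields and without any trusted dealer:

```python
import random

P = 2**61 - 1  # a Mersenne prime modulus; stand-in for the fields used in practice

def sample_vole(n, rng):
    """Toy trusted dealer for the VOLE correlation w = u*delta + v (mod P)."""
    delta = rng.randrange(P)                       # sender's global scalar
    u = [rng.randrange(P) for _ in range(n)]       # receiver's chosen inputs
    v = [rng.randrange(P) for _ in range(n)]       # receiver's masks
    w = [(ui * delta + vi) % P for ui, vi in zip(u, v)]  # sender's shares
    return (u, v), (delta, w)

rng = random.Random(1)
(u, v), (delta, w) = sample_vole(4, rng)
# The defining correlation holds entrywise:
assert all(wi == (ui * delta + vi) % P for ui, vi, wi in zip(u, v, w))
print("VOLE correlation holds for", len(u), "entries")
```

Replacing the dealer with a silent two-party protocol, and checking consistency against malicious behavior, is exactly where the framework described above comes in.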