Circuits with arbitrary gates for random operators
We consider boolean circuits computing n-operators f:{0,1}^n --> {0,1}^n. As
gates we allow arbitrary boolean functions; neither fanin nor fanout of gates
is restricted. An operator is linear if it computes n linear forms, that is,
computes a matrix-vector product y=Ax over GF(2). We prove the existence of
n-operators requiring about n^2 wires in any circuit, and linear n-operators
requiring about n^2/\log n wires in depth-2 circuits, if either all output
gates or all gates on the middle layer are linear. Comment: 7 pages.
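The linear operators in question compute y = Ax over GF(2), i.e., each output bit is a parity of input bits; a minimal sketch (the example matrix is chosen arbitrarily for illustration):

```python
# Matrix-vector product y = Ax over GF(2): each output bit is the XOR
# (parity) of the input bits selected by a row of A. In circuit terms,
# each 1-entry of A corresponds to one wire into a linear output gate.

def gf2_matvec(A, x):
    """Compute y = A x over GF(2); A is a list of rows, entries in {0,1}."""
    return [sum(a & b for a, b in zip(row, x)) % 2 for row in A]

A = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]
x = [1, 1, 0]
print(gf2_matvec(A, x))  # [1, 0, 1]
```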
Fault-Tolerant Circuit-Switching Networks
The authors consider fault-tolerant circuit-switching networks under a random switch failure model. Three circuit-switching networks of theoretical importance are studied: nonblocking networks, rearrangeable networks, and superconcentrators. The authors prove lower bounds for the size (the number of switches) and depth (the largest number of switches on a communication path) of such fault-tolerant networks, and explicitly construct such networks with optimal size \Theta(n (\log n)^2) and depth \Theta(\log n).
Min-Rank Conjecture for Log-Depth Circuits
A completion of an m-by-n matrix A with entries in {0,1,*} is obtained by
setting all *-entries to constants 0 or 1. A system of semi-linear equations
over GF(2) has the form Mx=f(x), where M is a completion of A and f:{0,1}^n -->
{0,1}^m is an operator, the i-th coordinate of which can only depend on
variables corresponding to *-entries in the i-th row of A. We conjecture that
no such system can have more than 2^{n-c\cdot mr(A)} solutions, where c>0 is an
absolute constant and mr(A) is the smallest rank over GF(2) of a completion of
A. The conjecture is related to an old problem of proving super-linear lower
bounds on the size of log-depth boolean circuits computing linear operators x
--> Mx. The conjecture is also a generalization of a classical question about
how much larger non-linear codes can be than linear ones. We prove some special
cases of the conjecture and establish some structural properties of solution
sets. Comment: 22 pages; to appear in J. Comput. Syst. Sci.
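The quantity mr(A) can be made concrete with a brute-force sketch (function names are mine; the enumeration is exponential in the number of *-entries, so this only illustrates the definition on tiny matrices):

```python
# Brute-force illustration of mr(A): enumerate every 0/1 completion of the
# *-entries of a {0,1,*}-matrix and take the minimum rank over GF(2).
from itertools import product

def gf2_rank(rows):
    """Rank over GF(2); each row is a list of bits, packed into an int."""
    rows = [int("".join(map(str, r)), 2) for r in rows]
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        hi = pivot.bit_length() - 1
        # Eliminate the pivot's leading bit from every remaining row.
        rows = [r ^ pivot if (r >> hi) & 1 else r for r in rows]
    return rank

def min_rank(A):
    """mr(A): minimum GF(2) rank over all completions of a {0,1,*}-matrix."""
    stars = [(i, j) for i, row in enumerate(A)
             for j, e in enumerate(row) if e == "*"]
    best = len(A)  # rank never exceeds the number of rows
    for bits in product([0, 1], repeat=len(stars)):
        M = [list(row) for row in A]
        for (i, j), b in zip(stars, bits):
            M[i][j] = b
        best = min(best, gf2_rank(M))
    return best

print(min_rank([[1, "*"], ["*", 1]]))  # 1: setting both *s to 1 gives a rank-1 completion
```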
Lower Bounds for Matrix Factorization
We study the problem of constructing explicit families of matrices which
cannot be expressed as a product of a few sparse matrices. In addition to being
a natural mathematical question on its own, this problem appears in various
incarnations in computer science; the most significant being in the context of
lower bounds for algebraic circuits which compute linear transformations,
matrix rigidity and data structure lower bounds.
We first show, for every constant d, a deterministic construction in subexponential time of a family {M_n} of n-by-n matrices which cannot be expressed as a product M_n = A_1 ... A_d where the total sparsity of A_1,...,A_d is less than n^{1+\Omega(1/d)}. In other words, any depth-d linear circuit computing the linear transformation x --> M_n x has size at least n^{1+\Omega(1/d)}. This improves upon the prior best lower bounds for this problem, which are barely super-linear, and were obtained by a long line of research based on the study of super-concentrators (albeit at the cost of a blow-up in the time required to construct these matrices).
We then outline an approach for proving improved lower bounds through a certain derandomization problem, and use this approach to prove asymptotically optimal quadratic lower bounds for natural special cases, which generalize many of the common matrix decompositions.
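For concreteness, the total sparsity of a factorization is just the number of nonzero entries summed over all factors; a toy sketch (the all-ones example is mine, not from the paper) showing a dense product with much sparser factors:

```python
# Total sparsity of a factorization A_1 * ... * A_d = number of nonzero
# entries across all factors. The dense n x n all-ones matrix has n^2
# nonzeros, yet factors as (n x 1 ones) * (1 x n ones): total sparsity 2n.

def matmul(A, B):
    """Plain matrix product over the integers."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def sparsity(*factors):
    """Count nonzero entries across all given matrices."""
    return sum(1 for M in factors for row in M for e in row if e != 0)

n = 4
u = [[1] for _ in range(n)]      # n x 1 column of ones
v = [[1] * n]                    # 1 x n row of ones
J = matmul(u, v)                 # n x n all-ones matrix
print(sparsity(J))               # 16 nonzeros in the product
print(sparsity(u, v))            # 8 = 2n total sparsity of the factorization
```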
Nullstellensatz Size-Degree Trade-offs from Reversible Pebbling
We establish an exactly tight relation between reversible pebblings of graphs and Nullstellensatz refutations of pebbling formulas, showing that a graph G can be reversibly pebbled in time t and space s if and only if there is a Nullstellensatz refutation of the pebbling formula over G in size t+1 and degree s (independently of the field in which the Nullstellensatz refutation is made). We use this correspondence to prove a number of strong size-degree trade-offs for Nullstellensatz, which to the best of our knowledge are the first such results for this proof system.
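A reversible pebbling only ever places or removes a pebble on a vertex whose predecessors are all pebbled; a small legality checker (the path-graph example and function name are mine):

```python
# Reversible pebbling: a pebble may be placed on OR removed from a vertex
# only while all of its predecessors carry pebbles. Time = number of
# moves, space = peak number of simultaneous pebbles.

def run_reversible(preds, moves):
    """preds: dict vertex -> list of predecessors; moves: vertices to toggle.
    Returns (time, space) or raises on an illegal move."""
    pebbled = set()
    space = 0
    for v in moves:
        if not all(p in pebbled for p in preds[v]):
            raise ValueError(f"illegal move on {v}")
        pebbled ^= {v}                 # toggle: place or remove
        space = max(space, len(pebbled))
    return len(moves), space

# Path graph 1 -> 2 -> 3: pebble the sink, then clean up reversibly.
preds = {1: [], 2: [1], 3: [2]}
moves = [1, 2, 3, 2, 1]               # end state: only the sink pebbled
print(run_reversible(preds, moves))   # (5, 3)
```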
Size bounds and parallel algorithms for networks
SIGLE. Available from British Library Document Supply Centre (BLDSC DSC:D34009/81), United Kingdom.
More on a problem of Zarankiewicz
We show tight necessary and sufficient conditions on the sizes of small bipartite graphs whose union is a larger bipartite graph that has no large bipartite independent set. Our main result is a common generalization of two classical results in graph theory: the theorem of Kővári, Sós and Turán on the minimum number of edges in a bipartite graph that has no large independent set, and the theorem of Hansel (also Katona and Szemerédi, and Krichevskii) on the sum of the sizes of bipartite graphs that can be used to construct a graph (not necessarily bipartite) that has no large independent set. Our results unify the underlying combinatorial principles developed in the proof of tight lower bounds for depth-two superconcentrators.
Approximating Cumulative Pebbling Cost Is Unique Games Hard
The cumulative pebbling complexity of a directed acyclic graph is defined
as , where the minimum is taken over all
legal (parallel) black pebblings of and denotes the number of
pebbles on the graph during round . Intuitively, captures
the amortized Space-Time complexity of pebbling copies of in parallel.
The cumulative pebbling complexity of a graph is of particular interest in
the field of cryptography as is tightly related to the
amortized Area-Time complexity of the Data-Independent Memory-Hard Function
(iMHF) [AS15] defined using a constant indegree directed acyclic
graph (DAG) and a random oracle . A secure iMHF should have
amortized Space-Time complexity as high as possible, e.g., to deter brute-force
password attacker who wants to find such that . Thus, to
analyze the (in)security of a candidate iMHF , it is crucial to
estimate the value but currently, upper and lower bounds for
leading iMHF candidates differ by several orders of magnitude. Blocki and Zhou
recently showed that it is -Hard to compute , but
their techniques do not even rule out an efficient
-approximation algorithm for any constant . We
show that for any constant , it is Unique Games hard to approximate
to within a factor of .
(See the paper for the full abstract.) Comment: 28 pages; updated figures and corrected typos.
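Under the definition above, cc(G) can be computed exactly for tiny DAGs by a shortest-path search over pebbling configurations; a brute-force sketch (my illustration; it does not scale, consistent with the hardness result):

```python
# Cumulative cost of a parallel black pebbling: sum over rounds of the
# number of pebbles on the graph. In one round we may add any set of
# vertices whose predecessors were all pebbled in the previous round,
# and drop any set of pebbles. Dijkstra over configurations finds cc(G).
import heapq
from itertools import chain, combinations, count

def all_subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def cc(preds, sink):
    """Exact cumulative pebbling cost of a tiny DAG given by preds."""
    tick = count()  # tie-breaker so the heap never compares frozensets
    start = frozenset()
    dist = {start: 0}
    heap = [(0, next(tick), start)]
    while heap:
        d, _, s = heapq.heappop(heap)
        if d > dist.get(s, float("inf")):
            continue
        if sink in s:
            return d
        addable = [v for v in preds
                   if v not in s and all(p in s for p in preds[v])]
        for add in all_subsets(addable):
            for drop in all_subsets(s):
                t = frozenset(s.union(add).difference(drop))
                nd = d + len(t)  # this round costs |t| pebbles
                if nd < dist.get(t, float("inf")):
                    dist[t] = nd
                    heapq.heappush(heap, (nd, next(tick), t))

# Path graph 1 -> 2 -> 3: optimal pebbling moves the single pebble along.
print(cc({1: [], 2: [1], 3: [2]}, 3))  # 3
```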