Challenges in computational lower bounds
We draw two incomplete, biased maps of challenges in computational complexity
lower bounds.
Improved rank bounds for design matrices and a new proof of Kelly's theorem
We study the rank of complex sparse matrices in which the supports of
different columns have small intersections. The rank of these matrices, called
design matrices, was the focus of a recent work by Barak et al. (BDWY11) in
which they were used to answer questions regarding point configurations. In
this work we derive near-optimal rank bounds for these matrices and use them to
obtain asymptotically tight bounds in many of the geometric applications. As a
consequence of our improved analysis, we also obtain a new, linear-algebraic
proof of Kelly's theorem, which is the complex analog of the Sylvester-Gallai
theorem.
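As a toy illustration of the objects involved (my own construction, not the parameters studied in BDWY11): a sparse matrix whose column supports pairwise intersect in at most one row, with its rank computed numerically.

```python
import numpy as np

# Toy "design matrix": column j is supported on rows {j, j+1 mod n}, so any
# two distinct column supports intersect in at most one row. Illustrative
# only; the actual support/intersection parameters in BDWY11 differ.
n = 6
A = np.zeros((n, n))
for j in range(n):
    A[j, j] = 1.0
    A[(j + 1) % n, j] = -1.0

# Small support intersections force high rank: here the n columns sum to
# the zero vector, while any n-1 of them are linearly independent.
rank = np.linalg.matrix_rank(A)
```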
Equivalence of Systematic Linear Data Structures and Matrix Rigidity
Recently, Dvir, Golovnev, and Weinstein have shown that sufficiently strong
lower bounds for linear data structures would imply new bounds for rigid
matrices. However, their result utilizes an algorithm that requires an
oracle, and hence, the rigid matrices are not explicit. In this work, we derive
an equivalence between rigidity and the systematic linear model of data
structures. For the -dimensional inner product problem with queries, we
prove that lower bounds on the query time imply rigidity lower bounds for the
query set itself. In particular, an explicit lower bound of
for redundant storage bits would
yield better rigidity parameters than the best bounds due to Alon, Panigrahy,
and Yekhanin. We also prove a converse result, showing that rigid matrices
directly correspond to hard query sets for the systematic linear model. As an
application, we prove that the set of vectors obtained from rank one binary
matrices is rigid with parameters matching the known results for explicit sets.
This implies that the vector-matrix-vector problem requires query time
for redundancy in the systematic linear
model, improving a result of Chakraborty, Kamma, and Larsen. Finally, we prove
a cell probe lower bound for the vector-matrix-vector problem in the high error
regime, improving a result of Chattopadhyay, Koucký, Loff, and
Mukhopadhyay.
Comment: 23 pages, 1 table
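As I read the abstract above, the rigid query set consists of the flattenings vec(u v^T) of rank-at-most-one 0/1 matrices; a minimal enumeration for a toy size (the size and code are my own illustration):

```python
import itertools
import numpy as np

# Flattenings vec(u v^T) of all rank-at-most-one 0/1 matrices,
# for u, v in {0,1}^n. Toy size n = 3.
n = 3
vecs = {tuple(np.outer(u, v).ravel())
        for u in itertools.product([0, 1], repeat=n)
        for v in itertools.product([0, 1], repeat=n)}
# Nonzero u and v give pairwise-distinct outer products; all choices with
# u = 0 or v = 0 collapse to the single zero matrix.
```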
Using Elimination Theory to construct Rigid Matrices
The rigidity of a matrix A for target rank r is the minimum number of entries
of A that must be changed to ensure that the rank of the altered matrix is at
most r. Since its introduction by Valiant (1977), rigidity and similar
rank-robustness functions of matrices have found numerous applications in
circuit complexity, communication complexity, and learning complexity. Almost
all n x n matrices over an infinite field have a rigidity of (n-r)^2. It is a
long-standing open question to construct infinite families of explicit matrices
even with superlinear rigidity when r = Omega(n).
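The definition above can be checked exhaustively for tiny matrices over GF(2), where "changing" an entry means flipping a bit; a brute-force sketch, my own illustration and exponential-time by nature (the paper itself works over the complex numbers):

```python
import itertools
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]          # swap pivot row into place
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]                   # eliminate column c elsewhere
        r += 1
        if r == rows:
            break
    return r

def rigidity_gf2(A, target_rank):
    """Minimum number of bit flips bringing rank(A) over GF(2) to <= target_rank."""
    n, m = A.shape
    cells = list(itertools.product(range(n), range(m)))
    for k in range(n * m + 1):
        for flips in itertools.combinations(cells, k):
            B = A.copy()
            for (i, j) in flips:
                B[i, j] ^= 1
            if gf2_rank(B) <= target_rank:
                return k
    return n * m
```

For the 3 x 3 identity, two diagonal flips already bring the rank down to 1, while no single flip does.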
In this paper, we construct an infinite family of complex matrices with the
largest possible, i.e., (n-r)^2, rigidity. The entries of an n x n matrix in
this family are distinct primitive roots of unity of orders roughly exp(n^2 log
n). To the best of our knowledge, this is the first family of concrete (but not
entirely explicit) matrices having maximal rigidity and a succinct algebraic
description.
Our construction is based on elimination theory of polynomial ideals. In
particular, we use results on the existence of polynomials in elimination
ideals with effective degree upper bounds (effective Nullstellensatz). Using
elementary algebraic geometry, we prove that the dimension of the affine
variety of matrices of rigidity at most k is exactly n^2-(n-r)^2+k. Finally, we
use elimination theory to examine whether the rigidity function is
semi-continuous.
Comment: 25 pages, minor typos corrected
Faster Walsh-Hadamard Transform and Matrix Multiplication over Finite Fields using Lookup Tables
We use lookup tables to design faster algorithms for important algebraic
problems over finite fields. These faster algorithms, which only use arithmetic
operations and lookup table operations, may help to explain the difficulty of
determining the complexities of these important problems. Our results over a
constant-sized finite field are as follows.
The Walsh-Hadamard transform of a vector of length can be computed using
bit operations. This generalizes to any transform
defined as a Kronecker power of a fixed matrix. By comparison, the Fast
Walsh-Hadamard transform (similar to the Fast Fourier transform) uses arithmetic operations, which is believed to be optimal up to constant
factors.
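For reference, the classical fast Walsh-Hadamard transform that the abstract compares against (O(n log n) arithmetic operations via the butterfly recursion; the paper's lookup-table variant is a different algorithm):

```python
import numpy as np

def fwht(a):
    """Iterative fast Walsh-Hadamard transform; len(a) must be a power of 2."""
    a = np.asarray(a, dtype=np.int64).copy()
    n = len(a)
    h = 1
    while h < n:
        # Butterfly step on blocks of size 2h: (x, y) -> (x + y, x - y).
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```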
Any algebraic algorithm for multiplying two matrices using
operations can be converted into an algorithm using bit operations. For example, Strassen's algorithm can
be converted into an algorithm using bit
operations. It remains an open problem with practical implications to determine
the smallest constant such that Strassen's algorithm can be implemented to
use arithmetic operations; using a lookup
table allows one to save a super-constant factor in bit operations.
Comment: 10 pages, to appear in the 6th Symposium on Simplicity in Algorithms
(SOSA 2023)
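For context, Strassen's algorithm itself, in its standard textbook form for power-of-two sizes (counting arithmetic operations; this is not the bit-operation-optimized lookup-table variant discussed above):

```python
import numpy as np

def strassen(A, B):
    """Multiply square matrices whose side length is a power of 2,
    using 7 recursive products per level instead of 8."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```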
Static Data Structure Lower Bounds Imply Rigidity
We show that static data structure lower bounds in the group (linear) model
imply semi-explicit lower bounds on matrix rigidity. In particular, we prove
that an explicit lower bound of on the cell-probe
complexity of linear data structures in the group model, even against
arbitrarily small linear space , would already imply a
semi-explicit () construction of rigid matrices with
significantly better parameters than the current state of art (Alon, Panigrahy
and Yekhanin, 2009). Our results further assert that polynomial () data structure lower bounds against near-optimal space, would
imply super-linear circuit lower bounds for log-depth linear circuits (a
four-decade open question). In the succinct space regime , we show
that any improvement on current cell-probe lower bounds in the linear model
would also imply new rigidity bounds. Our results rely on a new connection
between the "inner" and "outer" dimensions of a matrix (Paturi and Pudlák,
2006), and on a new reduction from worst-case to average-case rigidity, which
is of independent interest.
Characterization and Lower Bounds for Branching Program Size using Projective Dimension
We study projective dimension, a graph parameter (denoted by pd(G) for a
graph G), introduced by (Pudlák, Rödl 1992), who showed that lower bounds
on pd(G) for bipartite graphs associated with a Boolean function f imply
size lower bounds for branching programs computing f.
Despite several attempts (Pudlák, Rödl 1992; Babai, Rónyai, Ganapathy
2000), proving super-linear lower bounds for the projective dimension of
explicit families of graphs has remained elusive.
We show that there exists a Boolean function f (on n bits) for which the
gap between the projective dimension and the size of the optimal branching
program computing f (denoted by bpsize), is . Motivated by the
argument in (Pudlák, Rödl 1992), we define two variants of projective
dimension: projective dimension with intersection dimension 1 (denoted by
upd) and bitwise decomposable projective dimension (denoted by
bitpdim).
As our main result, we show that there is an explicit family of graphs on
n vertices such that the projective dimension is , the projective
dimension with intersection dimension 1 is , and the bitwise decomposable
projective dimension is .
We also show that there exists a Boolean function f (on n bits) for which
the gap between upd and bpsize is . In contrast, we also show that the
bitwise decomposable projective dimension characterizes the size of the
branching program up to a polynomial factor. That is, there exists a
constant and for any function , . We also study two other variants of
projective dimension and show that they are exactly equal to well-studied
graph parameters: the bipartite clique cover number and the bipartite
partition number, respectively.
Comment: 24 pages, 3 figures
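For readers unfamiliar with the model behind bpsize: a branching program is a DAG whose internal nodes query variables and whose size is the node count. A minimal evaluator and a three-node program for XOR (my own toy example, not taken from the paper):

```python
# A branching program as a DAG: internal node -> (variable index, low child,
# high child); evaluation follows edges until reaching an accept/reject sink.
# bpsize counts the internal nodes (here: 3).
def eval_bp(program, start, assignment):
    node = start
    while node in program:
        var, lo, hi = program[node]
        node = hi if assignment[var] else lo
    return node

# Three-node branching program computing XOR(x0, x1).
xor_bp = {
    "a": (0, "b", "c"),            # query x0
    "b": (1, "reject", "accept"),  # x0 = 0: accept iff x1 = 1
    "c": (1, "accept", "reject"),  # x0 = 1: accept iff x1 = 0
}
```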