Fourier and Circulant Matrices Are Not Rigid
The concept of matrix rigidity was first introduced by Valiant in [Valiant, 1977]. Roughly speaking, a matrix is rigid if its rank cannot be reduced significantly by changing a small number of entries. There has been extensive interest in rigid matrices as Valiant showed in [Valiant, 1977] that rigidity can be used to prove arithmetic circuit lower bounds.
In a surprising result, Alman and Williams showed that the (real valued) Hadamard matrix, which was conjectured to be rigid, is actually not very rigid. This line of work was extended by [Dvir and Edelman, 2017] to a family of matrices related to the Hadamard matrix, but over finite fields. In our work, we take another step in this direction and show that for any abelian group G and function f : G → C, the matrix given by M_{xy} = f(x - y) for x,y in G is not rigid. In particular, we get that complex valued Fourier matrices, circulant matrices, and Toeplitz matrices are all not rigid and cannot be used to carry out Valiant's approach to proving circuit lower bounds. This complements a recent result of Goldreich and Tal [Goldreich and Tal, 2016] who showed that Toeplitz matrices are nontrivially rigid (but not enough for Valiant's method). Our work differs from previous non-rigidity results in that those works considered matrices whose underlying group of symmetries was of the form F_p^n with p fixed and n tending to infinity, while in the families of matrices we study, the underlying group of symmetries can be any abelian group and, in particular, the cyclic group Z_N, which has very different structure. Our results also suggest natural new candidates for rigidity in the form of matrices whose symmetry groups are highly non-abelian.
Our proof has four parts. The first extends the results of [Alman and Williams, 2016; Dvir and Edelman, 2017] to generalized Hadamard matrices over the complex numbers via a new proof technique. The second part handles the N x N Fourier matrix when N has a particularly nice factorization that allows us to embed smaller copies of (generalized) Hadamard matrices inside of it. The third part uses results from number theory to bootstrap the non-rigidity for these special values of N and extend to all sufficiently large N. The fourth and final part involves using the non-rigidity of the Fourier matrix to show that the group algebra matrix, given by M_{xy} = f(x - y) for x,y in G, is not rigid for any function f and abelian group G.
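The structure exploited here can be seen concretely for the cyclic group Z_N (a numpy sketch of mine, assuming only the standard DFT diagonalization of circulant matrices): the group algebra matrix M_{xy} = f(x - y) over Z_N is circulant, and its eigenvalues are the Fourier coefficients of f.

```python
import numpy as np

# Illustration (mine): over the cyclic group Z_N, the group algebra matrix
# M[x, y] = f(x - y) is circulant, and every circulant matrix is
# diagonalized by the Fourier matrix F[j, k] = omega^(j*k), omega = e^(2*pi*i/N).
N = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(N)          # an arbitrary function f : Z_N -> R

# Build M[x, y] = f((x - y) mod N).
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
M = f[(x - y) % N]

# Diagonalization: M = F diag(conj(F) f) conj(F) / N,
# i.e. the eigenvalues of M are the Fourier coefficients of f.
omega = np.exp(2j * np.pi / N)
F = omega ** np.outer(np.arange(N), np.arange(N))
eigvals = F.conj() @ f
M_reconstructed = (F @ np.diag(eigvals) @ F.conj() / N).real

assert np.allclose(M, M_reconstructed)
```

In particular, the rank of M is the number of nonzero Fourier coefficients of f, which is the kind of spectral structure that non-rigidity arguments for such matrices can exploit.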
Equivalence of Systematic Linear Data Structures and Matrix Rigidity
Recently, Dvir, Golovnev, and Weinstein have shown that sufficiently strong
lower bounds for linear data structures would imply new bounds for rigid
matrices. However, their result utilizes an algorithm that requires an
oracle, and hence, the rigid matrices are not explicit. In this work, we derive
an equivalence between rigidity and the systematic linear model of data
structures. For the -dimensional inner product problem with queries, we
prove that lower bounds on the query time imply rigidity lower bounds for the
query set itself. In particular, an explicit lower bound of
for redundant storage bits would
yield better rigidity parameters than the best bounds due to Alon, Panigrahy,
and Yekhanin. We also prove a converse result, showing that rigid matrices
directly correspond to hard query sets for the systematic linear model. As an
application, we prove that the set of vectors obtained from rank one binary
matrices is rigid with parameters matching the known results for explicit sets.
This implies that the vector-matrix-vector problem requires query time
for redundancy in the systematic linear
model, improving a result of Chakraborty, Kamma, and Larsen. Finally, we prove
a cell probe lower bound for the vector-matrix-vector problem in the high error
regime, improving a result of Chattopadhyay, Koucký, Loff, and
Mukhopadhyay. Comment: 23 pages, 1 table.
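To make the query-set correspondence concrete, here is a small sketch (my illustration, not taken from the paper): a vector-matrix-vector query u^T M v is exactly the inner product of vec(M) with the flattened rank-one matrix u v^T, which is why rank-one binary matrices arise as a natural query set for this problem.

```python
import numpy as np

# Illustration (mine): a vector-matrix-vector query u^T M v over F_2 equals
# the inner product of vec(M) with the flattened rank-one matrix u v^T.
rng = np.random.default_rng(1)
n = 5
M = rng.integers(0, 2, size=(n, n))
u = rng.integers(0, 2, size=n)
v = rng.integers(0, 2, size=n)

direct = (u @ M @ v) % 2                                     # evaluate u^T M v mod 2
as_inner_product = (M.ravel() @ np.outer(u, v).ravel()) % 2  # <vec(M), vec(u v^T)> mod 2
assert direct == as_inner_product
```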
Improved Upper Bounds for the Rigidity of Kronecker Products
The rigidity of a matrix A for target rank r is the minimum number of
entries of A that need to be changed in order to obtain a matrix of rank at
most r. At MFCS'77, Valiant introduced matrix rigidity as a tool to prove
circuit lower bounds for linear functions and since then this notion received
much attention and found applications in other areas of complexity theory. The
problem of constructing an explicit family of matrices that are sufficiently
rigid for Valiant's reduction (Valiant-rigid) still remains open. Moreover,
since 2017 most of the long-studied candidates have been shown not to be
Valiant-rigid. Some of those former candidates for rigidity are Kronecker
products of small matrices.
In a recent paper (STOC'21), Alman gave a general non-rigidity result for
such matrices: he showed that if an matrix (over any field) is
a Kronecker product of matrices (so )
then changing only entries of one can reduce
its rank to , where is roughly
.
In this note we improve this result in two directions. First, we do not
require the matrices to have equal size. Second, we reduce
from exponential in to roughly (where is the
maximum size of the matrices ), and to nearly linear (roughly
) for matrices of sizes within a constant factor of each
other.
As an application of our results we significantly expand the class of
Hadamard matrices that are known not to be Valiant-rigid; these now include the
Kronecker products of Paley-Hadamard matrices and Hadamard matrices of bounded
size. Comment: To appear at MFCS'21. This version includes rigidity bounds for
Hadamard matrices (Section 6), which were not present in the previous arXiv
version. 20 pages.
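For intuition about why Kronecker products of small matrices were natural rigidity candidates (a sketch of mine, not the paper's construction): rank is multiplicative under Kronecker products, and iterating the 2 x 2 Hadamard matrix yields the Walsh-Hadamard family.

```python
import numpy as np

# Sketch (mine, not the paper's construction): rank is multiplicative under
# Kronecker products, and iterating the 2 x 2 Hadamard matrix H_2 produces
# the Walsh-Hadamard family, a former rigidity candidate of this shape.
H2 = np.array([[1, 1], [1, -1]])
H = H2
for _ in range(3):                   # H_2 tensored with itself: 2 -> 4 -> 8 -> 16
    H = np.kron(H, H2)
assert H.shape == (16, 16)

# rank(A (x) B) = rank(A) * rank(B):
A = np.array([[1, 2], [2, 4]])       # rank 1
B = np.eye(3)                        # rank 3
assert np.linalg.matrix_rank(np.kron(A, B)) == 1 * 3
```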
The (Generalized) Orthogonality Dimension of (Generalized) Kneser Graphs: Bounds and Applications
The orthogonality dimension of a graph G over a field F is
the smallest integer t for which there exists an assignment of a vector
u_v in F^t with ⟨u_v, u_v⟩ ≠ 0 to every vertex v, such that ⟨u_v, u_w⟩ = 0
whenever v and w are adjacent vertices in G. The study of the orthogonality
dimension of graphs is motivated by various applications in information theory
and in theoretical computer science. The contribution of the present work is
twofold.
First, we prove that there exists a constant such that for every
sufficiently large integer , it is NP-hard to decide whether the
orthogonality dimension of an input graph over is at most or
at least . At the heart of the proof lies a geometric result, which
might be of independent interest, on a generalization of the orthogonality
dimension parameter for the family of Kneser graphs, analogously to a
long-standing conjecture of Stahl (J. Comb. Theo. Ser. B, 1976).
Second, we study the smallest possible orthogonality dimension over finite
fields of the complement of graphs that do not contain certain fixed subgraphs.
In particular, we provide an explicit construction of triangle-free -vertex
graphs whose complement has orthogonality dimension over the binary field at
most for some constant . Our results involve
constructions from the family of generalized Kneser graphs and they are
motivated by the rigidity approach to circuit lower bounds. We use them to
answer a couple of questions raised by Codenotti, Pudlák, and Resta (Theor.
Comput. Sci., 2000), and in particular, to disprove their Odd Alternating Cycle
Conjecture over every finite field. Comment: 19 pages.
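As a toy instance of the definition (my example, not from the paper): for the complete graph K_4, assigning the standard basis of R^4 to the vertices is a valid orthogonal representation, so the orthogonality dimension of K_4 over the reals is at most 4.

```python
import itertools

import numpy as np

# Toy instance (mine): an orthogonal representation assigns to each vertex a
# vector that is not self-orthogonal, with adjacent vertices mapped to
# orthogonal vectors. For K_4 the standard basis of R^4 works.
vectors = {v: np.eye(4)[v] for v in range(4)}
edges = list(itertools.combinations(range(4), 2))    # K_4: every pair is adjacent

assert all(vectors[v] @ vectors[v] != 0 for v in vectors)    # <u_v, u_v> != 0
assert all(vectors[a] @ vectors[b] == 0 for a, b in edges)   # adjacent => orthogonal
```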
Lower Bounds for Matrix Factorization
We study the problem of constructing explicit families of matrices which
cannot be expressed as a product of a few sparse matrices. In addition to being
a natural mathematical question on its own, this problem appears in various
incarnations in computer science; the most significant being in the context of
lower bounds for algebraic circuits which compute linear transformations,
matrix rigidity and data structure lower bounds.
We first show, for every constant , a deterministic construction in
subexponential time of a family of matrices which cannot
be expressed as a product where the total sparsity of
is less than . In other words, any depth-
linear circuit computing the linear transformation has size at
least . This improves upon the prior best lower bounds for
this problem, which are barely super-linear, and were obtained by a long line
of research based on the study of super-concentrators (albeit at the cost of a
blow up in the time required to construct these matrices).
We then outline an approach for proving improved lower bounds through a
certain derandomization problem, and use this approach to prove asymptotically
optimal quadratic lower bounds for natural special cases, which generalize many
of the common matrix decompositions.
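For context on the sparsity parameter (a sketch of mine, not the paper's construction): every n x n matrix of rank r admits a trivial depth-2 factorization M = U V with total sparsity at most 2nr, so meaningful lower bounds must beat this baseline.

```python
import numpy as np

# Sketch (mine): every n x n matrix of rank r factors as M = U @ V with
# U of shape (n, r) and V of shape (r, n), a depth-2 "linear circuit" of
# total sparsity at most 2nr. Lower bounds of the kind discussed above must
# beat this trivial factorization.
rng = np.random.default_rng(2)
n, r = 10, 3
U = rng.standard_normal((n, r))
V = rng.standard_normal((r, n))
M = U @ V                            # a generic rank-r matrix

assert np.linalg.matrix_rank(M) == r
total_sparsity = np.count_nonzero(U) + np.count_nonzero(V)
assert total_sparsity <= 2 * n * r
```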
Matrix Rigidity Depends on the Target Field
The rigidity of a matrix A for target rank r is the minimum number of entries of A that need to be changed in order to obtain a matrix of rank at most r (Valiant, 1977).
We study the dependence of rigidity on the target field. We consider especially two natural regimes: when one is allowed to make changes only from the field of definition of the matrix ("strict rigidity"), and when the changes are allowed to be in an arbitrary extension field ("absolute rigidity").
We demonstrate, apparently for the first time, a separation between these two concepts. We establish a gap of a factor of 3/2-o(1) between strict and absolute rigidities.
The question seems especially timely because of recent results by Dvir and Liu (Theory of Computing, 2020) where important families of matrices, previously expected to be rigid, are shown not to be absolutely rigid, while their strict rigidity remains open. Our lower-bound method combines elementary arguments from algebraic geometry with "untouched minors" arguments.
Finally, we point out that more families of long-time rigidity candidates fall as a consequence of the results of Dvir and Liu. These include the incidence matrices of projective planes over finite fields, proposed by Valiant as candidates for rigidity.
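That basic linear-algebraic quantities can depend on the field is easy to see already at the level of rank (a simpler phenomenon than the strict/absolute rigidity gap studied above, and purely my illustration): the matrix J - I is invertible over the rationals but loses rank over F_2.

```python
import numpy as np

def rank_gf2(mat):
    """Rank of a 0/1 matrix over F_2, via Gaussian elimination with XOR."""
    a = np.array(mat, dtype=int) % 2
    rows, cols = a.shape
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if a[r, col]), None)
        if pivot is None:
            continue
        a[[rank, pivot]] = a[[pivot, rank]]          # swap pivot row into place
        for r in range(rows):
            if r != rank and a[r, col]:
                a[r] ^= a[rank]                      # eliminate above and below
        rank += 1
    return rank

# J - I (all-ones minus identity) for n = 3: the rows sum to zero mod 2,
# so the rank drops over F_2 even though the matrix is invertible over Q.
M = np.ones((3, 3), dtype=int) - np.eye(3, dtype=int)
assert np.linalg.matrix_rank(M) == 3   # full rank over the reals/rationals
assert rank_gf2(M) == 2                # rank only 2 over F_2
```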
Rigid Matrices From Rectangular PCPs
We introduce a variant of PCPs, that we refer to as rectangular PCPs, wherein
proofs are thought of as square matrices, and the random coins used by the
verifier can be partitioned into two disjoint sets, one determining the row of
each query and the other determining the column.
We construct PCPs that are efficient, short, smooth and (almost-)rectangular.
As a key application, we show that proofs for hard languages in NEXP,
when viewed as matrices, are rigid infinitely often. This strengthens and
simplifies a recent result of Alman and Chen [FOCS, 2019] constructing explicit
rigid matrices in FNP. Namely, we prove the following theorem:
- There is a constant such that there is an FNP-machine
that, for infinitely many , on input outputs matrices
with entries in that are -far (in Hamming distance)
from matrices of rank at most .
Our construction of rectangular PCPs starts with an analysis of how
randomness yields queries in the Reed--Muller-based outer PCP of Ben-Sasson,
Goldreich, Harsha, Sudan and Vadhan [SICOMP, 2006; CCC, 2005]. We then show how
to preserve rectangularity under PCP composition and a smoothness-inducing
transformation. This warrants refined and stronger notions of rectangularity,
which we prove for the outer PCP and its transforms. Comment: 36 pages, 3 figures.
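The rigidity notion appearing in the theorem can be made concrete with a toy brute force (my illustration; the actual parameters are asymptotic): the Hamming distance from a small matrix over F_2 to the nearest matrix of rank at most 1, where rank <= 1 matrices over F_2 are exactly the outer products u v^T.

```python
from itertools import product

import numpy as np

# Toy brute force (mine): Hamming distance from a 0/1 matrix to the nearest
# matrix of rank at most 1 over F_2. Rank <= 1 matrices over F_2 are exactly
# the outer products u v^T (the zero matrix included).
def hamming_dist_to_rank_one(M):
    n = M.shape[0]
    best = M.size
    for u in product([0, 1], repeat=n):
        for v in product([0, 1], repeat=n):
            best = min(best, int(np.sum(M != np.outer(u, v))))
    return best

I4 = np.eye(4, dtype=int)
assert hamming_dist_to_rank_one(I4) == 3   # the 4 x 4 identity is 3 changes from rank <= 1
```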