Tight Bounds for Sketching the Operator Norm, Schatten Norms, and Subspace Embeddings
We consider the following oblivious sketching problem: given epsilon in (0,1/3) and n >= d/epsilon^2, design a distribution D over R^{k * nd} and a function f: R^k * R^{nd} -> R, so that for any n * d matrix A, Pr_{S sim D}[(1-epsilon)|A|_{op} <= f(S(A), S) <= (1+epsilon)|A|_{op}] >= 2/3, where |A|_{op} = sup_{x:|x|_2 = 1} |Ax|_2 is the operator norm of A and S(A) denotes S * A, interpreting A as a vector in R^{nd}. We show a tight lower bound of k = Omega(d^2/epsilon^2) for this problem. Previously, Nelson and Nguyen (ICALP, 2014) considered the problem of finding a distribution D over R^{k * n} such that for any n * d matrix A, Pr_{S sim D}[forall x, (1-epsilon)|Ax|_2 <= |SAx|_2 <= (1+epsilon)|Ax|_2] >= 2/3, which is called an oblivious subspace embedding (OSE). Our result considerably strengthens theirs, as it (1) applies even to the easier problem of estimating only the operator norm, which can be estimated given any OSE, and (2) applies to distributions over general linear operators S which treat A as a vector and compute S(A), rather than the restricted class of linear operators corresponding to matrix multiplication. Our technique also implies the first tight bounds for approximating the Schatten p-norm for even integers p via general linear sketches, improving the previous lower bound from k = Omega(n^{2-6/p}) [Regev, 2014] to k = Omega(n^{2-4/p}). Importantly, for sketching the operator norm up to a factor of alpha, where alpha - 1 = Omega(1), we obtain a tight k = Omega(n^2/alpha^4) bound, matching the upper bound of Andoni and Nguyen (SODA, 2013), and improving the previous k = Omega(n^2/alpha^6) lower bound. Finally, we also obtain the first lower bounds for approximating Ky Fan norms.
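To make the sketching model concrete, here is a minimal numpy illustration (a sketch of the model, not the paper's construction): an oblivious Gaussian S in R^{k * nd} is applied to A viewed as a vector, and the sketch is used to estimate the Schatten-2 (Frobenius) norm, the p = 2 case in which the k = Omega(n^{2-4/p}) exponent is zero and a small sketch indeed suffices.

```python
import numpy as np

# Hypothetical illustration of the general linear sketch model: S acts on
# vec(A), and f(S(A), S) = |S vec(A)|_2 estimates the Frobenius norm of A.
rng = np.random.default_rng(0)
n, d, k = 100, 50, 2000

A = rng.standard_normal((n, d))
a = A.reshape(-1)                                  # A as a vector in R^{nd}

S = rng.standard_normal((k, n * d)) / np.sqrt(k)   # oblivious Gaussian sketch
sketch = S @ a                                     # S(A) in R^k

est = np.linalg.norm(sketch)                       # f(S(A), S)
true = np.linalg.norm(A, "fro")                    # Schatten-2 norm of A
print(f"relative error: {abs(est - true) / true:.3f}")
```

For the operator norm and higher Schatten norms, the paper shows no comparably small sketch exists: any such distribution over S needs k = Omega(d^2/epsilon^2) rows.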
Vector-Matrix-Vector Queries for Solving Linear Algebra, Statistics, and Graph Problems
We consider the general problem of learning about a matrix through vector-matrix-vector queries. These queries provide the value of u^{T}Mv over a fixed field F for a specified pair of vectors u, v in F^n. To motivate these queries, we observe that they generalize many previously studied models, such as independent set queries, cut queries, and standard graph queries. They also specialize the recently studied matrix-vector query model. Our work is exploratory and broad, and we provide new upper and lower bounds for a wide variety of problems, spanning linear algebra, statistics, and graphs. Many of our results are nearly tight, and we use diverse techniques from linear algebra, randomized algorithms, and communication complexity.
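As a toy illustration of how vector-matrix-vector queries generalize earlier models (a sketch under assumed conventions, not code from the paper), the snippet below implements a u^{T}Mv oracle over the reals and recovers a graph cut query as the special case where u and v are the indicator vectors of a vertex set and its complement.

```python
import numpy as np

def vmv_query(M, u, v):
    """Return u^T M v, the only access to M the model allows."""
    return u @ M @ v

# Adjacency matrix of the 4-cycle on vertices 0-1-2-3.
M = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

# Cut query for S = {0, 1}: count edges between S and its complement.
s = np.array([1, 1, 0, 0])   # indicator of S
t = 1 - s                    # indicator of the complement
print(vmv_query(M, s, t))    # -> 2 edges cross the cut
```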
Hilbert geometry of the Siegel disk: The Siegel-Klein disk model
We study the Hilbert geometry induced by the Siegel disk domain, an open
bounded convex set of complex square matrices of operator norm strictly less
than one. This Hilbert geometry yields a generalization of the Klein disk model
of hyperbolic geometry, henceforth called the Siegel-Klein disk model to
differentiate it from the classical Siegel upper plane and disk domains. In the
Siegel-Klein disk, geodesics are by construction always unique and Euclidean
straight, allowing one to design efficient geometric algorithms and
data-structures from computational geometry. For example, we show how to
approximate the smallest enclosing ball of a set of complex square matrices in
the Siegel disk domains. We compare two generalizations of the iterative
core-set algorithm of Badoiu and Clarkson (BC), one in the Siegel-Poincaré
disk and one in the Siegel-Klein disk, and demonstrate that geometric
computing in the
Siegel-Klein disk allows one (i) to bypass the time-costly recentering
operations to the disk origin required at each iteration of the BC algorithm in
the Siegel-Poincaré disk model, and (ii) to approximate the Siegel-Klein
distance numerically and fast, with guaranteed lower and upper bounds
derived from nested Hilbert geometries.
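For intuition, the following minimal sketch (hypothetical, and restricted to the ordinary real Klein disk rather than the matrix Siegel disk) computes the Hilbert distance as half the logarithm of the cross-ratio of the chord's boundary endpoints; the Siegel-Klein distance generalizes this cross-ratio formula to the Siegel disk domain.

```python
import numpy as np

def hilbert_klein_distance(p, q):
    """Hilbert distance between distinct points p, q in the open unit disk."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = q - p
    # Intersect the chord x(t) = p + t*d with the unit circle |x| = 1.
    a2, b1, c0 = d @ d, 2.0 * (p @ d), (p @ p) - 1.0
    disc = np.sqrt(b1 * b1 - 4.0 * a2 * c0)
    t0, t1 = (-b1 - disc) / (2.0 * a2), (-b1 + disc) / (2.0 * a2)
    a, b = p + t0 * d, p + t1 * d      # a lies beyond p, b lies beyond q
    cross = (np.linalg.norm(a - q) * np.linalg.norm(b - p)) / (
             np.linalg.norm(a - p) * np.linalg.norm(b - q))
    return 0.5 * np.log(cross)

# From the origin the distance reduces to artanh(r): artanh(0.5) ~ 0.5493.
print(hilbert_klein_distance([0.0, 0.0], [0.5, 0.0]))
```

Since the formula needs only the chord's intersections with the boundary, no recentering to the disk origin is required, which is the computational advantage of working in the Klein-type model.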
Lower Bounds on Adaptive Sensing for Matrix Recovery
We study lower bounds on adaptive sensing algorithms for recovering low rank
matrices using linear measurements. Given an n * n matrix A, a general
linear measurement S(A), for an n * n matrix S, is just the inner
product of S and A, each treated as n^2-dimensional vectors. By
performing as few linear measurements as possible on a rank-r matrix A, we
hope to construct a matrix \hat{A} that satisfies |A - \hat{A}|_F^2 <=
c |A|_F^2, for a small constant c. It is commonly assumed that when
measuring A with S, the response is corrupted with an independent Gaussian
random variable of mean 0 and variance sigma^2. Candès and Plan study
non-adaptive algorithms for low rank matrix recovery using random linear
measurements.
At a certain noise level, it is known that their non-adaptive algorithms need
to perform Omega(n^2) measurements, which amounts to reading the entire
matrix. An important question is whether adaptivity helps in decreasing the
overall number of measurements. We show that any adaptive algorithm that
uses k linear measurements in each round and outputs an approximation to the
underlying matrix with probability >= 9/10 must run for
t = Omega(log(n^2/k)/log log n) rounds. In particular, any adaptive
algorithm which uses n^{2-beta} linear measurements in each round, for a
constant beta > 0, must run for Omega(log n/log log n) rounds to compute a
reconstruction with probability >= 9/10. Hence any adaptive algorithm that
has o(log n/log log n) rounds must use an overall n^{2-o(1)} linear
measurements. Our techniques also readily extend to obtain lower bounds on
adaptive algorithms for tensor recovery and to obtain measurement-vs-rounds
trade-offs for many sensing problems in numerical linear algebra, such as
spectral norm low rank approximation, Frobenius norm low rank approximation,
singular vector approximation, and more.
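For concreteness, here is a toy simulation of the measurement model described above (all parameters hypothetical): each response is the inner product of a sensing matrix S with the hidden matrix A, both treated as n^2-dimensional vectors, corrupted by independent N(0, sigma^2) noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, sigma = 50, 2, 0.1

# A hidden random rank-r matrix A.
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

def measure(S):
    """One noisy linear measurement: <S, A> plus Gaussian noise."""
    return np.vdot(S, A) + rng.normal(0.0, sigma)

# One round of k non-adaptive Gaussian measurements; an adaptive algorithm
# would choose the next round's sensing matrices based on these responses.
k = 10
responses = [measure(rng.standard_normal((n, n))) for _ in range(k)]
print(np.round(responses, 3))
```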