Improved bounds on sample size for implicit matrix trace estimators
This article is concerned with Monte-Carlo methods for the estimation of the
trace of an implicitly given matrix A whose information is only available
through matrix-vector products. Such a method approximates the trace by an
average of expressions of the form w^T (A w), with random vectors w drawn from
an appropriate distribution. We prove, discuss and experiment with bounds on
the number of realizations N required in order to guarantee a probabilistic
bound on the relative error of the trace estimation upon employing Rademacher
(Hutchinson), Gaussian and uniform unit vector (with and without replacement)
probability distributions.
In total, one necessary bound and six sufficient bounds are proved, improving
upon and extending similar estimates obtained in the seminal work of Avron and
Toledo (2011) in several dimensions. We first improve their bound on N for
the Hutchinson method, dropping a term that relates to rank(A) and making the
bound comparable with that for the Gaussian estimator.
We further prove new sufficient bounds for the Hutchinson, Gaussian and unit
vector estimators, as well as a necessary bound for the Gaussian estimator,
which depend more specifically on properties of the matrix A. As such, they
may suggest for which types of matrices one distribution or another provides
a particularly effective or relatively ineffective stochastic estimation
method.
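The estimator described in this abstract can be sketched in a few lines of NumPy. The helper name, the sample count and the test matrix below are illustrative choices, not from the paper; the point is only that the trace of A is approximated by averaging w^T (A w) over random probe vectors w, touching A solely through matrix-vector products.

```python
import numpy as np

def trace_estimate(matvec, n, num_samples, dist="rademacher", rng=None):
    """Estimate tr(A) as the average of w^T (A w) over random vectors w.

    `matvec` is the only access to the implicit matrix A.
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(num_samples):
        if dist == "rademacher":          # Hutchinson: entries are +/-1
            w = rng.choice([-1.0, 1.0], size=n)
        else:                              # Gaussian: entries are N(0, 1)
            w = rng.standard_normal(n)
        total += w @ matvec(w)             # one matrix-vector product per sample
    return total / num_samples

# Illustrative usage on a symmetric positive definite test matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = B @ B.T
est = trace_estimate(lambda w: A @ w, 200, num_samples=500, rng=0)
rel_err = abs(est - np.trace(A)) / np.trace(A)
```

The number of samples needed to drive `rel_err` below a target with high probability is exactly the quantity N that the bounds above control, with the constants depending on which probing distribution is used.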
Optimal query complexity for estimating the trace of a matrix
Given an implicit n x n matrix A with oracle access to x^T A x for any
x in R^n, we study the query complexity of randomized algorithms for
estimating the trace of the matrix. This problem has many applications in
quantum physics, machine learning, and pattern matching. Two metrics are
commonly used for evaluating the estimators: i) variance; ii) a high
probability multiplicative-approximation guarantee. Almost all the known
estimators are of the form (1/k) * sum_{i=1}^k x_i^T A x_i, with the x_i
drawn i.i.d. from some special distribution.
Our main results are summarized as follows. We give an exact characterization
of the minimum variance unbiased estimator in the broad class of linear
nonadaptive estimators (which subsumes all the existing known estimators). We
also consider the query complexity lower bounds for any (possibly nonlinear and
adaptive) estimators: (1) We show that any estimator requires Ω(1/ε)
queries to have a guarantee of variance at most ε. (2) We show that any
estimator requires Ω(ε^{-2} log(1/δ)) queries to achieve a
(1 ± ε)-multiplicative approximation guarantee with probability at
least 1 − δ. Both lower bounds are asymptotically tight.
As a corollary, we also resolve a conjecture in the seminal work of Avron and
Toledo (Journal of the ACM 2011) regarding the sample complexity of the
Gaussian Estimator.
Comment: full version of the paper in ICALP 201
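The query model in this abstract can be made concrete with a small sketch. Everything named below (the helper, the diagonal test matrix, the query counts) is illustrative, not from the paper: the algorithm sees A only through the quadratic-form oracle q(x) = x^T A x, and the standard estimators average k i.i.d. queries.

```python
import numpy as np

def averaged_estimator(oracle, n, k, rng=None):
    """(1/k) * sum_i q(x_i) with i.i.d. Gaussian query vectors x_i."""
    rng = np.random.default_rng(rng)
    return np.mean([oracle(rng.standard_normal(n)) for _ in range(k)])

rng = np.random.default_rng(1)
A = np.diag(rng.uniform(0.5, 1.5, size=100))   # the "implicit" matrix
oracle = lambda x: x @ A @ x                    # the only access to A

# The average of i.i.d. queries is unbiased for tr(A), and its variance
# shrinks like 1/k -- which is why variance at most eps needs k on the
# order of 1/eps queries, matching the lower bound stated above.
runs = [averaged_estimator(oracle, 100, k=400, rng=s) for s in range(50)]
```

The lower bounds in the abstract say that no estimator, even a nonlinear adaptive one making arbitrary quadratic-form queries, can beat this kind of scaling asymptotically.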