On optimal low-rank approximation of non-negative matrices
For low-rank Frobenius-norm approximations of matrices with non-negative entries, it is shown that the Lagrange dual is computable by semi-definite programming. Under certain assumptions the duality gap is zero, and even when it is non-zero, several new insights are provided.
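To make the problem concrete, here is a minimal numerical sketch. It assumes the constrained variant asks the rank-r approximant itself to be elementwise non-negative (an assumption about the setup; the paper's semi-definite dual is not reproduced here), and it only sets up the primal objective together with the classical Eckart-Young lower bound. All names are illustrative.

```python
import numpy as np

# Hypothetical primal instance: approximate a non-negative matrix M in
# Frobenius norm by a matrix of rank at most r (here also assumed to be
# required elementwise non-negative -- an assumption, not from the text).
rng = np.random.default_rng(0)
M = rng.random((8, 6))              # matrix with non-negative entries
r = 2

# Truncated SVD solves the *unconstrained* rank-r problem (Eckart-Young),
# so its error is a lower bound on any further-constrained optimal value.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
X = (U[:, :r] * s[:r]) @ Vt[:r]
print(f"Eckart-Young lower bound: {np.linalg.norm(M - X):.4f}")

# If the unconstrained optimum happens to be feasible (elementwise
# non-negative), it is also optimal for the constrained problem.
print("feasible for the non-negativity constraint:", bool((X >= 0).all()))
```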
Optimal low-rank approximations of Bayesian linear inverse problems
In the Bayesian approach to inverse problems, data are often informative,
relative to the prior, only on a low-dimensional subspace of the parameter
space. Significant computational savings can be achieved by using this subspace
to characterize and approximate the posterior distribution of the parameters.
We first investigate approximation of the posterior covariance matrix as a
low-rank update of the prior covariance matrix. We prove optimality of a
particular update, based on the leading eigendirections of the matrix pencil
defined by the Hessian of the negative log-likelihood and the prior precision,
for a broad class of loss functions. This class includes the F\"{o}rstner
metric for symmetric positive definite matrices, as well as the
Kullback-Leibler divergence and the Hellinger distance between the associated
distributions. We also propose two fast approximations of the posterior mean
and prove their optimality with respect to a weighted Bayes risk under
squared-error loss. These approximations are deployed in an offline-online
manner, where a more costly but data-independent offline calculation is
followed by fast online evaluations. As a result, these approximations are
particularly useful when repeated posterior mean evaluations are required for
multiple data sets. We demonstrate our theoretical results with several
numerical examples, including high-dimensional X-ray tomography and an inverse
heat conduction problem. In both of these examples, the intrinsic
low-dimensional structure of the inference problem can be exploited while
producing results that are essentially indistinguishable from solutions
computed in the full space.
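As a concrete illustration of the covariance result, the sketch below builds a small linear-Gaussian problem and forms the low-rank negative update of the prior covariance from the leading eigendirections of the pencil described above. The setup (forward map G, prior covariance Gamma_pr, noise variance sigma2, update rank r) is hypothetical, chosen only so the claim can be checked numerically.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical linear-Gaussian inverse problem: y = G x + noise,
# prior x ~ N(0, Gamma_pr), noise ~ N(0, sigma2 * I).
rng = np.random.default_rng(0)
d, k = 50, 10                        # parameter / data dimensions (k << d)
G = rng.standard_normal((k, d))
sigma2 = 0.1
A = rng.standard_normal((d, d))
Gamma_pr = A @ A.T + d * np.eye(d)   # SPD prior covariance
prec_pr = np.linalg.inv(Gamma_pr)    # prior precision

H = G.T @ G / sigma2                 # Hessian of the negative log-likelihood

# Generalized eigenpairs of the pencil (H, prior precision):
# H @ w = delta^2 * prec_pr @ w, normalized so that W.T @ prec_pr @ W = I.
delta2, W = eigh(H, prec_pr)
order = np.argsort(delta2)[::-1]     # leading eigendirections first
delta2, W = delta2[order], W[:, order]

# Rank-r negative update of the prior covariance. Since rank(H) <= k here,
# r = k already reproduces the exact posterior covariance.
r = k
Wr, d2 = W[:, :r], delta2[:r]
Gamma_pos_lr = Gamma_pr - (Wr * (d2 / (1 + d2))) @ Wr.T

Gamma_pos = np.linalg.inv(H + prec_pr)   # exact posterior covariance
err = np.linalg.norm(Gamma_pos_lr - Gamma_pos) / np.linalg.norm(Gamma_pos)
print(f"relative error of the rank-{r} update: {err:.2e}")
```

Smaller choices of r trade accuracy for storage, and the abstract's optimality result says this particular update is the best possible among rank-r updates for the loss functions it lists.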
OptShrink: An algorithm for improved low-rank signal matrix denoising by optimal, data-driven singular value shrinkage
The truncated singular value decomposition (SVD) of the measurement matrix is
the optimal solution to the _representation_ problem of how to best approximate
a noisy measurement matrix using a low-rank matrix. Here, we consider the
(unobservable) _denoising_ problem of how to best approximate a low-rank signal
matrix buried in noise by optimal (re)weighting of the singular vectors of the
measurement matrix. We exploit recent results from random matrix theory to
exactly characterize the large matrix limit of the optimal weighting
coefficients and show that they can be computed directly from data for a large
class of noise models that includes the i.i.d. Gaussian noise case.
Our analysis brings into sharp focus the shrinkage-and-thresholding form of
the optimal weights and the non-convex nature of the associated shrinkage
function (on the singular values), and it explains why matrix regularization via singular
value thresholding with convex penalty functions (such as the nuclear norm)
will always be suboptimal. We validate our theoretical predictions with
numerical simulations, develop an implementable algorithm (OptShrink) that
realizes the predicted performance gains and show how our methods can be used
to improve estimation in the setting where the measured matrix has missing
entries.

Comment: Published version. The algorithm can be downloaded from
http://www.eecs.umich.edu/~rajnrao/optshrin
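The following is a minimal sketch of the data-driven shrinkage idea, under the assumptions that the signal rank r is known and that the trailing singular values of the measurement matrix can stand in for the noise-only spectrum; the function name and the toy experiment are illustrative, not the published implementation.

```python
import numpy as np

def optshrink(Y, r):
    """Reweight the top-r singular vectors of Y with data-driven weights
    w_i = -2 * D(s_i) / D'(s_i), where D is the empirical D-transform of
    the trailing ("noise") singular values. A sketch, not the reference code."""
    n, m = Y.shape
    if n > m:                                   # work with the short side first
        return optshrink(Y.T, r).T
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    sig, noise = s[:r], s[r:]

    def D_and_Dprime(z):
        # The m-dimension contributes (m - n) additional zero singular values.
        frac = z / (z**2 - noise**2)
        dfrac = -(z**2 + noise**2) / (z**2 - noise**2) ** 2
        phi1, dphi1 = frac.sum() / (n - r), dfrac.sum() / (n - r)
        phi2 = (frac.sum() + (m - n) / z) / (m - r)
        dphi2 = (dfrac.sum() - (m - n) / z**2) / (m - r)
        return phi1 * phi2, dphi1 * phi2 + phi1 * dphi2

    w = np.array([-2.0 * D / Dp for D, Dp in map(D_and_Dprime, sig)])
    return (U[:, :r] * w) @ Vt[:r]              # reweighted rank-r estimate

# Toy comparison against the truncated SVD on a planted low-rank signal.
rng = np.random.default_rng(0)
n, m, r = 100, 200, 3
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
Y = X + rng.standard_normal((n, m))             # i.i.d. Gaussian noise
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
tsvd = (U[:, :r] * s[:r]) @ Vt[:r]
print("truncated SVD error:", np.linalg.norm(X - tsvd))
print("OptShrink error    :", np.linalg.norm(X - optshrink(Y, r)))
```

Note how the weights are computed from the observed spectrum alone: no noise model is fitted, which is the point of the abstract's "data-driven" claim.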