On Weighted Low-Rank Approximation
Our main interest is the low-rank approximation of a matrix in R^{m x n}
under a weighted Frobenius norm. This norm associates a weight with each of
the m x n matrix entries. We conjecture that the number of approximations is
at most min(m, n).
We also investigate how the approximations depend on the weight values.
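As a concrete illustration of the setting (a sketch, not the paper's analysis), the weighted problem can be attacked by alternating weighted least squares on the two factors; the rank r, the weight matrix W, and the small ridge term below are illustrative choices:

```python
import numpy as np

def weighted_lowrank(A, W, r, iters=50, ridge=1e-12, seed=0):
    """Rank-r approximation of A minimizing the weighted Frobenius
    error sum_ij W[i, j] * (A[i, j] - (U @ V.T)[i, j])**2 by
    alternating weighted least squares on the factors U and V."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    I = ridge * np.eye(r)                  # tiny ridge for stability
    for _ in range(iters):
        for i in range(m):                 # each row of U is its own
            Vw = V * W[i][:, None]         # weighted least-squares fit
            U[i] = np.linalg.solve(Vw.T @ V + I, Vw.T @ A[i])
        for j in range(n):                 # symmetrically for V
            Uw = U * W[:, j][:, None]
            V[j] = np.linalg.solve(Uw.T @ U + I, Uw.T @ A[:, j])
    return U, V
```

Unlike the unweighted case, there is in general no closed-form (SVD) solution to the weighted problem, which is why iterative schemes like this one are standard and why the number and behavior of the local approximations is of interest.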
Far-Field Compression for Fast Kernel Summation Methods in High Dimensions
We consider fast kernel summations in high dimensions: given a large set of
points in d dimensions (with d large) and a pair-potential function (the
kernel function), we compute a weighted sum of all pairwise kernel
interactions for each point in the set. Direct summation is equivalent to a
(dense) matrix-vector multiplication and scales quadratically with the number
of points. Fast kernel summation algorithms reduce this cost to log-linear or
linear complexity.
Treecodes and Fast Multipole Methods (FMMs) deliver tremendous speedups by
constructing approximate representations of interactions of points that are far
from each other. In algebraic terms, these representations correspond to
low-rank approximations of blocks of the overall interaction matrix. Existing
approaches require an excessive number of kernel evaluations with increasing
dimension d and number of points in the dataset.
To address this issue, we use a randomized algebraic approach in which we
first sample the rows of a block and then construct its approximate, low-rank
interpolative decomposition. We examine the feasibility of this approach
theoretically and experimentally. We provide a new theoretical result showing a
tighter bound on the reconstruction error from uniformly sampling rows than the
existing state-of-the-art. We demonstrate that our sampling approach is
competitive with existing (but prohibitively expensive) methods from the
literature. We also construct kernel matrices for the Laplacian, Gaussian, and
polynomial kernels -- all commonly used in physics and data analysis. We
explore the numerical properties of blocks of these matrices, and show that
they are amenable to our approach. Depending on the data set, our randomized
algorithm can successfully compute low-rank approximations in high dimensions.
We report results for data sets with ambient dimensions from four to 1,000.
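A minimal sketch of the row-sampling idea under illustrative assumptions: a Gaussian kernel, uniform row sampling, and a dense column-pivoted QR to pick the skeleton columns. The block K is formed densely here for clarity, whereas the method's point is that only the sampled rows would need to be evaluated:

```python
import numpy as np
from scipy.linalg import qr, lstsq

def gaussian_kernel(X, Y, h=1.0):
    # K[i, j] = exp(-||x_i - y_j||^2 / (2 h^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h ** 2))

def sampled_column_id(K, k, n_samples, rng):
    """Build K ~= K[:, skel] @ P from a uniform sample of rows:
    column-pivoted QR on the sampled rows picks k skeleton columns,
    and the interpolation matrix P is fitted by least squares on
    the sampled rows only."""
    rows = rng.choice(K.shape[0], size=n_samples, replace=False)
    Ks = K[rows]                                  # s x n sampled block
    _, _, piv = qr(Ks, mode='economic', pivoting=True)
    skel = np.sort(piv[:k])                       # skeleton columns
    P, *_ = lstsq(Ks[:, skel], Ks)
    return skel, P

# Well-separated clusters: the off-diagonal block is numerically low-rank.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))            # target points
Y = rng.standard_normal((400, 10)) + 4.0      # far-away source points
K = gaussian_kernel(X, Y, h=4.0)
skel, P = sampled_column_id(K, k=20, n_samples=60, rng=rng)
rel_err = np.linalg.norm(K - K[:, skel] @ P) / np.linalg.norm(K)
```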
Optimal low-rank approximations of Bayesian linear inverse problems
In the Bayesian approach to inverse problems, data are often informative,
relative to the prior, only on a low-dimensional subspace of the parameter
space. Significant computational savings can be achieved by using this subspace
to characterize and approximate the posterior distribution of the parameters.
We first investigate approximation of the posterior covariance matrix as a
low-rank update of the prior covariance matrix. We prove optimality of a
particular update, based on the leading eigendirections of the matrix pencil
defined by the Hessian of the negative log-likelihood and the prior precision,
for a broad class of loss functions. This class includes the Förstner
metric for symmetric positive definite matrices, as well as the
Kullback-Leibler divergence and the Hellinger distance between the associated
distributions. We also propose two fast approximations of the posterior mean
and prove their optimality with respect to a weighted Bayes risk under
squared-error loss. These approximations are deployed in an offline-online
manner, where a more costly but data-independent offline calculation is
followed by fast online evaluations. As a result, these approximations are
particularly useful when repeated posterior mean evaluations are required for
multiple data sets. We demonstrate our theoretical results with several
numerical examples, including high-dimensional X-ray tomography and an inverse
heat conduction problem. In both of these examples, the intrinsic
low-dimensional structure of the inference problem can be exploited while
producing results that are essentially indistinguishable from solutions
computed in the full space.
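In the linear-Gaussian case the low-rank update has a compact form: with a prior factor S (Gamma_pr = S S^T) and H the Hessian of the negative log-likelihood, the leading eigenpairs of the prior-preconditioned Hessian S^T H S give the rank-r negative update. A dense numerical sketch, with illustrative dimensions, forward operator, and prior:

```python
import numpy as np

def lowrank_posterior_cov(H, S, r):
    """Posterior covariance (H + Gamma_pr^{-1})^{-1} approximated as a
    rank-r negative update of the prior Gamma_pr = S @ S.T, built from
    the leading eigenpairs of the prior-preconditioned Hessian S.T H S.
    Dense sketch only; in practice the eigenpairs come from matrix-free
    Lanczos iterations on Hessian-vector products."""
    lam, V = np.linalg.eigh(S.T @ H @ S)       # ascending eigenvalues
    lam, V = lam[::-1][:r], V[:, ::-1][:, :r]  # keep the r largest
    W = S @ V                                  # update directions
    return S @ S.T - (W * (lam / (1.0 + lam))) @ W.T

# Linear-Gaussian check: with 10 data directions, rank 10 is exact.
rng = np.random.default_rng(0)
n, m = 50, 10
Gop = rng.standard_normal((m, n))              # forward operator
H = Gop.T @ Gop                                # Hessian of neg. log-likelihood
S = np.sqrt(0.5) * np.eye(n)                   # prior factor, Gamma_pr = 0.5 I
exact = np.linalg.inv(H + np.linalg.inv(S @ S.T))
approx = lowrank_posterior_cov(H, S, r=m)      # agrees with `exact`
```

Because the data are informative only on the (here 10-dimensional) range of the forward operator, the update needs only as many directions as there are informed modes, which is exactly the low-dimensional structure the abstract describes.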
A tensor approximation method based on ideal minimal residual formulations for the solution of high-dimensional problems
In this paper, we propose a method for the approximation of the solution of
high-dimensional weakly coercive problems formulated in tensor spaces using
low-rank approximation formats. The method can be seen as a perturbation of a
minimal residual method with residual norm corresponding to the error in a
specified solution norm. We introduce and analyze an iterative algorithm that
is able to provide a controlled approximation of the optimal approximation of
the solution in a given low-rank subset, without any a priori information on
this solution. We also introduce a weak greedy algorithm which uses this
perturbed minimal residual method for the computation of successive greedy
corrections in small tensor subsets. We prove its convergence under some
conditions on the parameters of the algorithm. The residual norm can be
designed such that the resulting low-rank approximations are quasi-optimal with
respect to particular norms of interest, thus yielding goal-oriented order
reduction strategies for the approximation of high-dimensional problems. The
proposed numerical method is applied to the solution of a stochastic partial
differential equation which is discretized using standard Galerkin methods in
tensor product spaces.
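To make the greedy-correction idea concrete, here is a sketch for an order-2 tensor (matrix-shaped) unknown: each greedy step adds one rank-one term fitted to the current residual by alternating least squares. It minimizes the plain residual 2-norm, whereas the paper's point is to use an ideal residual norm matched to the desired solution norm; sizes and iteration counts are illustrative:

```python
import numpy as np

def greedy_rank_one(A, b, n1, n2, n_terms=5, sweeps=20, seed=0):
    """Greedy low-rank solve of A x = b, with x viewed as an n1 x n2
    matrix sum_k u_k v_k^T.  Each greedy step adds one rank-one term
    fitted to the current residual by alternating least squares."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n1 * n2)
    for _ in range(n_terms):
        res = b - A @ x                    # current residual
        u = rng.standard_normal(n1)
        v = rng.standard_normal(n2)
        for _ in range(sweeps):
            # vec(u v^T) is linear in u for fixed v and vice versa,
            # so each half-step is an ordinary least-squares solve.
            Ku = A @ np.kron(np.eye(n1), v[:, None])  # u -> A vec(u v^T)
            u = np.linalg.lstsq(Ku, res, rcond=None)[0]
            Kv = A @ np.kron(u[:, None], np.eye(n2))  # v -> A vec(u v^T)
            v = np.linalg.lstsq(Kv, res, rcond=None)[0]
        x = x + np.kron(u, v)              # add the rank-one correction
    return x.reshape(n1, n2)
```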
Frequency-Weighted Model Reduction with Applications to Structured Models
In this paper, a frequency-weighted extension of a
recently proposed model reduction method for linear systems
is presented. The method uses convex optimization and can be
used both with sample data and exact models. We also obtain
bounds on the frequency-weighted error. The method is combined
with a rank-minimization heuristic to approximate multi-input
multi-output systems. We also present two applications, environment
compensation and simplification of interconnected models, where we
argue the proposed methods are useful.
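As a small illustration of what a frequency-weighted error means (a grid-based check, not the paper's convex-optimization machinery), one can sample the weighted mismatch between full and reduced transfer functions; the example models and weight below are made up:

```python
import numpy as np

def freq_weighted_error(G, G_red, W, omegas):
    """Sampled frequency-weighted error max_w |W(jw) * (G(jw) - Gr(jw))|
    over a grid of frequencies; G, G_red, W map s = j*omega to the
    (scalar) transfer-function value."""
    s = 1j * np.asarray(omegas)
    return max(abs(W(si) * (G(si) - G_red(si))) for si in s)

# A low-pass weight emphasizes accuracy at low frequencies,
# where the slow pole dominates the response.
G     = lambda s: 1.0 / ((s + 1.0) * (s + 100.0))   # full 2nd-order model
G_red = lambda s: 0.01 / (s + 1.0)                  # keeps only the slow pole
W     = lambda s: 1.0 / (s / 10.0 + 1.0)            # low-pass weight
err = freq_weighted_error(G, G_red, W, np.logspace(-2, 3, 400))
```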
Tighter Low-rank Approximation via Sampling the Leveraged Element
In this work, we propose a new randomized algorithm for computing a low-rank
approximation to a given matrix. Taking an approach different from existing
literature, our method first involves a specific biased sampling, with an
element being chosen based on the leverage scores of its row and column, and
then involves weighted alternating minimization over the factored form of the
intended low-rank matrix, to minimize error only on these samples. Our method
can leverage input sparsity, yet produce approximations in spectral (as
opposed to the weaker Frobenius) norm; this combines the best aspects of
otherwise disparate current results, but with a dependence on the condition
number κ of the matrix. In particular, we require a number of computations
near-linear in the number of nonzeros of A to generate a rank-k
approximation to A in spectral norm. In contrast, the best existing method
of comparable cost computes its approximation only in Frobenius norm.
Besides the tightness in spectral norm, we have a better
dependence on the error ε. Our method is naturally and highly
parallelizable.
Our new approach enables two extensions that are interesting on their own.
The first is a new method to directly compute a low-rank approximation (in
efficient factored form) to the product of two given matrices; it computes a
small random set of entries of the product, and then executes weighted
alternating minimization (as before) on these. The sampling strategy is
different because now we cannot access leverage scores of the product matrix
(but instead have to work with input matrices). The second extension is an
improved algorithm with smaller communication complexity for the distributed
PCA setting (where each server holds a small set of rows of the matrix and
wants to compute a low-rank approximation with a small amount of
communication with the other servers).
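A sketch of the element-sampling step, assuming the leverage scores come from a truncated SVD (part of the paper's contribution is avoiding that expense); the sampled mask would then drive a weighted alternating minimization over the factored form, much like the ALS sketch after the first abstract with the 0/1 mask as weights:

```python
import numpy as np

def leverage_scores(A, k):
    """Row and column leverage scores from a rank-k truncated SVD.
    (Sketch only: a full SVD defeats the purpose; the paper obtains
    the scores far more cheaply.)"""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] ** 2).sum(axis=1), (Vt[:k] ** 2).sum(axis=0)

def sample_leveraged_elements(A, k, n_samples, seed=0):
    """Sample entries (i, j) with probability proportional to the sum
    of the row-i and column-j leverage scores; return a 0/1 mask of
    the chosen entries."""
    rng = np.random.default_rng(seed)
    lr, lc = leverage_scores(A, k)
    p = lr[:, None] + lc[None, :]          # biased element distribution
    p /= p.sum()
    idx = rng.choice(A.size, size=n_samples, replace=False, p=p.ravel())
    mask = np.zeros(A.shape)
    mask.ravel()[idx] = 1.0
    return mask
```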