Fast Recovery and Approximation of Hidden Cauchy Structure
We derive an algorithm of optimal complexity which determines whether a given
matrix is a Cauchy matrix, and which exactly recovers the Cauchy points
defining a Cauchy matrix from the matrix entries. Moreover, we study how to
approximate a given matrix by a Cauchy matrix with a particular focus on the
recovery of Cauchy points from noisy data. We derive an approximation algorithm
of optimal complexity for this task, and prove approximation bounds. Numerical
examples illustrate our theoretical results.
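The recovery step can be sketched in a few lines under the common convention C[i, j] = 1/(s[i] + t[j]) (a minimal illustration of the idea, not the paper's algorithm; the function names and the gauge choice t[0] = 0 are ours):

```python
import numpy as np

def recover_cauchy_points(C):
    # Given C[i, j] = 1 / (s[i] + t[j]) with all s[i] + t[j] nonzero,
    # the points are only determined up to a common shift
    # (s, t) -> (s + c, t - c); we fix the gauge t[0] = 0.
    s = 1.0 / C[:, 0]          # C[i, 0] = 1 / (s[i] + 0)
    t = 1.0 / C[0, :] - s[0]   # C[0, j] = 1 / (s[0] + t[j])
    return s, t

def is_cauchy(C, tol=1e-10):
    # Recover candidate points from the first row and column, then
    # check that every entry matches the reconstruction; this costs
    # O(mn) work for an m-by-n matrix.
    s, t = recover_cauchy_points(C)
    return bool(np.allclose(C, 1.0 / (s[:, None] + t[None, :]), atol=tol))
```

Note that only the first row and first column are needed to produce the candidate points; the remaining entries serve as the consistency check.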
Fast computation of the matrix exponential for a Toeplitz matrix
The computation of the matrix exponential is a ubiquitous operation in
numerical mathematics, and for a general, unstructured n-by-n matrix it
can be computed in O(n^3) operations. An interesting problem arises
if the input matrix is a Toeplitz matrix, for example as the result of
discretizing integral equations with a time invariant kernel. In this case it
is not obvious how to take advantage of the Toeplitz structure, as the
exponential of a Toeplitz matrix is, in general, not a Toeplitz matrix itself.
The main contributions of this work are fast algorithms for the computation of
the Toeplitz matrix exponential. The algorithms have provable quadratic
complexity if the spectrum is real, or sectorial, or more generally, if the
imaginary parts of the rightmost eigenvalues do not vary too much. They may be
efficient even outside these spectral constraints. They are based on the
scaling and squaring framework, and their analysis connects classical results
from rational approximation theory to matrices of low displacement rank. As an
example, the developed methods are applied to Merton's jump-diffusion model for
option pricing.
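Both the structural obstacle and the scaling-and-squaring identity the algorithms build on can be seen on a small example (a dense SciPy computation for illustration, not the fast algorithm of the paper; the example matrix is arbitrary):

```python
import numpy as np
from scipy.linalg import toeplitz, expm

# An arbitrary 4x4 Toeplitz matrix (first column, then first row).
T = toeplitz([0.5, 1.0, 0.2, 0.0], [0.5, -0.3, 0.0, 0.0])

# Dense scaling and squaring, O(n^3) for unstructured matrices.
E = expm(T)

# exp(T) is not Toeplitz: its main diagonal is not constant,
# so np.ptp(np.diag(E)) is strictly positive.

# The scaling-and-squaring framework rests on the identity
# exp(T) = exp(T / 2^s) ** (2^s); the fast algorithms exploit the
# fact that T / 2^s retains its Toeplitz (low displacement rank)
# structure even though the exponential itself does not.
s = 3
E_ss = np.linalg.matrix_power(expm(T / 2**s), 2**s)
```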
A Fast Gradient Method for Nonnegative Sparse Regression with Self Dictionary
A nonnegative matrix factorization (NMF) can be computed efficiently under
the separability assumption, which asserts that all the columns of the given
input data matrix belong to the cone generated by a (small) subset of them. The
provably most robust methods to identify these conic basis columns are based on
nonnegative sparse regression and self dictionaries, and require the solution
of large-scale convex optimization problems. In this paper we study a
particular nonnegative sparse regression model with self dictionary. As opposed
to previously proposed models, this model yields a smooth optimization problem
where the sparsity is enforced through linear constraints. We show that the
Euclidean projection on the polyhedron defined by these constraints can be
computed efficiently, and propose a fast gradient method to solve our model. We
compare our algorithm with several state-of-the-art methods on synthetic data
sets and real-world hyperspectral images.
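As context for the column-selection task, a classical greedy baseline for separable NMF, the successive projection algorithm (SPA), can be sketched in a few lines (a bare-bones version for illustration; the paper's approach solves a convex self-dictionary model instead of a greedy selection):

```python
import numpy as np

def spa(M, r):
    """Successive projection algorithm: greedily select r columns of M
    that (approximately) generate the cone containing all columns."""
    R = np.array(M, dtype=float)
    selected = []
    for _ in range(r):
        # Pick the column with the largest residual norm; under exact
        # separability, a norm maximum is attained at a basis column.
        j = int(np.argmax(np.sum(R * R, axis=0)))
        selected.append(j)
        # Project every column onto the orthogonal complement of the
        # chosen column, removing its contribution from the residual.
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)
    return selected
```

On a toy separable matrix whose last columns are conic combinations of the first two, `spa(M, 2)` returns exactly those two basis columns.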
Robust Near-Separable Nonnegative Matrix Factorization Using Linear Optimization
Nonnegative matrix factorization (NMF) has been shown recently to be
tractable under the separability assumption, under which all the columns of the
input data matrix belong to the convex cone generated by only a few of these
columns. Bittorf, Recht, Ré and Tropp (`Factoring nonnegative matrices with
linear programs', NIPS 2012) proposed a linear programming (LP) model, referred
to as Hottopixx, which is robust under any small perturbation of the input
matrix. However, Hottopixx has two important drawbacks: (i) the input matrix
has to be normalized, and (ii) the factorization rank has to be known in
advance. In this paper, we generalize Hottopixx in order to resolve these two
drawbacks, that is, we propose a new LP model which does not require
normalization and detects the factorization rank automatically. Moreover, the
new LP model is more flexible, significantly more tolerant to noise, and can
easily be adapted to handle outliers and other noise models. Finally, we show
on several synthetic datasets that it outperforms Hottopixx while competing
favorably with two state-of-the-art methods.
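The flavor of such LP models can be illustrated on a noiseless toy instance (a simplified Hottopixx-like program with exact constraint M X = M and the rank r given in advance, i.e., exhibiting exactly the second drawback discussed above; the specific constraints and weights are our simplification, not the paper's generalized model):

```python
import numpy as np
from scipy.optimize import linprog

# Separable input: columns 0 and 1 generate column 2.
M = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
m, n = M.shape
r = 2  # factorization rank, assumed known in this simplified model

# Variables: X flattened column-major, X[i, j] at index j*n + i.
# Equalities: M @ X = M (each column reproduced) and trace(X) = r.
A_eq = np.kron(np.eye(n), M)
b_eq = M.flatten(order="F")
diag_idx = [i * n + i for i in range(n)]
trace_row = np.zeros(n * n)
trace_row[diag_idx] = 1.0
A_eq = np.vstack([A_eq, trace_row])
b_eq = np.append(b_eq, r)

# Inequalities X[i, j] <= X[i, i]: the diagonal dominates its row and
# acts as a column-selection indicator.
rows = []
for i in range(n):
    for j in range(n):
        if i != j:
            row = np.zeros(n * n)
            row[j * n + i] = 1.0
            row[i * n + i] = -1.0
            rows.append(row)
A_ub, b_ub = np.array(rows), np.zeros(len(rows))

# Positive weights on the diagonal; in this noiseless toy instance the
# feasible set already pins the diagonal down uniquely.
c = np.zeros(n * n)
c[diag_idx] = [0.3, 0.7, 0.2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0.0, 1.0))
diag = res.x[diag_idx]
# diag is approximately (1, 1, 0): columns 0 and 1 are the generators.
```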