Sampling from large matrices: an approach through geometric functional analysis
We study random submatrices of a large matrix A. We show how to approximately
compute A from its random submatrix of the smallest possible size O(r log r)
with a small error in the spectral norm, where r = ||A||_F^2 / ||A||_2^2 is the
numerical rank of A. The numerical rank is always bounded by, and is a stable
relaxation of, the rank of A. This yields an asymptotically optimal guarantee
in an algorithm for computing low-rank approximations of A. We also prove
asymptotically optimal estimates on the spectral norm and the cut-norm of
random submatrices of A. The result for the cut-norm yields a slight
improvement on the best known sample complexity for an approximation algorithm
for MAX-2CSP problems. We use methods of Probability in Banach spaces, in
particular the law of large numbers for operator-valued random variables.
Comment: Our initial claim about MAX-2CSP problems is corrected. We added an
exponential failure probability bound for the algorithm for low-rank
approximations. Proofs are explained in a little more detail.
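The numerical (stable) rank r = ||A||_F^2 / ||A||_2^2 from this abstract is straightforward to compute directly. The sketch below (not the paper's sampling algorithm) illustrates, assuming NumPy is available, that on a noisy low-rank matrix the numerical rank stays near the planted rank even though the exact rank is full:

```python
# Sketch: the numerical (stable) rank r = ||A||_F^2 / ||A||_2^2,
# which is always bounded by the exact rank and is robust to small noise.
import numpy as np

def numerical_rank(A):
    """Stable rank: squared Frobenius norm over squared spectral norm."""
    fro2 = np.linalg.norm(A, 'fro') ** 2
    spec2 = np.linalg.norm(A, 2) ** 2   # largest singular value, squared
    return fro2 / spec2

rng = np.random.default_rng(0)
# Planted rank-3 matrix plus tiny noise: the exact rank jumps to full,
# but the numerical rank remains at most about 3.
B = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 200))
A = B + 1e-6 * rng.standard_normal((200, 200))
r = numerical_rank(A)
assert 1.0 <= r <= 3.01
```

For the identity matrix the two notions coincide: numerical_rank(np.eye(5)) is exactly 5.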
Fast computation of the matrix exponential for a Toeplitz matrix
The computation of the matrix exponential is a ubiquitous operation in
numerical mathematics, and for a general, unstructured n x n matrix it
can be computed in O(n^3) operations. An interesting problem arises
if the input matrix is a Toeplitz matrix, for example as the result of
discretizing integral equations with a time invariant kernel. In this case it
is not obvious how to take advantage of the Toeplitz structure, as the
exponential of a Toeplitz matrix is, in general, not a Toeplitz matrix itself.
The main contribution of this work are fast algorithms for the computation of
the Toeplitz matrix exponential. The algorithms have provable quadratic
complexity if the spectrum is real, or sectorial, or more generally, if the
imaginary parts of the rightmost eigenvalues do not vary too much. They may be
efficient even outside these spectral constraints. They are based on the
scaling and squaring framework, and their analysis connects classical results
from rational approximation theory to matrices of low displacement rank. As an
example, the developed methods are applied to Merton's jump-diffusion model for
option pricing.
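For orientation, the scaling and squaring framework the abstract builds on can be sketched in a few lines. The version below is a plain dense implementation with a truncated Taylor series standing in for the diagonal Pade approximant; it costs O(n^3) per matrix product, so it does NOT exploit the Toeplitz structure or the low displacement rank behind the paper's quadratic-complexity algorithms (NumPy and SciPy assumed):

```python
# Sketch of scaling and squaring: scale A down by a power of two, apply a
# cheap local approximation of exp, then square the result back up.
import numpy as np
from scipy.linalg import expm, toeplitz

def expm_scaling_squaring(A, terms=16):
    # Choose s so that ||A / 2^s||_1 <= 1, where the series converges fast.
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1.0)))))
    B = A / (2 ** s)
    # Truncated Taylor series as a stand-in for the Pade approximant.
    E = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, terms + 1):
        term = term @ B / k
        E = E + term
    for _ in range(s):          # undo the scaling: exp(A) = exp(A/2^s)^(2^s)
        E = E @ E
    return E

# A symmetric tridiagonal Toeplitz test matrix (real spectrum).
T = toeplitz([2.0, -1.0, 0.0, 0.0])
approx = expm_scaling_squaring(T)
assert np.allclose(approx, expm(T), atol=1e-6)
```

Note that exp(T) is a dense, non-Toeplitz matrix even though T is Toeplitz, which is exactly the structural difficulty the abstract describes.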
Computing approximate PSD factorizations
We give an algorithm for computing approximate PSD factorizations of
nonnegative matrices. The running time of the algorithm is polynomial in the
dimensions of the input matrix, but exponential in the PSD rank and the
approximation error. The main ingredient is an exact factorization algorithm
when the rows and columns of the factors are constrained to lie in a general
polyhedron. This strictly generalizes nonnegative matrix factorization, which
is captured by letting this polyhedron be the nonnegative orthant.
Comment: 10 pages
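The definition behind this abstract, and the orthant special case, can be made concrete with a small sketch (not the paper's algorithm). A size-k PSD factorization of a nonnegative matrix M writes M_ij = Tr(A_i B_j) with k x k PSD matrices A_i, B_j; any nonnegative factorization M = U V yields one with diagonal factors, assuming NumPy is available:

```python
# Sketch: turning an entrywise-nonnegative factorization M = U V into a
# PSD factorization M_ij = Tr(A_i B_j) with diagonal (hence PSD) factors.
import numpy as np

rng = np.random.default_rng(1)
U = rng.random((4, 2))                 # entrywise nonnegative
V = rng.random((2, 5))
M = U @ V                              # nonnegative matrix, NMF size 2

A = [np.diag(U[i]) for i in range(4)]        # diagonal => PSD
B = [np.diag(V[:, j]) for j in range(5)]

M_rebuilt = np.array([[np.trace(A[i] @ B[j]) for j in range(5)]
                      for i in range(4)])
assert np.allclose(M, M_rebuilt)       # Tr(A_i B_j) recovers M exactly
```

General (non-diagonal) PSD factors can be smaller than any nonnegative factorization, which is why the PSD rank can be strictly below the nonnegative rank.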
Wishart Mechanism for Differentially Private Principal Components Analysis
We propose a new input perturbation mechanism for publishing a covariance
matrix to achieve (ε, 0)-differential privacy. Our mechanism uses a
Wishart distribution to generate matrix noise. In particular, we apply this
mechanism to principal component analysis. Our mechanism is able to keep the
positive semi-definiteness of the published covariance matrix. Thus, our
approach gives rise to a general publishing framework for input perturbation of
a symmetric positive semidefinite matrix. Moreover, compared with the classic
Laplace mechanism, our method has a better utility guarantee. To the best of
our knowledge, the Wishart mechanism is the best input perturbation approach
for (ε, 0)-differentially private PCA. We also compare our work with previous
exponential mechanism algorithms in the literature and provide a near-optimal
bound while offering more flexibility and lower computational cost.
Comment: A full version with technical proofs. Accepted to AAAI-16.
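The core input-perturbation idea can be sketched as follows. The degrees of freedom and noise scale below are illustrative placeholders, not the paper's privacy-calibrated parameters; the point of the sketch is only that adding a Wishart sample keeps the published covariance positive semidefinite, assuming NumPy is available:

```python
# Sketch: perturb a covariance matrix with Wishart noise W = G G^T before
# publication. W is PSD by construction, so PSD-ness of C is preserved.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 6))
C = X.T @ X / len(X)                   # empirical covariance, PSD

d = C.shape[0]
df, scale = d + 1, 0.05                # hypothetical noise parameters
G = np.sqrt(scale) * rng.standard_normal((d, df))
W = G @ G.T                            # Wishart-distributed noise, PSD
C_pub = C + W                          # sum of PSD matrices is PSD

assert np.linalg.eigvalsh(C_pub).min() >= -1e-10
```

By contrast, adding symmetric Laplace noise entrywise can push small eigenvalues negative, which is why the PSD-preserving property above matters for downstream PCA.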