Nonparametric Estimation of Multi-View Latent Variable Models
Spectral methods have greatly advanced the estimation of latent variable
models, generating a sequence of novel and efficient algorithms with strong
theoretical guarantees. However, current spectral algorithms are largely
restricted to mixtures of discrete or Gaussian distributions. In this paper, we
propose a kernel method for learning multi-view latent variable models,
allowing each mixture component to be nonparametric. The key idea of the method
is to embed the joint distribution of a multi-view latent variable model into a
reproducing kernel Hilbert space and then recover the latent parameters
using a robust tensor power method. We establish that the sample complexity for
the proposed method is quadratic in the number of latent components and is a
low-order polynomial in the other relevant parameters. Thus, our nonparametric
tensor approach to learning latent variable models enjoys good sample and
computational efficiency. Moreover, the nonparametric tensor power method
compares favorably to the EM algorithm and to other existing spectral
algorithms in our experiments.
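The core object behind such multi-view methods is the empirical third-order cross-view moment, whose rank-one structure encodes the latent parameters. As a hedged illustration, here is a minimal NumPy sketch of that moment estimate; it assumes finite-dimensional feature vectors in place of the paper's kernel embeddings, and the function name is illustrative rather than from the paper.

```python
import numpy as np

def third_order_moment(x1, x2, x3):
    """Empirical cross-view moment E[x1 (x) x2 (x) x3].

    x1, x2, x3: (n_samples, d) arrays holding three views of each sample
    that are conditionally independent given the latent component.
    """
    n = x1.shape[0]
    # Average the rank-one tensors x1_i (outer) x2_i (outer) x3_i.
    return np.einsum('ni,nj,nk->ijk', x1, x2, x3) / n
```

After suitable symmetrization and whitening, such a tensor can be handed to a tensor power method (as in the next abstract) to recover the mixture components.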
Tensor decompositions for learning latent variable models
This work considers a computationally and statistically efficient parameter
estimation method for a wide class of latent variable models---including
Gaussian mixture models, hidden Markov models, and latent Dirichlet
allocation---which exploits a certain tensor structure in their low-order
observable moments (typically, of second- and third-order). Specifically,
parameter estimation is reduced to the problem of extracting a certain
(orthogonal) decomposition of a symmetric tensor derived from the moments; this
decomposition can be viewed as a natural generalization of the singular value
decomposition for matrices. Although tensor decompositions are generally
intractable to compute, the decomposition of these specially structured tensors
can be efficiently obtained by a variety of approaches, including power
iterations and maximization approaches (similar to the case of matrices). A
detailed analysis of a robust tensor power method is provided, establishing an
analogue of Wedin's perturbation theorem for the singular vectors of matrices.
This implies a robust and computationally tractable estimation approach for
several popular latent variable models.
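To make the method concrete, below is a minimal NumPy sketch of a basic tensor power method with random restarts and deflation for a symmetric third-order tensor. It omits the whitening step and the paper's specific robustness modifications, so it is an illustration of the idea rather than the analyzed algorithm; all names are ours.

```python
import numpy as np

def tensor_power_method(T, n_components, n_restarts=10, n_iters=100, seed=None):
    """Greedily extract (eigenvalue, eigenvector) pairs of a symmetric
    third-order tensor T that is approximately orthogonally decomposable."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    pairs = []
    for _ in range(n_components):
        best_lam, best_u = -np.inf, None
        for _ in range(n_restarts):
            u = rng.standard_normal(d)
            u /= np.linalg.norm(u)
            for _ in range(n_iters):
                u = np.einsum('ijk,j,k->i', T, u, u)  # the map u -> T(I, u, u)
                u /= np.linalg.norm(u)
            lam = np.einsum('ijk,i,j,k->', T, u, u, u)  # T(u, u, u)
            if lam > best_lam:
                best_lam, best_u = lam, u
        pairs.append((best_lam, best_u))
        # Deflate: remove the recovered rank-one component and repeat.
        T = T - best_lam * np.einsum('i,j,k->ijk', best_u, best_u, best_u)
    return pairs
```

For an exactly orthogonally decomposable tensor, the inner iteration converges quadratically to an eigenvector, and deflation exposes the remaining components one by one.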
A sparse decomposition of low rank symmetric positive semi-definite matrices
Suppose that $A \in \mathbb{R}^{N \times N}$ is symmetric positive
semidefinite with rank $K$. Our goal is to decompose $A$ into $K$
rank-one matrices $\sum_{k=1}^{K} g_k g_k^T$, where the modes $\{g_k\}_{k=1}^{K}$
are required to be as sparse as possible. In contrast to eigendecomposition,
these sparse modes are not required to be orthogonal. Such a problem arises in
random field parametrization, where $A$ is the covariance function, and is
intractable to solve in general. In this paper, we partition the indices from $1$
to $N$ into several patches and propose to quantify the sparseness of a vector
by the number of patches on which it is nonzero, which is called patch-wise
sparseness. Our aim is to find the decomposition which minimizes the total
patch-wise sparseness of the decomposed modes. We propose a
domain-decomposition type method, called intrinsic sparse mode decomposition
(ISMD), which follows the "local-modes-construction + patching-up" procedure.
The key step in the ISMD is to construct local pieces of the intrinsic sparse
modes by solving a joint diagonalization problem. Thereafter, a pivoted Cholesky
decomposition is utilized to glue these local pieces together. Optimal sparse
decomposition, consistency with respect to different domain decompositions, and
robustness to small perturbations are proved under the so-called regular-sparse
assumption (see Definition 1.2). We provide simulation results to show the
efficiency and robustness of the ISMD. We also compare the ISMD to other
existing methods, e.g., eigendecomposition, pivoted Cholesky decomposition, and
convex relaxations of sparse principal component analysis [25], [40].
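The pivoted Cholesky step used to glue the local pieces together can be illustrated on its own. The following is a hedged NumPy sketch of a standard low-rank pivoted Cholesky factorization of a positive semidefinite matrix, with greedy pivoting on the residual diagonal; it is a generic textbook variant, not the ISMD itself, and the names are ours.

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-12, max_rank=None):
    """Low-rank pivoted Cholesky of a symmetric PSD matrix: A ~= L @ L.T,
    stopping once the largest residual diagonal entry drops below tol."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    r = n if max_rank is None else max_rank
    d = np.diag(A).copy()            # diagonal of the current residual
    L = np.zeros((n, r))
    for k in range(r):
        p = int(np.argmax(d))        # greedy pivot: largest residual entry
        if d[p] <= tol:
            return L[:, :k]
        L[:, k] = (A[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        d -= L[:, k] ** 2            # update the residual diagonal
    return L
```

Because the pivot order adapts to the matrix, the factorization terminates after essentially rank($A$) steps, which is what makes it a cheap device for assembling a low-rank covariance matrix from local pieces.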
Fast and accurate con-eigenvalue algorithm for optimal rational approximations
The need to compute small con-eigenvalues and the associated con-eigenvectors
of positive-definite Cauchy matrices naturally arises when constructing
rational approximations with a (near) optimally small $L^\infty$ error.
Specifically, given a rational function with $n$ poles in the unit disk, a
rational approximation with $m \ll n$ poles in the unit disk may be obtained
from the $m$th con-eigenvector of an $n \times n$ Cauchy matrix, where the
associated con-eigenvalue $\lambda_m > 0$ gives the approximation error in the
$L^\infty$ norm. Unfortunately, standard algorithms do not accurately compute
small con-eigenvalues (and the associated con-eigenvectors) and, in particular,
yield few or no correct digits for con-eigenvalues smaller than the machine
roundoff. We develop a fast and accurate algorithm for computing
con-eigenvalues and con-eigenvectors of positive-definite Cauchy matrices,
yielding even the tiniest con-eigenvalues with high relative accuracy. The
algorithm computes the $m$th con-eigenvalue in $\mathcal{O}(m^2 n)$ operations
and, since the con-eigenvalues of positive-definite Cauchy matrices decay
exponentially fast, we obtain (near) optimal rational approximations in
$\mathcal{O}\big(n (\log \delta^{-1})^2\big)$ operations, where $\delta$ is the
approximation error in the $L^\infty$ norm. We derive error bounds
demonstrating high relative accuracy of the computed con-eigenvalues and the
high accuracy of the unit con-eigenvectors. We also provide examples of using
the algorithm to compute (near) optimal rational approximations of functions
with singularities and sharp transitions, where approximation errors close to
machine precision are obtained. Finally, we present numerical tests on random
(complex-valued) Cauchy matrices to show that the algorithm computes all the
con-eigenvalues and con-eigenvectors with nearly full precision.
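For concreteness: a con-eigenpair of $A$ satisfies $A\bar{u} = \lambda u$ with $\lambda \ge 0$, so that $A\bar{A}u = \lambda^2 u$. The NumPy sketch below implements this naive reduction to an ordinary eigenproblem; exactly as the abstract warns, it yields no correct digits for con-eigenvalues near machine roundoff, which is the failure mode the paper's algorithm repairs. The helper name is ours, and the con-eigenvector construction is the standard one from Horn and Johnson.

```python
import numpy as np

def con_eig_naive(A):
    """Naive con-eigenpairs of a square complex matrix A, i.e. solutions
    of A @ conj(u) = lam * u with lam >= 0.

    Uses A @ conj(A) @ u = lam**2 * u, so the con-eigenvalues are square
    roots of eigenvalues of A @ conj(A). Accurate only for con-eigenvalues
    well above machine roundoff.
    """
    w, U = np.linalg.eig(A @ np.conj(A))
    lam = np.sqrt(np.abs(w))             # con-eigenvalues (>= 0)
    # v = A conj(u) + lam * u satisfies A conj(v) = lam * v whenever v != 0
    # (if a column vanishes, 1j * (A conj(u) - lam * u) works instead).
    V = A @ np.conj(U) + U * lam
    norms = np.linalg.norm(V, axis=0)
    V = V / np.where(norms > 0, norms, 1.0)
    return lam, V
```

Running this on a positive-definite Cauchy matrix shows the con-eigenvalues decaying exponentially until they flatten out near machine epsilon, the regime in which the paper's high-relative-accuracy algorithm is needed.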
…