Generalized Separable Nonnegative Matrix Factorization
Nonnegative matrix factorization (NMF) is a linear dimensionality reduction technique
for nonnegative data with applications such as image analysis, text mining,
audio source separation and hyperspectral unmixing. Given a data matrix X and
a factorization rank r, NMF looks for a nonnegative matrix W with r
columns and a nonnegative matrix H with r rows such that X ≈ WH.
NMF is NP-hard to solve in general. However, it can be computed efficiently
under the separability assumption, which requires that the basis vectors appear
as data points, that is, that there exists an index set K such that
W = X(:,K). In this paper, we generalize the separability
assumption: we only require that, for each rank-one factor W(:,k)H(k,:) for
k = 1, ..., r, either W(:,k) = X(:,j) for some j, or H(k,:) = X(i,:) for
some i. We refer to the corresponding problem as generalized separable NMF
(GS-NMF). We discuss some properties of GS-NMF and propose a convex
optimization model which we solve using a fast gradient method. We also propose
a heuristic algorithm inspired by the successive projection algorithm. To
verify the effectiveness of our methods, we compare them with several
state-of-the-art separable NMF algorithms on synthetic, document and image data
sets.
Comment: 31 pages, 12 figures, 4 tables. We have added discussions about the
identifiability of the model, modified the first synthetic experiment, and
clarified some aspects of the contribution.
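To make the separability assumption concrete: under it, provably correct column-selection methods exist, the best known being the successive projection algorithm (SPA) that the paper's heuristic is inspired by. The following is a minimal sketch of plain SPA (not the GS-NMF heuristic itself); the function name `spa` and the synthetic setup in the usage note are illustrative choices, assuming exact separability and a full-rank W:

```python
import numpy as np

def spa(X, r):
    # Successive Projection Algorithm (sketch). Under separability, the
    # columns of W appear among the columns of X. SPA greedily picks the
    # column with the largest residual norm, then projects all columns
    # onto the orthogonal complement of the picked column.
    R = np.array(X, dtype=float)
    indices = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)  # remove the component along u
        indices.append(j)
    return indices

# Usage: build a separable X = WH whose first r columns are the basis
# vectors themselves; SPA should recover exactly those column indices.
rng = np.random.default_rng(0)
W = rng.random((20, 3)) + 0.1                     # nonnegative basis, r = 3
H = np.hstack([np.eye(3), np.full((3, 7), 1 / 3)])  # identity + mixtures
X = W @ H
print(sorted(spa(X, 3)))
```

The greedy norm-then-project step is what makes SPA fast (one pass per factor); its weakness, and part of the motivation for generalizations, is that it requires every basis vector to literally appear as a column of X.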
A Practical Algorithm for Topic Modeling with Provable Guarantees
Topic models provide a useful method for dimensionality reduction and
exploratory data analysis in large text corpora. Most approaches to topic model
inference have been based on a maximum likelihood objective. Efficient
algorithms exist that approximate this objective, but they have no provable
guarantees. Recently, algorithms have been introduced that provide provable
bounds, but these algorithms are not practical because they are inefficient and
not robust to violations of model assumptions. In this paper we present an
algorithm for topic model inference that is both provable and practical. The
algorithm produces results comparable to the best MCMC implementations while
running orders of magnitude faster.
Comment: 26 pages.
A new SVD approach to optimal topic estimation
In probabilistic topic models, the quantity of interest---a low-rank
matrix consisting of topic vectors---is hidden in the text corpus matrix,
masked by noise, and Singular Value Decomposition (SVD) is a potentially useful
tool for learning such a matrix. However, different rows and columns of the
matrix are usually on very different scales, and the connection between this
matrix and the singular vectors of the text corpus matrix is usually
complicated and hard to spell out, so using SVD to learn topic models
faces challenges.
We overcome the challenges by introducing a proper Pre-SVD normalization of
the text corpus matrix and a proper column-wise scaling for the matrix of
interest, and by revealing a surprising Post-SVD low-dimensional {\it simplex}
structure. The simplex structure, together with the Pre-SVD normalization and
column-wise scaling, allows us to conveniently reconstruct the matrix of
interest, and motivates a new SVD-based approach to learning topic models.
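The recipe above (normalize, factor, inspect a low-dimensional structure) can be sketched generically. In this illustration the specific pre-SVD step (scaling each row of the word-document frequency matrix by the inverse square root of its mean) and the ratio-based embedding are assumptions chosen for the sketch, not the paper's exact construction:

```python
import numpy as np

def svd_topic_embedding(D, K):
    # D: p x n word-document frequency matrix (rows = words, cols = docs).
    # Step 1: pre-SVD normalization (assumed form: divide each row by the
    # square root of its mean, to put rows on comparable scales).
    row_means = np.maximum(D.mean(axis=1), 1e-12)
    Dn = D / np.sqrt(row_means)[:, None]
    # Step 2: rank-K truncated SVD of the normalized matrix.
    U, s, Vt = np.linalg.svd(Dn, full_matrices=False)
    xi = U[:, :K]                      # top-K left singular vectors
    # Step 3: entrywise ratios against the leading singular vector give a
    # p x (K-1) point cloud; under the model, this cloud is expected to
    # sit near a simplex with K vertices, one per topic.
    return xi[:, 1:] / xi[:, [0]]

# Usage: for a strictly positive D the leading singular vector has entries
# of one sign, so the ratios are well defined.
rng = np.random.default_rng(1)
D = rng.random((50, 200))
R = svd_topic_embedding(D, 3)
print(R.shape)  # one (K-1)-dimensional point per vocabulary word
```

The point of the ratio step is that it cancels the unknown per-word scaling, which is exactly the role the column-wise scaling plays in the reconstruction described above.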
We show that under the popular probabilistic topic model \citep{hofmann1999},
our method has a faster rate of convergence than existing methods in a wide
variety of cases. In particular, for cases where documents are long or one
dimension of the corpus matrix is much larger than the other, our method
achieves the optimal rate. At the heart of the proofs is a tight element-wise
bound on the singular vectors of a multinomially distributed data matrix,
which does not exist in the literature and which we have to derive ourselves.
We have applied our method to two data sets, Associated Press (AP) and
Statistics Literature Abstract (SLA), with encouraging results. In particular,
there is a clear simplex structure associated with the SVD of the data
matrices, which largely validates our discovery.
Comment: 73 pages, 8 figures, 6 tables; considered two different VH algorithms,
OVH and GVH, and provided theoretical analysis for each algorithm;
reorganized the upper-bound theory part; added a subsection comparing error
rates with other existing methods; provided an improved version of the error
analysis via a Bernstein inequality for martingales.