Rethinking LDA: moment matching for discrete ICA
We consider moment matching techniques for estimation in Latent Dirichlet
Allocation (LDA). By drawing explicit links between LDA and discrete versions
of independent component analysis (ICA), we first derive a new set of
cumulant-based tensors, with an improved sample complexity. Moreover, we reuse
standard ICA techniques such as joint diagonalization of tensors to improve
over existing methods based on the tensor power method. In an extensive set of
experiments on both synthetic and real datasets, we show that our new
combination of tensors and orthogonal joint diagonalization techniques
outperforms existing moment matching methods.
Comment: 30 pages; added plate diagrams and clarifications, changed style, corrected typos, updated figures. In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS), 2015.
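To make the joint diagonalization idea concrete, here is a minimal NumPy sketch (not the paper's estimator: the actual method builds cumulant-based tensors from documents and uses noise-robust Jacobi-type orthogonal joint diagonalization). In the idealized case where a set of symmetric matrices share one orthogonal eigenbasis, eigendecomposing a generic random linear combination recovers that basis in one shot:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 5, 4

# Build K symmetric matrices sharing one orthogonal eigenbasis Q, mimicking
# the common structure of the population moment/cumulant matrices.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
mats = [Q @ np.diag(rng.normal(size=d)) @ Q.T for _ in range(K)]

# In this noiseless case, eigendecomposing a generic random linear
# combination of the matrices recovers the shared basis in one shot.
w = rng.normal(size=K)
combo = sum(wk * Mk for wk, Mk in zip(w, mats))
_, V = np.linalg.eigh(combo)

# V should (approximately) diagonalize every matrix simultaneously.
off_diag = lambda A: np.abs(A - np.diag(np.diag(A))).max()
print([float(off_diag(V.T @ M @ V)) for M in mats])
```

With sampling noise the empirical matrices are only approximately co-diagonalizable, which is why Jacobi-angle joint diagonalization, minimizing total off-diagonal energy across all matrices at once, is used in practice.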
Covariance and PCA for Categorical Variables
Covariances for categorical variables are defined using a regular simplex
expression for the categories. The method follows Gini's definition of
variance and gives the covariance as the solution of a system of
simultaneous equations. The
calculated results give reasonable values for test data. A method of principal
component analysis (RS-PCA) is also proposed using regular simplex expressions,
which allows easy interpretation of the principal components. The proposed
methods are applied to the variable selection problem for categorical data,
using the USCensus1990 dataset, and give an appropriate criterion for that
problem.
Comment: 12 pages, 5 figures
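For intuition, centered one-hot vectors give one concrete regular simplex expression: the k category vertices are pairwise equidistant in a (k-1)-dimensional subspace. The sketch below (hypothetical toy data, and an ordinary eigendecomposition rather than the paper's simultaneous-equation solution) shows covariance and PCA on such an encoding:

```python
import numpy as np

def simplex_encode(labels, categories):
    """Map each category to a vertex of a regular simplex.
    Centered one-hot vectors of k categories are k equidistant points
    (pairwise distance sqrt(2)) lying in a (k-1)-dim subspace."""
    k = len(categories)
    verts = np.eye(k) - 1.0 / k          # centered one-hot = regular simplex
    idx = {c: i for i, c in enumerate(categories)}
    return np.array([verts[idx[v]] for v in labels])

# Toy data: two categorical variables over 8 observations (hypothetical).
color = ["red", "red", "blue", "green", "blue", "red", "green", "blue"]
size  = ["S", "M", "S", "L", "M", "S", "L", "M"]

X = np.hstack([simplex_encode(color, ["red", "blue", "green"]),
               simplex_encode(size,  ["S", "M", "L"])])

cov = np.cov(X, rowvar=False)            # covariance of simplex coordinates
eigvals, eigvecs = np.linalg.eigh(cov)   # PCA on the encoded variables
print(eigvals[::-1])                     # principal variances, largest first
```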
Memory-Efficient Topic Modeling
As one of the simplest probabilistic topic modeling techniques, latent
Dirichlet allocation (LDA) has found many important applications in text
mining, computer vision and computational biology. Recent training algorithms
for LDA can be interpreted within a unified message passing framework. However,
message passing requires storing previous messages with a large amount of
memory space, increasing linearly with the number of documents or the number of
topics. Therefore, the high memory usage is often a major problem for topic
modeling of massive corpora containing a large number of topics. To reduce the
space complexity, we propose a novel algorithm for training LDA that does not
store previous messages: tiny belief propagation (TBP). The basic idea of TBP
is to relate message passing algorithms to non-negative matrix factorization
(NMF) algorithms, absorbing the message update into the message passing
process and thus avoiding the storage of previous messages. Experimental
results on four large data sets confirm that TBP performs comparably to, or
even better than, current state-of-the-art training algorithms for LDA, but
with much lower memory consumption. TBP can perform topic modeling when
massive corpora cannot fit in computer memory, for example, extracting
thematic topics from a 7 GB PUBMED corpus on a common desktop computer with
2 GB of memory.
Comment: 20 pages, 7 figures
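TBP itself is derived from belief propagation for LDA, but the NMF connection can be illustrated with the classic multiplicative updates for KL-divergence NMF: the only state carried across iterations is the two factor matrices, while the message-like ratio matrix is recomputed on the fly rather than stored. A minimal sketch with synthetic counts (all dimensions are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_docs, K = 50, 30, 5
V = rng.poisson(2.0, size=(n_words, n_docs)).astype(float)  # word-doc counts

# Only the two factors are kept in memory: B (word-topic), T (topic-doc).
B = rng.random((n_words, K)) + 0.1
T = rng.random((K, n_docs)) + 0.1

for _ in range(200):
    R = V / (B @ T + 1e-10)          # message-like ratios, recomputed per sweep
    B *= (R @ T.T) / (T.sum(axis=1) + 1e-10)           # KL-NMF update for B
    R = V / (B @ T + 1e-10)          # recompute with the updated B
    T *= (B.T @ R) / (B.sum(axis=0)[:, None] + 1e-10)  # KL-NMF update for T

recon = B @ T
print("KL:", (V * np.log((V + 1e-10) / recon) - V + recon).sum())
```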
A Graphical Model for Fusing Diverse Microbiome Data
This paper develops a Bayesian graphical model for fusing disparate types of
count data. The motivating application is the study of bacterial communities
from diverse high dimensional features, in this case transcripts, collected
from different treatments. In such datasets, there are no explicit
correspondences between the communities, and each data source corresponds to
different factors, making data fusion challenging. We introduce a flexible
multinomial-Gaussian generative model for jointly modeling such count data.
This latent variable model jointly characterizes the observed data through a
common multivariate Gaussian latent space that parameterizes the set of
multinomial probabilities of the transcriptome counts. The covariance matrix of
the latent variables induces a covariance matrix of co-dependencies between all
the transcripts, effectively fusing multiple data sources. We present a
computationally scalable variational Expectation-Maximization (EM) algorithm
for inferring the latent variables and the parameters of the model. The
inferred latent variables provide a common dimensionality reduction for
visualizing the data and the inferred parameters provide a predictive posterior
distribution. In addition to simulation studies that demonstrate the
variational EM procedure, we apply our model to a bacterial microbiome dataset.
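The generative side of such a multinomial-Gaussian (logistic-normal) model is easy to sketch; all dimensions and parameter values below are made up, and the paper's variational EM for inverting the model is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, depth = 20, 100, 5000          # transcripts, samples, sequencing depth

# Hypothetical parameters: a latent Gaussian whose covariance encodes the
# co-dependencies between transcripts.
mu = rng.normal(size=p)
A = rng.normal(size=(p, p)) / np.sqrt(p)
Sigma = A @ A.T + 0.1 * np.eye(p)

counts = np.empty((n, p), dtype=int)
for i in range(n):
    z = rng.multivariate_normal(mu, Sigma)           # shared Gaussian latent
    q = np.exp(z - z.max())                          # stable softmax
    counts[i] = rng.multinomial(depth, q / q.sum())  # observed count vector
```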
Advanced Probabilistic Models for Clustering and Projection
Probabilistic modeling for data mining and machine learning problems is a fundamental research area. The general approach is to assume a generative model underlying the observed data and estimate the model parameters via likelihood maximization. It has deep probability theory as its mathematical foundation and draws on a wealth of methods from statistical learning, sampling theory, and Bayesian statistics. In this thesis we study several advanced probabilistic models for data clustering and feature projection, which are two important unsupervised learning problems.
The goal of clustering is to group similar data points together to uncover the data clusters. While numerous methods exist for various clustering tasks, one important question remains: how to automatically determine the number of clusters. The first part of the thesis answers this question from a mixture modeling perspective. A finite mixture model is first introduced for clustering, in which each mixture component is assumed to be an exponential family distribution for generality. The model is then extended to an infinite mixture model, and its strong connection to the Dirichlet process (DP), a non-parametric Bayesian framework, is uncovered. A variational Bayesian algorithm called VBDMA is derived from this new insight to learn the number of clusters automatically, and empirical studies on some 2D data sets and an image data set verify the effectiveness of this algorithm.
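VBDMA itself is the thesis's algorithm, but the same idea, variational inference in a truncated Dirichlet-process mixture that switches off surplus components, is available in scikit-learn, which makes for a quick illustration on toy 2D data:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Three well-separated 2D Gaussian clusters (toy data).
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2))
               for c in ([0, 0], [3, 0], [0, 3])])

# Truncated DP mixture: start with more components than needed and let
# variational inference drive the weights of surplus components toward zero.
bgm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(X)

print(np.round(bgm.weights_, 3))                   # surplus weights collapse
print("effective clusters:", (bgm.weights_ > 0.01).sum())
```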
In feature projection, we are interested in dimensionality reduction and aim to find a low-dimensional feature representation for the data. We first review the well-known principal component analysis (PCA) and its probabilistic interpretation (PPCA), and then generalize PPCA to a novel probabilistic model that can handle the non-linear projection known as kernel PCA. An expectation-maximization (EM) algorithm is derived for kernel PCA such that it is fast and applicable to large data sets. We then propose a novel supervised projection method called MORP, which takes the output information into account in a supervised learning context. Empirical studies on various data sets show much better results than unsupervised projection and other supervised projection methods. Finally, we generalize MORP probabilistically to propose SPPCA for supervised projection, and naturally extend the model to S2PPCA, a semi-supervised projection method. This allows us to incorporate both the label information and the unlabeled data into the projection process.
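For reference, the standard (non-probabilistic) kernel PCA that the thesis builds on looks like this; the point of the EM variant is to avoid the O(n^3) eigendecomposition of the full kernel matrix on large data sets. The RBF bandwidth and the data here are arbitrary:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Standard kernel PCA with an RBF kernel (baseline version)."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                            # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]    # sort descending
    alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])
    return Kc @ alphas                        # projections of training data

X = np.random.default_rng(0).normal(size=(200, 5))
Z = kernel_pca(X, n_components=2, gamma=0.5)
```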
In the third part of the thesis, we introduce a unified probabilistic model which can handle data clustering and feature projection jointly. The model can be viewed as a clustering model with projected features, or as a projection model with structured documents. A variational Bayesian learning algorithm can be derived, and it turns out to iterate between the clustering and projection operations until convergence. Superior performance can be obtained for both clustering and projection.
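The thesis's model is variational Bayesian, but the iterate-clustering-and-projection loop shape can be illustrated with a much simpler alternating scheme (a reduced-k-means-style toy, entirely hypothetical, not the thesis's updates): cluster in the projected space, then refit the orthonormal projection to the cluster structure via an orthogonal Procrustes step:

```python
import numpy as np

def cluster_and_project(X, K=3, q=2, n_iter=50, seed=0):
    """Alternate a clustering step and a projection step (toy sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    P = np.linalg.qr(rng.normal(size=(d, q)))[0]   # orthonormal projection
    z = rng.integers(K, size=n)                    # random initial clusters
    for _ in range(n_iter):
        Y = X @ P                                  # project the data
        C = np.array([Y[z == k].mean(axis=0) if np.any(z == k)
                      else Y[rng.integers(n)] for k in range(K)])
        z = ((Y[:, None, :] - C[None]) ** 2).sum(-1).argmin(axis=1)
        # Refit the projection to the clusters (orthogonal Procrustes step).
        U, _, Vt = np.linalg.svd(X.T @ C[z], full_matrices=False)
        P = U @ Vt
    return z, P

X = np.random.default_rng(1).normal(size=(300, 10))
labels, proj = cluster_and_project(X, K=3, q=2)
```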