Gibbs Sampling for (Coupled) Infinite Mixture Models in the Stick Breaking Representation
Nonparametric Bayesian approaches to clustering, information retrieval,
language modeling and object recognition have recently shown great promise as a
new paradigm for unsupervised data analysis. Most contributions have focused on
the Dirichlet process mixture models or extensions thereof for which efficient
Gibbs samplers exist. In this paper we explore Gibbs samplers for infinite
complexity mixture models in the stick breaking representation. The advantage
of this representation is improved modeling flexibility. For instance, one can
design the prior distribution over cluster sizes or couple multiple infinite
mixture models (e.g. over time) at the level of their parameters (i.e. the
dependent Dirichlet process model). However, Gibbs samplers for infinite
mixture models (as recently introduced in the statistics literature) seem to
mix poorly over cluster labels. Among other issues, this can have the adverse
effect that labels for the same cluster in coupled mixture models are mixed up.
We introduce additional moves in these samplers to improve mixing over cluster
labels and to bring clusters into correspondence. An application to modeling of
storm trajectories is used to illustrate these ideas.

Comment: Appears in Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (UAI2006).
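The label-permutation moves mentioned above can be pictured concretely. Below is a minimal sketch, in Python with NumPy, of a Metropolis swap move over cluster labels in a truncated stick-breaking mixture. Because the stick-breaking prior is size-biased rather than exchangeable over labels, such swaps change the posterior and must be accepted or rejected; all function and variable names are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking_weights(v):
    """Map Beta draws v_k to weights w_k = v_k * prod_{j<k}(1 - v_j)."""
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

def swap_labels_move(z, v, theta, k1, k2):
    """Metropolis move swapping clusters k1 and k2 (labels and parameters).

    The likelihood is unchanged because each data point keeps its
    parameters; only the size-biased prior over assignments changes,
    so the acceptance ratio reduces to (w_k1 / w_k2)^(n2 - n1).
    """
    w = stick_breaking_weights(v)
    n1, n2 = np.sum(z == k1), np.sum(z == k2)
    log_ratio = (n2 - n1) * (np.log(w[k1]) - np.log(w[k2]))
    if np.log(rng.uniform()) < log_ratio:
        theta[[k1, k2]] = theta[[k2, k1]]                # swap parameters
        z = np.select([z == k1, z == k2], [k2, k1], z)   # relabel members
    return z, theta
```

Interleaving moves of this kind with the usual conditional updates is what lets coupled mixtures keep matching labels for the same underlying cluster.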
Sparse Partially Collapsed MCMC for Parallel Inference in Topic Models
Topic models, and more specifically models in the Latent Dirichlet Allocation
(LDA) class, are widely used for probabilistic modeling of text. MCMC sampling from
the posterior distribution is typically performed using a collapsed Gibbs
sampler. We propose a parallel sparse partially collapsed Gibbs sampler and
compare its speed and efficiency to state-of-the-art samplers for topic models
on five well-known text corpora of differing sizes and properties. In
particular, we propose and compare two different strategies for sampling the
parameter block with latent topic indicators. The experiments show that the
increase in statistical inefficiency from only partial collapsing is smaller
than commonly assumed, and can be more than compensated by the speedup from
parallelization and sparsity on larger corpora. We also prove that the
partially collapsed samplers scale well with the size of the corpus. The
proposed algorithm is fast, efficient, exact, and can be used in more modeling
situations than the ordinary collapsed sampler.

Comment: Accepted for publication in Journal of Computational and Graphical Statistics.
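To make the partial collapsing concrete: the sketch below instantiates the topic-word matrix Phi while keeping the document-topic proportions collapsed, which is one common way to realize the idea. Given a sampled Phi, the indicator updates for different documents become conditionally independent, so the document loop could run in parallel. This is a schematic single sweep under assumed data structures, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def partially_collapsed_sweep(docs, z, n_dk, n_kw, alpha, beta):
    """One sweep of a partially collapsed Gibbs sampler for LDA.

    docs: list of word-id lists; z: matching lists of topic indicators;
    n_dk: (D, K) doc-topic counts; n_kw: (K, V) topic-word counts;
    alpha: length-K and beta: length-V hyperparameter vectors.
    Phi is sampled from its Dirichlet conditional; theta stays collapsed.
    """
    K = n_kw.shape[0]
    phi = np.vstack([rng.dirichlet(beta + n_kw[k]) for k in range(K)])
    for d, doc in enumerate(docs):          # parallelizable across d given phi
        for i, w in enumerate(doc):
            k_old = z[d][i]
            n_dk[d, k_old] -= 1
            n_kw[k_old, w] -= 1
            p = (alpha + n_dk[d]) * phi[:, w]   # collapsed in theta, given phi
            k_new = rng.choice(K, p=p / p.sum())
            z[d][i] = k_new
            n_dk[d, k_new] += 1
            n_kw[k_new, w] += 1
    return phi
```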
Accelerated Parallel Non-conjugate Sampling for Bayesian Non-parametric Models
Inference of latent feature models in the Bayesian nonparametric setting is
generally difficult, especially in high dimensional settings, because it
usually requires proposing features from some prior distribution. In special
cases, where the integration is tractable, we could sample new feature
assignments according to a predictive likelihood. However, this still may not
be efficient in high dimensions. We present a novel method to accelerate the
mixing of latent variable model inference by proposing feature locations from
the data, as opposed to the prior. First, we introduce an accelerated feature
proposal mechanism and show that it is a valid Bayesian inference algorithm;
next, we propose an approximate inference strategy to perform accelerated
inference in parallel. The resulting sampling method promotes proper mixing of
the Markov chain Monte Carlo sampler, is computationally attractive, and is
theoretically guaranteed to have the posterior as its limiting distribution.

Comment: Previously known as "Accelerated Inference for Latent Variable Models".
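The key idea, proposing feature locations from the data while still targeting the exact posterior, can be illustrated with a one-dimensional Metropolis-Hastings step. This is a hedged toy sketch, not the paper's algorithm: the proposal is a Gaussian kernel density around the observed data, and the Hastings correction uses that same density so the posterior remains the invariant distribution. `log_lik` and `prior` are assumed interfaces.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def data_driven_mh_step(x, theta, log_lik, theta_prior, sigma=0.5):
    """One MH update of a scalar feature location theta.

    x: observed data (1-d array); log_lik: function giving the
    log-likelihood of theta; theta_prior: a frozen scipy.stats
    distribution. Proposals come from a Gaussian kernel around the
    data rather than from the prior, with the proposal density q
    folded into the acceptance ratio.
    """
    anchor = x[rng.integers(len(x))]
    theta_new = rng.normal(anchor, sigma)

    def log_q(t):
        # q(t) is a kernel density estimate centered at the data points.
        return np.log(np.mean(stats.norm.pdf(t, loc=x, scale=sigma)))

    log_accept = (log_lik(theta_new) + theta_prior.logpdf(theta_new) + log_q(theta)
                  - log_lik(theta) - theta_prior.logpdf(theta) - log_q(theta_new))
    return theta_new if np.log(rng.uniform()) < log_accept else theta
```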
Scaling Nonparametric Bayesian Inference via Subsample-Annealing
We describe an adaptation of the simulated annealing algorithm to
nonparametric clustering and related probabilistic models. This new algorithm
learns nonparametric latent structure over a growing and constantly churning
subsample of training data, where the portion of data subsampled can be
interpreted as the inverse temperature beta(t) in an annealing schedule. Gibbs
sampling at high temperature (i.e., with a very small subsample) can more
quickly explore sketches of the final latent state by (a) making longer jumps
around latent space (as in block Gibbs) and (b) lowering energy barriers (as in
simulated annealing). We prove that subsample annealing speeds up mixing time
from N^2 to N in a simple clustering model and from exp(N) to N in another
class of models, where N is the data size. Empirically, subsample annealing
outperforms naive Gibbs sampling in accuracy per wall-clock time, and can
scale to larger datasets and
deeper hierarchical models. We demonstrate improved inference on million-row
subsamples of US Census data and network log data and a 307-row hospital rating
dataset, using a Pitman-Yor generalization of the Cross Categorization model.

Comment: To appear in AISTATS 2014.
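The annealing schedule over subsample size can be sketched generically. In the toy driver below, schedule(t) gives the fraction of data active at step t (the inverse temperature beta(t)), and each step both admits new points and churns existing ones before handing the current subsample to a model-specific Gibbs sweep. The interfaces are hypothetical, chosen only to illustrate the control flow; `data` is assumed to be a NumPy array.

```python
import numpy as np

rng = np.random.default_rng(0)

def subsample_annealing(data, gibbs_sweep, n_steps, schedule, churn=1):
    """Run Gibbs sweeps on a growing, churning subsample of the data.

    schedule(t) in (0, 1] is the fraction of data active at step t:
    tiny subsamples explore coarse structure quickly, and the full
    data set is reached by the end, so the final target is the exact
    posterior. gibbs_sweep(subset) is the model-specific sweep.
    """
    n = len(data)
    order = list(rng.permutation(n))
    k0 = max(1, int(schedule(0) * n))
    active, inactive = order[:k0], order[k0:]
    for t in range(n_steps):
        target = max(1, int(schedule(t) * n))
        while len(active) < target and inactive:      # anneal: admit points
            active.append(inactive.pop(rng.integers(len(inactive))))
        for _ in range(min(churn, len(inactive))):    # churn: swap points
            j, k = rng.integers(len(active)), rng.integers(len(inactive))
            active[j], inactive[k] = inactive[k], active[j]
        gibbs_sweep(data[np.asarray(active)])         # model-specific update

# Example: linear ramp from 1% of the data up to the full data set.
# subsample_annealing(data, sweep, 1000, lambda t: min(1.0, 0.01 + t / 1000))
```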
Gibbs Max-margin Topic Models with Data Augmentation
Max-margin learning is a powerful approach to building classifiers and
structured output predictors. Recent work on max-margin supervised topic models
has successfully integrated it with Bayesian topic models to discover
discriminative latent semantic structures and make accurate predictions for
unseen testing data. However, the resulting learning problems are usually hard
to solve because of the non-smoothness of the margin loss. Existing approaches
to building max-margin supervised topic models rely on an iterative procedure
to solve multiple latent SVM subproblems with additional mean-field assumptions
on the desired posterior distributions. This paper presents an alternative
approach by defining a new max-margin loss. Namely, we present Gibbs max-margin
supervised topic models, a latent variable Gibbs classifier to discover hidden
topic representations for various tasks, including classification, regression
and multi-task learning. Gibbs max-margin supervised topic models minimize an
expected margin loss, which is an upper bound of the existing margin loss
derived from an expected prediction rule. By introducing augmented variables
and integrating out the Dirichlet variables analytically by conjugacy, we
develop simple Gibbs sampling algorithms with no restricting assumptions and no
need to solve SVM subproblems. Furthermore, each step of the
"augment-and-collapse" Gibbs sampling algorithms has an analytical conditional
distribution, from which samples can be easily drawn. Experimental results
demonstrate significant improvements on time efficiency. The classification
performance is also significantly improved over competitors on binary,
multi-class and multi-label classification tasks.

Comment: 35 pages.
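The "augment-and-collapse" step can be sketched for the binary-classification case. Under the scale-mixture representation of the hinge loss used for Gibbs classifiers (Polson and Scott's data augmentation), the classifier weights eta are conditionally Gaussian and the reciprocal augmentation variables are conditionally inverse Gaussian (Wald). The sketch below shows one such sweep; the exact scaling constants should be checked against the paper, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_classifier_sweep(zbar, y, lam, c=1.0, ell=1.0, nu2=1.0):
    """One augment-and-collapse sweep for a binary Gibbs classifier.

    zbar: (D, K) mean topic assignments per document; y: labels in
    {-1, +1}; lam: (D,) augmentation variables; c, ell, nu2: the
    regularization constant, margin, and Gaussian prior variance.
    Both conditionals are closed-form, so no SVM subproblem is solved.
    """
    D, K = zbar.shape
    # eta | lam: Gaussian with explicit precision and mean.
    prec = np.eye(K) / nu2 + c**2 * (zbar.T / lam) @ zbar
    cov = np.linalg.inv(prec)
    mean = cov @ (c * zbar.T @ (y * (lam + c * ell) / lam))
    eta = rng.multivariate_normal(mean, cov)
    # 1/lam_d | eta: inverse Gaussian with mean 1/(c*|zeta_d|), shape 1.
    zeta = ell - y * (zbar @ eta)
    inv_lam = rng.wald(1.0 / (c * np.abs(zeta) + 1e-12), 1.0)
    return eta, 1.0 / inv_lam
```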
A sticky HDP-HMM with application to speaker diarization
We consider speaker diarization, the problem of segmenting an
audio recording of a meeting into temporal segments corresponding to individual
speakers. The problem is rendered particularly difficult by the fact that we
are not allowed to assume knowledge of the number of people participating in
the meeting. To address this problem, we take a Bayesian nonparametric approach
to speaker diarization that builds on the hierarchical Dirichlet process hidden
Markov model (HDP-HMM) of Teh et al. [J. Amer. Statist. Assoc. 101 (2006)
1566--1581]. Although the basic HDP-HMM tends to over-segment the audio
data---creating redundant states and rapidly switching among them---we describe
an augmented HDP-HMM that provides effective control over the switching rate.
We also show that this augmentation makes it possible to treat emission
distributions nonparametrically. To scale the resulting architecture to
realistic diarization problems, we develop a sampling algorithm that employs a
truncated approximation of the Dirichlet process to jointly resample the full
state sequence, greatly improving mixing rates. Working with a benchmark NIST
data set, we show that our Bayesian nonparametric architecture yields
state-of-the-art speaker diarization results.

Comment: Published at http://dx.doi.org/10.1214/10-AOAS395 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
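Two ingredients of the augmented model are easy to sketch: the sticky transition prior, which adds kappa extra prior mass to each self-transition, and the blocked resampling of the full state sequence by forward filtering and backward sampling under an L-state truncation. The sketch below assumes a fixed truncation level and precomputed per-state log-likelihoods; it is illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sticky_transitions(counts, beta, alpha=1.0, kappa=5.0):
    """Draw transition rows pi_j ~ Dirichlet(alpha*beta + kappa*e_j + n_j).

    beta: top-level weights over the L truncated states; counts[j]:
    transition counts out of state j. kappa places extra prior mass on
    self-transitions -- the 'sticky' bias that controls switching.
    """
    L = len(beta)
    eye = np.eye(L)
    return np.vstack([rng.dirichlet(alpha * beta + kappa * eye[j] + counts[j])
                      for j in range(L)])

def resample_state_sequence(log_lik, pi, pi0):
    """Jointly resample z_{1:T} by forward filtering, backward sampling."""
    T, L = log_lik.shape
    lik = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    fwd = np.empty((T, L))
    fwd[0] = pi0 * lik[0]
    fwd[0] /= fwd[0].sum()
    for t in range(1, T):                      # forward filter
        fwd[t] = (fwd[t - 1] @ pi) * lik[t]
        fwd[t] /= fwd[t].sum()
    z = np.empty(T, dtype=int)
    z[-1] = rng.choice(L, p=fwd[-1])
    for t in range(T - 2, -1, -1):             # backward sample
        p = fwd[t] * pi[:, z[t + 1]]
        z[t] = rng.choice(L, p=p / p.sum())
    return z
```

Sampling the whole sequence in one block, rather than one state at a time, is what gives the reported improvement in mixing rates.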