A Bayesian Filtering Algorithm for Gaussian Mixture Models
A Bayesian filtering algorithm is developed for a class of state-space
systems that can be modelled via Gaussian mixtures. In general, the exact
solution to this filtering problem involves an exponential growth in the number
of mixture terms and this is handled here by utilising a Gaussian mixture
reduction step after both the time and measurement updates. In addition, a
square-root implementation of the unified algorithm is presented and this
algorithm is profiled on several simulated systems, including state
estimation for two non-linear systems that lie strictly outside the class
considered in this paper.
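The mixture-reduction step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it uses a moment-preserving pairwise merge of 1-D Gaussians, and the pair-selection distance is a simple variance-scaled gap chosen here for brevity (the paper's criterion may differ).

```python
import numpy as np

def merge_pair(w1, m1, v1, w2, m2, v2):
    """Merge two weighted 1-D Gaussians, preserving the pair's mean and variance."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v

def reduce_mixture(weights, means, variances, max_terms):
    """Greedily merge the closest pair of components until max_terms remain."""
    comps = list(zip(weights, means, variances))
    while len(comps) > max_terms:
        best, pair = None, None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                # Cheap closeness proxy: squared mean gap over pooled variance.
                d = (comps[i][1] - comps[j][1]) ** 2 / (comps[i][2] + comps[j][2])
                if best is None or d < best:
                    best, pair = d, (i, j)
        i, j = pair
        merged = merge_pair(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    w, m, v = map(np.array, zip(*comps))
    return w, m, v
```

Because each merge preserves the merged pair's first two moments, the overall mixture mean is unchanged by the reduction even as the number of terms is capped.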
Incremental Learning of Nonparametric Bayesian Mixture Models
Clustering is a fundamental task in many vision applications.
To date, most clustering algorithms work in a
batch setting and training examples must be gathered in a
large group before learning can begin. Here we explore
incremental clustering, in which data can arrive continuously.
We present a novel incremental model-based clustering
algorithm based on nonparametric Bayesian methods,
which we call Memory Bounded Variational Dirichlet
Process (MB-VDP). The number of clusters is determined
flexibly by the data and the approach can be used to automatically
discover object categories. The computational cost of producing a model
update is bounded and does not grow with the amount of data processed. The
technique is well suited to very large datasets, and we show
that our approach outperforms existing online alternatives
for learning nonparametric Bayesian mixture models.
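The idea of bounded-memory incremental clustering can be illustrated with a much simpler sketch than MB-VDP itself. The following is not the authors' variational algorithm: it is a DP-means-style online clusterer, where each cluster is summarized by sufficient statistics (count, running mean) so memory does not grow with the stream, and a new cluster is spawned when a point is farther than a threshold `lam` (an assumed parameter for illustration).

```python
import numpy as np

class OnlineClusters:
    def __init__(self, lam):
        self.lam = lam       # distance threshold for spawning a new cluster
        self.counts = []     # per-cluster point counts
        self.means = []      # per-cluster running means

    def observe(self, x):
        """Assign one streaming point; return its cluster index."""
        x = np.asarray(x, dtype=float)
        if self.means:
            d = [np.linalg.norm(x - m) for m in self.means]
            k = int(np.argmin(d))
            if d[k] <= self.lam:
                # Update the running mean from sufficient statistics only.
                self.counts[k] += 1
                self.means[k] += (x - self.means[k]) / self.counts[k]
                return k
        # No cluster is close enough: the data itself creates a new one.
        self.counts.append(1)
        self.means.append(x.copy())
        return len(self.means) - 1
```

As in the abstract, the number of clusters is not fixed in advance: it grows only when the data demand it, and the per-update cost depends on the number of clusters, not on the number of points seen so far.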
A Tutorial on Bayesian Nonparametric Models
A key problem in statistical modeling is model selection: how to choose a
model at an appropriate level of complexity. This problem appears in many
settings, most prominently in choosing the number of clusters in mixture models
or the number of factors in factor analysis. In this tutorial we describe
Bayesian nonparametric methods, a class of methods that side-steps this issue
by allowing the data to determine the complexity of the model. This tutorial is
a high-level introduction to Bayesian nonparametric methods and contains
several examples of their application. Comment: 28 pages, 8 figures.
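The canonical construction behind these methods, the stick-breaking representation of the Dirichlet process, fits in a few lines. This sketch draws a truncated set of mixture weights pi_k = beta_k * prod_{j<k}(1 - beta_j) with beta_k ~ Beta(1, alpha); the truncation level `n_terms` is an implementation convenience, not part of the model.

```python
import numpy as np

def stick_breaking(alpha, n_terms, rng=None):
    """Draw truncated Dirichlet-process weights via stick-breaking."""
    rng = np.random.default_rng(rng)
    betas = rng.beta(1.0, alpha, size=n_terms)          # stick proportions
    # Length of stick remaining before each break: prod_{j<k}(1 - beta_j).
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining   # weights; their sum approaches 1 as n_terms grows
```

Smaller `alpha` concentrates mass on a few components, larger `alpha` spreads it over many, which is exactly how the data are allowed to determine model complexity.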
How Many Communities Are There?
Stochastic blockmodels and variants thereof are among the most widely used
approaches to community detection for social networks and relational data. A
stochastic blockmodel partitions the nodes of a network into disjoint sets,
called communities. The approach is inherently related to clustering with
mixture models and raises a similar model selection problem for the number of
communities. The Bayesian information criterion (BIC) is a popular solution;
however, for stochastic blockmodels, the assumption that distinct edges are
conditionally independent given the communities of their endpoints is usually
violated in practice. In this regard, we propose composite likelihood BIC
(CL-BIC) to select the number of communities, and we show it is robust against
possible misspecifications in the underlying stochastic blockmodel assumptions.
We derive the requisite methodology and illustrate the approach using both
simulated and real data. Supplementary materials containing the relevant
computer code are available online. Comment: 26 pages, 3 figures.
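The generative assumption at issue, that each edge is drawn independently with a probability depending only on its endpoints' communities, can be sketched directly. This is a generic SBM sampler for illustration, not the paper's code; `z` holds community labels and `P` is the K x K block edge-probability matrix.

```python
import numpy as np

def sample_sbm(z, P, rng=None):
    """Sample a symmetric, loop-free adjacency matrix from a stochastic blockmodel.

    z: length-n array of integer community labels.
    P: K x K matrix; P[a, b] is the edge probability between communities a and b.
    """
    rng = np.random.default_rng(rng)
    z = np.asarray(z)
    probs = P[np.ix_(z, z)]                        # per-pair edge probabilities
    upper = np.triu(rng.random(probs.shape) < probs, k=1)
    return (upper | upper.T).astype(int)           # symmetrize, no self-loops
```

When real networks violate this edge-independence (e.g. through transitivity), the ordinary BIC's likelihood is misspecified, which is the failure mode CL-BIC is designed to be robust against.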