Adaptive Low-Complexity Sequential Inference for Dirichlet Process Mixture Models
We develop a sequential low-complexity inference procedure for Dirichlet
process mixtures of Gaussians for online clustering and parameter estimation
when the number of clusters is unknown a priori. We present an easily
computable, closed form parametric expression for the conditional likelihood,
in which hyperparameters are recursively updated as a function of the streaming
data assuming conjugate priors. Motivated by large-sample asymptotics, we
propose a novel adaptive low-complexity design for the Dirichlet process
concentration parameter and show that the number of classes grows at most at a
logarithmic rate. We further prove that in the large-sample limit, the
conditional likelihood and data predictive distribution become asymptotically
Gaussian. We demonstrate through experiments on synthetic and real data sets
that our approach is superior to other state-of-the-art online methods.
Comment: 25 pages, to appear in Advances in Neural Information Processing
Systems (NIPS) 201
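The logarithmic growth of the number of classes claimed in this abstract mirrors a well-known property of the Chinese restaurant process representation of the Dirichlet process prior. A minimal Python simulation (a plain CRP with a fixed concentration parameter, not the paper's adaptive design; all names are illustrative) shows that the expected number of occupied clusters after n points is roughly alpha * log(n):

```python
import math
import random

def crp_sample(n, alpha):
    """Sequentially seat n points under a Chinese restaurant process
    with concentration alpha; return the number of occupied clusters.
    Point i joins cluster k with prob. count_k / (i + alpha), or opens
    a new cluster with prob. alpha / (i + alpha)."""
    counts = []  # sizes of the occupied clusters
    for i in range(n):
        r = random.uniform(0.0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                break
        else:
            counts.append(1)  # new cluster
    return len(counts)

random.seed(0)
# E[#clusters] ~ alpha * log(n): compare the empirical mean to alpha*log(n)
for n in (100, 1000, 10000):
    ks = [crp_sample(n, alpha=1.0) for _ in range(50)]
    print(n, sum(ks) / len(ks), math.log(n))
```

For alpha = 1 and n = 1000 the expected count is about log(1000) ≈ 6.9, so the printed empirical means stay close to the logarithmic reference curve rather than growing linearly in n.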
Modulation Classification for MIMO-OFDM Signals via Approximate Bayesian Inference
The problem of modulation classification for a multiple-antenna (MIMO) system
employing orthogonal frequency division multiplexing (OFDM) is investigated
under the assumption of unknown frequency-selective fading channels and
signal-to-noise ratio (SNR). The classification problem is formulated as a
Bayesian inference task, and solutions are proposed based on Gibbs sampling and
mean field variational inference. The proposed methods rely on a selection of
the prior distributions that adopts a latent Dirichlet model for the modulation
type and on the Bayesian network formalism. The Gibbs sampling method converges
to the optimal Bayesian solution and, using numerical results, its accuracy is
seen to improve for small sample sizes when switching to the mean field
variational inference technique after a number of iterations. The speed of
convergence is shown to improve via annealing and random restarts. While most
of the literature on modulation classification assumes that the channels are
flat fading, that the number of receive antennas is no less than that of
transmit antennas, and that a large number of observed data symbols are
available, the proposed methods perform well under more general conditions.
Finally, the proposed Bayesian methods are demonstrated to improve over
existing non-Bayesian approaches based on independent component analysis and on
prior Bayesian methods based on the `superconstellation' method.
Comment: To appear in IEEE Trans. Veh. Technol.
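The Bayesian classification idea behind this abstract is easiest to see in a drastically simplified setting: a single antenna, a flat channel, and known noise variance. There the posterior over the modulation type is available in closed form by marginalizing the unknown transmitted symbol over each candidate constellation; Gibbs sampling and variational inference become necessary only once the channels and SNR are unknown, as in the paper. A toy Python sketch (not the paper's MIMO-OFDM model; all names are hypothetical):

```python
import numpy as np

def log_likelihood(y, constellation, noise_var):
    """Log-likelihood of observations y under one candidate constellation,
    marginalizing the transmitted symbol uniformly over the constellation
    (complex Gaussian noise, known variance; constants shared across
    hypotheses are dropped)."""
    d2 = np.abs(y[:, None] - constellation[None, :]) ** 2   # (N, M)
    per_symbol = -d2 / noise_var
    m = per_symbol.max(axis=1, keepdims=True)               # stabilized log-mean-exp
    return float(np.sum(m.squeeze(1)
                        + np.log(np.mean(np.exp(per_symbol - m), axis=1))))

bpsk = np.array([-1.0, 1.0])
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

rng = np.random.default_rng(0)
symbols = rng.choice(bpsk, size=200)                        # true modulation: BPSK
noise = 0.3 * (rng.standard_normal(200) + 1j * rng.standard_normal(200)) / np.sqrt(2)
y = symbols + noise                                         # noise variance 0.09

scores = {name: log_likelihood(y, c, noise_var=0.09)
          for name, c in [("BPSK", bpsk), ("QPSK", qpsk)]}
print(max(scores, key=scores.get))
```

With equal prior probabilities the classifier simply picks the constellation with the largest marginal log-likelihood, which at this SNR reliably recovers BPSK.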
Estimating Discrete Markov Models From Various Incomplete Data Schemes
The parameters of a discrete stationary Markov model are transition
probabilities between states. Traditionally, data consist of sequences of
observed states for a given number of individuals over the whole observation
period. In such a case, the estimation of transition probabilities is
straightforwardly made by counting one-step moves from a given state to
another. In many real-life problems, however, the inference is much more
difficult because state sequences are not fully observed: the state of each
individual is known only for some given values of the time variable. A review
of the problem is given, focusing on Markov chain Monte Carlo (MCMC) algorithms
to perform Bayesian inference and evaluate posterior distributions of the
transition probabilities in this missing-data framework. Exploiting the
dependence between the rows of the transition matrix, an adaptive MCMC
mechanism accelerating the classical Metropolis-Hastings algorithm is then
proposed and empirically studied.
Comment: 26 pages. Preprint accepted on 20 February 2012 for publication in
Computational Statistics and Data Analysis (please cite the journal's paper
- …
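The fully observed case described above (counting one-step moves between states) can be sketched in a few lines of Python (function and variable names are illustrative):

```python
from collections import defaultdict

def estimate_transition_matrix(sequences, states):
    """Maximum-likelihood estimate of the transition probabilities of a
    discrete stationary Markov model from fully observed state sequences:
    count one-step moves from each state and normalize each row.
    (With a per-row Dirichlet(1,...,1) prior, the Bayesian posterior for
    a row is Dirichlet(1 + counts) instead.)"""
    counts = {s: defaultdict(int) for s in states}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    P = {}
    for s in states:
        total = sum(counts[s].values())
        P[s] = {t: (counts[s][t] / total if total else 0.0) for t in states}
    return P

# Two observed sequences over states A and B
seqs = [list("AABBA"), list("ABAB")]
P = estimate_transition_matrix(seqs, ["A", "B"])
print(P)
```

When the sequences are only partially observed, as in the abstract, these counts are no longer available and the row posteriors must instead be explored by MCMC over the missing states.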