197 research outputs found
Reversible jump MCMC for two-state multivariate Poisson mixtures
The problem of identifying the source of observations from a Poisson process arises in fault-diagnostic systems based on event counters. The inner state of the system must be identified from counter observations that carry only information on the total sum of certain events from a dual process, which has made a transition from an intact to a broken state at some unknown time. Here we demonstrate the general identifiability of this problem in the presence of multiple counters.
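As a minimal illustration of the model class in the title, the following sketch evaluates the log-likelihood of a two-state mixture in which each counter is an independent Poisson within a state. The rates, weight, and observed counts are hypothetical; the paper's reversible jump machinery for inferring the model is not shown.

```python
import math

def pois_logpmf(k, lam):
    # log Poisson pmf: k*log(lam) - lam - log(k!)
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def mixture_loglik(counts, lam_intact, lam_broken, w):
    """Log-likelihood of counter vectors under a two-state mixture:
    each observation comes from the intact state (rates lam_intact)
    with probability w, else from the broken state (rates lam_broken);
    counters are treated as independent Poissons within a state."""
    total = 0.0
    for x in counts:
        li = sum(pois_logpmf(k, l) for k, l in zip(x, lam_intact))
        lb = sum(pois_logpmf(k, l) for k, l in zip(x, lam_broken))
        # log-sum-exp of the two weighted component terms
        a, b = li + math.log(w), lb + math.log(1 - w)
        m = max(a, b)
        total += m + math.log(math.exp(a - m) + math.exp(b - m))
    return total

# Example: two counters, three observed count vectors (hypothetical numbers)
obs = [(2, 0), (5, 3), (1, 1)]
ll = mixture_loglik(obs, lam_intact=(1.0, 0.5), lam_broken=(4.0, 2.0), w=0.7)
```

When the two rate vectors coincide, the mixture collapses to a single Poisson model regardless of the weight, which is the degenerate case where the source is not identifiable.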
Model based clustering of multinomial count data
We consider the problem of inferring an unknown number of clusters in replicated multinomial data. From a model-based clustering point of view, this task can be treated by estimating finite mixtures of multinomial distributions, with or without covariates. Both Maximum Likelihood (ML) and Bayesian estimation are considered. Under the ML approach, we provide an Expectation-Maximization (EM) algorithm which exploits a careful initialization procedure combined with a ridge-stabilized implementation of the Newton-Raphson method in the M-step. Under the Bayesian setup, a stochastic gradient Markov chain Monte Carlo (MCMC) algorithm embedded within a prior parallel tempering scheme is devised. The number of clusters is selected according to the Integrated Completed Likelihood criterion in the ML approach, and by estimating the number of non-empty components in overfitting mixture models in the Bayesian case. Our method is illustrated on simulated data and applied to two real datasets. An R package is available at https://github.com/mqbssppe/multinomialLogitMix
Comment: to appear in ADA
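The covariate-free case of the EM algorithm described above can be sketched compactly: without covariates the M-step has a closed form, so the paper's ridge-stabilized Newton-Raphson machinery (needed when covariates enter through a multinomial logit) does not appear here. This is a generic illustration, not the package's implementation.

```python
import numpy as np

def multinomial_mixture_em(X, K, n_iter=200, seed=0, eps=1e-10):
    """EM for a K-component mixture of multinomials on a count matrix X
    (n samples x d categories). The multinomial coefficient is constant
    across components, so it cancels in the responsibilities."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                   # mixing weights
    theta = rng.dirichlet(np.ones(d), size=K)  # per-component category probs
    for _ in range(n_iter):
        # E-step: responsibilities, computed in the log domain for stability
        logp = np.log(pi)[None, :] + X @ np.log(theta + eps).T  # (n, K)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form weighted category frequencies
        pi = r.mean(axis=0)
        theta = (r.T @ X) + eps
        theta /= theta.sum(axis=1, keepdims=True)
    return pi, theta, r

# Hypothetical demo: two well-separated multinomial sources
rng = np.random.default_rng(1)
X = np.vstack([rng.multinomial(50, [0.8, 0.1, 0.1], size=40),
               rng.multinomial(50, [0.1, 0.1, 0.8], size=40)]).astype(float)
pi, theta, resp = multinomial_mixture_em(X, K=2)
```

A careful initialization, as the abstract stresses, matters in practice: a single random Dirichlet start can land in a poor local optimum, which is why multiple restarts or short-run initializations are commonly used.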
Contributions to MCMC Methods in Constrained Domains with Applications to Neuroimaging
Markov chain Monte Carlo (MCMC) methods form a rich class of computational techniques that help users draw samples from target distributions when direct sampling is not possible or when closed forms are intractable. Over the years, MCMC methods have been used in innumerable situations thanks to their flexibility and generality, even with nonlinear and/or highly parameterized models. In this dissertation, two major works relating to MCMC methods are presented.
The first involves the development of a method to identify the number and directions of nerve fibers from diffusion-weighted MRI measurements. The biological problem is first formulated as a joint model selection and estimation problem. Using the framework of reversible jump MCMC, a novel Bayesian scheme is proposed that performs both tasks simultaneously with customizable priors and proposal distributions. The method allows users to set a prior level of spatial separation between nerve fibers, so that more crossing paths can be detected when desired, or fewer, to pick out only robust nerve tracts. Estimation can thus be tailored to a given region of interest within the brain. In simulated examples, the method resolved up to four fibers even in highly noisy data. Comparative analysis with other state-of-the-art methods on in-vivo data showed the method's ability to detect more crossing nerve fibers.
The second work involves the construction of an MCMC algorithm that efficiently performs Bayesian sampling of parameters with support constraints. The method works by embedding a transformation, inversion in a sphere, within the Metropolis-Hastings sampler. This creates an image of the constrained support that is amenable to sampling with standard proposals such as Gaussians. The proposed strategy is tested on three domains: the standard simplex, a sector of an n-sphere, and hypercubes. In each domain, a comparison is made with existing sampling techniques.
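The inversion-in-a-sphere idea can be sketched for one of the listed domains, the hypercube. The map T(y) = c + r^2 (y - c) / ||y - c||^2 is an involution whose Jacobian determinant has magnitude (r^2/||y - c||^2)^d, so a random walk in the unbounded image, with a Jacobian correction, targets the constrained density. The center, radius, step size, and uniform target below are arbitrary choices for illustration; the dissertation's algorithm may differ in its details.

```python
import numpy as np

def invert(y, c, r=1.0):
    """Inversion in the sphere of radius r centered at c (an involution)."""
    d2 = np.sum((y - c) ** 2)
    return c + (r ** 2) * (y - c) / d2

def log_jac(y, c, r=1.0):
    """log |det| of the inversion map in d dimensions: (r^2/||y-c||^2)^d."""
    d = len(y)
    return d * (2 * np.log(r) - np.log(np.sum((y - c) ** 2)))

def mh_on_cube(logpi, d=2, n=20000, step=0.5, seed=0):
    """Random-walk Metropolis in the inverted space for a target on (0,1)^d.
    The bounded cube maps to an unbounded image, where plain Gaussian
    proposals need no boundary handling."""
    rng = np.random.default_rng(seed)
    c = np.full(d, 0.5)           # sphere center inside the cube
    x = np.full(d, 0.25)          # start inside the cube
    y = invert(x, c)
    out = np.empty((n, d))
    for i in range(n):
        y_new = y + step * rng.standard_normal(d)
        x_new = invert(y_new, c)  # involution maps back to the original space
        if np.all((x_new > 0) & (x_new < 1)):
            loga = (logpi(x_new) + log_jac(y_new, c)
                    - logpi(x) - log_jac(y, c))
            if np.log(rng.random()) < loga:
                x, y = x_new, y_new
        out[i] = x
    return out

samples = mh_on_cube(lambda x: 0.0, d=2)  # uniform target on the unit square
```

Because T is its own inverse, proposing in y-space and transporting back to x-space requires only the single map above, which keeps the sampler simple.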
Adaptive MCMC with online relabeling
When targeting a distribution that is artificially invariant under some
permutations, Markov chain Monte Carlo (MCMC) algorithms face the
label-switching problem, rendering marginal inference particularly cumbersome.
Such a situation arises, for example, in the Bayesian analysis of finite
mixture models. Adaptive MCMC algorithms such as adaptive Metropolis (AM),
which self-calibrates its proposal distribution using an online estimate of the
covariance matrix of the target, are no exception. To address the
label-switching issue, relabeling algorithms associate a permutation to each
MCMC sample, trying to obtain reasonable marginals. In the case of adaptive
Metropolis (Bernoulli 7 (2001) 223-242), an online relabeling strategy is
required. This paper is devoted to the AMOR algorithm, a provably consistent
variant of AM that can cope with the label-switching problem. The idea is to
nest relabeling steps within the MCMC algorithm based on the estimation of a
single covariance matrix that is used both for adapting the covariance of the
proposal distribution in the Metropolis algorithm step and for online
relabeling. We compare the behavior of AMOR to similar relabeling methods. In
the case of compactly supported target distributions, we prove a strong law of
large numbers for AMOR and its ergodicity. These are the first results on the
consistency of an online relabeling algorithm to our knowledge. The proof
underlines latent relations between relabeling and vector quantization.
Comment: Published at http://dx.doi.org/10.3150/13-BEJ578 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
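The core of an online relabeling step as described above can be sketched as follows: at each iteration, pick the permutation of the component blocks that minimizes the Mahalanobis distance to the running mean under the running covariance estimate. This is a generic illustration of the relabeling criterion only; AMOR additionally adapts the same covariance for the Metropolis proposal, which is not shown.

```python
import itertools
import numpy as np

def relabel(x, mu, sigma_inv, K, p):
    """Pick the permutation of the K component blocks of x (each of length p)
    minimizing the Mahalanobis distance to the running mean mu under the
    running precision matrix sigma_inv."""
    x = np.asarray(x, dtype=float)
    best, best_d = x, np.inf
    for perm in itertools.permutations(range(K)):
        xp = np.concatenate([x[k * p:(k + 1) * p] for k in perm])
        diff = xp - mu
        dist = diff @ sigma_inv @ diff
        if dist < best_d:
            best, best_d = xp, dist
    return best

# Running estimates for a 2-component, 1-dimensional mixture (hypothetical)
mu = np.array([0.0, 1.0])
sigma_inv = np.eye(2)
# A label-switched draw near (1, 0) is flipped back toward (0, 1)
y = relabel([1.1, -0.1], mu, sigma_inv, K=2, p=1)
```

In AMOR the mean and covariance used here are themselves updated online from the relabeled samples, which is exactly the coupling whose consistency the paper proves.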