Estimating the granularity coefficient of a Potts-Markov random field within an MCMC algorithm
This paper addresses the problem of estimating the Potts parameter B jointly
with the unknown parameters of a Bayesian model within a Markov chain Monte
Carlo (MCMC) algorithm. Standard MCMC methods cannot be applied to this problem
because performing inference on B requires computing the intractable
normalizing constant of the Potts model. In the proposed MCMC method the
estimation of B is conducted using a likelihood-free Metropolis-Hastings
algorithm. Experimental results obtained for synthetic data show that
estimating B jointly with the other unknown parameters leads to estimation
results that are as good as those obtained with the actual value of B. On the
other hand, assuming that the value of B is known can degrade estimation
performance significantly if this value is incorrect. To illustrate the
practical value of the method, the proposed algorithm is successfully applied to real
two-dimensional SAR and three-dimensional ultrasound images.
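The likelihood-free Metropolis-Hastings step described above can be sketched on a toy two-state Potts (Ising) field. Everything below is an illustrative assumption rather than the authors' implementation: the grid size, the number of Gibbs sweeps, the ABC tolerance `eps`, and the uniform prior on B are all chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_potts2(beta, shape=(32, 32), sweeps=80):
    """Approximate sample from a 2-state Potts (Ising) field via
    checkerboard Gibbs sweeps (illustrative, not a perfect sampler)."""
    x = rng.integers(0, 2, size=shape)
    colour = (np.indices(shape).sum(axis=0) % 2) == 0
    for _ in range(sweeps):
        for mask in (colour, ~colour):
            pad = np.pad(x, 1, constant_values=-1)    # border matches nothing
            n1 = ((pad[:-2, 1:-1] == 1).astype(int) + (pad[2:, 1:-1] == 1)
                  + (pad[1:-1, :-2] == 1) + (pad[1:-1, 2:] == 1))
            n0 = ((pad[:-2, 1:-1] == 0).astype(int) + (pad[2:, 1:-1] == 0)
                  + (pad[1:-1, :-2] == 0) + (pad[1:-1, 2:] == 0))
            p1 = 1.0 / (1.0 + np.exp(beta * (n0 - n1)))
            x[mask] = (rng.random(shape) < p1)[mask]
    return x

def suff_stat(x):
    """Number of like-labelled neighbour pairs (sufficient statistic for B)."""
    return int((x[:-1, :] == x[1:, :]).sum() + (x[:, :-1] == x[:, 1:]).sum())

def abc_mh(x_obs, n_iter=150, eps=120.0, step=0.15, lo=0.0, hi=2.0):
    """Likelihood-free MH for B: accept a proposal when a field simulated at
    the proposed value reproduces the observed statistic within eps, so the
    intractable normalizing constant never has to be evaluated."""
    beta, s_obs, trace = 1.0, suff_stat(x_obs), []
    for _ in range(n_iter):
        prop = beta + step * rng.normal()
        if lo <= prop <= hi:                          # uniform prior on [lo, hi]
            y = gibbs_potts2(prop)
            if abs(suff_stat(y) - s_obs) <= eps:      # ABC acceptance test
                beta = prop
        trace.append(beta)
    return trace
```

The acceptance test replaces the intractable likelihood ratio with a simulation-based comparison of sufficient statistics, which is the sense in which the sampler is "likelihood-free".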
Bayesian computation for statistical models with intractable normalizing constants
This paper deals with some computational aspects in the Bayesian analysis of
statistical models with intractable normalizing constants. In the presence of
intractable normalizing constants in the likelihood function, traditional MCMC
methods cannot be applied. We propose an approach to sample from such posterior
distributions. The method can be thought of as a Bayesian version of the MCMC-MLE
approach of Geyer and Thompson (1992). To the best of our knowledge, this is
the first general and asymptotically consistent Monte Carlo method for such
problems. We illustrate the method with examples from image segmentation and
social network modeling. We also study the asymptotic behavior of the
algorithm and obtain a strong law of large numbers for empirical averages.
Comment: 20 pages, 4 figures, submitted for publication
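The core device in MCMC-MLE-style methods, estimating a ratio of normalizing constants from samples drawn at a reference parameter, can be sketched on a toy exponential family where Z is computable exactly so the estimate can be checked. The model and parameter values below are illustrative assumptions, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy exponential family on {0, ..., K}: f(x | theta) = exp(theta * x) / Z(theta),
# chosen so the normalizing constant has a closed form for verification.
K = 20
xs = np.arange(K + 1)

def Z(theta):
    return np.exp(theta * xs).sum()

def sample(theta, n):
    p = np.exp(theta * xs) / Z(theta)
    return rng.choice(xs, size=n, p=p)

def ratio_estimate(theta, theta0, draws):
    # Identity behind MCMC-MLE: Z(theta)/Z(theta0) = E_{theta0}[exp((theta - theta0) X)],
    # so draws from the tractable reference theta0 estimate the intractable ratio.
    return np.exp((theta - theta0) * draws).mean()

draws = sample(0.1, 100_000)
est = ratio_estimate(0.3, 0.1, draws)
exact = Z(0.3) / Z(0.1)
```

In the intractable setting the draws would come from a long MCMC run at the reference parameter rather than exact sampling, which is where the asymptotic analysis in the abstract becomes relevant.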
Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging
Many analyses of neuroimaging data involve studying one or more regions of
interest (ROIs) in a brain image. In order to do so, each ROI must first be
identified. Since every brain is unique, the location, size, and shape of each
ROI varies across subjects. Thus, each ROI in a brain image must either be
manually identified or (semi-) automatically delineated, a task referred to as
segmentation. Automatic segmentation often involves mapping a previously
manually segmented image to a new brain image and propagating the labels to
obtain an estimate of where each ROI is located in the new image. A more recent
approach to this problem is to propagate labels from multiple manually
segmented atlases and combine the results using a process known as label
fusion. To date, most label fusion algorithms either employ voting procedures
or impose prior structure and subsequently find the maximum a posteriori
estimator (i.e., the posterior mode) through optimization. We propose using a
fully Bayesian spatial regression model for label fusion that facilitates
direct incorporation of covariate information while making accessible the
entire posterior distribution. We discuss the implementation of our model via
Markov chain Monte Carlo and illustrate the procedure through both simulation
and application to segmentation of the hippocampus, an anatomical structure
known to be associated with Alzheimer's disease.
Comment: 24 pages, 10 figures
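As a much simpler, non-spatial stand-in for the model above, the contrast between voting and fully Bayesian fusion can be sketched per voxel with a Beta-Binomial model. The atlas count, accuracy, and prior below are hypothetical, and the paper's actual model additionally encodes spatial structure and covariates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: R registered atlases each propose a binary label per voxel.
R, n_vox, acc = 7, 1000, 0.85
truth = rng.random(n_vox) < 0.3                      # unknown ROI membership
votes = np.where(rng.random((R, n_vox)) < acc, truth, ~truth).astype(int)

# Voting yields only a point estimate of each label ...
majority = votes.sum(axis=0) > R / 2

# ... whereas a Beta(1, 1) prior on each voxel's inclusion probability
# yields a full posterior, Beta(1 + k, 1 + R - k), from k agreeing atlases.
k = votes.sum(axis=0)
post_mean = (k + 1) / (R + 2)
uncertain = (post_mean > 0.2) & (post_mean < 0.8)    # voxels where atlases disagree
```

The point of having the whole posterior, as the abstract argues, is that voxels flagged `uncertain` carry their disagreement forward into downstream analyses instead of being silently hardened into a single label.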
Exact Bayesian curve fitting and signal segmentation
We consider regression models where the underlying functional relationship between the response and the explanatory variable is modeled as independent linear regressions on disjoint segments. We present an algorithm for perfect simulation from the posterior distribution of such a model, even allowing for an unknown number of segments and an unknown model order for the linear regressions within each segment. The algorithm is simple, can scale well to large data sets, and avoids the problem of diagnosing convergence that is present with Markov chain Monte Carlo (MCMC) approaches to this problem. We demonstrate our algorithm on standard denoising problems, on a piecewise constant AR model, and on a speech segmentation problem.
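The backward-recursion-plus-forward-simulation scheme that makes such perfect simulation possible can be sketched for the simplest case, piecewise-constant means with Gaussian noise. The conjugate prior, noise variance, and geometric changepoint prior `p` below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(3)

def seg_loglik(y, s, t, sigma2=1.0, tau2=10.0):
    """Log marginal likelihood of y[s:t+1]: segment mean ~ N(0, tau2)
    integrated out analytically, observation noise N(0, sigma2)."""
    n = t - s + 1
    seg = y[s:t + 1]
    v = 1.0 / (n / sigma2 + 1.0 / tau2)
    m = v * seg.sum() / sigma2
    return (-0.5 * n * np.log(2 * np.pi * sigma2) + 0.5 * np.log(v / tau2)
            - 0.5 * (seg ** 2).sum() / sigma2 + 0.5 * m ** 2 / v)

def logsumexp(a):
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

def sample_changepoints(y, p=0.05):
    """Exact posterior draw: a backward recursion computes
    Q[t] = log P(y[t:] | a segment starts at t), then segment ends are
    simulated forward from their exact conditionals (no MCMC, hence no
    convergence diagnosis)."""
    n = len(y)
    Q = np.zeros(n + 1)

    def terms(t):
        return np.array([seg_loglik(y, t, s) + (s - t) * np.log(1 - p)
                         + (np.log(p) if s < n - 1 else 0.0) + Q[s + 1]
                         for s in range(t, n)])

    for t in range(n - 1, -1, -1):
        Q[t] = logsumexp(terms(t))
    cps, t = [], 0
    while t < n:
        w = terms(t)
        probs = np.exp(w - logsumexp(w))
        s = t + int(rng.choice(len(probs), p=probs / probs.sum()))
        if s < n - 1:
            cps.append(s)          # last index of a segment
        t = s + 1
    return cps
```

The full method in the abstract extends this to per-segment linear regressions of unknown order, but the two-pass structure (exact backward probabilities, then exact forward sampling) is the same.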
Labor Market Entry and Earnings Dynamics: Bayesian Inference Using Mixtures-of-Experts Markov Chain Clustering
This paper analyzes patterns in the earnings development of young labor market entrants over their life cycle. We identify four distinct types of transition patterns between discrete earnings states in a large administrative data set. Further, we investigate the effects of labor market conditions at the time of entry on the probability of belonging to each transition type. To estimate our statistical model we use a model-based clustering approach. The statistical challenge in our application comes from the difficulty of extending distance-based clustering approaches to the problem of identifying groups of similar time series in a panel of discrete-valued time series. We use Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter (2010), an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to analyze group membership we present an extension to this approach by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model.
Keywords: Labor Market Entry Conditions, Transition Data, Markov Chain Monte Carlo, Multinomial Logit, Panel Data, Auxiliary Mixture Sampler, Bayesian Statistics
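A minimal sketch of the underlying finite-mixture-of-Markov-chains idea is an EM fit to transition-count matrices; the paper itself uses Bayesian MCMC estimation (with an auxiliary mixture sampler), so this is a simplified stand-in. The two transition matrices, group sizes, and sequence lengths below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(T_mat, length):
    """Simulate a first-order Markov chain with transition matrix T_mat."""
    s = [0]
    for _ in range(length - 1):
        s.append(int(rng.choice(len(T_mat), p=T_mat[s[-1]])))
    return np.array(s)

# Two hypothetical groups with different transition dynamics.
T1 = np.array([[0.9, 0.1], [0.1, 0.9]])   # "sticky" chains
T2 = np.array([[0.3, 0.7], [0.7, 0.3]])   # "switchy" chains
seqs = [simulate(T1, 200) for _ in range(20)] + [simulate(T2, 200) for _ in range(20)]

def transition_counts(seq, K=2):
    C = np.zeros((K, K))
    np.add.at(C, (seq[:-1], seq[1:]), 1)   # C[a, b] = # of a -> b transitions
    return C

counts = np.array([transition_counts(s) for s in seqs])

def em_markov_mixture(counts, G=2, iters=100):
    """EM for a G-component mixture of first-order Markov chains,
    working directly on per-sequence transition-count matrices."""
    n, K, _ = counts.shape
    pi = np.full(G, 1.0 / G)
    T = rng.dirichlet(np.ones(K), size=(G, K))          # random stochastic rows
    for _ in range(iters):
        # E-step: responsibility of group g for sequence i
        logp = np.einsum('ikl,gkl->ig', counts, np.log(T)) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reweighted transition matrices and mixture weights
        pi = r.mean(axis=0) + 1e-12
        w = np.einsum('ig,ikl->gkl', r, counts) + 1e-9
        T = w / w.sum(axis=2, keepdims=True)
    return pi, T, r
```

Group membership then falls out of the responsibilities `r`; the paper's extension goes further by letting covariates drive those memberships through a multinomial logit model.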
Multiple Testing for Neuroimaging via Hidden Markov Random Field
Traditional voxel-level multiple testing procedures in neuroimaging, mostly
p-value based, often ignore the spatial correlations among neighboring voxels
and thus suffer from substantial loss of power. We extend the
local-significance-index based procedure originally developed for the hidden
Markov chain models, which aims to minimize the false nondiscovery rate subject
to a constraint on the false discovery rate, to three-dimensional neuroimaging
data using a hidden Markov random field model. A generalized
expectation-maximization algorithm for maximizing the penalized likelihood is
proposed for estimating the model parameters. Extensive simulations show that
the proposed approach is more powerful than conventional false discovery rate
procedures. We apply the method to the comparison between mild cognitive
impairment, a disease status with increased risk of developing Alzheimer's or
another dementia, and normal controls in the FDG-PET imaging study of the
Alzheimer's Disease Neuroimaging Initiative.
Comment: A MATLAB package implementing the proposed FDR procedure is available
with this paper at the Biometrics website on Wiley Online Library.
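The thresholding step behind such local-index procedures can be sketched directly: given each voxel's posterior probability of being null (however it was estimated, e.g. from a hidden Markov random field), sort them and reject while the running mean stays below the target FDR level. This is a schematic version, and the numbers below are invented.

```python
import numpy as np

def lis_threshold(lis, alpha=0.1):
    """Step-up rule on local-significance-index values: reject the k hypotheses
    with the smallest null probabilities such that the average null probability
    among the rejections (the estimated FDR) stays <= alpha."""
    order = np.argsort(lis)
    running = np.cumsum(np.sort(lis)) / np.arange(1, len(lis) + 1)
    k = int(np.searchsorted(running, alpha, side='right'))  # running is nondecreasing
    reject = np.zeros(len(lis), dtype=bool)
    reject[order[:k]] = True
    return reject
```

Because the running mean of the sorted values is nondecreasing, the largest valid `k` can be found with a single binary search; the gain in power over p-value methods comes from the spatial model that produces the `lis` values, not from this final step.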