Hierarchical Bayesian sparse image reconstruction with application to MRFM
This paper presents a hierarchical Bayesian model to reconstruct sparse
images when the observations are obtained from linear transformations and
corrupted by additive white Gaussian noise. Our hierarchical Bayes model is
well suited to such naturally sparse image applications as it seamlessly
accounts for properties such as sparsity and positivity of the image via
appropriate Bayes priors. We propose a prior that is based on a weighted
mixture of a positive exponential distribution and a mass at zero. The prior
has hyperparameters that are tuned automatically by marginalization over the
hierarchical Bayesian model. To overcome the complexity of the posterior
distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be
used to estimate the image to be recovered, e.g. by maximizing the estimated
posterior distribution. In our fully Bayesian approach the posteriors of all
the parameters are available. Thus our algorithm provides more information than
other previously proposed sparse reconstruction methods that only give a point
estimate. The performance of our hierarchical Bayesian sparse reconstruction
method is illustrated on synthetic and real data collected from a tobacco virus
sample using a prototype MRFM instrument.
Comment: v2: final version; IEEE Trans. Image Processing, 200
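As intuition for the prior described above, the toy Python snippet below draws pixel values from a weighted mixture of a point mass at zero and a positive exponential distribution. The weight w and scale here are illustrative placeholders; in the paper the hyperparameters are tuned automatically by marginalization over the hierarchical model, not fixed by hand.

```python
import random

def sample_sparse_prior(n, w=0.3, scale=1.0, seed=None):
    """Draw n pixel values from a weighted mixture of a point mass at zero
    (probability 1 - w) and a positive exponential distribution
    (probability w). w and scale are illustrative hyperparameters only."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / scale) if rng.random() < w else 0.0
            for _ in range(n)]

x = sample_sparse_prior(10_000, w=0.2, seed=0)
zero_frac = sum(v == 0.0 for v in x) / len(x)
print(zero_frac)               # close to 1 - w = 0.8: most pixels exactly zero
print(all(v >= 0.0 for v in x))  # positivity is built into the prior
```

Such a spike-plus-positive-exponential mixture yields exact zeros and nonnegative intensities, the two structural properties the abstract highlights.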
Collaborative Hierarchical Sparse Modeling
Sparse modeling is a powerful framework for data analysis and processing.
Traditionally, encoding in this framework is done by solving an l_1-regularized
linear regression problem, usually called Lasso. In this work we first combine
the sparsity-inducing property of the Lasso model, at the individual feature
level, with the block-sparsity property of the group Lasso model, where sparse
groups of features are jointly encoded, obtaining a hierarchically
structured sparsity pattern. This results in the hierarchical Lasso, which shows
important practical modeling advantages. We then extend this approach to the
collaborative case, where a set of simultaneously coded signals share the same
sparsity pattern at the higher (group) level but not necessarily at the lower
one. Signals then share the same active groups, or classes, but not necessarily
the same active set. This is very well suited for applications such as source
separation. An efficient optimization procedure, which guarantees convergence
to the global optimum, is developed for these new models. The presentation
of the new framework and optimization approach is complemented
with experimental examples and preliminary theoretical results.
Comment: To appear in CISS 201
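The hierarchical sparsity pattern described above can be illustrated with the proximal operator of a combined l_1 plus group-l_2 penalty, which amounts to elementwise soft-thresholding followed by blockwise group shrinkage. This is a minimal sketch of the penalty structure only, not the paper's optimization procedure; the function names, weights, and data are illustrative.

```python
import math

def soft(v, t):
    """Elementwise soft-thresholding, the proximal operator of t*|v|."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def hilasso_prox(x, groups, lam1, lam2):
    """Proximal operator of lam1*||x||_1 + lam2*sum_g ||x_g||_2:
    first soft-threshold each coefficient (within-group sparsity),
    then shrink each group toward zero as a block (group sparsity).
    `groups` is a list of index lists; names are illustrative."""
    out = [0.0] * len(x)
    for g in groups:
        z = [soft(x[i], lam1) for i in g]
        norm = math.sqrt(sum(v * v for v in z))
        shrink = max(1.0 - lam2 / norm, 0.0) if norm > 0 else 0.0
        for i, v in zip(g, z):
            out[i] = shrink * v
    return out

x = [3.0, -0.5, 0.2, 0.1, -0.05, 0.02]
groups = [[0, 1, 2], [3, 4, 5]]
y = hilasso_prox(x, groups, lam1=0.3, lam2=0.5)
print(y)  # the weak second group is zeroed as a whole block,
          # while the first group also has a zero inside it
```

The output shows sparsity at both levels at once: entire groups switch off, and surviving groups can still contain individual zeros.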
Bayesian orthogonal component analysis for sparse representation
This paper addresses the problem of identifying a lower dimensional space
where observed data can be sparsely represented. This under-complete dictionary
learning task can be formulated as a blind separation problem of sparse sources
linearly mixed with an unknown orthogonal mixing matrix. This issue is
formulated in a Bayesian framework. First, the unknown sparse sources are
modeled as Bernoulli-Gaussian processes. To promote sparsity, a weighted
mixture of an atom at zero and a Gaussian distribution is proposed as prior
distribution for the unobserved sources. A non-informative prior distribution
defined on an appropriate Stiefel manifold is selected for the mixing matrix.
The Bayesian inference on the unknown parameters is conducted using a Markov
chain Monte Carlo (MCMC) method. A partially collapsed Gibbs sampler is
designed to generate samples asymptotically distributed according to the joint
posterior distribution of the unknown model parameters and hyperparameters.
These samples are then used to approximate the joint maximum a posteriori
estimator of the sources and mixing matrix. Simulations conducted on synthetic
data are reported to illustrate the performance of the method for recovering
sparse representations. An application to sparse coding with an under-complete
dictionary is finally investigated.
Comment: Revised version. Accepted to IEEE Trans. Signal Processing
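For intuition on the mixing-matrix constraint above, the snippet below uses classical Gram-Schmidt to build a matrix with orthonormal columns, i.e. a point on a Stiefel manifold. This only illustrates the constraint set on which the prior is defined; it is not the paper's partially collapsed Gibbs sampler.

```python
import math
import random

def gram_schmidt(cols):
    """Orthonormalize a list of column vectors (classical Gram-Schmidt).
    The result has orthonormal columns, so it lies on a Stiefel manifold,
    the constraint set the abstract places on the mixing matrix."""
    basis = []
    for v in cols:
        w = v[:]
        for b in basis:
            dot = sum(x * y for x, y in zip(w, b))
            w = [x - dot * y for x, y in zip(w, b)]
        norm = math.sqrt(sum(x * x for x in w))
        basis.append([x / norm for x in w])
    return basis

rng = random.Random(0)
# 4-dimensional observations, 2 sources: an under-complete setting
cols = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(2)]
q = gram_schmidt(cols)
print(sum(x * x for x in q[0]))                # unit norm
print(sum(x * y for x, y in zip(q[0], q[1])))  # mutual orthogonality
```

Orthogonality is what lets the sources be identified up to sign and permutation in this blind-separation formulation.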
Structured Sparse Modelling with Hierarchical GP
In this paper a new Bayesian model for sparse linear regression with a
spatio-temporal structure is proposed. It incorporates structural
assumptions through a hierarchical Gaussian process prior on the spike and
slab coefficients. We design an inference algorithm based on Expectation
Propagation and evaluate the model on real data.
Comment: SPARS 201
Variational semi-blind sparse deconvolution with orthogonal kernel bases and its application to MRFM
We present a variational Bayesian method for joint image reconstruction and point spread function (PSF) estimation when the PSF of the imaging device is only partially known. To solve this semi-blind deconvolution problem, prior distributions are specified for the PSF and the 3D image. Joint image reconstruction and PSF estimation is then performed within a Bayesian framework, using a variational algorithm to estimate the posterior distribution. The image prior distribution imposes an explicit atomic measure that corresponds to image sparsity. Importantly, the proposed Bayesian deconvolution algorithm does not require hand tuning. Simulation results clearly demonstrate that the semi-blind deconvolution algorithm compares favorably with a previous Markov chain Monte Carlo (MCMC) version of myopic sparse reconstruction. It significantly outperforms mismatched non-blind algorithms that rely on the assumption of perfect knowledge of the PSF. The algorithm is illustrated on real data from magnetic resonance force microscopy (MRFM).
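To illustrate the role of an orthogonal kernel basis in representing a partially known PSF, the toy snippet below expands the deviation of the true PSF from its nominal value in an orthonormal basis, where each coefficient is recovered by a plain inner product. The paper instead estimates such coefficients variationally, jointly with the image; everything here (signal length, basis, coefficients) is a simplified, hypothetical setup.

```python
import math

def project_coeffs(psf_err, bases):
    """Given an orthonormal kernel basis, a PSF deviation of the form
    psf_err = sum_k c_k * bases[k] has coefficients recoverable by
    simple inner products. Names and dimensions are illustrative."""
    return [sum(e * b for e, b in zip(psf_err, basis)) for basis in bases]

# toy 1-D "PSF deviation" of length 4 with a 2-vector orthonormal basis
bases = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0 / math.sqrt(2), 1.0 / math.sqrt(2), 0.0]]
true_c = [0.5, -0.2]
psf_err = [sum(c * b[i] for c, b in zip(true_c, bases)) for i in range(4)]
c = project_coeffs(psf_err, bases)
print(c)  # recovers the generating coefficients [0.5, -0.2]
```

Orthonormality is what makes the coefficient updates decouple, which is the computational appeal of the orthogonal-kernel-basis parameterization.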
Dependent Nonparametric Bayesian Group Dictionary Learning for online reconstruction of Dynamic MR images
In this paper, we introduce a dictionary learning based approach applied to
the problem of real-time reconstruction of MR image sequences that are highly
undersampled in k-space. Unlike traditional dictionary learning, our method
integrates both global and patch-wise (local) sparsity information and
incorporates some a priori information into the reconstruction process.
Moreover, we use a Dependent Hierarchical Beta-process as the prior for the
group-based dictionary learning, which adaptively infers the dictionary size
and the sparsity of each patch, and also ensures that similar patches are
represented by similar dictionary atoms. An efficient numerical algorithm
based on the alternating direction method of multipliers (ADMM) is also
presented. Through extensive experiments we show that our proposed method
achieves superior reconstruction quality compared to other state-of-the-art
DL-based methods.
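As a sketch of the ADMM machinery mentioned above, the snippet below applies the standard ADMM splitting to a one-dimensional l_1-denoising problem whose closed-form solution is elementwise soft-thresholding. This is only a toy instance under simplified assumptions; the paper's actual algorithm operates on a much larger dictionary-based reconstruction objective.

```python
def soft(v, t):
    """Soft-thresholding: the closed-form proximal operator of t*|v|."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def admm_l1_denoise(y, lam, rho=1.0, iters=100):
    """Minimal ADMM for min_x 0.5*||x - y||^2 + lam*||x||_1 via the
    split x = z. The known answer is soft-thresholding of y by lam,
    so convergence of the iteration is easy to check."""
    x = [0.0] * len(y)
    z = [0.0] * len(y)
    u = [0.0] * len(y)
    for _ in range(iters):
        # x-update: exact minimizer of the quadratic augmented term
        x = [(yi + rho * (zi - ui)) / (1.0 + rho)
             for yi, zi, ui in zip(y, z, u)]
        # z-update: proximal step on the l_1 penalty
        z = [soft(xi + ui, lam / rho) for xi, ui in zip(x, u)]
        # dual update (scaled dual ascent on the constraint x = z)
        u = [ui + xi - zi for ui, xi, zi in zip(u, x, z)]
    return z

y = [2.0, -0.3, 0.05, -1.5]
r = admm_l1_denoise(y, lam=0.5)
print(r)  # converges to soft-thresholding of y by 0.5
```

The same split-then-alternate pattern (quadratic solve, proximal step, dual update) is what scales ADMM to the patch-wise dictionary objective the abstract describes.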