    Blind deconvolution of sparse pulse sequences under a minimum distance constraint: a partially collapsed Gibbs sampler method

    For blind deconvolution of an unknown sparse sequence convolved with an unknown pulse, a powerful Bayesian method employs the Gibbs sampler in combination with a Bernoulli–Gaussian prior modeling sparsity. In this paper, we extend this method by introducing a minimum distance constraint for the pulses in the sequence. This is physically relevant in applications including layer detection, medical imaging, seismology, and multipath parameter estimation. We propose a Bayesian method for blind deconvolution that is based on a modified Bernoulli–Gaussian prior including a minimum distance constraint factor. The core of our method is a partially collapsed Gibbs sampler (PCGS) that tolerates and even exploits the strong local dependencies introduced by the minimum distance constraint. Simulation results demonstrate significant performance gains compared to a recently proposed PCGS. The main advantages of the minimum distance constraint are a substantial reduction of computational complexity and of the number of spurious components in the deconvolution result.
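    To make the Bernoulli–Gaussian model and the minimum distance constraint concrete, the sketch below draws a sparse spike sequence, discards spikes that fall within d_min samples of an already accepted one, and convolves the result with a pulse before adding noise. This is only a toy generative sketch under assumed parameter names (p_spike, sigma_a, d_min); it is not the paper's PCGS sampler.

```python
import numpy as np

def sample_bg_sequence(n, p_spike, sigma_a, d_min, rng=None):
    """Draw a sparse spike sequence from a Bernoulli-Gaussian prior,
    rejecting any spike closer than d_min samples to an accepted spike
    (a crude way to impose the minimum distance constraint)."""
    rng = np.random.default_rng(rng)
    support = np.flatnonzero(rng.random(n) < p_spike)
    accepted = []
    for k in support:
        if all(abs(k - j) >= d_min for j in accepted):
            accepted.append(k)
    x = np.zeros(n)
    x[accepted] = rng.normal(0.0, sigma_a, size=len(accepted))
    return x

def observe(x, pulse, sigma_n, rng=None):
    """Convolve the spike sequence with a pulse and add Gaussian noise."""
    rng = np.random.default_rng(rng)
    y = np.convolve(x, pulse, mode="same")
    return y + rng.normal(0.0, sigma_n, size=y.shape)

x = sample_bg_sequence(n=200, p_spike=0.05, sigma_a=1.0, d_min=10, rng=0)
pulse = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)  # illustrative Gaussian pulse
y = observe(x, pulse, sigma_n=0.05, rng=1)
```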

    Variational semi-blind sparse deconvolution with orthogonal kernel bases and its application to MRFM

    We present a variational Bayesian method of joint image reconstruction and point spread function (PSF) estimation when the PSF of the imaging device is only partially known. To solve this semi-blind deconvolution problem, prior distributions are specified for the PSF and the 3D image. Joint image reconstruction and PSF estimation is then performed within a Bayesian framework, using a variational algorithm to estimate the posterior distribution. The image prior distribution imposes an explicit atomic measure that corresponds to image sparsity. Importantly, the proposed Bayesian deconvolution algorithm does not require hand tuning. Simulation results clearly demonstrate that the semi-blind deconvolution algorithm compares favorably with a previous Markov chain Monte Carlo (MCMC) version of myopic sparse reconstruction. It significantly outperforms mismatched non-blind algorithms that rely on the assumption of perfect knowledge of the PSF. The algorithm is illustrated on real data from magnetic resonance force microscopy (MRFM).
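    The phrase "orthogonal kernel bases" suggests a PSF modelled as a nominal kernel plus a perturbation expanded in a small basis whose coefficients are estimated jointly with the sparse image. The sketch below illustrates only that forward model under assumed names (psf_nominal, basis, coeffs); it is not the paper's variational algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

def psf_from_basis(psf_nominal, basis, coeffs):
    """Semi-blind PSF model: a nominal PSF plus a perturbation expressed
    in a small kernel basis (the coefficients are treated as unknowns
    to be estimated jointly with the image)."""
    return psf_nominal + np.tensordot(coeffs, basis, axes=(0, 0))

def forward(image, psf):
    """Noiseless blurred observation of an image for a given PSF."""
    return fftconvolve(image, psf, mode="same")

rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0  # sparse test image
psf_nominal = np.outer(np.hanning(9), np.hanning(9))            # assumed nominal kernel
basis = rng.standard_normal((3, 9, 9)) * 0.01                   # illustrative perturbation basis
coeffs = rng.standard_normal(3)
y = forward(image, psf_from_basis(psf_nominal, basis, coeffs))
```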

    Soft Bayesian Pursuit Algorithm for Sparse Representations

    This paper deals with sparse representations within a Bayesian framework. For a Bernoulli-Gaussian model, we propose a method based on a mean-field approximation to estimate the support of the signal. In numerical tests involving a recovery problem, the resulting algorithm is shown to have good performance over a wide range of sparsity levels, compared to various state-of-the-art algorithms.
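    As a rough illustration of the Bayesian ingredient behind support estimation for a Bernoulli-Gaussian model, the sketch below scores each dictionary atom independently by comparing "spike present" and "spike absent" likelihoods of its correlation with the data. This per-atom factorisation is far cruder than the paper's mean-field update, and all parameter names and values are illustrative assumptions.

```python
import numpy as np

def gaussian_pdf(x, var):
    return np.exp(-0.5 * x**2 / var) / np.sqrt(2 * np.pi * var)

def support_probabilities(D, y, p_spike, sigma_a, sigma_n):
    """Per-atom posterior probability of being in the support under a
    Bernoulli-Gaussian model, treating each correlation d_i^T y
    independently (ignoring the coupling that the paper's mean-field
    iteration accounts for)."""
    r = D.T @ y                                   # correlations with unit-norm atoms
    like_on = gaussian_pdf(r, sigma_a**2 + sigma_n**2)
    like_off = gaussian_pdf(r, sigma_n**2)
    return p_spike * like_on / (p_spike * like_on + (1 - p_spike) * like_off)

rng = np.random.default_rng(0)
D = rng.standard_normal((128, 256))
D /= np.linalg.norm(D, axis=0)                    # unit-norm dictionary atoms
x = np.zeros(256)
x[rng.choice(256, 8, replace=False)] = rng.normal(0.0, 1.0, 8)
y = D @ x + 0.05 * rng.standard_normal(128)
probs = support_probabilities(D, y, p_spike=8 / 256, sigma_a=1.0, sigma_n=0.05)
```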

    Reconstruction, Classification, and Segmentation for Computational Microscopy

    This thesis treats two fundamental problems in computational microscopy: image reconstruction for magnetic resonance force microscopy (MRFM) and image classification for electron backscatter diffraction (EBSD). In MRFM, as in many inverse problems, the true point spread function (PSF) that blurs the image may be only partially known. The image quality may suffer from this possible mismatch when standard image reconstruction techniques are applied. To deal with the mismatch, we develop novel Bayesian sparse reconstruction methods that account for possible errors in the PSF of the microscope and for the inherent sparsity of MRFM images. Two methods are proposed: a stochastic method and a variational method. They both jointly estimate the unknown PSF and unknown image. Our proposed reconstruction framework has the flexibility to incorporate sparsity-inducing priors, thus addressing the ill-posedness of this non-convex problem, as well as Markov random field priors, and can be extended to other image models. To obtain scalable and tractable solutions, a dimensionality reduction technique is applied to the highly nonlinear PSF space. The experiments clearly demonstrate that the proposed methods have superior performance compared to previous methods. In EBSD, we develop novel and robust dictionary-based methods for segmentation and classification of grain and sub-grain structures in polycrystalline materials. Our work is the first in EBSD analysis to use a physics-based forward model, called the dictionary, to exploit full diffraction patterns, and to efficiently classify patterns into grains, boundaries, and anomalies. In particular, unlike previous methods, our method incorporates anomaly detection directly into the segmentation process. The proposed approach also permits super-resolution of grain mantle and grain boundary locations. Finally, the proposed dictionary-based segmentation method performs uncertainty quantification, i.e., p-values, for the classified grain interiors and grain boundaries. We demonstrate that the dictionary-based approach is robust to instrument drift and material differences that produce small amounts of dictionary mismatch.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/102296/1/seunpark_1.pd
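    A minimal sketch of dictionary-based classification in the spirit of the EBSD part of the thesis: each observed diffraction pattern is matched against a dictionary of simulated patterns by normalised correlation, and flagged as an anomaly when even the best score falls below a threshold. The threshold value, array shapes, and labels here are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def classify_pattern(pattern, dictionary, labels, tau=0.7):
    """Match an observed diffraction pattern to the best-correlated
    simulated pattern in the dictionary; return its label, or "anomaly"
    when the best normalised correlation is below tau (illustrative)."""
    p = pattern.ravel() / np.linalg.norm(pattern)
    D = dictionary.reshape(len(dictionary), -1)
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    scores = D @ p
    best = int(np.argmax(scores))
    if scores[best] < tau:
        return "anomaly", scores[best]
    return labels[best], scores[best]

rng = np.random.default_rng(0)
dictionary = rng.random((500, 60, 60))                 # stand-in for simulated patterns
labels = [f"orientation_{i}" for i in range(500)]
observed = dictionary[42] + 0.1 * rng.standard_normal((60, 60))
print(classify_pattern(observed, dictionary, labels))
```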

    Boltzmann machine and mean-field approximation for structured sparse decompositions

    Accepted to IEEE Transactions on Signal Processing.
    Taking advantage of the structures inherent in many sparse decompositions constitutes a promising research axis. In this paper, we address this problem from a Bayesian point of view. We exploit a Boltzmann machine, which allows a large variety of structures to be taken into account, and focus on the resolution of a marginalized maximum a posteriori problem. To solve this problem, we resort to a mean-field approximation and the variational Bayes Expectation-Maximization algorithm. This approach results in a soft procedure making no hard decisions on the support or the values of the sparse representation. We show that this characteristic leads to an improvement of the performance over state-of-the-art algorithms.
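    To illustrate how a Boltzmann machine can encode structured supports, the sketch below runs a naive mean-field fixed-point iteration for approximate marginals P(s_i = 1) under a chain-coupled energy. It shows only the prior side; the paper's algorithm couples such a prior with the data term inside a variational Bayes EM procedure. The coupling and bias values are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field_support(W, b, n_iter=50):
    """Naive mean-field fixed point for a Boltzmann machine prior over the
    support s in {0,1}^n with energy -(b^T s + 0.5 * s^T W s); returns
    approximate marginals q_i ~= P(s_i = 1)."""
    q = np.full(len(b), 0.5)
    for _ in range(n_iter):
        q = sigmoid(b + W @ q)
    return q

n = 30
W = np.zeros((n, n))
idx = np.arange(n - 1)
W[idx, idx + 1] = W[idx + 1, idx] = 2.0   # chain coupling: favours contiguous supports
b = np.full(n, -3.0)                       # sparsity-promoting bias
b[10:15] = 1.0                             # a few positions nudged towards "active"
print(np.round(mean_field_support(W, b), 2))
```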

    Accelerating Bayesian computation in imaging

    The dimensionality and ill-posedness often encountered in imaging inverse problems are a challenge for Bayesian computational methods, particularly for state-of-the-art sampling alternatives based on the Euler-Maruyama discretisation of the Langevin diffusion process. In this thesis, we address this difficulty and propose alternatives to accelerate Bayesian computation in imaging inverse problems, focusing on its computational aspects. We introduce, as our first contribution, a highly efficient proximal Markov chain Monte Carlo (MCMC) methodology, based on a state-of-the-art approximation known as the proximal stochastic orthogonal Runge-Kutta-Chebyshev (SK-ROCK) method. It has the advantage of cleverly combining multiple gradient evaluations to significantly speed up convergence, similar to accelerated gradient optimisation techniques. We rigorously demonstrate the acceleration of the Markov chains in the 2-Wasserstein distance for Gaussian models as a function of the condition number κ. In our second contribution, we propose a more sophisticated MCMC sampler, based on the careful integration of two advanced proximal Langevin MCMC methods, SK-ROCK and split Gibbs sampling (SGS), each of which uses a unique approach to accelerate convergence. More precisely, we show how to integrate the proximal SK-ROCK sampler with the model augmentation and relaxation method used by SGS at the level of the Langevin diffusion process, to speed up Bayesian computation at the expense of asymptotic bias. This leads to a new, faster proximal SK-ROCK sampler that combines the accelerated quality of the original sampler with the computational advantages of augmentation and relaxation. Additionally, we propose that the augmented and relaxed model be considered a generalisation of the target model, rather than an approximation, which situates relaxation in a bias-variance trade-off. As a result, we can carefully calibrate the amount of relaxation to boost both model accuracy (as determined by model evidence) and sampler convergence speed. To achieve this, we derive an empirical Bayesian method that automatically estimates the appropriate level of relaxation via maximum marginal likelihood estimation. The proposed methodologies are demonstrated in several numerical experiments related to image deblurring, hyperspectral unmixing, tomographic reconstruction and inpainting. Comparisons with Euler-type proximal Monte Carlo approaches confirm that the Markov chains generated with our methods exhibit significantly faster convergence speeds, achieve larger effective sample sizes, and produce lower mean square estimation errors with the same computational budget.
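    For reference, the baseline the thesis accelerates is the Euler-Maruyama discretisation of the Langevin diffusion (the unadjusted Langevin algorithm). The sketch below implements only that baseline on a toy Gaussian target; SK-ROCK and the proposed samplers replace the single gradient evaluation per step with accelerated multi-evaluation updates and proximal operators for non-smooth priors, which are not reproduced here.

```python
import numpy as np

def ula_sampler(grad_log_pi, x0, step, n_samples, rng=None):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretisation of
    dX_t = grad log pi(X_t) dt + sqrt(2) dW_t, the baseline sampler that
    the accelerated SK-ROCK-type methods improve upon."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    out = np.empty((n_samples, x.size))
    for k in range(n_samples):
        x = x + step * grad_log_pi(x) + np.sqrt(2 * step) * rng.standard_normal(x.size)
        out[k] = x
    return out

# Toy Gaussian target N(0, diag(1, 10)) to illustrate the effect of conditioning.
precision = np.array([1.0, 0.1])
samples = ula_sampler(lambda x: -precision * x, x0=np.zeros(2),
                      step=0.1, n_samples=5000, rng=0)
print(samples.mean(axis=0), samples.var(axis=0))
```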