
    Fast Markov chain Monte Carlo sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors

    Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, using similar sparsity constraints in the Bayesian framework for inverse problems, by encoding them in the prior distribution, has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity-promoting inversion. A practical obstacle for these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion: accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way, which is usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this article, we develop and examine a new implementation of a single-component Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that the efficiency of our Gibbs sampler increases when the level of sparsity or the dimension of the unknowns is increased. This is in contrast to the most commonly applied Metropolis-Hastings (MH) sampling schemes, whose efficiency for L1-type priors dramatically decreases when the level of sparsity or the dimension of the unknowns is increased; in practice, Bayesian inversion for L1-type priors using MH samplers is not feasible at all in such scenarios. As such behaviour is commonly believed to be an intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also challenges common beliefs about the applicability of sample-based Bayesian inference. Comment: 33 pages, 14 figures.
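
    The key computational step in such a single-component Gibbs sampler is exact sampling from each one-dimensional conditional density, which, for a Gaussian likelihood combined with an L1 (Laplace-type) prior, is a two-piece truncated Gaussian. The following is a minimal sketch of one possible implementation; the dense matrix A, noise level sigma and prior weight lam are illustrative placeholders, and this is not the authors' optimized sampler.

```python
import numpy as np
from scipy.special import expit
from scipy.stats import norm, truncnorm

def gibbs_l1_sweep(x, A, y, sigma, lam, rng):
    """One Gibbs sweep for the posterior ~ exp(-||y - A x||^2 / (2 sigma^2) - lam ||x||_1).

    Each conditional p(x_i | x_{-i}, y) is a two-piece truncated Gaussian
    (one piece on x_i < 0, one on x_i >= 0) and can be sampled exactly.
    """
    r = y - A @ x
    for i in range(x.size):
        a_i = A[:, i]
        r += a_i * x[i]                       # residual with component i removed
        c = a_i @ a_i
        b = a_i @ r
        s = sigma / np.sqrt(c)                # conditional standard deviation
        mu_pos = (b - lam * sigma**2) / c     # mean of the piece on [0, inf)
        mu_neg = (b + lam * sigma**2) / c     # mean of the piece on (-inf, 0]
        # log-weights of the two pieces (normalising masses of the truncated Gaussians)
        logw_pos = 0.5 * (mu_pos / s) ** 2 + norm.logcdf(mu_pos / s)
        logw_neg = 0.5 * (mu_neg / s) ** 2 + norm.logcdf(-mu_neg / s)
        if rng.uniform() < expit(logw_pos - logw_neg):   # choose the positive piece
            x[i] = truncnorm.rvs(-mu_pos / s, np.inf, loc=mu_pos, scale=s, random_state=rng)
        else:                                            # choose the negative piece
            x[i] = truncnorm.rvs(-np.inf, -mu_neg / s, loc=mu_neg, scale=s, random_state=rng)
        r -= a_i * x[i]
    return x
```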

    Refraction-corrected ray-based inversion for three-dimensional ultrasound tomography of the breast

    Ultrasound tomography (UST) has seen a revival of interest in the past decade, especially for breast imaging, due to improvements in both ultrasound and computing hardware. In particular, three-dimensional ultrasound tomography, a fully tomographic method in which the medium to be imaged is surrounded by ultrasound transducers, has become feasible. In this paper, a comprehensive derivation and study of a robust framework for large-scale bent-ray ultrasound tomography in 3D for a hemispherical detector array is presented. Two ray-tracing approaches are derived and compared. More significantly, the problem of linking the rays between emitters and receivers, which is challenging in 3D due to the high number of degrees of freedom for the trajectory of rays, is analysed both as a minimisation and as a root-finding problem. The ray-linking problem is parameterised for a convex detection surface, and three robust, accurate, and efficient ray-linking algorithms are formulated and demonstrated. To stabilise these methods, novel adaptive-smoothing approaches are proposed that control the conditioning of the update matrices to ensure accurate linking. The nonlinear UST problem of estimating the sound speed is recast as a series of linearised subproblems, each solved using the above algorithms within a steepest-descent scheme. The whole imaging algorithm is demonstrated to be robust and accurate on realistic data simulated using a full-wave acoustic model and an anatomical breast phantom, incorporating the errors due to time-of-flight picking that would be present with measured data. This method can be used to provide low-artefact, quantitatively accurate 3D sound speed maps. In addition to being useful in their own right, such 3D sound speed maps can be used to initialise full-wave inversion methods, or as an input to photoacoustic tomography reconstructions.
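
    As a rough illustration of the linearised subproblems mentioned above, one update of the slowness (reciprocal sound speed) from time-of-flight residuals could look as follows. The sparse path-length matrix L, the damping parameter and the function name are illustrative assumptions, not the paper's actual implementation, which relies on tailored ray tracing, ray linking and adaptive smoothing.

```python
import numpy as np
import scipy.sparse.linalg as spla

def linearised_slowness_update(L, t_meas, slowness, alpha=1e-2):
    """One linearised time-of-flight update for bent-ray UST.

    L        : sparse (n_rays, n_voxels) matrix of ray path lengths in each voxel
    t_meas   : measured times of flight, shape (n_rays,)
    slowness : current slowness estimate (1 / sound speed), shape (n_voxels,)
    alpha    : Tikhonov-type damping controlling the step size
    """
    t_pred = L @ slowness                      # predicted times along the current rays
    resid = t_meas - t_pred                    # time-of-flight residual
    delta = spla.lsqr(L, resid, damp=alpha)[0] # damped least-squares slowness perturbation
    return slowness + delta
```

    After each such update the rays would be re-traced and re-linked in the updated sound speed map before the next linearised step.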

    Approximate k-space models and Deep Learning for fast photoacoustic reconstruction

    We present a framework for accelerated iterative reconstructions using a fast and approximate forward model that is based on k-space methods for photoacoustic tomography. The approximate model introduces aliasing artefacts in the gradient information for the iterative reconstruction, but these artefacts are highly structured, and we can train a CNN to use the approximate information to perform an iterative reconstruction. We show the feasibility of the method on in-vivo human measurements in a limited-view geometry. The proposed method produces results superior to total variation reconstructions with a 32-fold speed-up.
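
    Schematically, such a learned iterative scheme alternates gradient steps computed with the fast approximate model and a trained network that removes the structured aliasing artefacts. The sketch below is a generic illustration only; A_approx, A_approx_T and cnn_update are placeholders for the approximate forward model, its adjoint and the trained CNN.

```python
def learned_iterative_recon(x0, y, A_approx, A_approx_T, cnn_update, n_iters=5):
    """Generic learned iterative reconstruction with an approximate model.

    x0          : initial image estimate
    y           : measured photoacoustic data
    A_approx    : fast, approximate forward operator (callable)
    A_approx_T  : its adjoint (callable)
    cnn_update  : trained network mapping (current image, approximate gradient) -> new image
    """
    x = x0
    for _ in range(n_iters):
        grad = A_approx_T(A_approx(x) - y)   # data-fit gradient from the approximate k-space model
        x = cnn_update(x, grad)              # CNN absorbs the structured aliasing artefacts
    return x
```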

    Enhancing Compressed Sensing 4D Photoacoustic Tomography by Simultaneous Motion Estimation

    A crucial limitation of current high-resolution 3D photoacoustic tomography (PAT) devices that employ sequential scanning is their long acquisition time. In previous work, we demonstrated how to use compressed sensing techniques to improve upon this: images with good spatial resolution and contrast can be obtained from suitably sub-sampled PAT data acquired by novel acoustic scanning systems if sparsity-constrained image reconstruction techniques such as total variation regularization are used. Now, we show how a further increase in image quality can be achieved for imaging dynamic processes in living tissue (4D PAT). The key idea is to exploit the additional temporal redundancy of the data by coupling the previously used spatial image reconstruction models with sparsity-constrained motion estimation models. While simulated data from a two-dimensional numerical phantom will be used to illustrate the main properties of this recently developed joint-image-reconstruction-and-motion-estimation framework, measured data from a dynamic experimental phantom will also be used to demonstrate its potential for challenging, large-scale, real-world, three-dimensional scenarios. The latter only becomes feasible if a carefully designed combination of tailored optimization schemes is employed, which we describe and examine in more detail.
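
    A common form of such a joint reconstruction-and-motion model, given here only schematically and not as the exact functional used in the paper, couples a sparsity-regularised data-fit term for each frame with a sparsity-regularised motion term and an optical-flow-type coupling between consecutive frames:

```latex
\min_{u,\,v}\ \sum_t \Big[ \tfrac{1}{2}\,\|A_t u_t - f_t\|_2^2
  \;+\; \alpha\,\mathrm{TV}(u_t)
  \;+\; \beta\,\|\nabla v_t\|_1
  \;+\; \gamma\,\|\partial_t u_t + \nabla u_t \cdot v_t\|_1 \Big],
```

    where the u_t are the image frames, the f_t the sub-sampled data, the A_t the per-frame forward operators and the v_t the motion (optical-flow) fields; the weights alpha, beta, gamma balance the spatial and temporal sparsity constraints.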

    Bayesian Modelling of Skull Conductivity Uncertainties in EEG Source Imaging

    Knowing the correct skull conductivity is crucial for the accuracy of EEG source imaging, but unfortunately, its true value, which varies both between and within individuals, is difficult to determine. In this paper, we propose a statistical method based on the Bayesian approximation error approach to compensate for source imaging errors related to erroneous skull conductivity. We demonstrate the potential of the approach by simulating EEG data of focal source activity and using the dipole scan algorithm and a sparsity-promoting prior to reconstruct the underlying sources. The results suggest that the greatest improvements with the proposed method can be achieved when the focal sources are close to the skull. Comment: 4 pages, 2 figures, European Medical and Biological Engineering Conference.
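
    The Bayesian approximation error approach can be summarised as follows: sample the discrepancy between an accurate forward model (with varying skull conductivity) and a fixed standard model over the priors, then fold the mean and covariance of that discrepancy into the likelihood as additional "noise". The sketch below is a generic illustration with placeholder sampling functions, not the specific EEG implementation of the paper.

```python
import numpy as np

def bae_statistics(forward_accurate, forward_standard,
                   sample_conductivity, sample_source, n_samples=500):
    """Estimate mean and covariance of the approximation error
    eps = forward_accurate(x, sigma) - forward_standard(x),
    where sigma and x are drawn from their respective priors."""
    eps = []
    for _ in range(n_samples):
        sigma = sample_conductivity()    # plausible skull conductivity
        x = sample_source()              # source configuration from the prior
        eps.append(forward_accurate(x, sigma) - forward_standard(x))
    eps = np.asarray(eps)
    return eps.mean(axis=0), np.cov(eps, rowvar=False)
```

    In the inversion, the measurement noise covariance is then augmented by the approximation-error covariance and the data are shifted by the error mean, so that the standard (fixed-conductivity) forward model can still be used.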

    Accelerated High-Resolution Photoacoustic Tomography via Compressed Sensing

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue. A particular example is the planar Fabry-Perot (FP) scanner, which yields high-resolution images but takes several minutes to sequentially map the photoacoustic field on the sensor plane, point by point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP scanner and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in-vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction methods that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of PAT scanners that employ point-by-point sequential scanning, as well as to reduce the channel count of parallelized schemes that use detector arrays. Comment: submitted to "Physics in Medicine and Biology".
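
    The Bregman-enhanced total variation reconstruction mentioned above can be written as an outer loop that repeatedly solves a TV-regularised problem and adds the unexplained residual back to the data. The following is a minimal sketch; tv_solve is a placeholder for any inner TV solver (e.g. an ADMM- or FISTA-type scheme), not the authors' specific implementation.

```python
def bregman_tv_reconstruction(y, A, tv_solve, lam, n_outer=5):
    """Bregman iterations for  min_x  0.5 * ||A(x) - y||^2 + lam * TV(x).

    y        : (sub-sampled) measured PAT data
    A        : forward operator (callable)
    tv_solve : tv_solve(y_k, lam) returns the minimiser of the inner
               TV-regularised problem for the residual-augmented data y_k
    """
    y_k = y.copy()
    x = None
    for _ in range(n_outer):
        x = tv_solve(y_k, lam)        # inner variational solve
        y_k = y_k + (y - A(x))        # Bregman update: add the residual back
    return x
```

    The Bregman loop progressively restores contrast that a single strongly TV-regularised solve would lose, which is why it is attractive for highly sub-sampled data.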

    Fast Gibbs sampling for high-dimensional Bayesian inversion

    Solving ill-posed inverse problems by Bayesian inference has recently attracted considerable attention. Compared to deterministic approaches, the probabilistic representation of the solution by the posterior distribution can be exploited to explore and quantify its uncertainties. In applications where the inverse solution is subject to further analysis procedures, this can be a significant advantage. Alongside theoretical progress, various new computational techniques make it possible to sample very high-dimensional posterior distributions: in [Lucka2012], a Markov chain Monte Carlo (MCMC) posterior sampler was developed for linear inverse problems with $\ell_1$-type priors. In this article, we extend this single-component Gibbs-type sampler to a wide range of priors used in Bayesian inversion, such as general $\ell_p^q$ priors with additional hard constraints. Besides a fast computation of the conditional, single-component densities in an explicit, parameterized form, fast, robust and exact sampling from these one-dimensional densities is key to obtaining an efficient algorithm. We demonstrate that a generalization of slice sampling can utilize their specific structure for this task and illustrate the performance of the resulting slice-within-Gibbs samplers by different computed examples. These new samplers allow us to perform sample-based Bayesian inference in high-dimensional scenarios with certain priors for the first time, including the inversion of computed tomography (CT) data with the popular isotropic total variation (TV) prior. Comment: submitted to "Inverse Problems".
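
    For reference, a generic univariate slice-sampling update (stepping-out and shrinkage, following Neal's scheme) is sketched below. The samplers developed in the article exploit the explicit, parameterized form of the conditional densities and are therefore more specialised than this generic routine; the sketch only illustrates the basic slice-sampling idea used within the Gibbs sweep.

```python
import numpy as np

def slice_sample_1d(logpdf, x0, w=1.0, max_steps=50, rng=None):
    """One generic slice-sampling update for a univariate density given by logpdf."""
    rng = np.random.default_rng() if rng is None else rng
    logy = logpdf(x0) + np.log(rng.uniform())      # vertical level defining the slice
    left = x0 - w * rng.uniform()                  # random initial bracket of width w
    right = left + w
    for _ in range(max_steps):                     # stepping-out to the left
        if logpdf(left) <= logy:
            break
        left -= w
    for _ in range(max_steps):                     # stepping-out to the right
        if logpdf(right) <= logy:
            break
        right += w
    while True:                                    # shrinkage until a point on the slice is found
        x1 = rng.uniform(left, right)
        if logpdf(x1) > logy:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1
```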

    On the Adjoint Operator in Photoacoustic Tomography

    Photoacoustic Tomography (PAT) is an emerging biomedical "imaging from coupled physics" technique, in which the image contrast is due to optical absorption, but the information is carried to the surface of the tissue as ultrasound pulses. Many algorithms and formulae for PAT image reconstruction have been proposed for the case when a complete data set is available. In many practical imaging scenarios, however, it is not possible to obtain the full data, or the data may be sub-sampled for faster acquisition. In such cases, image reconstruction algorithms that can incorporate prior knowledge to ameliorate the loss of data are required. Hence, there has recently been an increased interest in variational image reconstruction. A crucial ingredient for the application of these techniques is the adjoint of the PAT forward operator, which is described in this article from physical, theoretical and numerical perspectives. First, a simple mathematical derivation of the adjoint of the PAT forward operator in the continuous framework is presented. Then, an efficient numerical implementation of the adjoint using a k-space time domain wave propagation model is described and illustrated in the context of variational PAT image reconstruction, on both 2D and 3D examples including inhomogeneous sound speed. The principal advantage of this analytical adjoint over an algebraic adjoint (obtained by taking the direct adjoint of the particular numerical forward scheme used) is that it can be implemented using currently available fast wave propagation solvers. Comment: submitted to "Inverse Problems".
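
    A standard way to verify such an adjoint implementation numerically is the dot-product (adjoint) test, sketched below for generic forward and adjoint operators A and At, e.g. wrappers around a k-space wave solver. This is a general-purpose check, not code from the article.

```python
import numpy as np

def adjoint_test(A, At, n_in, n_out, rng=np.random.default_rng(0)):
    """Dot-product test: <A x, y> should equal <x, At y> up to numerical error.

    A  : callable mapping an image vector of length n_in to data of length n_out
    At : callable implementing the (candidate) adjoint mapping
    Returns the relative mismatch between the two inner products.
    """
    x = rng.standard_normal(n_in)
    y = rng.standard_normal(n_out)
    lhs = np.dot(A(x), y)
    rhs = np.dot(x, At(y))
    return abs(lhs - rhs) / max(abs(lhs), abs(rhs))
```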

    Sparse Bayesian Inference & Uncertainty Quantification for Inverse Imaging Problems

    During the last two decades, sparsity has emerged as a key concept for solving linear and non-linear ill-posed inverse problems, in particular for severely ill-posed problems and applications with incomplete, sub-sampled data. At the same time, there is a growing demand to obtain quantitative instead of just qualitative inverse results, together with a systematic assessment of their uncertainties (uncertainty quantification, UQ). Bayesian inference seems like a suitable framework to combine sparsity and UQ, but its application to large-scale inverse problems resulting from fine discretizations of PDE models leads to severe computational and conceptual challenges. In this talk, we will focus on two different Bayesian approaches to model sparsity as a priori information: via convex but non-smooth prior energies, such as total variation and Besov space priors, and via non-convex but smooth priors arising from hierarchical Bayesian modeling. To illustrate our findings, we will rely on experimental data from challenging biomedical imaging applications such as EEG/MEG source localization and limited-angle CT. We want to share the experiences and results we obtained, as well as the open questions we face, from our perspective as researchers coming from a background in biomedical imaging rather than statistics, and hope to stimulate a fruitful discussion for both sides.
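
    Schematically, and only as illustrative forms rather than the exact priors discussed in the talk, the two classes can be contrasted as follows:

```latex
% Convex, non-smooth sparsity prior (e.g. total variation):
\pi(u) \;\propto\; \exp\!\big(-\alpha\,\mathrm{TV}(u)\big),
\qquad
% Hierarchical prior: conditionally Gaussian with hyper-variances \theta_i,
% smooth in (u,\theta) but non-convex after marginalising over \theta:
\pi(u,\theta) \;\propto\; \exp\!\Big(-\sum_i \frac{u_i^2}{2\theta_i}\Big)\,\pi_{\mathrm{hyper}}(\theta).
```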