
    Sparse Bayesian Inference & Uncertainty Quantification for Inverse Imaging Problems

    During the last two decades, sparsity has emerged as a key concept for solving linear and non-linear ill-posed inverse problems, in particular for severely ill-posed problems and applications with incomplete, sub-sampled data. At the same time, there is a growing demand to obtain quantitative instead of just qualitative inverse results, together with a systematic assessment of their uncertainties (uncertainty quantification, UQ). Bayesian inference seems like a suitable framework to combine sparsity and UQ, but its application to large-scale inverse problems resulting from fine discretizations of PDE models leads to severe computational and conceptual challenges. In this talk, we will focus on two different Bayesian approaches to model sparsity as a-priori information: via convex but non-smooth prior energies such as total variation and Besov space priors, and via non-convex but smooth priors arising from hierarchical Bayesian modeling. To illustrate our findings, we will rely on experimental data from challenging biomedical imaging applications such as EEG/MEG source localization and limited-angle CT. We want to share the experiences and results we obtained and the open questions we face from our perspective as researchers coming from a background in biomedical imaging rather than in statistics, and we hope to stimulate a fruitful discussion for both sides.
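    As a rough illustration of the two prior families mentioned in the abstract, the sketch below writes down negative log-posteriors for a toy linear model y = A x + noise: one with a convex but non-smooth total-variation prior, and one with a non-convex but smooth penalty obtained by marginalizing a hierarchical Gaussian scale-mixture prior. This is a minimal sketch with assumed names and hyperparameters (A, y, sigma, lam, alpha, beta), not code from the talk.

```python
import numpy as np

def neg_log_posterior_tv(x, A, y, sigma=0.01, lam=1.0):
    """Convex but non-smooth sparsity prior: 1D anisotropic total variation."""
    data_fit = 0.5 / sigma**2 * np.sum((A @ x - y) ** 2)
    return data_fit + lam * np.sum(np.abs(np.diff(x)))

def neg_log_posterior_hierarchical(x, A, y, sigma=0.01, alpha=1.5, beta=1e-3):
    """Non-convex but smooth prior from a hierarchical Gaussian scale mixture:
    x_i | g_i ~ N(0, g_i), g_i ~ InvGamma(alpha, beta); marginalizing the
    hyperparameters g_i yields a Student-t-type penalty."""
    data_fit = 0.5 / sigma**2 * np.sum((A @ x - y) ** 2)
    return data_fit + (alpha + 0.5) * np.sum(np.log(beta + 0.5 * x**2))

# Tiny usage example with synthetic data.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50))
x = np.zeros(50); x[[5, 17]] = [1.0, -2.0]
y = A @ x + 0.01 * rng.normal(size=20)
print(neg_log_posterior_tv(x, A, y), neg_log_posterior_hierarchical(x, A, y))
```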

    Bayesian inversion in biomedical imaging

    Biomedical imaging techniques have become a key technology for assessing the structure or function of living organisms in a non-invasive way. Besides innovations in the instrumentation, the development of new and improved methods for the processing and analysis of the measured data has become a vital field of research. Building on traditional signal processing, this area nowadays also comprises mathematical modeling, numerical simulation and inverse problems. The latter describes the reconstruction of quantities of interest from measured data and a given generative model. Unfortunately, most inverse problems are ill-posed, which means that a robust and reliable reconstruction is not possible unless additional a-priori information on the quantity of interest is incorporated into the solution method. Bayesian inversion is a mathematical methodology for formulating and employing a-priori information in computational schemes to solve the inverse problem. This thesis develops a current overview of Bayesian inversion and exemplifies the presented concepts and algorithms in various numerical studies, including challenging biomedical imaging applications with experimental data. A particular focus is on using sparsity as a-priori information within the Bayesian framework.

    Fast Gibbs sampling for high-dimensional Bayesian inversion

    Solving ill-posed inverse problems by Bayesian inference has recently attracted considerable attention. Compared to deterministic approaches, the probabilistic representation of the solution by the posterior distribution can be exploited to explore and quantify its uncertainties. In applications where the inverse solution is subject to further analysis procedures, this can be a significant advantage. Alongside theoretical progress, various new computational techniques allow sampling of very high-dimensional posterior distributions: In [Lucka2012], a Markov chain Monte Carlo (MCMC) posterior sampler was developed for linear inverse problems with $\ell_1$-type priors. In this article, we extend this single-component Gibbs-type sampler to a wide range of priors used in Bayesian inversion, such as general $\ell_p^q$ priors with additional hard constraints. Besides a fast computation of the conditional, single-component densities in an explicit, parameterized form, a fast, robust and exact sampling from these one-dimensional densities is key to obtaining an efficient algorithm. We demonstrate that a generalization of slice sampling can utilize their specific structure for this task and illustrate the performance of the resulting slice-within-Gibbs samplers by different computed examples. These new samplers allow us to perform sample-based Bayesian inference in high-dimensional scenarios with certain priors for the first time, including the inversion of computed tomography (CT) data with the popular isotropic total variation (TV) prior. Comment: submitted to "Inverse Problems".
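    For illustration, the following is a minimal, generic one-dimensional slice sampler (stepping-out and shrinkage), applied to a single-component conditional density of the kind that arises with an $\ell_1$-type prior. It is a textbook sketch, not the paper's structure-exploiting generalization, and the parameters a, m, lam and w are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_density(x, a=2.0, m=0.3, lam=5.0):
    # Single-component conditional: Gaussian likelihood term plus an l1 prior term.
    return -0.5 * a * (x - m) ** 2 - lam * abs(x)

def slice_sample_1d(x0, logp, w=1.0):
    """One slice-sampling update for the (unnormalized) density exp(logp)."""
    log_y = logp(x0) + np.log(rng.uniform())   # draw the auxiliary slice level
    left = x0 - w * rng.uniform()              # randomly positioned initial bracket
    right = left + w
    while logp(left) > log_y:                  # stepping out
        left -= w
    while logp(right) > log_y:
        right += w
    while True:                                # shrinkage until acceptance
        x1 = rng.uniform(left, right)
        if logp(x1) > log_y:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

x, samples = 0.0, []
for _ in range(2000):
    x = slice_sample_1d(x, log_density)
    samples.append(x)
```

    In the paper, exact sampling tailored to the explicitly parameterized conditional densities replaces this generic bracketing; the sketch only conveys the per-coordinate structure of the Gibbs sweep.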

    Never look back - A modified EnKF method and its application to the training of neural networks without back propagation

    In this work, we present a new derivative-free optimization method and investigate its use for training neural networks. Our method is motivated by the Ensemble Kalman Filter (EnKF), which has been used successfully for solving optimization problems that involve large-scale, highly nonlinear dynamical systems. A key benefit of the EnKF method is that it requires only the evaluation of the forward propagation but not its derivatives. Hence, in the context of neural networks, it alleviates the need for back propagation and reduces the memory consumption dramatically. However, the method is not a pure "black-box" global optimization heuristic, as it efficiently utilizes the structure of typical learning problems. Promising first results of the EnKF for training deep neural networks have been presented recently by Kovachki and Stuart. We propose an important modification of the EnKF that enables us to prove convergence of our method to the minimizer of a strongly convex function. Our method also bears similarity to implicit filtering, and we demonstrate its potential for minimizing highly oscillatory functions using a simple example. Further, we provide numerical examples that demonstrate the potential of our method for training deep neural networks.
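    The sketch below shows the standard ensemble Kalman inversion update that such derivative-free training builds on, applied to a toy linear fitting problem; it is not the paper's modified method, and the forward map G, the ensemble size and the regularization parameter gamma are illustrative assumptions.

```python
import numpy as np

def eki_step(thetas, G, y, gamma=1e-2):
    """One ensemble Kalman inversion step.
    thetas: (J, p) ensemble of parameter vectors; G maps (p,) -> (d,); y: data (d,)."""
    J = thetas.shape[0]
    Gs = np.stack([G(t) for t in thetas])              # forward evaluations only
    theta_mean, G_mean = thetas.mean(0), Gs.mean(0)
    dTheta, dG = thetas - theta_mean, Gs - G_mean
    C_tg = dTheta.T @ dG / J                           # cross-covariance (p, d)
    C_gg = dG.T @ dG / J                               # output covariance (d, d)
    K = C_tg @ np.linalg.inv(C_gg + gamma * np.eye(len(y)))
    return thetas + (y - Gs) @ K.T                     # Kalman-type, derivative-free update

# Toy example: fit a linear "network" G(theta) = X @ theta to noiseless targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true
thetas = rng.normal(size=(10, 3))                      # ensemble of 10 members
for _ in range(50):
    thetas = eki_step(thetas, lambda t: X @ t, y)
print(thetas.mean(0))                                  # ensemble mean, should be close to theta_true
```

    For a strongly convex loss, the paper's modification comes with a convergence proof; the plain update above is only meant to convey the derivative-free, ensemble-based character of the approach.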

    Equivalent-source acoustic holography for projecting measured ultrasound fields through complex media

    Holographic projections of experimental ultrasound measurements generally use the angular spectrum method or the Rayleigh integral, where the measured data are imposed as a Dirichlet boundary condition. In contrast, full-wave models, which can account for more complex wave behaviour, often use interior mass or velocity sources to introduce acoustic energy into the simulation. Here, a method to generate an equivalent interior source that reproduces the measurement data is proposed, based on gradient-based optimisation. The equivalent source can then be used with full-wave models (for example, the open-source k-Wave toolbox) to compute holographic projections through complex media, including nonlinearity and heterogeneous material properties. Numerical and experimental results using both time-domain and continuous-wave sources are used to demonstrate the accuracy of the approach.
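    A minimal sketch of the underlying idea, under assumed names: if H is a linear operator (here a random toy matrix) mapping equivalent-source amplitudes q to the complex pressure on the measurement plane, the source is recovered by gradient descent on the least-squares misfit to the measured hologram p_meas. In the paper's setting the forward evaluations would instead come from a full-wave model such as k-Wave.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(200, 50)) + 1j * rng.normal(size=(200, 50))  # toy forward operator
q_true = rng.normal(size=50) + 1j * rng.normal(size=50)
p_meas = H @ q_true                                               # "measured" hologram

q = np.zeros(50, dtype=complex)
step = 1.0 / np.linalg.norm(H, 2) ** 2                            # stable gradient step
for _ in range(500):
    residual = H @ q - p_meas
    grad = H.conj().T @ residual          # gradient of 0.5 * ||H q - p_meas||^2
    q -= step * grad

print(np.linalg.norm(H @ q - p_meas) / np.linalg.norm(p_meas))    # relative misfit, small
```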

    A hierarchical Bayesian perspective on majorization-minimization for non-convex sparse regression: Application to M/EEG source imaging

    Majorization-minimization (MM) is a standard iterative optimization technique which consists of minimizing a sequence of convex surrogate functionals. MM approaches have been particularly successful in tackling inverse problems and statistical machine learning problems where the regularization term is a sparsity-promoting concave function. However, due to non-convexity, the solution found by MM depends on its initialization. Uniform initialization is the most natural and often employed strategy, as it boils down to penalizing all coefficients equally in the first MM iteration. Yet, this arbitrary choice can lead to unsatisfactory results in severely under-determined inverse problems such as source imaging with magneto- and electro-encephalography (M/EEG). The framework of hierarchical Bayesian modeling (HBM) is an alternative approach to encode sparsity. This work shows that for certain hierarchical models, a simple alternating scheme to compute fully Bayesian maximum a posteriori (MAP) estimates leads to the exact same sequence of updates as a standard MM strategy (cf. the adaptive lasso). With this parallel outlined, we show how to improve upon these MM techniques by probing the multimodal posterior density using Markov chain Monte Carlo (MCMC) techniques. Firstly, we show that these samples can provide well-informed initializations that help MM schemes reach better local minima. Secondly, we demonstrate how they can reveal the different modes of the posterior distribution in order to explore and quantify the inherent uncertainty and ambiguity of such ill-posed inference procedures. In the context of M/EEG, each mode corresponds to a plausible configuration of neural sources, which is crucial for data interpretation, especially in clinical contexts. Results on both simulations and real datasets show how the number or the type of sensors affects the uncertainties of the estimates.
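    As a concrete toy example of the MM strategy discussed above, the sketch below runs iteratively reweighted $\ell_1$ minimization for a concave log-sum penalty, with each convex surrogate solved approximately by ISTA; the uniform first iteration penalizes all coefficients equally. The problem sizes, lam and eps are assumed values, and this is not the paper's M/EEG solver.

```python
import numpy as np

def soft(z, t):
    """Element-wise soft thresholding (proximal map of the weighted l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def reweighted_l1(A, y, lam=0.1, eps=1e-2, outer=10, inner=200):
    """MM for sum_i log(|x_i| + eps): each surrogate is a weighted lasso."""
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
    w = np.ones(n)                                   # uniform initialization (first MM step)
    for _ in range(outer):                           # MM iterations
        for _ in range(inner):                       # ISTA on the weighted l1 surrogate
            x = soft(x - A.T @ (A @ x - y) / L, lam * w / L)
        w = 1.0 / (np.abs(x) + eps)                  # reweighting from the majorizer
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[[3, 30, 77]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.normal(size=40)
print(np.nonzero(np.round(reweighted_l1(A, y), 1))[0])   # typically recovers [3, 30, 77]
```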

    Improved EEG source localization with Bayesian uncertainty modelling of unknown skull conductivity

    Electroencephalography (EEG) source imaging is an ill-posed inverse problem that requires accurate conductivity modelling of the head tissues, especially the skull. Unfortunately, the conductivity values are difficult to determine in vivo. In this paper, we show that the exact knowledge of the skull conductivity is not always necessary when the Bayesian approximation error (BAE) approach is exploited. In BAE, we first postulate a probability
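    A minimal toy sketch of the BAE idea under assumed names: the forward matrix A(c) depends on an unknown nuisance parameter c (standing in for the skull conductivity); fixing a nominal value c0, Monte Carlo samples of c and of the unknown are used to estimate the mean and covariance of the modelling error, which are then folded into the likelihood of a MAP estimate. This is an illustration, not the paper's EEG model.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 10
B = rng.normal(size=(m, n))
A = lambda c: B * c                        # toy conductivity-scaled lead field
c0 = 1.0                                   # nominal (assumed) conductivity

# Monte Carlo estimate of the approximation-error statistics.
errors = []
for _ in range(500):
    c = rng.uniform(0.7, 1.3)              # prior on the unknown conductivity
    x = rng.normal(size=n)                 # prior sample of the source
    errors.append((A(c) - A(c0)) @ x)
errors = np.array(errors)
eps_mean, eps_cov = errors.mean(0), np.cov(errors.T)

# Enhanced noise model: measurement noise plus approximation error.
sigma2 = 1e-3
noise_cov = sigma2 * np.eye(m) + eps_cov
P = np.linalg.inv(noise_cov)

# MAP estimate with the nominal model but the BAE-corrected likelihood
# (Gaussian prior x ~ N(0, I) for simplicity); data generated with c = 1.1.
y = A(1.1) @ np.ones(n) + np.sqrt(sigma2) * rng.normal(size=m)
x_map = np.linalg.solve(A(c0).T @ P @ A(c0) + np.eye(n),
                        A(c0).T @ P @ (y - eps_mean))
```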

    Refraction-corrected ray-based inversion for three-dimensional ultrasound tomography of the breast

    Ultrasound tomography (UST) has seen a revival of interest in the past decade, especially for breast imaging, due to improvements in both ultrasound and computing hardware. In particular, three-dimensional UST, a fully tomographic method in which the medium to be imaged is surrounded by ultrasound transducers, has become feasible. This has led to renewed attention on UST image reconstruction algorithms. In this paper, a comprehensive derivation and study of a robust framework for large-scale bent-ray UST in 3D for a hemispherical detector array is presented. Two ray
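    The linearized core of ray-based travel-time tomography can be sketched as follows, under assumed names: given a matrix L of precomputed ray-path lengths per pixel and measured travel times t, the slowness s is estimated by a damped least-squares solve; in bent-ray UST the rays (and hence L) are re-traced through the current sound-speed estimate at each outer iteration. The ray matrix below is random and purely illustrative.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

n_rays, n_pix = 400, 900
L = sparse_random(n_rays, n_pix, density=0.05, random_state=0)  # toy ray-length matrix
s_true = np.full(n_pix, 1.0 / 1500.0)            # background slowness (s/m)
s_true[400:420] += 2e-5                          # small slow inclusion
t = L @ s_true                                   # simulated travel times t = L s

s_est = lsqr(L, t, damp=1e-6)[0]                 # damped least-squares slowness update
print(np.linalg.norm(L @ s_est - t) / np.linalg.norm(t))   # relative data misfit
```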

    Just-in-time deep learning for real-time X-ray computed tomography

    Real-time X-ray tomography pipelines, such as the one implemented by RECAST3D, compute and visualize tomographic reconstructions in milliseconds and enable the observation of dynamic experiments in synchrotron beamlines and laboratory scanners. For extending real-time reconstruction with image processing and analysis components, deep neural networks (DNNs) are a promising technology, due to their strong performance and much faster run-times compared to conventional algorithms. DNNs may prevent experiment repetition by simplifying real-time steering and optimization of the ongoing experiment. The main challenge of integrating DNNs into real-time tomography pipelines, however, is that they need to learn their task from representative data before the start of the experiment. In scientific environments, such training data may not exist, and other uncertain and variable factors, such as the set-up configuration, reconstruction parameters, or user interaction, cannot easily be anticipated beforehand either. To overcome these problems, we developed just-in-time learning, an online DNN training strategy that takes advantage of the spatio-temporal continuity of consecutive reconstructions in the tomographic pipeline. This allows training and deploying comparatively small DNNs during the experiment. We provide software implementations, and we study the feasibility and challenges of the approach by training the self-supervised Noise2Inverse denoising task with X-ray data replayed from real-world dynamic experiments.
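    A minimal sketch of such an online, just-in-time training loop, written in PyTorch with assumed shapes and a synthetic stream of reconstructions; the self-supervised target here is simply the next noisy slice (a Noise2Noise-style stand-in, not the paper's Noise2Inverse splitting), and the network is deliberately small so it could be trained while the experiment runs.

```python
import torch
import torch.nn as nn

net = nn.Sequential(                       # deliberately small, fast denoising CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def incoming_slices(n=50, size=64):
    """Stand-in for the stream of real-time reconstructions (noisy phantom)."""
    base = torch.zeros(1, 1, size, size)
    base[..., 20:40, 20:40] = 1.0
    for _ in range(n):
        yield base + 0.1 * torch.randn_like(base)

prev = None
for rec in incoming_slices():              # online training during the "experiment"
    if prev is not None:
        opt.zero_grad()
        loss = loss_fn(net(prev), rec)     # exploit temporal continuity of consecutive slices
        loss.backward()
        opt.step()
    prev = rec

denoised = net(prev).detach()              # deploy the just-trained network on the latest slice
```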

    Effects of awareness and task relevance on neurocomputational models of mismatch negativity generation

    Detection of regularities and their violations in sensory input is key to perception. Violations are indexed by an early EEG component called the mismatch negativity (MMN) – even if participants are distracted or unaware of the stimuli. On a mechanistic level, two dominant models have been suggested to contribute to the MMN: adaptation and prediction. Whether and how context conditions, such as awareness and task relevance, modulate the mechanisms of MMN generation is unknown. We conducted an EEG study disentangling the influences of task relevance and awareness on the visual MMN. Then, we estimated different computational models for the generation of single-trial amplitudes in the MMN time window. Amplitudes were best explained by a prediction error model when stimuli were task-relevant, but by an adaptation model when stimuli were task-irrelevant and participants were unaware of them. Thus, mismatch generation does not rely on one predominant mechanism; rather, the mechanisms vary with the task relevance of the stimuli.
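    The two model families can be sketched as single-trial regressors computed from a binary standard/deviant sequence, as below; the decay constant, trace increment and deviant probability are assumed values, and this is an illustration rather than the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
stim = (rng.uniform(size=300) < 0.15).astype(int)    # 0 = standard, 1 = deviant

# Adaptation model: the response is weaker the more recently the stimulus occurred.
tau, trace = 5.0, np.zeros(2)
adaptation = np.empty(len(stim))
for t, s in enumerate(stim):
    adaptation[t] = 1.0 - trace[s]                   # less adapted -> larger response
    trace *= np.exp(-1.0 / tau)                      # traces decay from trial to trial
    trace[s] += 0.1                                  # the shown stimulus adapts its trace

# Prediction error model: surprise under a running estimate of stimulus probability.
counts = np.ones(2)                                  # flat prior over the two stimuli
surprise = np.empty(len(stim))
for t, s in enumerate(stim):
    surprise[t] = -np.log(counts[s] / counts.sum())
    counts[s] += 1

# Regressors like these would then be fitted to single-trial EEG amplitudes
# and the competing models compared, e.g. by explained variance or BIC.
```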