
    Bayesian Inference for Inverse Problems

    Inverse problems arise wherever we have indirect measurements. Regularization and Bayesian inference are the two main approaches for handling inverse problems. The Bayesian approach is more general and offers a richer set of tools for developing efficient methods for difficult problems. In this chapter, an overview of Bayesian parameter estimation is presented first, followed by its extension to inverse problems. The main difficulty is the high dimension of the unknown quantity and the appropriate choice of the prior law; the second main difficulty is the computational cost. Different approximate Bayesian computation schemes, and in particular variational Bayesian approximation (VBA) methods, are explained in detail.
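    To make the linear-Gaussian special case concrete, the sketch below (not taken from the chapter) computes the closed-form posterior for y = Ax + noise under a Gaussian prior; the operator, dimensions and noise levels are illustrative assumptions. It is exactly this closed form that becomes impractical for high-dimensional or non-Gaussian models, motivating approximations such as VBA.

```python
# Minimal sketch (illustrative, not the chapter's code): Bayesian estimation for
# a linear-Gaussian inverse problem y = A x + noise.  With prior x ~ N(0, tau^2 I)
# and Gaussian noise of variance sigma^2, the posterior is Gaussian in closed form.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_unknown = 20, 50                    # fewer measurements than unknowns
A = rng.standard_normal((n_obs, n_unknown))  # hypothetical forward operator
x_true = rng.standard_normal(n_unknown)
sigma, tau = 0.1, 1.0                        # assumed noise / prior standard deviations
y = A @ x_true + sigma * rng.standard_normal(n_obs)

# Posterior covariance and mean (posterior mean and MAP coincide in this model).
post_cov = np.linalg.inv(A.T @ A / sigma**2 + np.eye(n_unknown) / tau**2)
post_mean = post_cov @ A.T @ y / sigma**2
post_std = np.sqrt(np.diag(post_cov))        # per-component uncertainty

print("reconstruction error:", np.linalg.norm(post_mean - x_true))
print("mean posterior std  :", post_std.mean())
```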

    Belief Propagation Reconstruction for Discrete Tomography

    We consider the reconstruction of a two-dimensional discrete image from a set of tomographic measurements corresponding to the Radon projection. Assuming that the image has a structure in which neighbouring pixels have a larger probability of taking the same value, we follow a Bayesian approach and introduce a fast message-passing reconstruction algorithm based on belief propagation. For numerical results, we specialize to the case of binary tomography. We test the algorithm on binary synthetic images with different length scales and compare our results against a more conventional convex optimization approach. We investigate the reconstruction error as a function of the number of tomographic measurements, corresponding to the number of projection angles. The belief propagation algorithm turns out to be more efficient than the convex optimization algorithm, both in terms of recovery bounds for noise-free projections and in terms of reconstruction quality when moderate Gaussian noise is added to the projections.
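    As a rough illustration of the message-passing idea only (the paper's algorithm runs on the 2-D factor graph induced by the Radon projections, which is not reproduced here), the following sketch applies exact sum-product belief propagation on a 1-D chain of binary pixels with an Ising-type prior favouring equal neighbouring values; all parameters are assumed for the example.

```python
# Illustrative sketch: sum-product belief propagation on a 1-D chain of binary
# pixels with a smoothness prior (neighbours prefer equal values) and noisy
# per-pixel evidence.  This only shows the message-passing machinery, not the
# tomographic factor graph used in the paper.
import numpy as np

J = 1.2                                           # assumed coupling favouring equal neighbours
pair = np.array([[np.exp(J), np.exp(-J)],
                 [np.exp(-J), np.exp(J)]])        # pairwise potential psi(x_i, x_{i+1})

x_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])       # toy binary "image row"
rng = np.random.default_rng(1)
noisy = x_true ^ (rng.random(x_true.size) < 0.2)  # roughly 20% of pixels flipped
# Unary evidence phi_i(x): the observed value is assumed correct with probability 0.8.
phi = np.where(noisy[:, None] == np.arange(2)[None, :], 0.8, 0.2)

n = x_true.size
fwd = np.ones((n, 2))                             # fwd[i] = message passed from i-1 into i
bwd = np.ones((n, 2))                             # bwd[i] = message passed from i+1 into i
for i in range(1, n):
    m = (phi[i - 1] * fwd[i - 1]) @ pair          # sum over x_{i-1}
    fwd[i] = m / m.sum()
for i in range(n - 2, -1, -1):
    m = pair @ (phi[i + 1] * bwd[i + 1])          # sum over x_{i+1}
    bwd[i] = m / m.sum()

belief = phi * fwd * bwd                          # marginal beliefs (exact on a chain)
belief /= belief.sum(axis=1, keepdims=True)
print("noisy input :", noisy.astype(int))
print("BP estimate :", belief.argmax(axis=1))     # marginal posterior mode per pixel
```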

    Inverse problems in medical ultrasound images - applications to image deconvolution, segmentation and super-resolution

    In the field of medical image analysis, ultrasound is a core imaging modality, employed for its real-time and easy-to-use nature and its non-ionizing, low-cost characteristics. Ultrasound imaging is used in numerous clinical applications, such as fetus monitoring, diagnosis of cardiac diseases, flow estimation, etc. Classical applications in ultrasound imaging involve tissue characterization, tissue motion estimation and image quality enhancement (contrast, resolution, signal-to-noise ratio). However, one of the major problems with ultrasound images is the presence of noise in the form of a granular pattern called speckle. Speckle noise leads to relatively poor image quality compared with other medical imaging modalities, which limits the applications of medical ultrasound imaging. In order to better understand and analyze ultrasound images, several device-based techniques have been developed during the last 20 years. The objective of this PhD thesis is to propose new image processing methods that improve ultrasound image quality using post-processing techniques. First, we propose a Bayesian method for joint deconvolution and segmentation of ultrasound images based on their tight relationship. The problem is formulated as an inverse problem that is solved within a Bayesian framework. Due to the intractability of the posterior distribution associated with the proposed Bayesian model, we investigate a Markov chain Monte Carlo (MCMC) technique that generates samples distributed according to the posterior, and use these samples to build estimators of the ultrasound image. In a second step, we propose a fast single-image super-resolution framework using a new analytical solution to ℓ2-ℓ2 problems (i.e., ℓ2-norm regularized quadratic problems), which is applicable to both medical ultrasound images and piecewise/natural images. In a third step, blind deconvolution of ultrasound images is studied by considering the following two strategies: i) a Gaussian prior for the PSF is proposed in a Bayesian framework; ii) an alternating optimization method is explored for blind deconvolution of ultrasound images.
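    The second contribution rests on the fact that ℓ2-ℓ2 (quadratic) problems admit closed-form solutions. A minimal sketch of that general idea, assuming a circulant blur and a 1-D signal rather than the thesis's actual super-resolution operator with decimation, is the Fourier-domain Tikhonov-type deconvolution below; the kernel and parameters are illustrative.

```python
# Rough illustration (not the thesis's exact solver): the l2-l2 quadratic problem
#   min_x ||y - h * x||^2 + lam * ||x||^2        (* = circular convolution)
# has a closed-form Fourier-domain solution when the blur is circulant.
import numpy as np

rng = np.random.default_rng(2)
n, lam = 256, 1e-2
x_true = np.zeros(n); x_true[60:120] = 1.0; x_true[180:200] = 0.5
h = np.zeros(n); h[:9] = np.hanning(9); h /= h.sum()          # assumed blur kernel (PSF)

H = np.fft.fft(h)
y = np.real(np.fft.ifft(H * np.fft.fft(x_true)))              # circular blur of the signal
y += 0.01 * rng.standard_normal(n)                            # measurement noise

# Closed-form l2-l2 solution, element-wise in the Fourier domain:
#   X = conj(H) Y / (|H|^2 + lam)
X_hat = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam)
x_hat = np.real(np.fft.ifft(X_hat))
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```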

    Bayesian Variational Regularisation for Dark Matter Reconstruction with Uncertainty Quantification

    Despite the great wealth of cosmological knowledge accumulated since the early 20th century, the nature of dark matter, which accounts for ~85% of the matter content of the Universe, remains elusive. Unfortunately, though dark matter is scientifically interesting, with implications for our fundamental understanding of the Universe, it cannot be directly observed. Instead, dark matter may be inferred from e.g. the optical distortion (lensing) of distant galaxies which, at linear order, manifests as a perturbation to the apparent magnitude (convergence) and ellipticity (shearing). Ensemble observations of the shear are collected and leveraged to construct estimates of the convergence, which can be directly related to the universal dark-matter distribution. Imminent stage IV surveys are forecast to accrue an unprecedented quantity of cosmological information, a discriminative portion of which is accessible through the convergence and is disproportionately concentrated at high angular resolutions, where the echoes of cosmological evolution under gravity are most apparent. Capitalising on advances in probability concentration theory, this thesis merges the paradigms of Bayesian inference and optimisation to develop hybrid convergence inference techniques which are scalable, statistically principled, and operate over the Euclidean plane, the celestial sphere, and the 3-dimensional ball. Such techniques can quantify the plausibility of inferences at one-millionth of the computational overhead of competing sampling methods. These Bayesian techniques are applied to the hotly debated Abell 520 merging cluster, concluding that observational catalogues contain insufficient information to determine the existence of dark-matter self-interactions. Further, these techniques were applied to all public lensing catalogues, recovering what was then the largest global dark-matter mass map. The primary methodological contributions of this thesis depend only on posterior log-concavity, paving the way towards a potentially revolutionary, complete hybridisation with artificial intelligence techniques. These next-generation techniques are the first to operate over the full 3-dimensional ball, laying the foundations for statistically principled universal dark-matter cartography and the cosmological insights such advances may provide.
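    A minimal sketch of the optimisation side of such hybrid inference, under strong simplifying assumptions (a random stand-in for the lensing operator, an ℓ1 sparsity prior, no spherical or ball geometry), is the forward-backward computation of the maximum-a-posteriori estimate for a log-concave posterior shown below; the thesis's uncertainty quantification machinery is not reproduced.

```python
# Hedged sketch: MAP estimation under a log-concave posterior of the form
#   -log p(x|y) ∝ 0.5 * ||y - A x||^2 + mu * ||x||_1 + const,
# computed with the forward-backward (proximal gradient / ISTA) iteration.
# A, the sparsity weight and the dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)        # stand-in measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.02 * rng.standard_normal(m)

mu = 0.1 * np.max(np.abs(A.T @ y))                  # sparsity weight (heuristic choice)
step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))              # gradient step on the likelihood term
    x = np.sign(z) * np.maximum(np.abs(z) - step * mu, 0.0)   # prox of mu * ||.||_1

print("recovered support:", np.nonzero(np.abs(x) > 1e-3)[0])
print("true support     :", np.nonzero(x_true)[0])
```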

    Advanced regularization and discretization methods in diffuse optical tomography

    Diffuse optical tomography (DOT) is an emerging technique that utilizes light in the near-infrared spectral region (650-900 nm) to measure the optical properties of physiological tissue. Compared with other imaging modalities, DOT is non-invasive and non-ionising. Because of the relatively low absorption of haemoglobin, water and lipid in the near-infrared spectral region, light is able to propagate several centimetres inside the tissue without being completely absorbed. The transmitted near-infrared light is then combined with an image reconstruction algorithm to recover clinically relevant information inside the tissue. Image reconstruction in DOT is a critical problem: the accuracy and precision of diffuse optical imaging rely on the accuracy of image reconstruction, so it is of great importance to design efficient and effective reconstruction algorithms. Image reconstruction involves two processes. The process of modelling light propagation in tissues is called the forward problem; a large number of models can be used to predict light propagation within tissues, including stochastic, analytical and numerical models. The process of recovering optical parameters inside the tissue from the transmitted measurements is called the inverse problem. In this thesis, a number of advanced regularization and discretization methods for diffuse optical tomography are proposed and evaluated on simulated and real experimental data in terms of reconstruction accuracy and efficiency. In DOT, the number of measurements is significantly smaller than the number of optical parameters to be recovered, so the inverse problem is ill-posed and prone to local minima. Regularization methods are necessary to alleviate the ill-posedness and constrain the inverse problem towards a plausible solution. In order to alleviate the over-smoothing effect of the widely used Tikhonov regularization, an L1-norm regularization-based nonlinear reconstruction for spectrally constrained diffuse optical tomography is proposed. This regularization can reduce crosstalk between chromophore and scattering parameters and maintain image contrast by inducing sparsity. This work investigates multiple algorithms to find the most computationally efficient one for solving the proposed regularized problems. In order to recover non-sparse images, as when multiple activations or complex injuries occur in the brain, a more general total variation regularization is introduced. The proposed total variation regularization is shown to alleviate the over-smoothing effect of Tikhonov regularization and to localize the anomaly by inducing sparsity in the gradient of the solution. A new discretization method, the graph-based numerical method, is introduced to model unstructured geometries of DOT objects. The graph-based method is compared with the widely used finite element method (FEM) and turns out to be more stable and robust to changes in mesh resolution. Building on these advantages, the graph-based numerical method is further applied to model light propagation inside the tissue. In this work, two measurement systems are considered: continuous wave (CW) and frequency domain (FD). New formulations of the forward model for CW/FD DOT are proposed, with the required differential operators defined using nonlocal vector calculus. Extensive numerical experiments on simulated and realistic experimental data validate that the proposed forward models accurately model light propagation in the medium and are quantitatively comparable with both analytical and FEM forward models. In addition, the graph-based approach is more computationally efficient and allows an identical implementation for geometries in any dimension.
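    The following sketch illustrates only the general graph-based discretization concept, assuming an arbitrary scattered point cloud and a Gaussian weight kernel: pairwise weights define a graph Laplacian that plays the role a FEM stiffness matrix plays on a mesh. The thesis's actual CW/FD forward models, built from a specific nonlocal vector calculus, are not reproduced.

```python
# Hedged sketch of the graph-based discretization idea on an unstructured point
# cloud: nonlocal difference operators are built from pairwise weights, yielding
# a graph Laplacian.  Point cloud, kernel and bandwidth are illustrative.
import numpy as np

rng = np.random.default_rng(4)
pts = rng.random((200, 2))                          # unstructured 2-D point cloud
eps = 0.05                                          # kernel bandwidth (assumed)

# Pairwise Gaussian weights w_ij = exp(-|x_i - x_j|^2 / eps), zero on the diagonal.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / eps)
np.fill_diagonal(W, 0.0)

L = np.diag(W.sum(axis=1)) - W                      # (unnormalized) graph Laplacian

u = np.sin(2 * np.pi * pts[:, 0])                   # a smooth test field on the points
lap_u = L @ u                                       # nonlocal "Laplacian" of u
energy = u @ L @ u                                  # equals 0.5 * sum_ij w_ij (u_i - u_j)^2
print("Dirichlet energy of u:", float(energy))
```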

    Pulmonary Image Segmentation and Registration Algorithms: Towards Regional Evaluation of Obstructive Lung Disease

    Pulmonary imaging, including pulmonary magnetic resonance imaging (MRI) and computed tomography (CT), provides a way to sensitively and regionally measure spatially heterogeneous lung structural-functional abnormalities. These unique imaging biomarkers offer the potential for better understanding pulmonary disease mechanisms, monitoring disease progression and response to therapy, and developing novel treatments for improved patient care. To generate these regional lung structure-function measurements and enable broad clinical applications of quantitative pulmonary MRI and CT biomarkers, accurate, reproducible and rapid lung segmentation and registration methods are required as a first step. In this regard, we first developed a 1H MRI lung segmentation algorithm that employs complementary hyperpolarized 3He MRI functional information for improved lung segmentation. The 1H-3He MRI joint segmentation algorithm was formulated as a coupled continuous min-cut model and solved through convex relaxation, for which a dual coupled continuous max-flow model was proposed and an efficient max-flow-based numerical solver was developed. Experimental results on a clinical dataset of 25 chronic obstructive pulmonary disease (COPD) patients ranging in disease severity demonstrated that the algorithm provided rapid lung segmentation with high accuracy, high reproducibility and diminished user interaction. We then developed a general 1H MRI left-right lung segmentation approach by exploiting a prior on the left-to-right lung volume proportion. The challenging volume-proportion-constrained multi-region segmentation problem was approximated through convex relaxation and equivalently represented by a max-flow model with bounded flow conservation conditions. This gave rise to a multiplier-based, high-performance numerical implementation grounded in convex optimization theory. In 20 patients with mild-to-moderate and severe asthma, the approach demonstrated high agreement with manual segmentation, excellent reproducibility and computational efficiency. Finally, we developed a CT-3He MRI deformable registration approach coupled with the complementary CT-1H MRI registration. The joint registration problem was solved using optical-flow techniques, primal-dual analyses and convex optimization theory. In a diverse group of patients with asthma and COPD, the registration approach demonstrated lower target registration error than single registration and provided fast regional lung structure-function measurements that were strongly correlated with a reference method. Collectively, these lung segmentation and registration algorithms demonstrated accuracy, reproducibility and workflow efficiency that may all be clinically acceptable, consistent with the need for broad and large-scale clinical applications of pulmonary MRI and CT.
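    To illustrate the convex-relaxation idea in its simplest form (a 1-D toy signal, two regions, no coupling between modalities and no volume-proportion constraint, so not the thesis's coupled max-flow solver), the sketch below minimises the relaxed min-cut energy with a basic primal-dual scheme; class means and the smoothness weight are assumed.

```python
# Hedged sketch of a convex-relaxed min-cut segmentation of a 1-D "image":
#   min_{u in [0,1]^n}  <u, C_fg - C_bg>  +  alpha * ||D u||_1,
# where C_fg/C_bg are pointwise costs of labelling a pixel foreground/background
# and D is the forward-difference operator.  Solved with a Chambolle-Pock-style
# primal-dual iteration.  Parameters are illustrative.
import numpy as np

img = np.array([0.2, 0.1, 0.15, 0.2, 0.8, 0.9, 0.85, 0.9, 0.8, 0.2, 0.1, 0.15])
mu_fg, mu_bg, alpha = 0.85, 0.15, 0.05       # assumed class means and smoothness weight

w = (img - mu_fg) ** 2 - (img - mu_bg) ** 2  # pointwise cost difference (fg minus bg)
n = img.size
D = np.diff(np.eye(n), axis=0)               # forward-difference operator, shape (n-1, n)

tau = sigma = 0.35                           # step sizes with tau * sigma * ||D||^2 < 1
u = np.full(n, 0.5); u_bar = u.copy(); p = np.zeros(n - 1)
for _ in range(300):
    p = np.clip(p + sigma * (D @ u_bar), -alpha, alpha)   # dual ascent + projection
    u_new = np.clip(u - tau * (D.T @ p + w), 0.0, 1.0)    # primal descent + box projection
    u_bar = 2 * u_new - u
    u = u_new

segmentation = (u > 0.5).astype(int)         # threshold the relaxed labelling
print(segmentation)                          # expect 0s, a block of 1s, then 0s
```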

    Variational and learning models for image and time series inverse problems

    Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge of the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem over a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks that preserve their main advantages. We develop several highly efficient methods based on both model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes outperform many existing methods in terms of both computational burden and solution quality. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme using a deep-learning-based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep-learning-based methods. We boost the performance of supervised deep learning strategies, such as trained convolutional and recurrent networks, and of unsupervised strategies, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
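    A minimal sketch of the Plug-and-Play principle referred to above, with a plain Gaussian filter standing in for the learned gradient-domain denoiser and a simple averaging blur as the forward operator (both are assumptions for illustration, not the thesis's scheme), is given below.

```python
# Hedged sketch of the Plug-and-Play idea: in a forward-backward iteration for
#   min_x  0.5 * ||y - A x||^2  +  "prior",
# the proximal / denoising step is replaced by an off-the-shelf denoiser.
# Here a Gaussian filter stands in for a learned denoiser, and a symmetric
# averaging blur (treated as self-adjoint) stands in for the forward operator.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

rng = np.random.default_rng(5)
x_true = np.zeros((64, 64)); x_true[20:44, 20:44] = 1.0       # toy piecewise-constant image

blur = lambda img: uniform_filter(img, size=5)                # stand-in forward operator A
y = blur(x_true) + 0.02 * rng.standard_normal(x_true.shape)   # blurred, noisy data

step = 1.0                                                    # safe: ||A|| <= 1 for averaging
x = y.copy()
for _ in range(50):
    grad = blur(blur(x) - y)                                  # approx. A^T (A x - y)
    x = gaussian_filter(x - step * grad, sigma=0.8)           # "prox" replaced by a denoiser

print("data misfit          :", np.linalg.norm(blur(x) - y))
print("error vs ground truth:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```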

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program both in overview form and in full detail, together with information on the social program, the venue, special meetings, and more.