118 research outputs found

    Total Variation Restoration of Images Corrupted by Poisson Noise with Iterated Conditional Expectations

    Interpreting the celebrated Rudin-Osher-Fatemi (ROF) model in a Bayesian framework has led to interesting new variants of Total Variation image denoising over the last decade. The Posterior Mean variant avoids the so-called staircasing artifact of the ROF model but is computationally very expensive. Another recent variant, called TV-ICE (for Iterated Conditional Expectation), delivers very similar images but uses a much faster fixed-point algorithm. In the present work, we consider the TV-ICE approach in the case of a Poisson noise model. We derive an explicit form of the recursion operator and show linear convergence of the algorithm, as well as the absence of a staircasing effect. We also provide a numerical algorithm that carefully handles precision and numerical overflow issues, and we present experiments that illustrate the interest of this Poisson TV-ICE variant.
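For context, a standard formulation (not quoted from the abstract): with a TV prior and Poisson-corrupted data v, the MAP approach minimizes the energy

```latex
E(u) \;=\; \sum_{i} \big( u_i - v_i \log u_i \big) \;+\; \lambda \, \mathrm{TV}(u), \qquad \lambda > 0,
```

whereas an iterated-conditional-expectation scheme instead iterates the conditional posterior means of each pixel given its current neighbors,

```latex
u_i^{\,n+1} \;=\; \mathbb{E}\big[\, U_i \;\big|\; U_j = u_j^{\,n} \ \text{for } j \neq i \,\big],
```

a fixed-point iteration that avoids both the staircasing of the MAP solution and the cost of sampling the full posterior.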

    Bregman Cost for Non-Gaussian Noise

    One of the tasks of the Bayesian inverse problem is to find a good estimate based on the posterior probability density. The most common point estimators are the conditional mean (CM) and maximum a posteriori (MAP) estimates, which correspond to the mean and the mode of the posterior, respectively. From a theoretical point of view it has been argued that the MAP estimate is only in an asymptotic sense a Bayes estimator for the uniform cost function, while the CM estimate is a Bayes estimator for the mean squared cost function. Recently, it has been proven that the MAP estimate is a proper Bayes estimator for the Bregman cost if the image is corrupted by Gaussian noise. In this work we extend this result to other noise models with log-concave likelihood density by introducing two related Bregman cost functions for which the CM and MAP estimates are proper Bayes estimators. Moreover, we also prove that the CM estimate outperforms the MAP estimate when the error is measured in a certain Bregman distance, a result previously unknown even in the case of additive Gaussian noise.
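As background, the standard definitions involved (not quoted from the paper): for a convex, differentiable functional Φ, the Bregman distance between u and v is

```latex
D_{\Phi}(u, v) \;=\; \Phi(u) - \Phi(v) - \langle \nabla \Phi(v),\, u - v \rangle ,
```

and a Bayes estimator for a cost function c is any minimizer of the expected posterior cost,

```latex
\hat{u} \;\in\; \arg\min_{u} \; \mathbb{E}\big[\, c(u, U) \mid \text{data} \,\big].
```

The result above states that suitable Bregman-type choices of c make both the MAP and the CM estimates minimizers of this expectation.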

    Image reconstruction under non-Gaussian noise


    Multiresolution image models and estimation techniques


    4-D Tomographic Inference: Application to SPECT and MR-driven PET

    Emission tomographic imaging is framed in a Bayesian and information-theoretic framework. The first part of the thesis is inspired by the new possibilities offered by PET-MR systems, formulating models and algorithms for 4-D tomography and for the integration of information from multiple imaging modalities. The second part of the thesis extends the models described in the first part, focusing on the imaging hardware. Three key aspects of the design of new imaging systems are investigated: criteria and efficient algorithms for the optimisation and real-time adaptation of the parameters of the imaging hardware; learning the characteristics of the imaging hardware; and exploiting the rich information provided by depth-of-interaction (DOI) and energy-resolving devices. The document concludes with a description of the NiftyRec software toolkit, developed to enable 4-D multi-modal tomographic inference.
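The abstract does not spell out a reconstruction algorithm; the standard baseline for emission tomography under the Poisson likelihood it refers to is the multiplicative MLEM iteration. A minimal sketch on a toy system matrix (all names and sizes here are illustrative, not taken from the thesis):

```python
import numpy as np

def mlem(A, y, n_iter=500, eps=1e-12):
    """Maximum-Likelihood Expectation-Maximization for emission tomography.

    A : (m, n) system matrix mapping image voxels to detector bins
    y : (m,) measured Poisson counts
    Returns a non-negative image estimate x maximizing the Poisson likelihood.
    """
    x = np.ones(A.shape[1])               # flat, strictly positive start
    sens = A.sum(axis=0) + eps            # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x + eps                # forward projection
        x = x / sens * (A.T @ (y / proj)) # multiplicative MLEM update
    return x

# Toy example: 2 detector bins, 3 voxels, noiseless counts for illustration
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y)
```

The multiplicative form guarantees non-negativity at every iteration, which is why MLEM (and its 4-D extensions) is the usual starting point for Bayesian variants.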

    Fast and accurate evaluation of a generalized incomplete gamma function

    We present a computational procedure to evaluate the integral ∫_x^y s^(p-1) e^(-μs) ds for 0 ≤ x < y ≤ +∞ and p > 0, which generalizes the lower (x = 0) and upper (y = +∞) incomplete gamma functions. To allow for large values of x, y, and p while avoiding under/overflow issues in standard double-precision floating-point arithmetic, we use an explicit normalization that is much more efficient than the classical ratio with the complete gamma function. The generalized incomplete gamma function is estimated with continued fractions, integrations by parts, or, when x ≈ y, with the Romberg numerical integration algorithm. We show that the accuracy reached by our algorithm improves on a recent state-of-the-art method by two orders of magnitude, and is essentially optimal considering the limitations imposed by floating-point arithmetic. Moreover, the admissible parameter range of our algorithm (0 ≤ p, x, y ≤ 10^15) is much larger than that of competing algorithms, and its robustness is assessed through massive usage in an image processing application.
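The paper's normalization scheme is not reproduced in the abstract. For moderate parameters, the same quantity in the μ = 1 case can be sketched with the classical power series for the lower incomplete gamma function (this is a simple baseline, not the paper's algorithm, and it will under/overflow for the extreme parameter ranges the paper targets):

```python
import math

def lower_inc_gamma(p, x, tol=1e-15, max_terms=500):
    """Lower incomplete gamma function gamma(p, x) via the power series
    gamma(p, x) = x**p * exp(-x) * sum_{k>=0} x**k / (p (p+1) ... (p+k)).
    Suitable for moderate x; large x would need the continued-fraction form."""
    if x == 0.0:
        return 0.0
    term = 1.0 / p
    total = term
    for k in range(1, max_terms):
        term *= x / (p + k)
        total += term
        if abs(term) < tol * abs(total):
            break
    return math.exp(p * math.log(x) - x) * total

def gen_incomplete_gamma(p, x, y):
    """Integral of s**(p-1) * exp(-s) over [x, y]  (the mu = 1 case)."""
    return lower_inc_gamma(p, y) - lower_inc_gamma(p, x)
```

For integer p this can be checked against the closed form, e.g. gamma(3, x) = 2 − e^(−x)(x² + 2x + 2). Computing the difference of two lower incomplete gammas directly, as done here, loses precision when x ≈ y, which is exactly why the paper switches to Romberg integration in that regime.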

    Bayesian edge-detection in image processing

    Problems associated with the processing and statistical analysis of image data are the subject of much current interest, and many sophisticated techniques for extracting semantic content from degraded or corrupted images have been developed. However, such techniques often require considerable computational resources and are thus inappropriate in certain applications. The detection of localised discontinuities, or edges, in an image can be regarded as a pre-processing operation for these sophisticated techniques which, if implemented efficiently and successfully, can provide a means for an exploratory analysis that is useful in two ways. First, such an analysis can be used to obtain quantitative information about the underlying structures from which the various regions in the image are derived, structures about which we would generally be a priori ignorant. Secondly, in cases where the inference problem relates to discovering the unknown location or dimensions of a particular region or object, or where we merely wish to infer the presence or absence of structures having a particular configuration, an accurate edge-detection analysis can circumvent the need for the subsequent sophisticated analysis. Relatively little interest has been focussed on the edge-detection problem within a statistical setting. In this thesis, we formulate the edge-detection problem in a formal statistical framework and develop a simple, easily implemented technique for the analysis of images derived from two-region single-edge scenes. We extend this technique in three ways: first, to allow the analysis of more complicated scenes; secondly, by incorporating spatial considerations; and thirdly, by considering images of various qualitative natures. We also study edge reconstruction and representation given the results obtained from the exploratory analysis, and a cognitive problem relating to the detection of objects modelled by members of a class of simple convex objects.
    Finally, we study in detail aspects of one of the sophisticated image analysis techniques, and the important general statistical applications of the theory on which it is founded.
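The flavor of a statistical treatment of a two-region single-edge scene can be sketched in one dimension: with Gaussian noise and a uniform prior over edge locations, the (profile) log-posterior of a candidate edge is, up to a constant, minus the residual sum of squares of the two segment fits. This is a minimal illustration, not the thesis's technique:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-region, single-edge 1-D "scene": the mean jumps from 0 to 2 at index 60
n, edge_true = 100, 60
signal = np.where(np.arange(n) < edge_true, 0.0, 2.0)
data = signal + rng.normal(scale=0.5, size=n)

sigma2 = 0.25                        # assumed known noise variance
log_post = np.full(n, -np.inf)
for t in range(5, n - 5):            # candidate edge locations, uniform prior
    left, right = data[:t], data[t:]
    rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
    log_post[t] = -rss / (2 * sigma2)

edge_map = int(np.argmax(log_post))  # MAP estimate of the edge location
```

Normalizing `np.exp(log_post)` would give a full posterior over edge positions, which is what allows the uncertainty statements that a purely deterministic edge detector cannot make.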

    Gaussian Mixture Model based Spatial Information Concept for Image Segmentation

    Segmentation of images has found widespread application in image recognition systems. Over the last two decades, there has been growing research interest in model-based techniques. Among these, the standard Gaussian mixture model (GMM) is a well-known method for image segmentation. The model assumes a common prior distribution that generates the pixel labels independently, and the spatial relationship between neighboring pixels is not taken into account by the standard GMM. For this reason, its segmentation results are sensitive to noise. To reduce this sensitivity, Markov Random Field (MRF) models provide a powerful way to account for spatial dependencies between image pixels; however, their main drawback is that they are computationally expensive to implement. Based on these considerations, in the first part of this thesis (Chapter 4) we propose an extension of the standard GMM for image segmentation that uses a novel approach to incorporate the spatial relationships between neighboring pixels into the standard GMM. The proposed model is easy to implement and, compared with existing MRF models, requires a smaller number of parameters. We also propose a new method to estimate the model parameters, based on the gradient method, that minimizes an upper bound on the negative log-likelihood of the data. Experimental results obtained on noisy synthetic and real-world grayscale images demonstrate the robustness, accuracy, and effectiveness of the proposed model in image segmentation. In the final part of this thesis (Chapter 5), another way to incorporate spatial information between neighboring pixels into the GMM, based on MRF, is proposed. In comparison to other mixture models that are complex and computationally expensive, the proposed method is robust and fast to implement.
    In mixture models based on MRF, the M-step of the EM algorithm cannot be applied directly to the prior distribution to maximize the log-likelihood with respect to the corresponding parameters. Compared with these models, our proposed method applies the EM algorithm directly to optimize the parameters, which makes it much simpler. Finally, our approach is used to segment many images, with excellent results.
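The spatially blind baseline that these chapters extend is the standard GMM fitted by EM on pixel intensities alone. A minimal 1-D sketch (illustrative only; the thesis's spatial variants modify the prior term that is uniform here):

```python
import numpy as np

def gmm_em_1d(x, k=2, n_iter=100):
    """EM for a 1-D Gaussian mixture: the standard, spatially blind GMM.
    Returns mixture weights, means, variances, and responsibilities."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))  # spread-out initial means
    var = np.full(k, x.var())
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(label j | x_i)
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var)
                + np.log(w))
        logp -= logp.max(axis=1, keepdims=True)      # stabilize before exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights, means, variances
        nk = r.sum(axis=0) + 1e-12
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return w, mu, var, r

# Toy "image": two intensity classes corrupted by noise
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.2, 0.05, 500),
                         rng.normal(0.8, 0.05, 500)])
w, mu, var, r = gmm_em_1d(pixels)
labels = r.argmax(axis=1)            # hard segmentation of each pixel
```

Because the prior weights `w` are shared by all pixels regardless of position, a single noisy pixel inside a region can flip its label, which is exactly the sensitivity to noise that spatial priors are introduced to suppress.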