
    A higher-order MRF based variational model for multiplicative noise reduction

    The Fields of Experts (FoE) image prior model, a filter-based higher-order Markov Random Field (MRF) model, has been shown to be effective for many image restoration problems. Motivated by the successes of FoE-based approaches, in this letter we propose a novel variational model for multiplicative noise reduction based on the FoE image prior model. The resulting model corresponds to a non-convex minimization problem, which can be solved by a recently published non-convex optimization algorithm. Experimental results on synthetic speckle noise and real synthetic aperture radar (SAR) images suggest that the performance of our proposed method is on par with the best published despeckling algorithm. Moreover, our proposed model has the additional advantage that inference is extremely efficient: our GPU-based implementation takes less than 1 s to produce state-of-the-art despeckling results. Comment: 5 pages, 5 figures, to appear in IEEE Signal Processing Letters
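
    As a rough illustration of the kind of objective such a model minimizes, the sketch below combines a generic filter-based (FoE-style) regularizer with a data term commonly used for Gamma-distributed multiplicative noise; the filters k_i, the penalty rho and the weight lambda are placeholders, not the authors' trained model.

```latex
% Sketch of a generic FoE-regularized despeckling energy (assumed form, not
% the paper's exact model): f is the observed speckled image, u the image to
% recover, k_i learned linear filters, \rho a non-convex penalty, \lambda > 0.
\min_{u > 0} \; \sum_{i=1}^{N_f} \sum_{p} \rho\big( (k_i \ast u)_p \big)
  \;+\; \lambda \sum_{p} \Big( \log u_p + \frac{f_p}{u_p} \Big)
```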

    Speckle noise removal convex method using higher-order curvature variation


    Image Restoration

    This book represents a sample of recent contributions of researchers from all around the world in the field of image restoration. The book consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). Topics cover different aspects of the theory of image restoration, but this book is also an occasion to highlight some new topics of research related to the emergence of original imaging devices. From these arise some genuinely challenging problems related to image reconstruction/restoration, which open the way to new fundamental scientific questions closely related to the world we interact with.

    Removing multiplicative noise by Douglas-Rachford splitting methods

    Multiplicative noise appears in various image processing applications, e.g., in synthetic aperture radar (SAR), ultrasound imaging, or in connection with blur in electron microscopy, single photon emission computed tomography (SPECT) and positron emission tomography (PET). In this paper, we consider a variational restoration model consisting of the I-divergence as data-fitting term and the total variation semi-norm or nonlocal means as regularizer. Although the I-divergence is the typical data-fitting term when dealing with Poisson noise, we substantiate why it is also appropriate for removing Gamma noise. We propose to compute the minimizer of our restoration functional by applying Douglas-Rachford splitting techniques, or equivalently alternating split Bregman methods, combined with an efficient algorithm to solve the involved nonlinear systems of equations. We prove the Q-linear convergence of the latter algorithm. Finally, we demonstrate the performance of our whole scheme by numerical examples. It appears that the nonlocal means approach leads to very good qualitative results.
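
    For orientation, the sketch below shows a textbook Douglas-Rachford iteration for a sum of two terms, together with the closed-form proximal map of an I-divergence-type data term; the function names, step size and iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prox_idiv(x, gamma, f):
    # Proximal map of the data term sum_p (u_p - f_p * log u_p), which equals
    # the I-divergence up to constants in u (standard derivation, assumed
    # here): the minimizer is the positive root of
    # u^2 + (gamma - x) u - gamma * f = 0.
    return 0.5 * ((x - gamma) + np.sqrt((x - gamma) ** 2 + 4.0 * gamma * f))

def douglas_rachford(prox_data, prox_reg, z0, gamma=1.0, n_iter=200):
    # Generic Douglas-Rachford splitting for min_u D(u) + R(u):
    #   u^k     = prox_{gamma R}(z^k)
    #   v^k     = prox_{gamma D}(2 u^k - z^k)
    #   z^{k+1} = z^k + v^k - u^k
    # prox_reg would be the proximal map of the TV (or nonlocal-means based)
    # regularizer; it is left abstract in this sketch.
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(n_iter):
        u = prox_reg(z, gamma)
        v = prox_data(2.0 * u - z, gamma)
        z = z + v - u
    return prox_reg(z, gamma)
```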

    Low-Complexity Detection/Equalization in Large-Dimension MIMO-ISI Channels Using Graphical Models

    In this paper, we deal with low-complexity near-optimal detection/equalization in large-dimension multiple-input multiple-output inter-symbol interference (MIMO-ISI) channels using message passing on graphical models. A key contribution in the paper is the demonstration that near-optimal performance in MIMO-ISI channels with large dimensions can be achieved at low complexities through simple yet effective simplifications/approximations, although the graphical models that represent MIMO-ISI channels are fully/densely connected (loopy graphs). These include 1) use of a Markov Random Field (MRF) based graphical model with pairwise interaction, in conjunction with message/belief damping, and 2) use of a Factor Graph (FG) based graphical model with Gaussian approximation of interference (GAI). The per-symbol complexities are O(K^2 n_t^2) and O(K n_t) for the MRF and the FG-with-GAI approaches, respectively, where K and n_t denote the number of channel uses per frame and the number of transmit antennas, respectively. These low complexities are quite attractive for large dimensions, i.e., for large K n_t. From a performance perspective, these algorithms are even more interesting in large dimensions, since they achieve increasingly closer to optimum detection performance with increasing K n_t. Also, we show that these message passing algorithms can be used in an iterative manner with local neighborhood search algorithms to improve the reliability/performance of M-QAM symbol detection.
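
    The two simplifications named above can be sketched as follows; the damping factor, array shapes and variable names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def damp_messages(msg_new, msg_old, alpha=0.3):
    # Message/belief damping for loopy belief propagation on densely
    # connected graphs: take a convex combination of the freshly computed
    # message and the previous iterate (alpha is an assumed damping factor).
    return (1.0 - alpha) * msg_new + alpha * msg_old

def gai_effective_channel(H, y_k, k, i, sym_mean, sym_var, noise_var):
    # Gaussian approximation of interference (GAI), sketched: when detecting
    # symbol i from the observation y_k = sum_j H[k, j] x_j + n_k, all other
    # symbols plus noise are lumped into a single Gaussian, so symbol i sees
    # an effective scalar channel with the residual mean and variance below.
    mask = np.ones(H.shape[1], dtype=bool)
    mask[i] = False
    interf_mean = np.sum(H[k, mask] * sym_mean[mask])
    interf_var = np.sum(np.abs(H[k, mask]) ** 2 * sym_var[mask]) + noise_var
    return y_k - interf_mean, interf_var
```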

    Noise Estimation, Noise Reduction and Intensity Inhomogeneity Correction in MRI Images of the Brain

    Rician noise and intensity inhomogeneity are two common types of image degradation that manifest in magnetic resonance imaging (MRI) scans of the brain. Many noise reduction and intensity inhomogeneity correction algorithms are based on strong parametric assumptions. These parametric assumptions are generic and do not account for salient features that are unique to specific classes and different levels of degradation in natural images. This thesis proposes the 4-neighborhood clique system in a layer-structured Markov random field (MRF) model for noise estimation and noise reduction. When the test image is the only physical system under consideration, the model is regarded as a single-layer Markov random field (SLMRF); when the test image and classical priors are considered together, it becomes a double-layer MRF model. One widely acknowledged scientific principle states that segmentation trivializes the task of bias field correction; another states that the bias field distorts the intensity but not the spatial attributes of an image. This thesis exploits these two principles to propose a new model for correction of intensity inhomogeneity. The noise estimation algorithm is invariant to the presence or absence of background features in an image and more accurate in its estimation of noise levels because it is potentially immune to the modeling errors inherent in some current state-of-the-art algorithms. The noise reduction algorithm derived from the SLMRF model does not require a regularization parameter. Furthermore, it preserves edges, and its output is devoid of the blurring and ringing artifacts associated with Gaussian and wavelet-based algorithms. The procedure for correction of intensity inhomogeneity does not require the computationally intensive task of estimating the bias field map. Furthermore, there is no requirement for a digital brain atlas, which would introduce additional image processing tasks such as image registration.
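
    As background for the noise model and the clique system mentioned above, the sketch below simulates Rician noise on a magnitude image and evaluates a first-order (4-neighborhood) pairwise MRF energy; the quadratic potential and the parameters are illustrative assumptions, not the thesis formulation.

```python
import numpy as np

def add_rician_noise(img, sigma, seed=0):
    # Standard Rician noise model for magnitude MR images: independent
    # Gaussian noise is added to the real and imaginary channels and the
    # magnitude is taken.
    rng = np.random.default_rng(seed)
    real = img + rng.normal(0.0, sigma, img.shape)
    imag = rng.normal(0.0, sigma, img.shape)
    return np.sqrt(real ** 2 + imag ** 2)

def four_neighborhood_energy(u, beta=1.0):
    # Pairwise clique energy over the 4-neighborhood system: horizontal and
    # vertical neighbour differences with a quadratic potential (chosen here
    # purely for illustration).
    dh = u[:, 1:] - u[:, :-1]
    dv = u[1:, :] - u[:-1, :]
    return beta * (np.sum(dh ** 2) + np.sum(dv ** 2))
```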

    Generative Models for Preprocessing of Hospital Brain Scans

    In this thesis I present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis presents a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn them using backpropagation. I show that this model is robust to sequence and scanner variability. Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
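
    A minimal sketch of the last idea, a convolutional network acting as a learned spatial prior on top of a generative appearance model; the architecture, tensor layout and class counts are assumptions for illustration, not the thesis code.

```python
import torch
import torch.nn as nn

class CNNSpatialPrior(nn.Module):
    # Sketch: a small 3D CNN maps the current soft segmentation to per-class
    # potentials, which are added to the voxel-wise class log-likelihoods of a
    # generative (e.g. Gaussian mixture) appearance model before renormalising.
    # The network thus plays the role of an MRF-like prior over neighbouring
    # class labels and can be trained end-to-end by backpropagation.
    def __init__(self, n_classes, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_classes, width, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(width, n_classes, kernel_size=3, padding=1),
        )

    def forward(self, log_lik, soft_seg):
        # log_lik, soft_seg: tensors of shape (batch, n_classes, D, H, W)
        potentials = self.net(soft_seg)
        return torch.softmax(log_lik + potentials, dim=1)
```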