
    Image Restoration Methods for Retinal Images: Denoising and Interpolation

    Retinal imaging provides an opportunity to detect pathological and natural age-related physiological changes in the interior of the eye. Diagnosis of retinal abnormality requires an image that is sharp, clear and free of noise and artifacts. However, to prevent tissue damage, retinal imaging instruments use low illumination radiation, which reduces the signal-to-noise ratio (SNR), i.e. increases the relative noise power. Furthermore, noise is inherent in some imaging techniques. For example, in Optical Coherence Tomography (OCT), speckle noise is produced by coherent interference of the unwanted backscattered light. Improving OCT image quality by reducing speckle noise increases the accuracy of analyses and hence the diagnostic sensitivity; the challenge, however, is to preserve image features while reducing speckle noise, since there is a clear trade-off between feature preservation and speckle reduction in OCT. Averaging multiple OCT images taken from a single position yields a high-SNR image, but drastically increases the scanning time.

    In this thesis, we develop a multi-frame image denoising method for Spectral Domain OCT (SD-OCT) images extracted from very close locations within an SD-OCT volume. The proposed denoising method was tested using two dictionaries: a nonlinear (NL) dictionary and a KSVD-based adaptive dictionary. The NL dictionary was constructed by adding phases, polynomial, exponential and boxcar functions to the conventional Discrete Cosine Transform (DCT) dictionary. The proposed method denoises nearby frames of the SD-OCT volume using a sparse representation method and combines them by selecting the median-intensity pixels from the denoised nearby frames. The results showed that both dictionaries reduced the speckle noise in the OCT images; however, the adaptive dictionary gave slightly better results at the cost of higher computational complexity. The NL dictionary was also used for fundus and OCT image reconstruction, and its performance was consistently better than that of other analytical dictionaries such as DCT and Haar.

    The adaptive dictionary involves a lengthy dictionary learning process and therefore cannot be used in practical settings. We addressed this problem by utilizing a low-rank approximation. In this approach, SD-OCT frames were divided into groups of noisy matrices consisting of non-local similar patches, and a noise-free patch matrix was obtained from each noisy patch matrix via a low-rank approximation. The noise-free patches from nearby frames were then averaged to enhance the denoising. The image denoised by the proposed approach was better than those obtained by several state-of-the-art methods. Finally, the approach was extended to jointly denoise and interpolate SD-OCT images; the results show that the joint denoising and interpolation method outperforms several existing state-of-the-art denoising methods combined with bicubic interpolation.
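
    Two of the building blocks mentioned above can be illustrated with a short sketch: per-pixel median fusion of co-registered nearby frames, and a low-rank estimate of a matrix of similar patches. The snippet below is a minimal illustration, not the thesis implementation; it assumes the similar patches are already stacked as columns of a matrix, uses a plain truncated SVD as the low-rank estimator (the thesis may use a different estimator and rank-selection rule), and the function names, fixed rank and toy data are all hypothetical.

    import numpy as np

    def low_rank_denoise_patch_matrix(Y, rank):
        """Suppress noise in a matrix of vectorized similar patches (one patch
        per column) by keeping only its leading singular components."""
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s[rank:] = 0.0                    # drop the noise-dominated components
        return (U * s) @ Vt               # low-rank reconstruction

    def combine_nearby_frames(denoised_frames):
        """Fuse denoised, co-registered nearby SD-OCT frames by taking the
        per-pixel median, as described in the abstract."""
        return np.median(np.stack(denoised_frames, axis=0), axis=0)

    # Toy usage: a nearly rank-1 "patch matrix" corrupted by additive noise.
    rng = np.random.default_rng(0)
    clean = np.outer(np.hanning(64), np.ones(64))
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    restored = low_rank_denoise_patch_matrix(noisy, rank=3)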

    Improving a new sparse-coding algorithm dedicated to SAR images with a coefficient of variation map

    In this paper, we propose a sparsity-based despeckling approach. The first main contribution of this work is the elaboration of a sparse-coding algorithm adapted to the statistics of SAR images. In most sparse-coding algorithms dedicated to SAR data, a logarithmic transform is applied to the data to turn the speckle, modeled as multiplicative noise, into additive noise, and a Gaussian prior is then used. However, using a prior better suited to SAR data avoids introducing artifacts, as shown in the obtained results. The second main contribution is to evaluate how computing a map predicting the sparsity degree of each patch can improve upon a traditional sparse-coding approach with a stopping criterion based on a low error rate.
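
    As a rough illustration of the second contribution, the sketch below computes a per-patch coefficient-of-variation map on a SAR amplitude image. The patch size, stride and normalization are illustrative assumptions rather than the paper's settings, and how the map would then set each patch's sparsity degree is only hinted at in the final comment.

    import numpy as np

    def coefficient_of_variation_map(image, patch=8, stride=4):
        """Per-patch coefficient of variation (std / mean) of a SAR amplitude
        image: homogeneous, speckle-only patches give a low, nearly constant
        value, while textured patches score higher."""
        h, w = image.shape
        rows = np.arange(0, h - patch + 1, stride)
        cols = np.arange(0, w - patch + 1, stride)
        cov = np.zeros((rows.size, cols.size))
        for i, r in enumerate(rows):
            for j, c in enumerate(cols):
                block = image[r:r + patch, c:c + patch]
                cov[i, j] = block.std() / (block.mean() + 1e-12)
        return cov

    # Such a map could steer the sparse coding, e.g. allowing more atoms (a
    # higher sparsity degree) for patches whose CoV exceeds the value expected
    # for fully developed speckle.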

    Nonlinear Adaptive Diffusion Models for Image Denoising

    Most digital image applications demand high image quality. Unfortunately, images are often degraded by noise during the formation, transmission, and recording processes; hence, image denoising is an essential processing step preceding visual and automated analysis. Image denoising methods can, however, reduce image contrast and create block or ring artifacts. In this dissertation, we develop high-performance nonlinear-diffusion-based image denoising methods capable of preserving edges and maintaining high visual quality. This is attained through several approaches. First, a nonlinear diffusion is presented with robust M-estimators as diffusivity functions. Second, the knowledge of textons derived from Local Binary Patterns (LBP), which unify divergent statistical and structural models of region analysis, is utilized to adjust the time step of the diffusion process. Next, the role of nonlinear diffusion that is adaptive to the local context in the wavelet domain is investigated, and the stationary wavelet context-based diffusion (SWCD) is developed for performing iterative shrinkage. Finally, we develop a locally- and feature-adaptive diffusion (LFAD) method, where each image patch/region is diffused individually and the diffusivity function is modified to incorporate the Inverse Difference Moment as a local estimate of the gradient. Experiments have been conducted to evaluate the performance of each of the developed methods and compare it to a reference group and to state-of-the-art methods.
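
    The first approach, robust M-estimators as diffusivity functions, can be sketched with an explicit Perona-Malik-style scheme that uses the Tukey biweight as the edge-stopping function. This is only a generic illustration under assumed parameters (sigma, dt, steps) and a periodic border, not the dissertation's implementation.

    import numpy as np

    def tukey_diffusivity(d, sigma):
        """Tukey biweight edge-stopping function: diffusion is switched off
        where |d| > sigma, which preserves strong edges."""
        g = np.zeros_like(d)
        inside = np.abs(d) <= sigma
        g[inside] = 0.5 * (1.0 - (d[inside] / sigma) ** 2) ** 2
        return g

    def nonlinear_diffusion(u, sigma=0.1, dt=0.2, steps=30):
        """Explicit nonlinear diffusion with a robust diffusivity; np.roll
        gives a periodic border purely for brevity."""
        u = u.astype(float).copy()
        for _ in range(steps):
            dn = np.roll(u, -1, axis=0) - u   # differences to the 4 neighbours
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            u += dt * (tukey_diffusivity(dn, sigma) * dn +
                       tukey_diffusivity(ds, sigma) * ds +
                       tukey_diffusivity(de, sigma) * de +
                       tukey_diffusivity(dw, sigma) * dw)
        return u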

    Modern GPR Target Recognition Methods

    Traditional GPR target recognition methods include pre-processing the data by removal of noisy signatures, dewowing (high-pass filtering to remove low-frequency noise), filtering, deconvolution, and migration (correction of the effect of survey geometry), and can rely on the simulation of GPR responses. These techniques usually suffer from loss of information, inability to adapt from prior results, and inefficient performance in the presence of strong clutter and noise. To address these challenges, several advanced processing methods have been developed over the past decade to enhance GPR target recognition. In this chapter, we provide an overview of these modern GPR processing techniques. In particular, we focus on the following methods: adaptive receive processing of range profiles depending on the target environment; adoption of learning-based methods so that the radar utilizes the results of prior measurements; application of methods that exploit the fact that the target scene is sparse in some domain or dictionary; application of advanced classification techniques; and convolutional coding, which provides succinct and representative features of the targets. We describe each of these techniques, or their combinations, through a representative application of landmine detection.
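
    To make the sparsity-based family of methods concrete, the sketch below recovers a sparse reflectivity profile from a range measurement with plain orthogonal matching pursuit. The dictionary, sizes and atom budget are illustrative assumptions and merely stand in for whichever domain or learned dictionary a given GPR method actually uses.

    import numpy as np

    def omp(D, y, n_atoms):
        """Orthogonal matching pursuit: greedily pick the dictionary atoms of D
        that best explain the measurement y, re-fitting the coefficients by
        least squares after every selection."""
        residual = y.copy()
        support = []
        coeffs = np.zeros(0)
        for _ in range(n_atoms):
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            sub = D[:, support]
            coeffs, *_ = np.linalg.lstsq(sub, y, rcond=None)
            residual = y - sub @ coeffs
        x = np.zeros(D.shape[1])
        x[support] = coeffs
        return x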