41 research outputs found

    Undecimated Haar thresholding for Poisson intensity estimation


    The SURE-LET approach to image denoising

    Denoising is an essential step prior to any higher-level image-processing task such as segmentation or object tracking, because the undesirable corruption by noise is inherent to any physical acquisition device. When the measurements are performed by photosensors, one usually distinguishes between two main regimes: in the first scenario, the measured intensities are sufficiently high and the noise is assumed to be signal-independent; in the second scenario, only a few photons are detected, which leads to a strong signal-dependent degradation. When the noise is considered signal-independent, it is often modeled as an additive independent (typically Gaussian) random variable, whereas, otherwise, the measurements are commonly assumed to follow independent Poisson laws, whose underlying intensities are the unknown noise-free measures.

    We first consider the reduction of additive white Gaussian noise (AWGN). Contrary to most existing denoising algorithms, our approach does not require an explicit prior statistical model of the unknown data. Our driving principle is the minimization of a purely data-adaptive unbiased estimate of the mean-squared error (MSE) between the processed and the noise-free data. In the AWGN case, such an MSE estimate was first proposed by Stein and is known as "Stein's unbiased risk estimate" (SURE). We further develop the original SURE theory and propose a general methodology for fast and efficient multidimensional image denoising, which we call the SURE-LET approach. While SURE allows the quantitative monitoring of the denoising quality, the flexibility and low computational complexity of our approach are ensured by a linear parameterization of the denoising process, expressed as a linear expansion of thresholds (LET). We propose several pointwise, multivariate, and multichannel thresholding functions applied to arbitrary (in particular, redundant) linear transformations of the input data, with a special focus on multiscale signal representations.

    We then transpose the SURE-LET approach to the estimation of Poisson intensities degraded by AWGN. The signal-dependent nature of the Poisson statistics leads to the derivation of a new unbiased MSE estimate that we call "Poisson's unbiased risk estimate" (PURE), which requires more adaptive transform-domain thresholding rules. In a general PURE-LET framework, we first devise a fast interscale thresholding method restricted to the use of the (unnormalized) Haar wavelet transform. We then lift this restriction and show how the PURE-LET strategy can be used to design and optimize a wide class of nonlinear processing applied in an arbitrary (in particular, redundant) transform domain. We finally apply some of the proposed denoising algorithms to real multidimensional fluorescence microscopy images. Such an in vivo imaging modality often operates under low-illumination conditions and short exposure times; consequently, the random fluctuations of the measured fluorophore radiations are well described by a Poisson process degraded (or not) by AWGN. We experimentally validate this statistical measurement model, and we assess the performance of the PURE-LET algorithms in comparison with some state-of-the-art denoising methods. Our solution turns out to be very competitive both qualitatively and computationally, allowing for fast and efficient denoising of the huge volumes of data that are nowadays routinely produced in biomedical imaging.
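    As a concrete illustration of the SURE-LET principle summarized above, the minimal Python sketch below (my own, not the authors' code) denoises a set of AWGN-corrupted coefficients with a two-term linear expansion of thresholds; because SURE is quadratic in the LET weights, the optimal weights follow from a small linear system. The basis functions and the threshold scale T are illustrative assumptions.

        import numpy as np

        def sure_let_pointwise(y, sigma, T=None):
            """Denoise 1-D coefficients y corrupted by AWGN of std sigma with a
            two-term LET, F(y) = a1*y + a2*y*exp(-(y/T)^2), choosing the weights a
            that minimize Stein's unbiased risk estimate (SURE)."""
            N = y.size
            if T is None:
                T = 2.0 * sigma  # heuristic threshold scale (assumption)
            g = np.exp(-(y / T) ** 2)
            F = np.stack([y, y * g])                                      # basis responses, 2 x N
            dF = np.array([N, np.sum(g * (1.0 - 2.0 * (y / T) ** 2))])    # divergence of each basis
            # SURE(a) = ||F^T a - y||^2 - N*sigma^2 + 2*sigma^2 * sum_k a_k * div F_k
            M = F @ F.T                                                   # 2 x 2 Gram matrix
            c = F @ y - sigma ** 2 * dF
            a = np.linalg.solve(M, c)                                     # quadratic risk -> linear system
            return F.T @ a

        rng = np.random.default_rng(0)
        clean = np.concatenate([np.zeros(900), 5.0 * rng.standard_normal(100)])  # sparse toy coefficients
        sigma = 1.0
        noisy = clean + sigma * rng.standard_normal(clean.size)
        denoised = sure_let_pointwise(noisy, sigma)
        print("input MSE :", np.mean((noisy - clean) ** 2))
        print("output MSE:", np.mean((denoised - clean) ** 2))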

    Medical image denoising using convolutional denoising autoencoders

    Image denoising is an important pre-processing step in medical image analysis. Different algorithms have been proposed over the past three decades with varying denoising performance. More recently, deep-learning-based models have shown great promise, having outperformed all conventional methods. These methods are, however, limited by their requirement of large training sample sizes and high computational cost. In this paper we show that, even with a small sample size, denoising autoencoders constructed from convolutional layers can be used for efficient denoising of medical images. Heterogeneous images can be combined to boost the sample size for increased denoising performance. The simplest of networks can reconstruct images with corruption levels so high that noise and signal are not distinguishable to the human eye. Comment: To appear: 6 pages, paper to be published at the Fourth Workshop on Data Mining in Biomedical Informatics and Healthcare at ICDM, 201
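    For illustration, a small convolutional denoising autoencoder of the kind described above can be sketched as follows (PyTorch); the layer sizes, noise level, and toy training loop are my assumptions for a self-contained example, not the exact architecture or data used in the paper.

        import torch
        import torch.nn as nn

        class ConvDenoisingAE(nn.Module):
            """Tiny convolutional denoising autoencoder for 64x64 single-channel patches."""
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),    # 16 -> 32
                    nn.ConvTranspose2d(32, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        # toy training loop: the network sees noisy inputs and regresses the clean targets
        model = ConvDenoisingAE()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        clean = torch.rand(16, 1, 64, 64)                        # stand-in for image patches
        noisy = (clean + 0.2 * torch.randn_like(clean)).clamp(0, 1)
        for step in range(5):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(noisy), clean)   # reconstruction loss
            loss.backward()
            opt.step()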

    Multiresolution image models and estimation techniques


    Image Denoising in Mixed Poisson-Gaussian Noise

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images from low-count fluorescence microscopy.
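    The key ingredient above is an unbiased MSE estimate that can be evaluated from the noisy data alone. The sketch below is a Monte Carlo sanity check for the pointwise, unit-gain case (y = Poisson(x) + Gaussian noise): averaged over realizations, a PURE-style estimate tracks the true MSE without ever seeing the clean intensities. The shrinkage function and its parameters are my illustrative choices, not the paper's LET basis, and the paper's transform-domain estimate involves further terms and approximations.

        import numpy as np

        rng = np.random.default_rng(1)
        sigma, T = 1.0, 3.0
        x = rng.uniform(1.0, 20.0, size=5000)   # unknown Poisson intensities (ground truth)

        def theta(y):
            # smooth pointwise shrinkage toward zero for small values (illustrative)
            return y * (1.0 - np.exp(-(y / T) ** 2))

        def dtheta(y):
            g = np.exp(-(y / T) ** 2)
            return (1.0 - g) + y * (2.0 * y / T ** 2) * g

        def pure(y, sigma):
            """Unbiased MSE estimate for a pointwise estimator theta under
            y ~ Poisson(x) + N(0, sigma^2), using theta evaluated at y - 1."""
            ym = y - 1.0
            return np.mean(theta(y) ** 2
                           - 2.0 * (y * theta(ym) - sigma ** 2 * dtheta(ym))
                           + y ** 2 - y - sigma ** 2)

        mses, pures = [], []
        for _ in range(200):
            y = rng.poisson(x) + sigma * rng.standard_normal(x.size)
            mses.append(np.mean((theta(y) - x) ** 2))
            pures.append(pure(y, sigma))
        print("average true MSE:", np.mean(mses))
        print("average PURE    :", np.mean(pures))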

    Bilateral filter in image processing

    The bilateral filter is a nonlinear filter that performs spatial averaging without smoothing edges. It has been shown to be an effective image denoising technique, and it can also be applied to blocking-artifact reduction. An important issue in the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. Another research interest concerning the bilateral filter is accelerating its computation. There are three main contributions of this thesis. The first contribution is an empirical study of optimal bilateral filter parameter selection in image denoising. I propose an extension of the bilateral filter: the multiresolution bilateral filter, in which bilateral filtering is applied to the low-frequency sub-bands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. The second contribution is a spatially adaptive method to reduce compression artifacts. To avoid over-smoothing texture regions and to effectively eliminate blocking and ringing artifacts, texture regions and block-boundary discontinuities are first detected; these are then used to control/adapt the spatial and intensity parameters of the bilateral filter. The test results show that the adaptive method improves the quality of restored images significantly more than the standard bilateral filter does. The third contribution is an improvement of the fast bilateral filter, in which I use a combination of multiple windows to approximate the Gaussian filter more precisely.
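    For reference, a direct (unaccelerated) bilateral filter is short to write: each output pixel is a weighted average of its neighbors, the weight being a spatial Gaussian times a range (intensity) Gaussian. The parameter values below are placeholders; choosing them well is exactly the parameter-selection problem studied in the thesis.

        import numpy as np

        def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=5):
            """Brute-force bilateral filter for a 2-D grayscale image in [0, 1].
            Boundaries are handled by wrap-around (np.roll) for brevity."""
            out = np.zeros_like(img)
            norm = np.zeros_like(img)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    w_s = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))     # spatial weight
                    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)       # neighbor values
                    w_r = np.exp(-((shifted - img) ** 2) / (2.0 * sigma_r ** 2))  # range weight
                    out += w_s * w_r * shifted
                    norm += w_s * w_r
            return out / norm

        # usage: denoise a noisy step edge without blurring it
        img = np.zeros((64, 64)); img[:, 32:] = 1.0
        noisy = img + 0.1 * np.random.default_rng(2).standard_normal(img.shape)
        denoised = bilateral_filter(noisy)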

    Scale-aware decomposition of images based on patch-based filtering

    Ph.D. dissertation, Department of Electrical and Computer Engineering, Graduate School of Seoul National University, February 2015. Advisor: Nam-Ik Cho.

    This dissertation presents an image decomposition algorithm based on patch-based filtering, which splits an image into a structure layer and a texture layer. The decomposition enables many applications, because each layer can be processed separately and manipulated in a way appropriate to it. Generally, the structure layer captures coarse structure with large discontinuities, and the texture layer contains fine details and patterns. The decomposition is performed by edge-preserving smoothing: the structure layer is obtained by applying a smoothing filter to the image, and the texture layer by subtracting the filtered image from the original. The main contribution of this dissertation is the design of an efficient and effective edge-preserving filter that can be adapted to various scales of images. The advantage of the proposed scheme is that it is robust to noise and can be extended to the decomposition of noisy images, whereas conventional decomposition methods cannot handle noisy inputs and conventional denoising methods are not suited to decomposition.

    Specifically, a patch-based framework is proposed that is efficient for image denoising and is designed to smooth an image while preserving details and texture. Given a pixel, the filtering output is computed as a weighted average of neighboring pixels. To compute the weights, a set of similar patches is found at each pixel by considering patch similarities based on the mean squared error (MSE) and other constraints; weights between each patch and its similar patches are then computed. With these patch weights, all the pixels in a patch are updated at the same time while adapting to the local pixel weights. For better edge-preserving smoothing, the proposed algorithm runs two iterations of the same smoothing filter with different parameters, and the kernel bandwidth and the number of similar patches are tuned for multi-scale image decomposition.

    The proposed decomposition can be applied to many tasks, such as HDR tone mapping, detail enhancement, image denoising, and image coding. In detail enhancement, the proposed smoothing filter is used to extract image detail and enhance it. In HDR tone mapping, a standard framework is used in which the smoothing operator is replaced by the proposed one to compress the contrast range of a high-dynamic-range image for display on low-dynamic-range devices. For image denoising, a noisy input is decomposed into structure, texture, and noise layers; the noise layer is discarded while the texture layer is restored through histogram matching. A novel coding scheme, the "structure-scalable image coding scheme," is also proposed, in which the structure layer and a salient-texture layer are encoded for efficient image coding. Experimental results show that the proposed framework works well for image decomposition, is robust to the presence of noise, and can be used in these applications. In addition, by applying the proposed method to the decomposition of a noisy image, both denoising and enhancement can be achieved within the same framework, and the proposed coding method reduces compression artifacts and improves coding performance.

    Contents: Abstract; List of Figures; List of Tables; 1 Introduction (1.1 Image decomposition; 1.2 Image enhancement; 1.3 Image denoising: 1.3.1 Spatial denoising, 1.3.2 Transform-domain denoising, 1.3.3 Benefits of combined image decomposition and image denoising; 1.4 Summary); 2 Related work (2.1 Image decomposition: 2.1.1 Laplacian subbands; 2.2 Edge-preserving smoothing: 2.2.1 Bilateral filtering, 2.2.2 Nonlocal means filtering); 3 Scale-aware decomposition of images based on patch-based filtering (3.1 Edge-preserving smoothing via patch-based framework; 3.2 Multi-scale image decomposition); 4 Applications (4.1 Image enhancement: 4.1.1 Detail enhancement, 4.1.2 HDR tone mapping; 4.2 Image denoising: 4.2.1 Noisy image decomposition, 4.2.2 Texture enhancement via histogram preservation, 4.2.3 Image denoising via subband BLF, 4.2.4 Experimental results of image denoising; 4.3 Image coding: 4.3.1 Structure scalable image coding framework); 5 Conclusion; Bibliography.
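    The sketch below illustrates the general patch-based structure/texture split described above, using a simplified nonlocal-means-style smoother with MSE-based patch weights. It omits the dissertation's patch-wise updates, two-pass filtering, and multi-scale parameter tuning, so it should be read as a rough stand-in rather than the proposed algorithm itself.

        import numpy as np
        from scipy.signal import convolve2d

        def patch_smooth(img, patch=5, search=7, h=0.15):
            """Edge-preserving smoothing by patch-weighted averaging: each pixel becomes a
            weighted mean of pixels whose surrounding patches are similar (MSE-based weights).
            Returns the structure layer and the residual texture layer."""
            half = search // 2
            pad = patch // 2 + half
            padded = np.pad(img, pad, mode="reflect")
            H, W = img.shape
            box = np.ones((patch, patch)) / patch ** 2
            acc = np.zeros_like(img)
            norm = np.zeros_like(img)
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    neighbor = padded[pad + dy:pad + dy + H, pad + dx:pad + dx + W]
                    # patch-wise MSE approximated by box-filtering the squared difference
                    d2 = convolve2d((neighbor - img) ** 2, box, mode="same", boundary="symm")
                    w = np.exp(-d2 / h ** 2)
                    acc += w * neighbor
                    norm += w
            structure = acc / norm
            texture = img - structure        # fine details and texture left after smoothing
            return structure, texture

        # usage: split a noisy ramp image into its structure and texture layers
        rng = np.random.default_rng(3)
        img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1)) + 0.05 * rng.standard_normal((64, 64))
        structure, texture = patch_smooth(img)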

    Denoising of Natural Images Using the Wavelet Transform

    A new denoising algorithm based on the Haar wavelet transform is proposed. The methodology is based on an algorithm initially developed for image compression using the Tetrolet transform. The Tetrolet transform is an adaptive Haar wavelet transform whose supports are tetrominoes, that is, shapes made by connecting four equal-sized squares. The proposed algorithm improves denoising performance, measured in peak signal-to-noise ratio (PSNR), by 1-2.5 dB over the Haar wavelet transform for images corrupted by additive white Gaussian noise (AWGN) under universal hard thresholding. The algorithm is local and works independently on each 4x4 block of the image. It performs equally well when compared with other published Haar wavelet transform-based methods, achieving up to 2 dB better PSNR. The local nature of the algorithm and the simplicity of the Haar wavelet transform computations make the proposed algorithm well suited for efficient hardware implementation.
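    For comparison, the Haar-domain baseline referred to above (universal hard thresholding of the detail coefficients) can be written compactly. The sketch uses a single decomposition level and is not the Tetrolet-based algorithm itself.

        import numpy as np

        def haar2d(x):
            """One level of the orthonormal 2-D Haar transform (even-sized image)."""
            lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row-wise average
            hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # row-wise difference
            ll = (lo[0::2] + lo[1::2]) / np.sqrt(2); lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
            hl = (hi[0::2] + hi[1::2]) / np.sqrt(2); hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
            return ll, lh, hl, hh

        def ihaar2d(ll, lh, hl, hh):
            """Inverse of haar2d."""
            lo = np.zeros((2 * ll.shape[0], ll.shape[1])); hi = np.zeros_like(lo)
            lo[0::2], lo[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
            hi[0::2], hi[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
            x = np.zeros((lo.shape[0], 2 * lo.shape[1]))
            x[:, 0::2], x[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
            return x

        def haar_universal_hard(noisy, sigma):
            """Keep only detail coefficients above the universal threshold
            T = sigma * sqrt(2 * log N); the approximation band is left untouched."""
            T = sigma * np.sqrt(2.0 * np.log(noisy.size))
            ll, lh, hl, hh = haar2d(noisy)
            lh, hl, hh = [np.where(np.abs(c) > T, c, 0.0) for c in (lh, hl, hh)]
            return ihaar2d(ll, lh, hl, hh)

        # usage on a noisy synthetic image
        rng = np.random.default_rng(4)
        clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
        noisy = clean + 0.1 * rng.standard_normal(clean.shape)
        denoised = haar_universal_hard(noisy, sigma=0.1)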