
    New Wavelet Domain Wiener Filter Based Denoising for Poisson Noise Removal in Low-Light Condition Digital Image (OTSU WIE-WATH)

    Digital imaging was developed as early as the 1960s, largely to avoid the operational weaknesses of film cameras in scientific and military missions. As digital technology later became cheaper, digital images became commonplace and can now be captured simply with the camera embedded in a smartphone. Nevertheless, because of the limitations of low-cost camera technologies, digital images are easily corrupted by various types of noise, such as salt-and-pepper noise, Gaussian noise, and Poisson noise. For digital images captured in photon-limited low-light conditions, the effect of image noise, especially Poisson noise, is more pronounced and degrades image quality. Thus, this study aims to develop a new denoising technique for Poisson noise removal in low-light-condition digital images. The proposed method, referred to as the OTSU WIE-WATH Filter, utilizes the Otsu Threshold, Wiener Filter, and Wavelet Threshold, and is designed for removal of low and medium levels of Poisson noise. Its performance is compared with that of existing denoising techniques using two evaluation approaches: an objective method and a subjective method. The objective method analyzes performance in terms of Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE); the subjective method is visual inspection. The results show that the proposed OTSU WIE-WATH Filter outperforms the compared denoising techniques at low and medium levels of Poisson noise while preserving the edges and fine details of noisy images.
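    The two objective metrics used in this and the following studies can be stated compactly. Below is a minimal numpy sketch (function names and the `peak` parameter are my own, not from the paper) that corrupts an image with photon-count-scaled Poisson noise and evaluates MSE and PSNR.

```python
import numpy as np

def add_poisson_noise(img, peak=30.0, rng=None):
    """Simulate photon-limited capture: scale intensities in [0, 1] to
    `peak` expected photons per pixel, draw Poisson counts, rescale."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.poisson(img * peak) / peak

def mse(ref, est):
    """Mean Squared Error between a reference and an estimate."""
    return float(np.mean((np.asarray(ref) - np.asarray(est)) ** 2))

def psnr(ref, est, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB (higher is better)."""
    return float(10.0 * np.log10(max_val ** 2 / mse(ref, est)))

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = add_poisson_noise(clean, peak=30.0, rng=rng)  # lower peak -> noisier
```

    Note that Poisson noise is signal-dependent: its variance at a pixel grows with the pixel's intensity, which is why it dominates in the photon-limited low-light regime described above.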

    OTSUHARA-WATH Filter for Poisson Noise Removal in Low Light Condition Digital Image

    Digital images are used widely nowadays due to the development of sophisticated technologies. The recent device most popular among users of digital images is the smartphone, since smartphones are embedded with cameras that can capture digital images. Nevertheless, digital images are easily exposed to various types of noise, especially Poisson noise in low-light conditions. Therefore, this study aims to develop a new denoising technique for Poisson noise removal in low-light-condition digital images. This study proposes a denoising method named the OTSUHARA-WATH Filter, which utilizes the Otsu Threshold, Kuwahara Filter, and Wavelet Threshold. The proposed method's performance is evaluated based on Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), and visual inspection, and is compared with that of existing denoising methods. From the results of PSNR, MSE, computational time, and visual inspection, the OTSUHARA-WATH Filter is able to reduce and smooth noise while preserving the edges and fine details of the image at low and medium levels of Poisson noise, in comparison to the existing methods.
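    Of the three components named above, the Kuwahara filter is the edge-preserving smoothing step. The following is an independent numpy sketch of the classic Kuwahara filter only (not the authors' full OTSUHARA-WATH pipeline): each output pixel takes the mean of whichever of four overlapping quadrant windows around it has the smallest variance, so averaging never straddles an edge.

```python
import numpy as np

def kuwahara(img, radius=2):
    """Edge-preserving Kuwahara smoothing: for every pixel, examine the
    four overlapping (radius+1)x(radius+1) quadrants that meet at it
    and output the mean of the quadrant with the smallest variance."""
    r = radius
    pad = np.pad(img, r, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + r, j + r  # centre in padded coordinates
            quads = [
                pad[ci - r:ci + 1, cj - r:cj + 1],  # top-left
                pad[ci - r:ci + 1, cj:cj + r + 1],  # top-right
                pad[ci:ci + r + 1, cj - r:cj + 1],  # bottom-left
                pad[ci:ci + r + 1, cj:cj + r + 1],  # bottom-right
            ]
            best = min(quads, key=lambda q: q.var())
            out[i, j] = best.mean()
    return out

# A clean step edge passes through unchanged: on each side of the edge
# there is always a quadrant of zero variance.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
smooth = kuwahara(img, radius=2)
```

    The quadrant-selection rule is what distinguishes this from a plain box or Gaussian blur: noise inside flat regions is averaged away, but pixels on opposite sides of an edge never mix.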

    Adaptive Smoothing of Digital Images: The R Package adimpro

    Digital imaging has become omnipresent in the past years, with applications ranging from medical imaging to photography. When pushing the limits of resolution and sensitivity, noise has always been a major issue; however, commonly used non-adaptive filters reduce noise only at the cost of a reduced effective spatial resolution. Here we present adimpro, a new package for R that implements the propagation-separation approach of Polzehl and Spokoiny (2006) for smoothing digital images. This method naturally adapts to structures of different sizes in the image and thus avoids oversmoothing edges and fine structures. We extend the method to imaging data with spatial correlation and show how the estimation of the dependence between variance and mean value can be included. We illustrate the use of the package through some examples.
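    The adimpro package implements the full propagation-separation algorithm in R. As a rough illustration of the underlying idea only (not the package's actual algorithm, which among other things grows the bandwidth over iterations), here is a toy fixed-bandwidth 1-D sketch in Python; all names and parameter values are my own.

```python
import numpy as np

def aws_step(est, var, h=2.0, lam=4.0):
    """One fixed-bandwidth iteration of adaptive-weights smoothing on a
    1-D signal: each point averages its neighbours with weights that
    combine a Gaussian location kernel (width h) and a statistical
    penalty suppressing neighbours whose current estimates differ by
    more than the noise variance `var` allows."""
    n = len(est)
    idx = np.arange(n)
    out = np.empty(n)
    for i in range(n):
        loc = np.exp(-((idx - i) / h) ** 2)                 # spatial kernel
        stat = np.exp(-((est - est[i]) ** 2) / (lam * var)) # adaptivity
        w = loc * stat
        out[i] = np.sum(w * est) / np.sum(w)
    return out

# Noisy step function: flat parts get smoothed, the jump survives,
# because the statistical penalty kills weights across the edge.
rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(50), np.ones(50)])
noisy = truth + 0.1 * rng.standard_normal(100)
est = noisy.copy()
for _ in range(3):
    est = aws_step(est, var=0.01)
```

    This captures the adaptivity claim in the abstract: unlike a non-adaptive kernel smoother, the weights depend on the current estimates themselves, so smoothing propagates inside homogeneous regions and separates at structural boundaries.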

    Model for Estimation of Bounds in Digital Coding of Seabed Images

    This paper proposes a novel model for estimating bounds in digital coding of images. Entropy coding of images is exploited to measure the useful information content of the data. The bit rate achieved by reversible compression, derived with a rate-distortion theory approach, takes into account both the contribution of the observation noise and the intrinsic information of the hypothetical noise-free image. Assuming a Laplacian probability density function for the quantizer input signal, SQNR gains are calculated for an image predictive coding system with a non-adaptive quantizer, for white and correlated noise respectively. The proposed model is evaluated on seabed images; however, the model presented in this paper can be applied to any signal with a Laplacian distribution.
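    The SQNR gain of predictive coding over direct quantization can be illustrated numerically. The sketch below is my own toy setup, not the paper's model: it quantizes an AR(1) source with Laplacian innovations both directly and via DPCM, using a non-adaptive uniform quantizer with the same number of levels in both cases. For correlation coefficient rho, the gain approaches the prediction gain 10*log10(1/(1 - rho**2)), about 10 dB for rho = 0.95.

```python
import numpy as np

def quantize(x, sigma, levels=32):
    """Non-adaptive midtread uniform quantizer with 4-sigma loading."""
    step = 8.0 * sigma / levels
    return np.clip(np.round(x / step), -(levels // 2), levels // 2) * step

def sqnr_db(x, x_hat):
    """Signal-to-quantization-noise ratio in dB."""
    return 10.0 * np.log10(np.mean(x ** 2) / np.mean((x - x_hat) ** 2))

rng = np.random.default_rng(0)
n, rho = 20_000, 0.95
e = rng.laplace(scale=1.0, size=n)   # Laplacian innovations
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = rho * x[t - 1] + e[t]     # correlated AR(1) source

direct_sqnr = sqnr_db(x, quantize(x, x.std()))

# DPCM: predict from the previous *reconstructed* sample and quantize
# only the (much smaller, roughly Laplacian) prediction residual.
se = e.std()
x_hat = np.empty(n)
x_hat[0] = quantize(x[0], se)
for t in range(1, n):
    pred = rho * x_hat[t - 1]
    x_hat[t] = pred + quantize(x[t] - pred, se)

gain_db = sqnr_db(x, x_hat) - direct_sqnr
```

    The gain arises because the residual's variance is far smaller than the source's, so the same number of quantizer levels can use a much finer step.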

    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle the ill-posedness that is the crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite a certain level of progress, image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and spatially variant. This review provides a holistic understanding of and deep insight into image deblurring, with an analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions. Comment: 53 pages, 17 figures.
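    For the non-blind, spatially invariant case mentioned above, the classical baseline is Wiener deconvolution, which handles the ill-posed inverse by regularizing with an assumed noise-to-signal power ratio. A minimal numpy sketch, assuming a known kernel and circular boundary conditions (names and parameters are mine, not from the review):

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-4):
    """Non-blind Wiener deconvolution: multiply the blurred spectrum by
    H* / (|H|^2 + NSR). The assumed noise-to-signal ratio NSR keeps the
    inverse bounded at frequencies where the kernel response is weak."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + nsr) * G))

# Known 5x5 box blur applied circularly to a random test image.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
kernel = np.ones((5, 5)) / 25.0
H = np.fft.fft2(kernel, s=sharp.shape)
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(sharp)))
restored = wiener_deconvolve(blurred, kernel)
```

    The blind case reviewed in the paper is much harder precisely because H is unknown there and must be estimated jointly with the latent image.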