
    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter, based on the distributions of random single and mixed noise, that attenuates the effect of noise while preserving edge details. Only then can robust, integrated and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise and speckle noise. In the first step, methods are evaluated through an exhaustive review of the different types of denoising methods focused on impulse noise and Gaussian noise and their related denoising filters. These include spatial filters (linear, non-linear and combinations of them), transform-domain filters, neural-network-based filters, numerical-based filters, fuzzy-based filters, morphological filters, statistical filters, and supervised-learning-based filters. In the second step, the switching adaptive median and fixed weighted mean filter (SAMFWMF), a combination of linear and non-linear filters, is introduced in order to detect and remove impulse noise. A robust edge detection method is then applied, relying on an integrated process that includes non-maximum suppression, maximum sequence, thresholding and morphological operations. Results are obtained on MRI and natural images. In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) and total variation is introduced in order to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise, and a robust edge detection is again applied to track the true edges. Results are obtained on medical ultrasound and natural images. In the fourth step, a smoothing filter is introduced in the form of a deep feed-forward convolutional neural network (CNN), supported by a specific learning algorithm, L2 loss minimization, a regularization method, and batch normalization, all integrated in order to detect and remove impulse noise as well as mixed impulse and Gaussian noise; a robust edge detection is then applied to track the true edges. Results are obtained on natural images for both specific and non-specific noise levels.
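    The switching idea in the second step, detect impulse-corrupted pixels first and replace only those, can be illustrated with a short sketch. The following is a minimal toy version of a switching median/weighted-mean filter, not the dissertation's SAMFWMF: the detector threshold, window size, and kernel weights are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, median_filter

def switching_filter(img, impulse_thresh=40, window=3):
    """Toy switching filter: median-replace suspected impulse pixels and
    lightly mean-smooth the rest. The threshold, window, and kernel are
    illustrative guesses, not the dissertation's tuned SAMFWMF design."""
    img = img.astype(np.float64)
    med = median_filter(img, size=window)
    # Detector: a pixel far from its local median is a likely impulse.
    impulses = np.abs(img - med) > impulse_thresh
    # Fixed weighted mean (identity-heavy kernel) for the clean pixels.
    kernel = np.array([[1., 1., 1.], [1., 8., 1.], [1., 1., 1.]])
    smoothed = convolve(img, kernel / kernel.sum())
    return np.where(impulses, med, smoothed).astype(np.uint8)
```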

    RIBBONS: Rapid Inpainting Based on Browsing of Neighborhood Statistics

    Image inpainting refers to filling in missing regions of an image using neighboring pixels, and it has many applications across image processing tasks. Most of these applications enhance image quality by repairing significant unwanted changes or even the elimination of existing pixels. Such corrections involve considerable computational complexity, which in turn results in substantial processing time. In this paper we propose a fast inpainting algorithm called RIBBONS, based on the selection of patches around each missing pixel, which accelerates execution and enables online frame inpainting in video. The applied cost function combines statistical and spatial features of all neighboring pixels. Candidate patches are evaluated under this cost function, and minimizing it yields the final patch. Experimental results show the higher speed of RIBBONS in comparison with previous methods, while remaining comparable in terms of PSNR and SSIM on images from the MISC dataset.
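    A hedged sketch of the patch-selection step may help. The cost below, a statistical mismatch term (squared mean plus variance of the difference over known pixels) combined with a distance penalty of weight 0.1, is an assumption standing in for the paper's actual cost function, as are the patch radius and search range.

```python
import numpy as np

def fill_pixel(img, mask, y, x, r=2, search=8):
    """Fill one missing pixel (mask==True means missing) by copying the
    center of the best candidate patch nearby. Assumes (y, x) lies at
    least r + search pixels from the border; img is a float array."""
    ref = img[y - r:y + r + 1, x - r:x + r + 1]
    known = ~mask[y - r:y + r + 1, x - r:x + r + 1]
    if not known.any():                       # no context to match against
        return img[y, x]
    best_cost, best_val = np.inf, img[y, x]
    for cy in range(y - search, y + search + 1):
        for cx in range(x - search, x + search + 1):
            if mask[cy - r:cy + r + 1, cx - r:cx + r + 1].any():
                continue                      # candidate must be fully known
            cand = img[cy - r:cy + r + 1, cx - r:cx + r + 1]
            diff = (cand - ref)[known]        # compare only on known pixels
            stat = diff.mean() ** 2 + diff.var()          # statistical term
            cost = stat + 0.1 * np.hypot(cy - y, cx - x)  # + spatial term
            if cost < best_cost:
                best_cost, best_val = cost, cand[r, r]
    return best_val
```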

    Benchmarking the Robustness of Semantic Segmentation Models

    When designing a semantic segmentation module for a practical application, such as autonomous driving, it is crucial to understand the robustness of the module with respect to a wide range of image corruptions. While there are recent robustness studies for full-image classification, we are the first to present an exhaustive study for semantic segmentation, based on the state-of-the-art model DeepLabv3+. To increase the realism of our study, we utilize almost 400,000 images generated from Cityscapes, PASCAL VOC 2012, and ADE20K. Based on the benchmark study, we gain several new insights. Firstly, contrary to full-image classification, model robustness increases with model performance in most cases. Secondly, some architecture properties affect robustness significantly, such as a Dense Prediction Cell, which was designed to maximize performance on clean data only. (Comment: CVPR 2020 camera-ready.)
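    The benchmarking protocol such studies follow can be sketched briefly: corrupt each validation image at several severities, re-run the segmentation model, and report corrupted mIoU relative to clean mIoU. The sketch below uses a single Gaussian-noise corruption with made-up severity levels; the actual benchmark uses a much larger corruption suite, and `model` here stands for any callable mapping an image in [0, 1] to a label map.

```python
import numpy as np

def gaussian_noise(img, severity):
    """One synthetic corruption; the sigma schedule is illustrative."""
    sigma = [0.04, 0.08, 0.12, 0.16, 0.20][severity - 1]
    return np.clip(img + np.random.normal(0, sigma, img.shape), 0, 1)

def miou(pred, gt, n_classes):
    """Mean intersection-over-union, the usual segmentation metric."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))

def benchmark(model, images, labels, n_classes):
    """Relative robustness: corrupted mIoU / clean mIoU, per severity."""
    clean = np.mean([miou(model(im), gt, n_classes)
                     for im, gt in zip(images, labels)])
    return {s: np.mean([miou(model(gaussian_noise(im, s)), gt, n_classes)
                        for im, gt in zip(images, labels)]) / clean
            for s in range(1, 6)}
```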

    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can directly convolve with naturally blurred images for restoration. The problem of optical blurring is a common disadvantage to many imaging applications that suffer from optical imperfections. Despite numerous deconvolution methods that blindly estimate blurring in either inclusive or exclusive forms, they are practically challenging due to high computational cost and low image reconstruction quality. Both conditions of high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving. In such platforms, deblurring is required after image acquisition before being stored, previewed, or processed for high-level interpretation. Therefore, on-the-fly correction of such images is important to avoid possible time delays, mitigate computational expenses, and increase image perception quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to counteract the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for two PSF models, Gaussian and Laplacian, that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods. (Comment: 15 pages; for publication in IEEE Transactions on Image Processing.)
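    The core idea, sharpening as a linear combination of even-derivative FIR filters, follows from truncating the inverse-Gaussian frequency response exp(σ²|ω|²/2) ≈ 1 + (σ²/2)|ω|² + (σ⁴/8)|ω|⁴, whose spatial counterpart is δ − (σ²/2)∇² + (σ⁴/8)∇⁴. The sketch below builds that truncated kernel from discrete Laplacians; it is the textbook series with a known σ, not the paper's blind PSF estimation or its tuned coefficients.

```python
import numpy as np
from scipy.signal import convolve2d

def deblur_kernel(sigma):
    """Truncated inverse-Gaussian expansion as a single FIR kernel:
    delta - (sigma^2/2) * Laplacian + (sigma^4/8) * biharmonic."""
    lap = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
    delta = np.zeros((3, 3))
    delta[1, 1] = 1.0
    k3 = delta - (sigma**2 / 2) * lap            # 3x3 second-order term
    bih = convolve2d(lap, lap)                   # 5x5 discrete biharmonic
    return np.pad(k3, 1) + (sigma**4 / 8) * bih  # embed 3x3 term in 5x5

def deblur(img, sigma):
    """One-shot deblurring: a single convolution with the FIR kernel.
    sigma (Gaussian PSF std, in pixels) is assumed known here."""
    return convolve2d(img, deblur_kernel(sigma), mode="same", boundary="symm")
```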

    A Comprehensive Review of Image Restoration and Noise Reduction Techniques

    Images play a crucial role in modern life and find applications in diverse fields, ranging from preserving memories to conducting scientific research. However, images often suffer from various forms of degradation such as blur, noise, and contrast loss. These degradations make images difficult to interpret, reduce their visual quality, and limit their practical applications. To overcome these challenges, image restoration and noise reduction techniques have been developed to recover degraded images and enhance their quality. These techniques have gained significant importance in recent years, especially with the increasing use of digital imaging in fields such as medical imaging, surveillance, and satellite imaging. This paper presents a comprehensive review of image restoration and noise reduction techniques, encompassing spatial and frequency domain methods and deep learning-based techniques. The paper also discusses the evaluation metrics utilized to assess the effectiveness of these techniques and explores future research directions in this field. The primary objective of this paper is to offer a comprehensive understanding of the concepts and methods involved in image restoration and noise reduction.
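    Of the evaluation metrics such reviews discuss, PSNR is the simplest to state concretely: 10·log10(peak²/MSE) between a reference image and its restored version. A minimal implementation, assuming 8-bit images (hence peak = 255):

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    Higher is better; identical images give infinity."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```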