
    A Survey of Non-Linear Filtering Techniques For Image Noise Removal

    An image is captured, or acquired, by an image-capturing device such as a camera or scanner and is then stored in the mass storage of the computer system. In many of these applications, the presence of impulse noise in the acquired images is one of the most common problems. This noise is characterized by spots on the image and is usually introduced into the original image by errors in image sensors and data transmission. Nowadays, numerous methods are available to remove noise from digital images. Most novel methods comprise two stages: the first stage detects the noise within the image, and the second stage removes it. This paper explores various novel methods for the removal of noise from digital images. The distinctive feature of all the described filters is that they offer good line, edge and detail preservation while, at the same time, effectively removing noise from the input image. In later sections, we present a short introduction to various strategies for noise reduction in digital images
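The two-stage detect-then-remove pipeline described in this abstract can be illustrated with a minimal switching median filter. This is a sketch under stated assumptions, not any particular surveyed method: impulses are assumed fixed-valued (salt-and-pepper, i.e. grayscale extremes), and the function name, window size and thresholds are all illustrative.

```python
import numpy as np

def switching_median_filter(img, low=0, high=255, window=3):
    """Illustrative two-stage denoiser: detect suspected impulse pixels
    (values at the grayscale extremes), then replace only those pixels
    with the median of their local window; clean pixels are untouched."""
    img = img.astype(np.float64)
    noisy = (img == low) | (img == high)          # stage 1: detection
    out = img.copy()
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    for y, x in zip(*np.nonzero(noisy)):          # stage 2: removal
        patch = padded[y:y + window, x:x + window]
        out[y, x] = np.median(patch)
    return out.astype(np.uint8)
```

Because only detected pixels are modified, detail in noise-free regions is preserved by construction, which is the property the survey highlights.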

    Sorted Min-Max-Mean Filter for Removal of High Density Impulse Noise

    This paper presents an improved Sorted Min-Max-Mean Filter (SM3F) algorithm for the detection and removal of impulse noise from highly corrupted images. The method uses a single algorithm for both detection and removal of impulse noise. Corrupted pixels are identified as local intensity extrema in the grayscale range and are removed from the image by applying the SM3F operation. Uncorrupted pixels retain their values, while each corrupted pixel's value is replaced by the mean of the noise-free pixels within the selected window. Different images have been used to test the proposed method, and it yields better outcomes in terms of both quantitative measures and visual perception. For a quantitative study of algorithm performance, Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and Image Enhancement Factor (IEF) have been used. Experimental observations show that the presented technique effectively removes high-density impulse noise while preserving the original values of uncorrupted pixels. The performance of the proposed filter is tested by varying the noise density from 10% to 90%; at 90% noise density, a maximum PSNR value of 30.03 dB is achieved, indicating good performance of the SM3F algorithm even at this noise level. The proposed filter is simple and can be used on grayscale as well as color images for image restoration
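The detection-and-replacement rule described in this abstract, together with the PSNR metric used for evaluation, can be sketched as follows. This reproduces only the idea stated in the abstract (extreme-valued pixels replaced by the mean of noise-free window pixels); the published SM3F's exact sorting and selection rules are not reproduced, and all names here are illustrative.

```python
import numpy as np

def sm3f(img, window=3):
    """Sketch of the SM3F idea from the abstract: pixels at the grayscale
    extremes (0 or 255) are treated as impulses and replaced by the mean
    of the noise-free pixels in their window; others keep their values."""
    img = img.astype(np.float64)
    out = img.copy()
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            if img[y, x] in (0.0, 255.0):          # suspected impulse
                patch = padded[y:y + window, x:x + window].ravel()
                clean = patch[(patch > 0) & (patch < 255)]
                if clean.size:                      # mean of noise-free pixels
                    out[y, x] = clean.mean()
    return np.rint(out).astype(np.uint8)

def psnr(ref, test):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```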

    Recursive trimmed filter in eliminating high density impulse noise from digital image

    Advances in technology have made it easier to share media over the Internet. In the process of sharing, media may be corrupted by noise or interference, resulting in loss of information. In this paper, a new recursive method to remove salt-and-pepper noise from images is presented. The first stage recognizes the noise in the damaged image; the damaged pixels are then replaced by the mean of the surrounding window. The difference from other methods is the use of a recursive approach that aims to minimize the window size used in the recovery process
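One plausible reading of the recursive idea in this abstract can be sketched as below: damaged pixels are replaced by the mean of noise-free neighbours, and the filter works on the partially restored image itself, so a small 3x3 window usually suffices and grows only when it contains no clean pixels. These details are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def recursive_trimmed_mean(img):
    """Replace salt-and-pepper pixels (0 or 255) with the mean of the
    noise-free pixels in the smallest window that contains any, reusing
    already-restored values (the recursive aspect)."""
    out = img.astype(np.float64)
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            if out[y, x] in (0.0, 255.0):          # damaged pixel
                r = 1                               # start with the smallest window
                while True:
                    patch = out[max(0, y - r):y + r + 1,
                                max(0, x - r):x + r + 1].ravel()
                    clean = patch[(patch > 0) & (patch < 255)]
                    if clean.size:
                        out[y, x] = clean.mean()    # restored values get reused later
                        break
                    if r > max(h, w):               # give up on a fully corrupted image
                        break
                    r += 1                          # enlarge window only if necessary
    return np.rint(out).astype(np.uint8)
```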

    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter, based on random single and mixed noise distributions, that attenuates the effect of noise while preserving edge details. Only then can robust, integrated and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise and speckle noise. In the first step, methods are evaluated through an exhaustive review of the different types of denoising methods, focusing on impulse noise, Gaussian noise and their related denoising filters. These include spatial filters (linear, non-linear and combinations of them), transform-domain filters, neural-network-based filters, numerical filters, fuzzy filters, morphological filters, statistical filters, and supervised-learning-based filters. In the second step, the switching adaptive median and fixed weighted mean filter (SAMFWMF), a combination of linear and non-linear filters, is introduced to detect and remove impulse noise. A robust edge detection method is then applied, relying on an integrated process that includes non-maximum suppression, maximum sequence, thresholding and morphological operations. Results are obtained on MRI and natural images. In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) and total variation is introduced to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise. A robust edge detection is then applied to track the true edges. Results are obtained on medical ultrasound and natural images.
In the fourth step, a smoothing filter, a feed-forward convolutional neural network (CNN) with a deep architecture, is introduced, supported by a specific learning algorithm, l2 loss-function minimization, a regularization method, and batch normalization, all integrated to detect and remove impulse noise as well as mixed impulse and Gaussian noise. A robust edge detection is then applied to track the true edges. Results are obtained on natural images at both specific and non-specific noise levels
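The training objective in the fourth step (l2 loss minimization plus a regularization term) can be written out concretely. This is a minimal sketch of that objective only: the hyper-parameter `lam` is hypothetical, and the CNN architecture and batch normalization described in the abstract are omitted.

```python
import numpy as np

def l2_training_objective(pred, target, weights, lam=1e-4):
    """l2 reconstruction loss plus l2 weight regularization:
    mean((pred - target)^2) + lam * sum(||W||^2 for each weight tensor)."""
    data_term = np.mean((pred - target) ** 2)                 # l2 loss
    reg_term = lam * sum(np.sum(w ** 2) for w in weights)     # regularization
    return data_term + reg_term
```

Minimizing the first term fits the network's output to the clean image; the second term penalizes large weights, which is one common way to regularize a denoising network.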

    Adaptive Algorithms for Automated Processing of Document Images

    Large scale document digitization projects continue to motivate interesting document understanding technologies such as script and language identification, page classification, segmentation and enhancement. Typically, however, solutions are still limited to narrow domains or regular formats such as books, forms, articles or letters and operate best on clean documents scanned in a controlled environment. More general collections of heterogeneous documents challenge the basic assumptions of state-of-the-art technology regarding quality, script, content and layout. Our work explores the use of adaptive algorithms for the automated analysis of noisy and complex document collections. We first propose, implement and evaluate an adaptive clutter detection and removal technique for complex binary documents. Our distance transform based technique aims to remove irregular and independent unwanted foreground content while leaving text content untouched. The novelty of this approach is in its determination of best approximation to clutter-content boundary with text like structures. Second, we describe a page segmentation technique called Voronoi++ for complex layouts which builds upon the state-of-the-art method proposed by Kise [Kise1999]. Our approach does not assume structured text zones and is designed to handle multi-lingual text in both handwritten and printed form. Voronoi++ is a dynamically adaptive and contextually aware approach that considers components' separation features combined with Docstrum [O'Gorman1993] based angular and neighborhood features to form provisional zone hypotheses. These provisional zones are then verified based on the context built from local separation and high-level content features. Finally, our research proposes a generic model to segment and to recognize characters for any complex syllabic or non-syllabic script, using font-models. 
This concept is based on the fact that font files contain all the information necessary to render text, and thus provide a model for how to decompose it. Instead of script-specific routines, this work is a step towards a generic character segmentation and recognition scheme for both Latin and non-Latin scripts

    Image reconstruction under non-Gaussian noise


    Outlier robust corner-preserving methods for reconstructing noisy images

    The ability to remove a large amount of noise and the ability to preserve most structure are desirable properties of an image smoother. Unfortunately, they usually seem to be at odds with each other; one can only improve one property at the cost of the other. By combining M-smoothing and least-squares trimming, the TM-smoother is introduced as a means to unify corner-preserving properties and outlier robustness. To identify edge- and corner-preserving properties, a new theory based on differential geometry is developed. Further, robustness concepts are transferred to image processing. In two examples, the TM-smoother outperforms other corner-preserving smoothers. A software package containing both the TM- and the M-smoother can be downloaded from the Internet. Comment: Published at http://dx.doi.org/10.1214/009053606000001109 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)