684 research outputs found

    Sorted Min-Max-Mean Filter for Removal of High Density Impulse Noise

    This paper presents an improved Sorted Min-Max-Mean Filter (SM3F) algorithm for the detection and removal of impulse noise from highly corrupted images. A single algorithm performs both detection and removal: corrupted pixels are identified as local intensity extrema of the grayscale range, and each is replaced by the mean of the noise-free pixels within the selected window, while uncorrupted pixels retain their values. The method has been tested on different images and yields better outcomes in terms of both quantitative measures and visual perception. Algorithm performance is quantified using Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and Image Enhancement Factor (IEF). Experimental observations show that the presented technique effectively removes high-density impulse noise while preserving the original pixel values. The filter was tested with noise densities varying from 10% to 90%; even at 90% noise density it achieved a maximum PSNR of 30.03 dB, indicating good performance of the SM3F algorithm at high noise levels. The proposed filter is simple and can be used for image restoration of both grayscale and color images.
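As a sketch of the idea described above (not the authors' code; the exact sorting step of SM3F is simplified here), an impulse pixel can be detected as a grayscale extremum and replaced by the mean of the noise-free pixels in its local window:

```python
import numpy as np

def sm3f_denoise(img, window=3):
    """Illustrative sketch of the SM3F idea: pixels at the grayscale
    extremes (0 or 255) are treated as impulse noise and replaced by
    the mean of the noise-free pixels inside the local window; all
    other pixels keep their original values."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    pad = window // 2
    padded = np.pad(img, pad, mode='edge')
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] in (0.0, 255.0):            # local-extrema detection
                block = padded[i:i + window, j:j + window]
                clean = block[(block > 0) & (block < 255)]
                if clean.size:                        # mean of noise-free pixels
                    out[i, j] = clean.mean()
    return out
```

At 90% noise density most window pixels are themselves corrupted, which is why the window contents must be screened before averaging rather than averaged blindly.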

    Computational scrutiny of image denoising method found on DBAMF under SPN surrounding

    The rank-ordered absolute difference (ROAD) statistic has traditionally shown strong discriminating power for deciding whether a pixel is salt-and-pepper noise (SPN) or noise-free, because its statistical characteristics suit the noise-detection objective. Building on ROAD, the decision-based adaptive median filter (DBAMF) was proposed in 2010 for eliminating impulsive noise. This report therefore examines the denoising capability of the DBAMF-based method under diverse SPN conditions. To evaluate the method's denoising capacity and its limitations, four original digital images (Airplane, Pepper, Girl and Lena) are used in computational simulations: each is first contaminated with SPN of varying intensity, and all contaminated images are then denoised with the DBAMF-based method. In addition, the resulting denoised images are compared with those produced by the standard median filter (SMF), the Gaussian filter and the adaptive median filter (AMF) to demonstrate the capability of DBAMF under a subjective measurement aspect.
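The ROAD statistic underlying DBAMF can be computed directly. The following is a minimal illustrative version; the 3x3 neighbourhood and the number m of retained differences follow common usage of ROAD, not necessarily this paper's exact settings:

```python
import numpy as np

def road(img, i, j, m=4):
    """Rank-Ordered Absolute Difference (ROAD) statistic: the sum of
    the m smallest absolute differences between pixel (i, j) and its
    eight neighbours. A small ROAD value means the pixel agrees with
    its neighbourhood (likely noise-free); a large value flags a
    likely impulse."""
    img = np.asarray(img, dtype=float)
    centre = img[i, j]
    diffs = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            diffs.append(abs(img[i + di, j + dj] - centre))
    return sum(sorted(diffs)[:m])
```

Keeping only the m smallest differences makes the statistic robust: neighbours that are themselves impulses contribute large differences and are discarded by the ranking.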

    Hybrid filtering technique to remove noise of high density from digital images

    Noise removal remains one of the greatest challenges for researchers, as noise-removal algorithms vary with the application area and with the types of images and noise. This work proposes a novel hybrid filter that uses a neural network to predict the best filter for every pixel and applies the chosen technique with a 3x3 mask operation. The proposed algorithm first trains the neural network on various filters, such as the mean, median, mode, geometric-mean and arithmetic-mean filters, and then uses it for noise removal. The proposed method is compared with existing techniques using the MAE, PSNR, MSE and IEF metrics. Experimental results show that the proposed method outperforms the MF, AMF and other existing noise-removal algorithms and improves the values of these parameters.
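The comparison metrics named above (MSE, PSNR, IEF) have standard definitions; a minimal sketch, assuming 8-bit images with a peak value of 255:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.mean((a - b) ** 2)

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(original, restored)
    return float('inf') if e == 0 else 10 * np.log10(peak ** 2 / e)

def ief(original, noisy, restored):
    """Image Enhancement Factor: ratio of the noisy error energy to the
    residual error energy after filtering; values above 1 mean the
    filter improved the image."""
    return mse(original, noisy) / mse(original, restored)
```

IEF is the natural headline metric for a hybrid filter, since it measures the improvement over the unfiltered noisy input rather than absolute fidelity alone.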

    Effect of cooking time on physical properties of almond milk-based lemak cili api gravy

    One of the crucial elements in developing or reformulating a product is maintaining its quality throughout its entire shelf life. This study aims to determine the effect of different cooking times on almond milk-based lemak cili api gravy. Cooking times of 5, 10, 15, 20, 25 and 30 minutes were applied to the gravy, followed by determination of their effects on physical properties such as total soluble solids content, pH and colour. pH was determined using a pH meter, a refractometer was used to evaluate the total soluble solids content, and colour was determined using a spectrophotometer and expressed as L*, a* and b* values. Results showed that the gravy has constant total soluble solids values with a pH range of 5 to 6, which classifies it as a low-acid food. Colour analysis showed that lightness (L*) and yellowness (b*) increased significantly while redness (a*) decreased. In conclusion, this study shows that the physical properties of almond milk-based lemak cili api gravy change as cooking time increases.

    Denoising of impulse noise using partition-supported median, interpolation and DWT in dental X-ray images

    Impulse noise often corrupts human dental X-ray images, leading to improper dental diagnosis, so impulse-noise removal is essential for better subjective evaluation of human teeth. Existing denoising methods suffer from poor restoration performance and limited capacity to handle massive noise levels. This paper proposes a novel denoising scheme, "Noise Removal using Partition-supported Median, Interpolation, and Discrete Wavelet Transform (NRPMID)", to address these issues. To effectively reduce salt-and-pepper noise at corruption levels of up to 98.3 percent, the method is applied to dental X-ray images using techniques such as the mean filter, the median filter, bilinear interpolation, bicubic interpolation, Lanczos interpolation, and the Discrete Wavelet Transform (DWT). In terms of PSNR, IEF and other metrics, the proposed noise-removal algorithm greatly enhances the quality of dental X-ray images.
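As a hedged sketch of just the median-filter building block (the partition, interpolation and DWT stages of NRPMID are omitted), salt and pepper pixels can be replaced by the local median:

```python
import numpy as np

def median_restore(img, window=3):
    """Minimal median-filter stage, illustrative only: each salt (255)
    or pepper (0) pixel is replaced by the median of its local window;
    all other pixels are left untouched, preserving uncorrupted
    detail."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    pad = window // 2
    padded = np.pad(img, pad, mode='edge')
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] in (0.0, 255.0):    # salt-and-pepper detection
                out[i, j] = np.median(padded[i:i + window, j:j + window])
    return out
```

At extreme corruption levels such as 98.3%, the local median itself is usually an impulse, which is why NRPMID escalates to interpolation and wavelet-domain processing rather than relying on the median alone.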

    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter based on random single and mixed noise distributions that attenuates the effect of noise while preserving edge details. Only then can robust, integrated and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise and speckle noise. In the first step, methods are evaluated through an exhaustive review of the different types of denoising methods, focusing on impulse noise, Gaussian noise and their related denoising filters. These include spatial filters (linear, non-linear and combinations of them), transform-domain filters, neural-network-based filters, numerical-based filters, fuzzy-based filters, morphological filters, statistical filters, and supervised-learning-based filters. In the second step, the switching adaptive median and fixed weighted mean filter (SAMFWMF), a combination of linear and non-linear filters, is introduced to detect and remove impulse noise. A robust edge detection method is then applied, relying on an integrated process that includes non-maximum suppression, maximum sequence, thresholding and morphological operations. Results are obtained on MRI and natural images. In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) with total variation is introduced to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise. Robust edge detection is then applied to track the true edges. Results are obtained on medical ultrasound and natural images.
In the fourth step, a smoothing filter implemented as a feed-forward convolutional neural network (CNN) with a deep architecture is introduced, supported by a specific learning algorithm, l2 loss-function minimization, a regularization method and batch normalization, all integrated to detect and remove impulse noise as well as mixed impulse and Gaussian noise. Robust edge detection is then applied to track the true edges. Results are obtained on natural images at both specific and non-specific noise levels.
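The mixed-noise setting the dissertation targets can be reproduced with a simple generator; this is an illustrative model (function and parameter names are hypothetical, not the author's code) combining additive Gaussian noise with salt-and-pepper impulses:

```python
import numpy as np

def add_mixed_noise(img, impulse_prob=0.1, sigma=10.0, rng=None):
    """Hypothetical mixed-noise generator: additive zero-mean Gaussian
    noise is applied everywhere, then a random fraction of pixels is
    overwritten with salt (255) or pepper (0) impulses, and the result
    is clipped to the 8-bit range."""
    rng = np.random.default_rng(rng)
    img = np.asarray(img, dtype=float)
    noisy = img + rng.normal(0.0, sigma, img.shape)   # Gaussian component
    mask = rng.random(img.shape) < impulse_prob       # impulse locations
    salt = rng.random(img.shape) < 0.5                # salt vs. pepper split
    noisy[mask & salt] = 255.0
    noisy[mask & ~salt] = 0.0
    return np.clip(noisy, 0.0, 255.0)
```

Because the two corruptions have very different statistics (sparse outliers versus dense low-amplitude perturbation), a single classical filter handles only one of them well, which motivates the switching and learned-filter designs described above.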

    Adaptive Algorithms for Automated Processing of Document Images

    Large-scale document digitization projects continue to motivate interesting document understanding technologies such as script and language identification, page classification, segmentation and enhancement. Typically, however, solutions are still limited to narrow domains or regular formats such as books, forms, articles or letters, and operate best on clean documents scanned in a controlled environment. More general collections of heterogeneous documents challenge the basic assumptions of state-of-the-art technology regarding quality, script, content and layout. Our work explores the use of adaptive algorithms for the automated analysis of noisy and complex document collections. We first propose, implement and evaluate an adaptive clutter detection and removal technique for complex binary documents. Our distance-transform-based technique aims to remove irregular and independent unwanted foreground content while leaving text content untouched. The novelty of this approach is in its determination of the best approximation to the clutter-content boundary for text-like structures. Second, we describe a page segmentation technique called Voronoi++ for complex layouts, which builds upon the state-of-the-art method proposed by Kise [Kise1999]. Our approach does not assume structured text zones and is designed to handle multi-lingual text in both handwritten and printed form. Voronoi++ is a dynamically adaptive and contextually aware approach that combines components' separation features with Docstrum-based [O'Gorman1993] angular and neighborhood features to form provisional zone hypotheses. These provisional zones are then verified based on the context built from local separation and high-level content features. Finally, our research proposes a generic model to segment and recognize characters for any complex syllabic or non-syllabic script, using font models.
This concept is based on the fact that font files contain all the information necessary to render text, and thus provide a model for how to decompose it. Instead of script-specific routines, this work is a step toward a generic character segmentation and recognition scheme for both Latin and non-Latin scripts.
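The clutter detector in the first step relies on the distance transform of the binary page image. A brute-force illustrative version (the dissertation's actual boundary-approximation logic is not reproduced here) shows the cue being thresholded: thick clutter blobs contain pixels far from any background, while thin text strokes do not:

```python
import numpy as np

def distance_transform(binary):
    """Brute-force Euclidean distance transform, illustrative only:
    for each foreground pixel (value 1), compute the distance to the
    nearest background pixel (value 0). Large interior distances mark
    thick blobs (clutter candidates); thin text strokes stay close to
    the background everywhere."""
    binary = np.asarray(binary)
    bg = np.argwhere(binary == 0)        # background coordinates
    out = np.zeros(binary.shape, dtype=float)
    for i, j in np.argwhere(binary == 1):
        d = np.sqrt(((bg - (i, j)) ** 2).sum(axis=1))
        out[i, j] = d.min()
    return out
```

Production systems would use a linear-time algorithm (e.g. the two-pass chamfer method) instead of this O(n^2) scan; the sketch only illustrates the statistic being computed.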