1,798 research outputs found

    A Noise Density-Based Fuzzy Approach for Detecting and Removing Random Impulse Noise in Color Images

    This paper introduces a new approach for restoring images corrupted by random-valued impulse noise. The method leverages fuzzy logic and comprises three primary stages: noise density estimation, fuzzy noise detection, and fuzzy noise reduction. In the fuzzy noise detection phase, a fuzzy set labeled "Noise-Free" is constructed from the rank-ordered mean of absolute differences and the estimated noise density; this set determines whether a given pixel should be classified as noisy or noise-free. The fuzzy logic employed in the method then determines the final fuzzy weight assigned to each pixel, which drives the restoration of corrupted image pixels. Empirical results based on peak signal-to-noise ratio, mean square error, and visual assessment demonstrate that the proposed technique suppresses noise, preserves fine details, and outperforms several established filtering methods.
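    The detection stage can be illustrated with a small sketch. The snippet below is a minimal, hypothetical interpretation: it computes a rank-ordered sum of absolute differences for each pixel of a single channel and maps it to a trapezoidal "Noise-Free" membership. The window size, the number m of retained differences, and the thresholds t_low/t_high are illustrative placeholders; the paper derives its membership parameters from the estimated noise density.

```python
import numpy as np

def road(gray, m=4):
    """Rank-ordered absolute differences: sum of the m smallest absolute
    differences between each pixel and its eight neighbours."""
    H, W = gray.shape
    padded = np.pad(gray.astype(float), 1, mode="reflect")
    diffs = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
            diffs.append(np.abs(gray.astype(float) - shifted))
    diffs = np.sort(np.stack(diffs), axis=0)  # sort differences per pixel
    return diffs[:m].sum(axis=0)              # keep only the m smallest

def noise_free_membership(road_map, t_low=20.0, t_high=60.0):
    """Trapezoidal membership of the fuzzy 'Noise-Free' set:
    1 below t_low, 0 above t_high, linear in between."""
    return np.clip((t_high - road_map) / (t_high - t_low), 0.0, 1.0)

# usage (illustrative): pixels with low membership are treated as noisy
# mu = noise_free_membership(road(gray_channel))
# noisy_mask = mu < 0.5
```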

    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter, based on random single and mixed noise distributions, that attenuates the effect of noise while preserving edge details. Only then can robust, integrated and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise and speckle noise. In the first step, existing methods are evaluated through an exhaustive review of the different types of denoising methods that target impulse noise, Gaussian noise and their related denoising filters. These include spatial filters (linear, non-linear and combinations of them), transform-domain filters, neural network-based filters, numerical-based filters, fuzzy-based filters, morphological filters, statistical filters, and supervised learning-based filters. In the second step, a switching adaptive median and fixed weighted mean filter (SAMFWMF), which combines linear and non-linear filters, is introduced in order to detect and remove impulse noise. A robust edge detection method is then applied, relying on an integrated process that includes non-maximum suppression, maximum sequence, thresholding and morphological operations. Results are obtained on MRI and natural images. In the third step, a transform-domain filter that combines the dual-tree complex wavelet transform (DT-CWT) with total variation is introduced in order to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise. A robust edge detection step is then applied in order to track the true edges. Results are obtained on medical ultrasound and natural images. In the fourth step, a smoothing filter implemented as a deep feed-forward convolutional neural network (CNN) is introduced, supported by a specific learning algorithm, l2 loss function minimization, a regularization method, and batch normalization, all integrated in order to detect and remove impulse noise as well as mixed impulse and Gaussian noise. A robust edge detection step is then applied in order to track the true edges. Results are obtained on natural images for both specific and non-specific noise levels.
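    As a rough illustration of the impulse-noise stage, the sketch below implements a plain switching/adaptive median filter: a pixel is replaced only when it looks like an impulse, and the window grows until the local median is trustworthy. This is a simplified stand-in, not the dissertation's SAMFWMF, which additionally applies a fixed weighted mean stage.

```python
import numpy as np

def adaptive_switching_median(img, max_win=7):
    """Simplified switching/adaptive median filter for a 2-D image:
    a pixel is replaced only when it looks like an impulse; the window
    grows until the local median is not an extreme value (or max_win)."""
    H, W = img.shape
    pad = max_win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = img.astype(float).copy()
    for y in range(H):
        for x in range(W):
            v = padded[y + pad, x + pad]
            for w in range(3, max_win + 1, 2):
                r = w // 2
                win = padded[y + pad - r:y + pad + r + 1,
                             x + pad - r:x + pad + r + 1]
                lo, med, hi = win.min(), np.median(win), win.max()
                if lo < med < hi:            # the median is reliable
                    if not (lo < v < hi):    # centre pixel looks impulsive
                        out[y, x] = med      # replace it with the median
                    break                    # stop growing the window
    return out
```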

    Denoising of impulse noise using partition-supported median, interpolation and DWT in dental X-ray images

    Impulse noise often damages human dental X-ray images, leading to improper dental diagnosis. Hence, impulse noise removal in dental images is essential for a better subjective evaluation of human teeth. Existing denoising methods suffer from limited restoration performance and limited capacity to handle high noise levels. To address these issues, this work proposes a novel denoising scheme called "Noise Removal using Partition-supported Median, Interpolation, and Discrete Wavelet Transform (NRPMID)". To effectively reduce salt-and-pepper noise at corruption levels of up to 98.3 percent, the method is applied to dental X-ray images using techniques such as the mean filter, the median filter, bilinear interpolation, bicubic interpolation, Lanczos interpolation, and the discrete wavelet transform (DWT). In terms of PSNR, IEF, and other metrics, the proposed noise removal algorithm greatly enhances the quality of dental X-ray images.
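    A minimal sketch of the detection-and-replacement idea is shown below, assuming salt-and-pepper pixels take the extreme values 0 or 255: each flagged pixel is replaced by the median of the uncorrupted pixels in its window. The interpolation (bilinear, bicubic, Lanczos) and DWT stages of NRPMID are not reproduced here.

```python
import numpy as np

def denoise_salt_pepper(img, win=3):
    """Replace salt-and-pepper pixels (assumed to be 0 or 255) with the
    median of the uncorrupted pixels in a local window; if every pixel in
    the window is corrupted, fall back to the plain window median."""
    H, W = img.shape
    noisy = (img == 0) | (img == 255)        # crude impulse detection
    pad = win // 2
    p_img = np.pad(img.astype(float), pad, mode="reflect")
    p_noisy = np.pad(noisy, pad, mode="reflect")
    out = img.astype(float).copy()
    for y, x in zip(*np.nonzero(noisy)):
        patch = p_img[y:y + win, x:x + win]
        clean = patch[~p_noisy[y:y + win, x:x + win]]
        out[y, x] = np.median(clean) if clean.size else np.median(patch)
    return out.astype(img.dtype)
```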

    Video modeling via implicit motion representations

    Video modeling refers to the development of analytical representations for explaining the intensity distribution in video signals. Based on such an analytical representation, we can develop algorithms for accomplishing particular video-related tasks; video modeling therefore provides a foundation that bridges video data and related tasks. Although many video models have been proposed in past decades, the rise of new applications calls for more efficient and accurate video modeling approaches.

    Most existing video modeling approaches are based on explicit motion representations, where motion information is explicitly expressed by correspondence-based representations (i.e., motion velocity or displacement). Although conceptually simple, the limitations of those representations and the suboptimality of motion estimation techniques can degrade such video modeling approaches, especially when handling complex motion or non-ideal observed video data. In this thesis, we propose to investigate video modeling without explicit motion representation: motion information is implicitly embedded in the spatio-temporal dependency among pixels or patches instead of being explicitly described by motion vectors.

    First, we propose a parametric model based on spatio-temporal adaptive localized learning (STALL). We formulate video modeling as a linear regression problem in which motion information is embedded within the regression coefficients. The coefficients are adaptively learned within a local space-time window under the LMMSE criterion. By incorporating spatio-temporal resampling and a Bayesian fusion scheme, we can enhance the modeling capability of STALL on more general videos. Under the STALL framework, video processing algorithms for a variety of applications can be developed by adjusting model parameters (i.e., the size and topology of the model support and the training window). We apply STALL to three video processing problems. Simulation results show that motion information can be efficiently exploited by our implicit motion representation and that the resampling and fusion steps do enhance the modeling capability of STALL.

    Second, we propose a nonparametric video modeling approach that does not depend on explicit motion estimation. Assuming the video sequence is composed of many overlapping space-time patches, we embed motion-related information into the relationships among video patches and develop a generic sparsity-based prior for typical video sequences. We first extend block matching to more general kNN-based patch clustering, which provides an implicit and distributed representation of motion information. We then enforce a sparsity constraint on a higher-dimensional data array generated by packing the patches of the similar-patch set, and solve the inference problem by updating the kNN array and the desired signal iteratively. Finally, we present a Bayesian fusion approach to fuse multiple-hypothesis inferences. Simulation results on video error concealment, denoising, and deartifacting are reported to demonstrate its modeling capability.

    Finally, we summarize the two proposed video modeling approaches and point out the perspectives of implicit motion representations in applications ranging from low-level to high-level problems.
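    A toy version of the localized-regression idea behind STALL might look like the sketch below: the coefficients that predict a pixel of the current frame from a small spatio-temporal support in the previous frame are fitted by least squares over a local training window, which plays the role of the LMMSE estimate. The support and training-window sizes (supp, train) are placeholders, and the resampling and Bayesian fusion steps are omitted.

```python
import numpy as np

def stall_predict(prev, cur, y, x, supp=1, train=5):
    """Predict pixel (y, x) of the current frame from a (2*supp+1)^2
    support in the previous frame; the regression coefficients are fitted
    by least squares over a local training window (an LMMSE-style fit)."""
    H, W = cur.shape
    A, b = [], []
    y0, y1 = max(y - train, supp), min(y + train, H - 1 - supp)
    x0, x1 = max(x - train, supp), min(x + train, W - 1 - supp)
    for ty in range(y0, y1 + 1):
        for tx in range(x0, x1 + 1):
            if ty == y and tx == x:
                continue  # exclude the pixel being predicted from training
            A.append(prev[ty - supp:ty + supp + 1,
                          tx - supp:tx + supp + 1].ravel())
            b.append(cur[ty, tx])
    coeff, *_ = np.linalg.lstsq(np.asarray(A, dtype=float),
                                np.asarray(b, dtype=float), rcond=None)
    support = prev[y - supp:y + supp + 1, x - supp:x + supp + 1].ravel()
    return float(support @ coeff)

# usage (illustrative): y, x should lie at least `supp` pixels from the border
# value = stall_predict(prev_frame, cur_frame, 50, 60)
```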

    Integrating IoT and Novel Approaches to Enhance Electromagnetic Image Quality using Modern Anisotropic Diffusion and Speckle Noise Reduction Techniques

    Electromagnetic imaging is becoming increasingly important in many sectors, and reliable analysis requires high-quality images. This study exploits the complementary relationship between IoT and modern image processing methods to improve the quality of electromagnetic images. The research presents a new framework for connecting Internet of Things sensors to imaging equipment, allowing for instantaneous feedback and adjustment. At the same time, the proposed system uses sophisticated anisotropic diffusion algorithms to bring out key details and suppress noise in electromagnetic images. In addition, a cutting-edge technique for reducing speckle noise is used to combat this persistent issue in electromagnetic imaging. The effectiveness of the proposed system was determined via comparison to standard imaging techniques. The results show a noticeable improvement in visual sharpness, contrast, and overall clarity without any loss of information. Incorporating IoT sensors also enabled faster calibration and real-time modifications, opening up new possibilities for use in highly variable contexts. In fields where electromagnetic imaging plays a crucial role, such as medicine, remote sensing, and aerospace, the ramifications of this study are far-reaching. Our research demonstrates how the Internet of Things (IoT) and cutting-edge image processing can dramatically improve the functionality and versatility of electromagnetic imaging systems.
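    For reference, classical anisotropic diffusion in the Perona-Malik form is sketched below: the conductance term slows diffusion across strong gradients, so edges are preserved while homogeneous regions are smoothed. The parameters (iteration count, kappa, step size) are illustrative, and the paper's specific "modern" diffusion variant and its speckle-reduction step are not reproduced.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: smooths homogeneous regions
    while the conductance g = exp(-(|grad|/kappa)^2) slows diffusion
    across strong edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences towards the four neighbours
        # (np.roll wraps at the borders, which is acceptable for a sketch)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance per direction
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```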

    PRIDNet based Image Denoising for Underwater Images

    Underwater image enhancement has become a popular research topic due to its importance in aquatic robotics and marine engineering. However, underwater images frequently suffer from signal-dependent speckle noise during data transmission and acquisition, which can limit applications such as detection and object tracking. In recent years, the efficiency of existing underwater image enhancement algorithms has been analysed and evaluated on small numbers of carefully chosen real-world images or synthetic datasets. As such, it is challenging to predict how these algorithms might perform on images acquired in the wild under various conditions. This paper introduces a new solution for noise removal from underwater images, the Pyramid Real Image Denoising Network (PRIDNet), applied to image patches. PRIDNet is a three-level network design that operates on image patches. Tests carried out on a dataset of real noisy images demonstrate that, in terms of quantitative metrics, the proposed denoising model performs better than existing denoisers. We assess the effectiveness and limitations of existing algorithms using benchmark evaluations and the proposed model, offering valuable information for further studies on underwater image enhancement.
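    The patch-wise deployment described above can be sketched as follows, with denoise_fn standing in for the trained network's forward pass; this sketch does not reproduce the three-level pyramid architecture itself. The image is split into overlapping patches, each patch is denoised, and the results are blended by averaging in the overlaps. Patch and stride sizes are assumptions, and the image is assumed to be at least one patch in size.

```python
import numpy as np

def denoise_with_patches(img, denoise_fn, patch=64, stride=48):
    """Run a patch-wise denoiser over overlapping patches and blend the
    results by averaging in the overlap regions (img must be at least
    `patch` pixels in each spatial dimension)."""
    H, W = img.shape[:2]
    out = np.zeros(img.shape, dtype=float)
    weight = np.zeros((H, W) + (1,) * (img.ndim - 2), dtype=float)
    # include the last row/column of patch positions so the image is covered
    ys = sorted(set(list(range(0, H - patch + 1, stride)) + [H - patch]))
    xs = sorted(set(list(range(0, W - patch + 1, stride)) + [W - patch]))
    for y in ys:
        for x in xs:
            tile = img[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] += denoise_fn(tile)
            weight[y:y + patch, x:x + patch] += 1.0
    return out / weight

# usage (illustrative): `denoise_fn` is a stand-in for the trained model,
# e.g. lambda t: model(t) for a PRIDNet-style denoiser
```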

    Tumor Segmentation and Classification Using Machine Learning Approaches

    Medical image processing has recently developed progressively in terms of methodologies and applications to increase serviceability in health care management. Modern medical image processing employs various methods to diagnose tumors due to the burgeoning demand in the related industry. This study uses PG-DBCWMF, the HV region method, and CTSIFT extraction to identify brain tumors and pancreatic tumors. In terms of efficiency, precision, creativity, and other factors, these strategies offer improved performance in therapeutic settings. The proposed method combines the three techniques: PG-DBCWMF, the HV region algorithm, and CTSIFT extraction. PG-DBCWMF (Patch Group Decision Couple Window Median Filter) works well in the preprocessing stage and eliminates noise. The HV region technique precisely calculates the vertical and horizontal angles of the known images. CTSIFT is a feature extraction method that identifies the affected region in tumor images. Brain tumor and pancreatic tumor databases were used for the experimental evaluation, on which the method produced the best PSNR, MSE, and other results.
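    The quantitative evaluation relies on standard image-fidelity metrics; for completeness, the usual definitions of MSE and PSNR are given below (these are the textbook formulas, not code from the study).

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference and a processed image."""
    ref = ref.astype(float)
    test = test.astype(float)
    return float(np.mean((ref - test) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the processed
    image is closer to the reference."""
    err = mse(ref, test)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```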

    Selected Algorithms of Quantitative Image Analysis for Measurements of Properties Characterizing Interfacial Interactions at High Temperatures.

    For every quantitative image analysis system, a very important issue is improving the quality of the images to be analyzed, in other words, their pre-processing. Pre-processing should remove from the image a significant part of the redundant information and disturbances (which may originate from imperfect vision system components). Another particularly important problem is the right choice of image segmentation procedures. The essence of segmentation is to divide an image into disjoint subsets that meet certain homogeneity criteria (e.g., color, brightness or texture). The result of segmentation should allow the most precise determination possible of the geometrical features of objects present in a scene with a minimum of computing effort. The measurement of geometric properties of objects present in the scene is the subject of image analysis.
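    A toy pipeline along these lines, using brightness as the homogeneity criterion, is sketched below: light smoothing as pre-processing, thresholding as segmentation, and connected-component labelling to measure simple geometric features. The fixed threshold and the chosen features (area, bounding box) are illustrative assumptions only.

```python
import numpy as np
from scipy import ndimage

def segment_and_measure(gray, threshold=128):
    """Toy pipeline: brightness-based segmentation into disjoint regions,
    followed by measurement of simple geometric features per object."""
    # pre-processing: light median smoothing to suppress disturbances
    smooth = ndimage.median_filter(gray, size=3)
    # segmentation: binarise by a homogeneity (here: brightness) criterion
    mask = smooth > threshold
    labels, _ = ndimage.label(mask)
    # measurement: area and bounding-box size of each labelled object
    features = []
    for idx, sl in enumerate(ndimage.find_objects(labels), start=1):
        region = labels[sl] == idx
        features.append({
            "label": idx,
            "area": int(region.sum()),
            "bbox_h": sl[0].stop - sl[0].start,
            "bbox_w": sl[1].stop - sl[1].start,
        })
    return labels, features
```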