
    Sorted Min-Max-Mean Filter for Removal of High Density Impulse Noise

    This paper presents an improved Sorted Min-Max-Mean Filter (SM3F) algorithm for the detection and removal of impulse noise from highly corrupted images. The method uses a single algorithm for both detection and removal. Corrupted pixels are identified as local intensity extrema of the grayscale range and are then restored by the SM3F operation: uncorrupted pixels retain their values, while each corrupted pixel is replaced by the mean of the noise-free pixels within the selected window. Different images have been used to test the proposed method, and better outcomes have been found in terms of both quantitative measures and visual perception. For quantitative evaluation of algorithm performance, Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Image Enhancement Factor (IEF) have been used. Experimental observations show that the presented technique effectively removes high-density impulse noise while preserving original pixel values. The filter's performance was tested with noise densities varying from 10% to 90%; even at 90% noise density, a maximum PSNR of 30.03 dB was achieved, indicating good performance of the SM3F algorithm at high noise levels. The proposed filter is simple and can be used for image restoration of grayscale as well as color images.
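The detect-then-replace step described in the abstract can be sketched in a few lines of NumPy. This is a hedged illustration of the idea only: treating the grayscale extremes 0 and 255 as the impulse values, using a 3x3 window, and falling back to the patch median when no noise-free neighbour exists are assumptions, not the paper's exact SM3F specification.

```python
import numpy as np

def sm3f_denoise(img, window=3):
    """Sketch of the sorted min-max-mean filtering idea: pixels at the
    grayscale extremes (0 or 255) are flagged as impulse-corrupted, and
    each flagged pixel is replaced by the mean of the noise-free pixels
    inside its local window. Window size and fallback rule are assumptions."""
    img = img.astype(np.float64)
    out = img.copy()
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")
    corrupted = (img == 0) | (img == 255)  # local-extrema detection
    for i, j in zip(*np.nonzero(corrupted)):
        patch = padded[i:i + window, j:j + window]
        clean = patch[(patch != 0) & (patch != 255)]  # noise-free neighbours
        # replace with the mean of noise-free pixels; fall back to the patch median
        out[i, j] = clean.mean() if clean.size else np.median(patch)
    return out.astype(np.uint8)
```

Because uncorrupted pixels are never touched, the filter preserves original pixel values exactly, which matches the "originality of pixel's value" property the abstract emphasizes.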

    Detection of dirt impairments from archived film sequences : survey and evaluations

    Film dirt is the most commonly encountered artifact in archive restoration applications. Since dirt usually appears as a temporally impulsive event, motion-compensated interframe processing is widely applied for its detection. However, motion-compensated prediction requires a high degree of complexity and can be unreliable when motion estimation fails. Consequently, many techniques using spatial or spatiotemporal filtering without motion have also been proposed as alternatives. A comprehensive survey and evaluation of existing methods is presented, in which both qualitative and quantitative performance is compared in terms of accuracy, robustness, and complexity. After analyzing these algorithms and identifying their limitations, we conclude with guidance on choosing among them and promising directions for future research.

    Optimum Image Filters for Various Types of Noise

    In this paper, the quality performance of several filters in the restoration of images corrupted with various types of noise is examined extensively. In particular, the Wiener filter, Gaussian filter, median filter, and averaging (mean) filter have been used to reduce Gaussian noise, speckle noise, salt-and-pepper noise, and Poisson noise. Many images have been tested, two of which are shown in this paper, and several noise percentages have been examined in the simulations. The size of the sliding window is the same in all four filters, namely 5x5, for all the indicated noise percentages. For image quality measurement, two performance indices are used: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The simulation results show that the performance of some filters in reducing particular types of noise is much better than that of others. It is illustrated that the median filter is most appropriate for eliminating salt-and-pepper noise; the averaging filter still works for this type of noise, but with lower quality than the median filter. The Gaussian and Wiener filters outperform the other filters in restoring images corrupted with Poisson and speckle noise.
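The abstract's central finding, that the median filter beats the averaging filter on salt-and-pepper noise, is easy to reproduce with a minimal NumPy sketch. The 64x64 gradient test image, 20% noise density, and reflect padding below are illustrative assumptions, not the paper's setup; only the 5x5 window matches the abstract.

```python
import numpy as np

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB for 8-bit-range images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def sliding_filter(img, size, reduce_fn):
    """Apply reduce_fn (e.g. np.median or np.mean) over a size x size window."""
    pad = size // 2
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = reduce_fn(padded[i:i + size, j:j + size])
    return out

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))      # smooth test image
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.2                   # 20% salt-and-pepper
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())

median_out = sliding_filter(noisy, 5, np.median)
mean_out = sliding_filter(noisy, 5, np.mean)
print(f"median PSNR {psnr(clean, median_out):.1f} dB, "
      f"mean PSNR {psnr(clean, mean_out):.1f} dB")
```

The median is robust to the outlier values 0 and 255, while the mean is pulled toward them, which is why the median filter scores a higher PSNR on this kind of impulse noise.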

    Video modeling via implicit motion representations

    Video modeling refers to the development of analytical representations for explaining the intensity distribution in video signals. Based on such a representation, we can develop algorithms for particular video-related tasks; video modeling thus provides a foundation bridging video data and related tasks. Although many video models have been proposed in the past decades, the rise of new applications calls for more efficient and accurate video modeling approaches.

    Most existing video modeling approaches are based on explicit motion representations, where motion information is explicitly expressed by correspondence-based representations (i.e., motion velocities or displacements). Although conceptually simple, such approaches can be degraded by the limitations of those representations and the suboptimality of motion estimation techniques, especially for complex motion or non-ideal observed video data. In this thesis, we investigate video modeling without explicit motion representation: motion information is implicitly embedded in the spatio-temporal dependency among pixels or patches instead of being explicitly described by motion vectors.

    First, we propose a parametric model based on spatio-temporal adaptive localized learning (STALL). We formulate video modeling as a linear regression problem in which motion information is embedded within the regression coefficients. The coefficients are adaptively learned within a local space-time window under the LMMSE criterion. Incorporating spatio-temporal resampling and a Bayesian fusion scheme enhances the modeling capability of STALL on more general videos. Under the STALL framework, we can develop video processing algorithms for a variety of applications by adjusting model parameters (i.e., the size and topology of the model support and training window). We apply STALL to three video processing problems. The simulation results show that motion information can be efficiently exploited by our implicit motion representation, and that the resampling and fusion help enhance the modeling capability of STALL.

    Second, we propose a nonparametric video modeling approach that does not depend on explicit motion estimation. Assuming the video sequence is composed of many overlapping space-time patches, we embed motion-related information in the relationships among video patches and develop a generic sparsity-based prior for typical video sequences. We first extend block matching to more general kNN-based patch clustering, which provides an implicit and distributed representation of motion information. We then enforce a sparsity constraint on a higher-dimensional data array generated by packing the patches of each similar-patch set, and solve the inference problem by updating the kNN array and the desired signal iteratively. Finally, we present a Bayesian fusion approach to fuse multiple-hypothesis inferences. Simulation results in video error concealment, denoising, and deartifacting demonstrate the approach's modeling capability.

    Finally, we summarize the two proposed video modeling approaches and point out the prospects of implicit motion representations in applications ranging from low-level to high-level problems.
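The STALL idea of embedding motion in regression coefficients learned over a local space-time window can be sketched roughly as follows. This is a hedged illustration, not the thesis's implementation: plain least squares stands in for the LMMSE fit, the support is a single 3x3 patch in the previous frame, and the 7x7 training window is an arbitrary choice.

```python
import numpy as np

def stall_predict(prev, cur, i, j, support=3, train=7):
    """Predict cur[i, j] as a linear combination of a spatio-temporal
    support (here: a 3x3 patch in the previous frame), with coefficients
    fit by least squares over a local training window of nearby pixels.
    Motion is never estimated explicitly; if the scene shifted, the fitted
    coefficients simply weight the correspondingly shifted neighbours."""
    sp, tp = support // 2, train // 2
    X, y = [], []
    for di in range(-tp, tp + 1):          # gather training pairs from the
        for dj in range(-tp, tp + 1):      # local space-time window
            r, c = i + di, j + dj
            X.append(prev[r - sp:r + sp + 1, c - sp:c + sp + 1].ravel())
            y.append(cur[r, c])
    X, y = np.asarray(X), np.asarray(y)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # learned regression coefficients
    center = prev[i - sp:i + sp + 1, j - sp:j + sp + 1].ravel()
    return center @ coef

# a pure one-pixel translation is captured exactly: the regression learns
# to select the shifted neighbour without any motion vector being computed
rng = np.random.default_rng(1)
prev = rng.random((20, 20))
cur = np.roll(prev, 1, axis=1)
print(abs(stall_predict(prev, cur, 10, 10) - cur[10, 10]))  # near zero
```

Adjusting `support` and `train` corresponds to the "size and topology of the model support and training window" parameters the abstract mentions.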