294 research outputs found

    Low rank prior in single patches for non-pointwise impulse noise removal

    Detail-preserving switching algorithm for the removal of random-valued impulse noise

    © 2018, Springer-Verlag GmbH Germany, part of Springer Nature. This paper presents a new algorithm for denoising images corrupted with random-valued impulse noise (RVIN). It employs a switching approach that identifies the noisy pixels in a first stage and then estimates their intensity values to restore them. Local statistics of textons in distinct orientations of a sliding window are exploited to identify corrupted pixels iteratively, using an adaptive threshold range. The textons are formed on an isometric grid of minimum local distance, which effectively preserves the texture and edge pixels of an image. At the filtering stage, fuzzy rules applied to the proposed tri-directional pixels select noise-free neighbors, which are then used to estimate the intensity values of the identified corrupted pixels. The performance of the proposed algorithm is evaluated on a variety of standard gray-scale images under various RVIN intensities and compared with state-of-the-art denoising methods. The algorithm also shows robust denoising and restoration power on biomedical images such as MRI, X-ray, and CT scans. Extensive simulation results, based on both quantitative measures and visual comparisons, demonstrate the superior performance of the proposed algorithm across noise intensities.
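
    As a rough illustration of the two-stage switching design described above, the sketch below uses a deliberately generic detector (deviation from the local median against a threshold) and restorer (median of unflagged neighbors) in place of the paper's texton statistics and fuzzy rules; the window size and threshold are assumptions, not values from the paper.

```python
import numpy as np

def switching_rvin_filter(img, win=3, thresh=40.0):
    """Generic two-stage switching filter for random-valued impulse noise.

    Stage 1 flags a pixel as noisy when it deviates strongly from the median
    of its local window; Stage 2 replaces only the flagged pixels with the
    median of the unflagged neighbors. This is a simplified stand-in for the
    texton/fuzzy scheme described in the paper, not its implementation.
    """
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = img.copy()
    noisy = np.zeros(img.shape, dtype=bool)

    # Stage 1: detection by deviation from the local median.
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + win, j:j + win]
            if abs(img[i, j] - np.median(window)) > thresh:
                noisy[i, j] = True

    # Stage 2: restore flagged pixels from presumed-clean neighbors only.
    noisy_pad = np.pad(noisy, pad, mode="constant", constant_values=False)
    for i, j in zip(*np.nonzero(noisy)):
        window = padded[i:i + win, j:j + win]
        clean = window[~noisy_pad[i:i + win, j:j + win]]
        out[i, j] = np.median(clean) if clean.size else np.median(window)
    return out
```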

    Segmentation-Driven Tomographic Reconstruction.

    An Impressive Method to Get Better Peak Signal Noise Ratio (PSNR), Mean Square Error (MSE) Values Using Stationary Wavelet Transform (SWT)

    Impulse noise in images arises from bit errors in transmission or is introduced during the signal acquisition stage. There are two types of impulse noise: salt-and-pepper noise and random-valued noise. In the proposed method, the stationary wavelet transform is first applied to the noisy image, decomposing it into four sub-bands: LL, LH, HL, and HH. The proposed algorithm replaces a noisy pixel with the trimmed median value when pixel values other than 0 and 255 are present in the selected window; when all the pixel values in the window are 0 or 255, the noisy pixel is replaced with the mean of all elements in the window. The proposed algorithm shows better results than the standard median filter (MF) and the decision-based algorithm (DBA). It performs well in removing low- to medium-density impulse noise with detail preservation up to a noise density of 70%, and it yields better peak signal-to-noise ratio (PSNR) and mean square error (MSE) values.
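
    The window-level decision rule stated above (trimmed median when non-extreme values exist, otherwise the window mean) can be sketched as follows. The stationary wavelet decomposition is assumed to have been applied beforehand (e.g., with pywt.swt2) and is omitted; the 3x3 window size is an assumption.

```python
import numpy as np

def trimmed_median_restore(img, win=3):
    """Per-pixel decision rule from the abstract.

    A pixel equal to 0 or 255 is treated as noisy. If its window contains
    values other than 0 and 255, the pixel is replaced by the median of those
    non-extreme values (the 'trimmed' median); if the window holds only 0s
    and 255s, the mean of the whole window is used instead. The SWT step
    (e.g., pywt.swt2) is assumed to have been applied already.
    """
    pad = win // 2
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")
    out = img.astype(np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] not in (0, 255):
                continue  # pixel assumed noise-free
            window = padded[i:i + win, j:j + win].ravel()
            trimmed = window[(window != 0) & (window != 255)]
            out[i, j] = np.median(trimmed) if trimmed.size else window.mean()
    return out.astype(img.dtype)
```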

    Multiresolution image models and estimation techniques

    Plant Leaf Disease Detection Using Efficient Image Processing and Machine Learning Algorithms

    India is often described as a country of villages, where a majority of the population depends on agriculture for their livelihood. The landscape of Indian agriculture covers approximately 159.7 million hectares, and agriculture plays a pivotal role in India's Gross Domestic Product (GDP), accounting for about 18% of the nation's economic output. Diseases and pests can have detrimental effects on crops, leading to reduced yields; these challenges include the spread of plant diseases, infestations by insects or other pests, and the overall degradation of crop health. Early detection of diseases in crops is crucial because it allows prompt intervention, such as applying appropriate pesticides or taking preventive measures. The main aim of this study is to develop a highly effective method for plant leaf disease detection using computer vision techniques. Here, leaf disease detection comprises histogram equalization, denoising, image color threshold masking, and feature descriptors such as Haralick textures, Hu moments, and color histograms to extract the salient features of leaf images. These features are then used to classify the images by training Logistic Regression, Linear Discriminant Analysis, K-nearest neighbor, decision tree, Random Forest, and Support Vector Machine classifiers with K-fold validation, where the validation samples are held out from the training samples and K indicates how many times this split is repeated for generalization. The training and validation are performed in two approaches: the first uses default hyperparameters with segmented and non-segmented images; in the second, all hyperparameters of the models are optimized to train the segmented datasets. Segmentation improved the classification accuracy by 2.19%, and hyperparameter tuning improved it by a further 0.48%. The highest average classification accuracy of 97.92% is achieved using the Random Forest classifier to classify 40 classes across 10 different plant species. Accurate detection of plant disease supports sustained plant health throughout the growing season.
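
    A condensed sketch of this kind of feature-extraction and classification pipeline is shown below, with assumed library choices (mahotas for Haralick textures, OpenCV for Hu moments and color histograms, scikit-learn for the Random Forest and K-fold evaluation); the bin counts, fold count, and data-loading details are illustrative and not taken from the paper.

```python
import cv2
import mahotas
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(bgr_image):
    """Haralick texture + Hu moments + color histogram, as named in the abstract."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    haralick = mahotas.features.haralick(gray).mean(axis=0)   # 13 texture statistics
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()           # 7 shape moments
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 8],
                        [0, 180, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()                # 512-bin color histogram
    return np.hstack([haralick, hu, hist])

# Usage sketch (image loading and label assembly omitted):
# features = np.array([extract_features(cv2.imread(p)) for p in image_paths])
# clf = RandomForestClassifier(n_estimators=200, random_state=0)
# scores = cross_val_score(clf, features, labels, cv=10)  # 10-fold validation
# print(scores.mean())
```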

    Video modeling via implicit motion representations

    Video modeling refers to the development of analytical representations for explaining the intensity distribution in video signals. Based on such a representation, we can develop algorithms for accomplishing particular video-related tasks; video modeling therefore provides a foundation that bridges video data and related tasks. Although many video models have been proposed in past decades, the rise of new applications calls for more efficient and accurate video modeling approaches.

    Most existing video modeling approaches are based on explicit motion representations, where motion information is expressed by correspondence-based representations (i.e., motion velocity or displacement). Although conceptually simple, the limitations of those representations and the suboptimality of motion estimation techniques can degrade such approaches, especially for complex motion or non-ideal observed video data. In this thesis, we investigate video modeling without explicit motion representation: motion information is implicitly embedded in the spatio-temporal dependency among pixels or patches instead of being described by motion vectors.

    Firstly, we propose a parametric model based on spatio-temporal adaptive localized learning (STALL). We formulate video modeling as a linear regression problem in which motion information is embedded within the regression coefficients. The coefficients are adaptively learned within a local space-time window based on the LMMSE criterion. Incorporating a spatio-temporal resampling and a Bayesian fusion scheme, we can enhance the modeling capability of STALL on more general videos. Under the STALL framework, we can develop video processing algorithms for a variety of applications by adjusting the model parameters (i.e., the size and topology of the model support and training window). We apply STALL to three video processing problems. The simulation results show that motion information can be efficiently exploited by our implicit motion representation and that the resampling and fusion help to enhance the modeling capability of STALL.

    Secondly, we propose a nonparametric video modeling approach that does not depend on explicit motion estimation. Assuming the video sequence is composed of many overlapping space-time patches, we embed motion-related information into the relationships among video patches and develop a generic sparsity-based prior for typical video sequences. First, we extend block matching to more general kNN-based patch clustering, which provides an implicit and distributed representation of motion information. We enforce a sparsity constraint on a higher-dimensional data array formed by packing the patches of the similar-patch set, and then solve the inference problem by updating the kNN array and the desired signal iteratively. Finally, we present a Bayesian fusion approach to fuse multiple-hypothesis inferences. Simulation results in video error concealment, denoising, and deartifacting demonstrate its modeling capability.

    Finally, we summarize the two proposed video modeling approaches and point out the perspectives of implicit motion representations in applications ranging from low-level to high-level problems.
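
    As a rough illustration of the localized-regression idea behind STALL (coefficients fit over a local space-time training window and then used to predict the center pixel), the sketch below runs an ordinary least-squares fit per pixel. It is not the thesis implementation: the LMMSE criterion is approximated by plain least squares, and the two-frame support, window sizes, and interior-pixel assumption are all illustrative choices.

```python
import numpy as np

def stall_predict(prev_frame, cur_frame, i, j, support=3, train=7):
    """Predict pixel (i, j) of cur_frame from a spatio-temporal support in
    prev_frame, with regression coefficients fit by least squares over a local
    training window (a rough sketch of localized LMMSE-style learning).
    Assumes (i, j) lies far enough inside the frame for the windows to fit."""
    s, t = support // 2, train // 2
    A, b = [], []
    # Build training pairs from the local window around (i, j).
    for y in range(i - t, i + t + 1):
        for x in range(j - t, j + t + 1):
            patch = prev_frame[y - s:y + s + 1, x - s:x + s + 1]
            if patch.shape != (support, support):
                continue  # skip positions whose support falls outside the frame
            A.append(patch.ravel())
            b.append(cur_frame[y, x])
    coeffs, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    center = prev_frame[i - s:i + s + 1, j - s:j + s + 1].ravel()
    return float(center @ coeffs)
```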

    Image Restoration for Remote Sensing: Overview and Toolbox

    Remote sensing provides valuable information about objects or areas from a distance in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to develop along different paths according to sensor type. This review paper brings together the advances of image restoration techniques, with particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We therefore provide a comprehensive, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to investigate the vibrant topic of data restoration, supplying sufficient detail and references. Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers to further explore restoration techniques and fast-forward the community. The toolboxes are provided at https://github.com/ImageRestorationToolbox.
    Comment: This paper is under review in GRS.

    Recent Progress in Image Deblurring

    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness, a crucial issue in deblurring tasks, is handled, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite a certain level of progress, image deblurring, especially the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and often spatially variant. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented.
    Comment: 53 pages, 17 figures.
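
    To make the non-blind setting concrete (recovering a latent sharp image when the blur kernel is known), here is a minimal textbook Wiener deconvolution sketch; it is not a method surveyed in the paper, and the noise-to-signal ratio nsr and the circular-convolution degradation model are assumptions.

```python
import numpy as np

def wiener_deblur(blurry, kernel, nsr=1e-2):
    """Non-blind deblurring via Wiener deconvolution in the Fourier domain.

    Assumes the degradation is circular convolution with `kernel` registered
    at the array origin (the convention implied by zero-padding in fft2);
    `nsr` is an assumed noise-to-signal ratio acting as regularization.
    """
    H = np.fft.fft2(kernel, s=blurry.shape)          # kernel spectrum
    G = np.fft.fft2(blurry)                          # observation spectrum
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)  # Wiener estimate
    return np.real(np.fft.ifft2(F_hat))

# Illustrative round trip: blur with the same model, then restore.
# psf = np.ones((5, 5)) / 25.0
# blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)))
# restored = wiener_deblur(blurred, psf)
```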