    Evaluation of Blur and Gaussian Noise Degradation in Images Using Statistical Model of Natural Scene and Perceptual Image Quality Measure

    In this paper we present a new method for classifying the type of image degradation based on Riesz transform coefficients and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), which employs spatial coefficients. Our method uses additional statistical parameters that give statistically better results for blur, and for all tested degradations together, than the previous method. A new method to determine the level of blur and Gaussian noise degradation in images using a statistical model of natural scenes is presented. We define parameters for evaluating the level of Gaussian noise and blur degradation in images. In real-world applications a reference image is usually not available; the proposed method therefore enables classification of image degradation by type and estimation of Gaussian noise and blur levels for any degraded image.
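The spatial coefficients that BRISQUE builds its natural-scene statistics on are mean-subtracted contrast-normalized (MSCN) coefficients. A minimal sketch of that feature-extraction step (the Gaussian window size and the stabilizing constant `c` are illustrative choices, not values from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients:
    normalize each pixel by a local Gaussian-weighted mean and
    standard deviation. For pristine natural images the MSCN
    histogram is close to Gaussian; blur and noise distort its
    shape, which is what a degradation classifier exploits."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                              # local mean
    var = gaussian_filter(image ** 2, sigma) - mu ** 2              # local variance
    sigma_map = np.sqrt(np.abs(var))                                # local std
    return (image - mu) / (sigma_map + c)

# Toy input: a random "image" just to exercise the function.
rng = np.random.default_rng(0)
img = rng.random((64, 64)) * 255.0
coeffs = mscn_coefficients(img)
```

Statistical parameters (e.g. variance or generalized-Gaussian shape fits of `coeffs`) then feed the degradation-type classifier.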

    Analysis of Blind Image Quality Index

    An image quality index is a measure for estimating the level of degradation present in an image. Measuring such an index is challenging in the absence of a reference image. Blind image quality assessment refers to evaluating the quality of an image without the need for any reference image. The quality of an image can be characterized by its contrast, sharpness, brightness, and other features extracted from that particular image. Transform-domain features such as the Discrete Cosine Transform (DCT), wavelet transforms, and Gabor filtering can also be used to assess the quality of an image. Different algorithms have been developed by researchers to solve the quality evaluation problem, but these algorithms are not tested on a common platform. The algorithms analyzed in this thesis are the Blind Image Quality Index (BIQI), Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE), BLind Image Integrity Notator using DCT Statistics (BLIINDS), and the Visual Codebook. The standard Laboratory for Image & Video Engineering (LIVE) database is used to analyze these algorithms, and Spearman and Pearson correlation coefficients are used to validate them. The Visual Codebook algorithm was recently proposed by Peng Ye and David Doermann. The existing Visual Codebook algorithm is optimized with respect to the number of clusters used in the K-Means clustering part of the algorithm. The effect of varying the patch size on the performance of the algorithm is studied in this thesis, and an optimum value of the patch size is proposed.
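The validation step mentioned above correlates each algorithm's predicted scores with the human opinion scores (DMOS) in the LIVE database. A sketch of that computation with toy, made-up score vectors (the real evaluation would use the algorithms' outputs on the full database):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical data: predicted quality scores vs. human DMOS ratings.
predicted = np.array([10.0, 25.0, 40.0, 55.0, 70.0, 85.0])
dmos      = np.array([12.0, 22.0, 45.0, 50.0, 72.0, 88.0])

plcc, _ = pearsonr(predicted, dmos)    # Pearson: linear agreement
srocc, _ = spearmanr(predicted, dmos)  # Spearman: rank-order agreement
```

SROCC is the more common headline number for blind IQA, since it ignores any monotonic nonlinearity between an algorithm's scale and the human scale.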

    Image blur estimation based on the average cone of ratio in the wavelet domain

    In this paper, we propose a new algorithm for objective blur estimation using wavelet decomposition. The central idea of our method is to estimate blur as a function of the center of gravity of the average cone ratio (ACR) histogram. The key properties of ACR are twofold: it is powerful in estimating local edge regularity, and it is nearly insensitive to noise. We use these properties to estimate the blurriness of an image irrespective of its noise level; in particular, the center of gravity of the ACR histogram serves as a blur metric. The method is applicable both when a reference image is available and when there is no reference. The results demonstrate consistent performance of the proposed metric for a wide class of natural images and over a wide range of out-of-focus blur. Moreover, the proposed method shows remarkable insensitivity to noise compared with other wavelet-domain methods.
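The final step of the metric, once the per-edge average cone ratios have been computed in the wavelet domain, reduces to the center of gravity (first moment) of their histogram. A sketch of just that step (the wavelet decomposition and ACR computation themselves are omitted; bin count is an illustrative choice):

```python
import numpy as np

def histogram_center_of_gravity(values, bins=32):
    """Center of gravity of the histogram of `values`: the
    count-weighted mean of the bin centers. Applied to ACR values,
    this single number is the blur metric described above."""
    counts, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])   # midpoint of each bin
    total = counts.sum()
    return float((centers * counts).sum() / total) if total else 0.0

# Toy input: uniformly spread values should have a center of
# gravity near the middle of their range.
vals = np.linspace(0.0, 1.0, 101)
cog = histogram_center_of_gravity(vals)
```

As blur increases, edges become smoother, the ACR distribution shifts, and this center of gravity moves accordingly.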

    "Zero-Shot" Super-Resolution using Deep Internal Learning

    Deep learning has led to a dramatic leap in Super-Resolution (SR) performance in the past few years. However, being supervised, these SR methods are restricted to specific training data in which the acquisition of the low-resolution (LR) images from their high-resolution (HR) counterparts is predetermined (e.g., bicubic downscaling) and free of distracting artifacts (e.g., sensor noise, image compression, non-ideal PSF). Real LR images, however, rarely obey these restrictions, leading to poor SR results from state-of-the-art (SotA) methods. In this paper we introduce "Zero-Shot" SR, which exploits the power of deep learning but does not rely on prior training. We exploit the internal recurrence of information inside a single image and train a small image-specific CNN at test time, on examples extracted solely from the input image itself. As such, it can adapt itself to different settings per image. This makes it possible to perform SR on real old photos, noisy images, biological data, and other images where the acquisition process is unknown or non-ideal. On such images, our method outperforms SotA CNN-based SR methods as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method.
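The key data trick is that the training examples come from the test image alone: downscaled copies of the input act as HR "fathers", and a further downscale of each acts as its LR "son". A rough sketch of that pair-generation step under simple assumptions (bilinear downscaling via `scipy.ndimage.zoom`, fixed scale factor; the image-specific CNN and its training loop are omitted):

```python
import numpy as np
from scipy.ndimage import zoom

def internal_training_pairs(img, scale=2, n_scales=3):
    """Build (LR, HR) training pairs from a single image, in the
    spirit of ZSSR: each progressively downscaled copy is an HR
    'father', and its own downscaled version is the LR 'son' the
    network learns to upscale back."""
    pairs = []
    hr = img.astype(np.float64)
    for _ in range(n_scales):
        lr = zoom(hr, 1.0 / scale, order=1)  # LR son of the current father
        pairs.append((lr, hr))
        hr = lr                              # coarser father for the next level
    return pairs

# Toy input just to show the shapes produced.
img = np.arange(1024, dtype=np.float64).reshape(32, 32)
pairs = internal_training_pairs(img)
```

A small CNN trained on these pairs is then applied once to the original input to produce the super-resolved output.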