
    Associations among Image Assessments as Cost Functions in Linear Decomposition: MSE, SSIM, and Correlation Coefficient

    The traditional methods of image assessment, such as mean squared error (MSE), signal-to-noise ratio (SNR), and peak signal-to-noise ratio (PSNR), are all based on the absolute error of images. Pearson's inner-product correlation coefficient (PCC) is also commonly used to measure the similarity between images. The structural similarity (SSIM) index is another important measure, which has been shown to agree better with the human visual system (HVS). Although there are many essential differences among these image assessments, this paper discusses some important associations among them as cost functions in linear decomposition. First, the bases selected from a basis set for a target vector are the same in linear decomposition schemes with the different cost functions MSE, SSIM, and PCC. Moreover, for a target vector, the ratio of the corresponding affine parameters in the MSE-based linear decomposition scheme and the SSIM-based scheme is a constant, which is exactly the value of the PCC between the target vector and its estimated vector. Comment: 11 pages, 0 figures
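    As a point of reference for the three assessments discussed above, the sketch below computes MSE, a single-window (global) form of SSIM, and PCC between a target vector and an estimate. The global SSIM form and the toy vectors are simplifications for illustration; this does not reproduce the paper's decomposition schemes or the constant-ratio result.

```python
import numpy as np

def mse(x, y):
    # Mean squared error between two vectors.
    return np.mean((x - y) ** 2)

def pcc(x, y):
    # Pearson correlation coefficient between two vectors.
    xc, yc = x - x.mean(), y - y.mean()
    return np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))

def ssim_global(x, y, c1=1e-8, c2=1e-8):
    # Single-window (global) SSIM; the standard index averages this over local windows.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
target = rng.standard_normal(256)
estimate = 0.8 * target + 0.1 * rng.standard_normal(256)
print(mse(target, estimate), ssim_global(target, estimate), pcc(target, estimate))
```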

    Sparse Representation-based Image Quality Assessment

    A successful approach to image quality assessment involves comparing the structural information between a distorted image and its reference. However, extracting structural information that is perceptually important to our visual system is a challenging task. This paper addresses this issue by employing a sparse representation-based approach and proposes a new metric called the sparse representation-based quality (SPARQ) index. The proposed method learns the inherent structures of the reference image as a set of basis vectors, such that any structure in the image can be represented as a linear combination of only a few of those basis vectors. This sparse strategy is employed because it is known to generate basis vectors that are qualitatively similar to the receptive fields of the simple cells in the mammalian primary visual cortex. The visual quality of the distorted image is estimated by comparing the structures of the reference and distorted images in terms of the learnt basis vectors resembling cortical cells. Our approach is evaluated on six publicly available subject-rated image quality assessment datasets. The proposed SPARQ index consistently exhibits high correlation with the subjective ratings on all datasets and performs better than or on par with the state-of-the-art. Comment: 10 pages, 3 figures, 3 tables, submitted to a journal
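    A rough sketch of the general strategy described above, assuming scikit-learn is available: learn an overcomplete dictionary from reference patches, sparse-code both images over it, and compare the codes. The patch size, atom count, and cosine comparison are illustrative choices, not the authors' SPARQ formulation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def patch_matrix(img, patch_size=8, max_patches=2000, seed=0):
    # Collect vectorized, zero-mean patches from a grayscale image.
    # The same seed and image shape yield the same patch locations,
    # so reference and distorted patches correspond spatially.
    patches = extract_patches_2d(img, (patch_size, patch_size),
                                 max_patches=max_patches, random_state=seed)
    X = patches.reshape(len(patches), -1).astype(float)
    return X - X.mean(axis=1, keepdims=True)

def sparse_similarity(reference, distorted, n_atoms=64, n_nonzero=5):
    # Learn a dictionary on the reference, sparse-code both images with it,
    # and compare the codes patch by patch (cosine similarity is a stand-in
    # for the paper's structure comparison).
    Xr, Xd = patch_matrix(reference), patch_matrix(distorted)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0)
    Ar = dico.fit(Xr).transform(Xr)
    Ad = dico.transform(Xd)
    num = np.sum(Ar * Ad, axis=1)
    den = np.linalg.norm(Ar, axis=1) * np.linalg.norm(Ad, axis=1) + 1e-12
    return float(np.mean(num / den))
```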

    A new image compression by gradient Haar wavelet

    With the growth of human communications, the use of visual communications has also increased. Advances in image compression methods are one of the main reasons for this growth. This paper first presents the main modes of image compression methods, such as JPEG and JPEG2000, without mathematical details. The paper also describes gradient Haar wavelet transforms in order to construct a preliminary image compression algorithm. A new image compression method is then proposed, based on the preliminary algorithm, that can improve image compression standards. The new method is compared with the original modes of JPEG and JPEG2000 (based on the Haar wavelet) using image quality measures such as MAE, PSNR, and SSIM. The image quality and statistical results confirm that the proposed method can boost image compression standards. It is suggested that the new method be used in part or all of an image compression standard. Comment: 9 pages, 4 figures, 10 tables
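    For orientation, the following is a minimal single-level 2-D Haar transform with detail-coefficient thresholding as a toy compression step. It illustrates only the standard Haar building block, not the paper's gradient Haar variant or its comparison against JPEG/JPEG2000.

```python
import numpy as np

def haar2d(img):
    # One level of the 2-D Haar transform (image dimensions assumed even).
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # horizontal averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # horizontal differences
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Inverse of haar2d.
    h, w = ll.shape
    a = np.zeros((2 * h, w)); d = np.zeros((2 * h, w))
    a[0::2, :], a[1::2, :] = ll + lh, ll - lh
    d[0::2, :], d[1::2, :] = hl + hh, hl - hh
    img = np.zeros((2 * h, 2 * w))
    img[:, 0::2], img[:, 1::2] = a + d, a - d
    return img

def compress(img, keep=0.1):
    # Zero out all but the largest `keep` fraction of detail coefficients.
    ll, lh, hl, hh = haar2d(img.astype(float))
    details = np.concatenate([c.ravel() for c in (lh, hl, hh)])
    thr = np.quantile(np.abs(details), 1.0 - keep)
    lh, hl, hh = [np.where(np.abs(c) >= thr, c, 0.0) for c in (lh, hl, hh)]
    return ihaar2d(ll, lh, hl, hh)
```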

    Content-adaptive non-parametric texture similarity measure

    In this paper, we introduce a non-parametric texture similarity measure based on the singular value decomposition of the curvelet coefficients followed by a content-based truncation of the singular values. This measure focuses on images with repeating structures and directional content, such as those found in natural texture images. Such textural content is critical for image perception, and its similarity plays a vital role in various computer vision applications. In this paper, we evaluate the effectiveness of the proposed measure using a retrieval experiment. The proposed measure outperforms state-of-the-art texture similarity metrics on the CUReT and PerTEx texture databases. Comment: 7 pages, 7 figures, 2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP)
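    Curvelet transforms are not part of the standard Python scientific stack, so the sketch below applies the same SVD-plus-truncation idea to a generic coefficient matrix, with an energy threshold acting as the content-based cut-off. It illustrates the core step under that substitution, not the proposed measure itself.

```python
import numpy as np

def truncated_spectrum(coeffs, energy=0.95):
    # Singular values of a coefficient matrix, truncated at the point where
    # they account for the requested fraction of total energy (a content-based
    # cut-off: smooth content keeps few values, strongly directional or
    # repetitive content keeps more).
    s = np.linalg.svd(coeffs, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(cum, energy)) + 1
    return s[:k]

def spectrum_distance(c1, c2, energy=0.95):
    # Compare two coefficient matrices by their truncated spectra,
    # zero-padding the shorter spectrum.
    s1, s2 = truncated_spectrum(c1, energy), truncated_spectrum(c2, energy)
    n = max(len(s1), len(s2))
    s1 = np.pad(s1, (0, n - len(s1)))
    s2 = np.pad(s2, (0, n - len(s2)))
    return float(np.linalg.norm(s1 - s2))
```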

    Speckle Reduction in Polarimetric SAR Imagery with Stochastic Distances and Nonlocal Means

    This paper presents a technique for reducing speckle in Polarimetric Synthetic Aperture Radar (PolSAR) imagery using Nonlocal Means and a statistical test based on stochastic divergences. The main objective is to select homogeneous pixels in the filtering area through statistical tests between distributions. This proposal uses the complex Wishart model to describe PolSAR data, but the technique can be extended to other models. The weights of the location-variant linear filter are functions of the p-values of tests which verify the hypothesis that two samples come from the same distribution and, therefore, can be used to compute a local mean. The test stems from the family of (h, φ)-divergences, which originated in information theory. This novel technique is compared with the Boxcar, Refined Lee, and IDAN filters. Image quality assessment methods on simulated and real data are employed to validate the performance of this approach. We show that the proposed filter also enhances the polarimetric entropy and preserves the scattering information of the targets. Comment: Accepted for publication in Pattern Recognition
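    The sketch below conveys the filtering idea in a simplified, single-channel setting: each candidate pixel contributes with a weight given by the p-value of a two-sample test against the central patch. A Kolmogorov-Smirnov test stands in for the paper's Wishart-based (h, φ)-divergence tests, so this is illustrative only.

```python
import numpy as np
from scipy.stats import ks_2samp

def nlm_test_filter(img, half_search=5, half_patch=2, alpha=0.05):
    # Nonlocal-means-style filter in which each neighbouring pixel contributes
    # with a weight equal to the p-value of a two-sample test between its patch
    # and the central patch; patches failing the test at level alpha are discarded.
    img = np.asarray(img, dtype=float)
    out = img.copy()
    h, w = img.shape
    r = half_search + half_patch
    for i in range(r, h - r):
        for j in range(r, w - r):
            centre = img[i - half_patch:i + half_patch + 1,
                         j - half_patch:j + half_patch + 1].ravel()
            acc, wsum = 0.0, 0.0
            for di in range(-half_search, half_search + 1):
                for dj in range(-half_search, half_search + 1):
                    cand = img[i + di - half_patch:i + di + half_patch + 1,
                               j + dj - half_patch:j + dj + half_patch + 1].ravel()
                    p = ks_2samp(centre, cand).pvalue
                    if p >= alpha:
                        acc += p * img[i + di, j + dj]
                        wsum += p
            out[i, j] = acc / wsum if wsum > 0 else img[i, j]
    return out
```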

    Fast and Efficient Zero-Learning Image Fusion

    We propose a real-time image fusion method using pre-trained neural networks. Our method generates a single image containing features from multiple sources. We first decompose images into a base layer representing large-scale intensity variations and a detail layer containing small-scale changes. We use visual saliency to fuse the base layers, and deep feature maps extracted from a pre-trained neural network to fuse the detail layers. We conduct ablation studies to analyze our method's parameters, such as decomposition filters, weight construction methods, and network depth and architecture. We then validate its effectiveness and speed on thermal, medical, and multi-focus fusion. We also apply it to multiple image inputs such as multi-exposure sequences. The experimental results demonstrate that our technique achieves state-of-the-art performance in visual quality, objective assessment, and runtime efficiency. Comment: 13 pages, 10 figures
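    A minimal sketch of the two-scale decompose-fuse-recombine structure, with cheap stand-ins: a Gaussian blur for the base/detail split, a local-contrast map instead of the visual saliency model, and per-pixel max-magnitude selection instead of deep feature maps.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale(img, sigma=5.0):
    # Base layer = large-scale intensity; detail layer = the residual.
    base = gaussian_filter(img, sigma)
    return base, img - base

def fuse(images, sigma=5.0):
    # Fuse a list of aligned grayscale images (float arrays, same shape).
    bases, details = zip(*(two_scale(im, sigma) for im in images))
    # Base fusion: weights from a crude saliency map (smoothed local contrast).
    sal = [gaussian_filter(np.abs(im - gaussian_filter(im, 2.0)), 2.0) + 1e-6
           for im in images]
    wsum = np.sum(sal, axis=0)
    fused_base = np.sum([s / wsum * b for s, b in zip(sal, bases)], axis=0)
    # Detail fusion: keep, per pixel, the detail with the largest magnitude.
    details = np.stack(details)
    idx = np.argmax(np.abs(details), axis=0)
    fused_detail = np.take_along_axis(details, idx[None, ...], axis=0)[0]
    return fused_base + fused_detail
```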

    Compressed Image Quality Assessment Based on Saak Features

    Compressed image quality assessment plays an important role in image services, especially in image compression applications, where it can be used to guide the optimization of image processing algorithms. In this paper, we propose an objective image quality assessment algorithm to measure the quality of compressed images. The proposed method utilizes a data-driven transform, Saak (Subspace approximation with augmented kernels), to decompose images into a hierarchical structural feature space. We measure the distortions of Saak features and accumulate these distortions according to the feature importance to the human visual system. Compared with state-of-the-art image quality assessment methods on widely used datasets, the proposed method correlates better with the subjective results. In addition, the proposed method achieves more robust results across different datasets.
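    The Saak transform is a multi-stage, data-driven transform with kernel augmentation; as a rough single-stage stand-in, the sketch below fits plain PCA on reference patches and accumulates per-component distortions, using explained-variance ratios as a crude importance weighting. It illustrates the pipeline shape only, not the authors' method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d

def feature_distortion(reference, distorted, patch=8, n_comp=16, seed=0):
    # Fit a one-stage, data-driven transform (plain PCA here, standing in for
    # the multi-stage Saak transform) on reference patches, then accumulate the
    # per-component distortion between reference and distorted images in that
    # feature space. The same seed gives matching patch locations.
    Pr = extract_patches_2d(reference, (patch, patch), max_patches=2000,
                            random_state=seed).reshape(-1, patch * patch)
    Pd = extract_patches_2d(distorted, (patch, patch), max_patches=2000,
                            random_state=seed).reshape(-1, patch * patch)
    pca = PCA(n_components=n_comp).fit(Pr)
    Fr, Fd = pca.transform(Pr), pca.transform(Pd)
    # Weight leading components more heavily (a crude stand-in for the paper's
    # importance weighting towards the human visual system).
    weights = pca.explained_variance_ratio_
    return float(np.sum(weights * np.mean((Fr - Fd) ** 2, axis=0)))
```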

    Real-Time Adaptive Image Compression

    We present a machine learning-based approach to lossy image compression which outperforms all existing codecs while running in real time. Our algorithm typically produces files 2.5 times smaller than JPEG and JPEG 2000, 2 times smaller than WebP, and 1.7 times smaller than BPG on datasets of generic images across all quality levels. At the same time, our codec is designed to be lightweight and deployable: for example, it can encode or decode the Kodak dataset in around 10 ms per image on GPU. Our architecture is an autoencoder featuring pyramidal analysis, an adaptive coding module, and regularization of the expected codelength. We also supplement our approach with adversarial training specialized towards use in a compression setting: this enables us to produce visually pleasing reconstructions for very low bitrates. Comment: Published at ICML 2017
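    To illustrate only the basic encode-quantize-decode structure (none of the pyramidal analysis, adaptive coding, codelength regularization, or adversarial training described above), a minimal PyTorch autoencoder with straight-through rounding might look like this:

```python
import torch
import torch.nn as nn

class TinyCompressor(nn.Module):
    # Minimal encode -> quantize -> decode structure.
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        # Straight-through rounding: quantize in the forward pass but let
        # gradients flow through unchanged during training.
        quantized = code + (torch.round(code) - code).detach()
        return self.decoder(quantized)

x = torch.rand(1, 3, 64, 64)
recon = TinyCompressor()(x)
loss = nn.functional.mse_loss(recon, x)   # distortion term only; no rate term here
```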

    CSV: Image Quality Assessment Based on Color, Structure, and Visual System

    This paper presents a full-reference image quality estimator based on color, structure, and visual system characteristics, denoted as CSV. In contrast to the majority of existing methods, we quantify perceptual color degradations rather than absolute pixel-wise changes. We use the CIEDE2000 color difference formulation to quantify low-level color degradations and the Earth Mover's Distance between color name descriptors to measure significant color degradations. In addition to the perceptual color difference, CSV also contains structural and perceptual differences. Structural feature maps are obtained by mean subtraction and divisive normalization, and perceptual feature maps are obtained from contrast sensitivity formulations of retinal ganglion cells. The proposed quality estimator CSV is tested on the LIVE, Multiply Distorted LIVE, and TID2013 databases, and it is always among the top two performing quality estimators in terms of at least one of ranking, monotonic behavior, or linearity. Comment: 31 pages, 9 figures, 7 tables
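    A partial sketch of the two ingredients above that map directly onto standard library calls: a pixel-wise CIEDE2000 map for the low-level color term (via scikit-image) and a mean-subtraction/divisive-normalization structural map. The color-name Earth Mover's Distance term, the retinal-ganglion-cell contrast sensitivity maps, and the paper's pooling are omitted, and the final aggregation below is a crude placeholder.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.color import rgb2lab, deltaE_ciede2000

def color_difference_map(ref_rgb, dist_rgb):
    # Pixel-wise CIEDE2000 difference (the low-level color term).
    return deltaE_ciede2000(rgb2lab(ref_rgb), rgb2lab(dist_rgb))

def structural_map(gray, size=7, eps=1e-3):
    # Mean subtraction followed by divisive normalization.
    mu = uniform_filter(gray, size)
    centered = gray - mu
    sigma = np.sqrt(np.maximum(uniform_filter(centered ** 2, size), 0.0))
    return centered / (sigma + eps)

def csv_like_score(ref_rgb, dist_rgb):
    # Pool a color term with a structural term; a crude aggregation,
    # not the paper's CSV pooling (lower means more similar).
    color_term = np.mean(color_difference_map(ref_rgb, dist_rgb))
    ref_l = rgb2lab(ref_rgb)[..., 0]
    dist_l = rgb2lab(dist_rgb)[..., 0]
    struct_term = np.mean(np.abs(structural_map(ref_l) - structural_map(dist_l)))
    return color_term + struct_term
```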

    No Reference Stereoscopic Video Quality Assessment Using Joint Motion and Depth Statistics

    We present a no-reference (NR) quality assessment algorithm for assessing the perceptual quality of natural stereoscopic 3D (S3D) videos. This work is inspired by our finding that the joint statistics of the subband coefficients of motion (optical flow or motion vector magnitude) and depth (disparity map) of natural S3D videos possess a unique signature. Specifically, we empirically show that the joint statistics of the motion and depth subband coefficients of S3D video frames can be modeled accurately using a Bivariate Generalized Gaussian Distribution (BGGD). We then demonstrate that the parameters of the BGGD model possess the ability to discern quality variations in S3D videos. Therefore, the BGGD model parameters are employed as motion and depth quality features. In addition to these features, we rely on a frame-level spatial quality feature that is computed using a robust off-the-shelf NR image quality assessment (IQA) algorithm. These frame-level motion, depth, and spatial features are consolidated and used with the corresponding S3D video's difference mean opinion score (DMOS) labels for supervised learning using support vector regression (SVR). The overall quality of an S3D video is computed by averaging the frame-level quality predictions of the constituent video frames. The proposed algorithm, dubbed Video QUality Evaluation using MOtion and DEpth Statistics (VQUEMODES), is shown to outperform state-of-the-art methods when evaluated on the IRCCYN and LFOVIA S3D subjective quality assessment databases. Comment: 13 pages, 7 figures, 7 tables
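    As a simplified stand-in for the pipeline described above: univariate generalized Gaussian fits (scipy's gennorm) on high-pass subbands of motion magnitude and disparity replace the bivariate GGD, and an SVR maps the pooled features to DMOS. The function and field names are illustrative, not the VQUEMODES implementation.

```python
import numpy as np
from scipy.stats import gennorm
from scipy.ndimage import laplace
from sklearn.svm import SVR

def ggd_features(field):
    # Fit a (univariate) generalized Gaussian to a high-pass subband of the
    # field (motion magnitude or disparity map) and return its shape and
    # scale parameters as quality-aware features.
    coeffs = laplace(np.asarray(field, dtype=float)).ravel()
    beta, loc, scale = gennorm.fit(coeffs, floc=0.0)
    return [beta, scale]

def frame_features(motion_mag, disparity):
    # Per-frame feature vector from motion and depth statistics.
    return np.array(ggd_features(motion_mag) + ggd_features(disparity))

def train_quality_model(feature_matrix, dmos):
    # Supervised mapping from per-video features (e.g., frame features
    # averaged over time) to subjective DMOS labels.
    return SVR(kernel='rbf', C=10.0).fit(feature_matrix, dmos)
```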