
    Quality index for stereoscopic images by jointly evaluating cyclopean amplitude and cyclopean phase

    With the widespread application of three-dimensional (3-D) technology, measuring the quality of experience for 3-D multimedia content plays an increasingly important role. In this paper, we propose a full-reference stereo image quality assessment (SIQA) framework that focuses on binocular visual properties and the use of low-level features. On one hand, based on the fact that the human visual system understands an image mainly through its low-level features, the local phase and local amplitude extracted from the phase congruency measurement are employed as primary features. Because amplitude alone is a less reliable quality cue, visual saliency is applied to modify the amplitude map. On the other hand, to fully account for binocular rivalry, we construct a cyclopean amplitude map and a cyclopean phase map, so that image features and binocular visual properties are combined with each other. A novel binocular modulation function in the spatial domain is also adopted in the overall quality prediction from amplitude and phase. Extensive experiments demonstrate that the proposed framework achieves higher consistency with subjective tests than relevant SIQA metrics.
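The cyclopean-map idea above can be sketched as a weighted fusion of the two views' feature maps. This is a minimal, hypothetical illustration: it assumes each view's contribution is proportional to its local contrast energy (a common simplification of binocular rivalry models), not the paper's exact modulation function.

```python
import numpy as np

def cyclopean_map(left_feat, right_feat, left_energy, right_energy, eps=1e-8):
    """Fuse per-pixel feature maps from the left and right views.

    Hypothetical rivalry model: each view's weight is its share of the
    total local contrast energy, so the dominant view contributes more.
    """
    w_left = left_energy / (left_energy + right_energy + eps)
    w_right = 1.0 - w_left
    return w_left * left_feat + w_right * right_feat
```

The same weighting can be reused for both the amplitude and the phase maps, which is what lets the framework treat them symmetrically.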

    A Detail Based Method for Linear Full Reference Image Quality Prediction

    In this paper, a novel full-reference method is proposed for image quality assessment, using the combination of two separate metrics to measure the perceptually distinct impact of detail losses and of spurious details. To this purpose, the gradient of the impaired image is locally decomposed as a predicted version of the original gradient plus a gradient residual. It is assumed that the detail attenuation identifies the detail loss, whereas the gradient residual describes the spurious details. It turns out that the perceptual impact of detail losses is roughly linear in the loss of positional Fisher information, while the perceptual impact of the spurious details is roughly proportional to a logarithmic measure of the signal-to-residual ratio. The affine combination of these two metrics forms a new index strongly correlated with the empirical Differential Mean Opinion Score (DMOS) for a significant class of image impairments, as verified on three independent popular databases. The method allowed alignment and merging of DMOS data coming from these different databases onto a common DMOS scale by affine transformations. Unexpectedly, the DMOS scale setting is possible by the analysis of a single image affected by additive noise.
    Comment: 15 pages, 9 figures. Copyright notice: the paper was accepted for publication in the IEEE Transactions on Image Processing on 19/09/2017 and the copyright has been transferred to the IEEE.
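The gradient decomposition described above can be illustrated with a simple least-squares fit. This sketch is an assumption-laden stand-in for the paper's local procedure: it predicts the distorted gradient as a single scalar multiple of the reference gradient, reads the attenuation as a crude detail-loss proxy, and reports the signal-to-residual ratio on a log (dB) scale.

```python
import numpy as np

def detail_metrics(ref_grad, dist_grad, eps=1e-8):
    """Decompose a distorted gradient field into a predicted part
    (attenuated reference detail) plus a residual (spurious detail).

    Returns (detail_loss, snr_db): the attenuation-based loss proxy and
    a logarithmic signal-to-residual ratio, echoing the two metrics
    combined in the paper (illustrative, not the published formulas).
    """
    # Least-squares scalar fit: dist_grad ~ a * ref_grad.
    a = np.sum(ref_grad * dist_grad) / (np.sum(ref_grad ** 2) + eps)
    predicted = a * ref_grad            # attenuated detail
    residual = dist_grad - predicted    # spurious (added) detail
    detail_loss = 1.0 - a               # how much detail was lost
    snr_db = 10.0 * np.log10(
        (np.sum(predicted ** 2) + eps) / (np.sum(residual ** 2) + eps)
    )
    return detail_loss, snr_db
```

A pure blur-like distortion (uniform attenuation, no added structure) then yields a large signal-to-residual ratio, while additive noise drives it down; the final index would be an affine combination of the two outputs.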

    Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment

    We present a deep neural network-based approach to image quality assessment (IQA). The network is trained end to end and comprises ten convolutional layers and five pooling layers for feature extraction, and two fully connected layers for regression, which makes it significantly deeper than related IQA models. Unique features of the proposed architecture are that: 1) with slight adaptations it can be used in a no-reference (NR) as well as in a full-reference (FR) IQA setting and 2) it allows for joint learning of local quality and local weights, i.e., the relative importance of local quality to the global quality estimate, in a unified framework. Our approach is purely data-driven and does not rely on hand-crafted features or other types of prior domain knowledge about the human visual system or image statistics. We evaluate the proposed approach on the LIVE, CSIQ, and TID2013 databases as well as the LIVE In the Wild Image Quality Challenge database and show superior performance to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation shows a high ability to generalize between different databases, indicating a high robustness of the learned features.
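The joint learning of local quality and local weights can be summarized by how the two outputs are pooled into a global score: a weighted average of per-patch quality estimates, where a second head predicts how much each patch matters. The sketch below shows only this pooling step, with hypothetical names; the convolutional feature extractor and the two regression heads are assumed to exist upstream.

```python
import numpy as np

def weighted_global_quality(local_quality, local_weights, eps=1e-8):
    """Pool per-patch quality scores into one global estimate.

    local_quality : per-patch quality predictions (any shape)
    local_weights : matching per-patch importance predictions

    Weights are clamped to be non-negative before normalization, so the
    result is a convex combination of the patch scores.
    """
    w = np.maximum(local_weights, 0.0) + eps
    return float(np.sum(w * local_quality) / np.sum(w))
```

Because the weights are learned jointly with the quality head, patches that the network deems uninformative (e.g. flat sky regions) can be down-weighted automatically instead of being averaged in uniformly.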