14,699 research outputs found

    Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment

    We present a deep neural network-based approach to image quality assessment (IQA). The network is trained end-to-end and comprises ten convolutional layers and five pooling layers for feature extraction, followed by two fully connected layers for regression, which makes it significantly deeper than related IQA models. Unique features of the proposed architecture are that 1) with slight adaptations it can be used in a no-reference (NR) as well as in a full-reference (FR) IQA setting, and 2) it allows for joint learning of local quality and local weights, i.e., the relative importance of local quality to the global quality estimate, in a unified framework. Our approach is purely data-driven and does not rely on hand-crafted features or other prior domain knowledge about the human visual system or image statistics. We evaluate the proposed approach on the LIVE, CSIQ, and TID2013 databases, as well as the LIVE In the Wild Image Quality Challenge Database, and show superior performance to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation shows a high ability to generalize between different databases, indicating high robustness of the learned features.
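    The joint learning of local quality and local weights described in the abstract amounts to a weighted-average pooling of patchwise predictions. A minimal sketch of that pooling step, assuming the network has already produced per-patch quality scores and non-negative importance weights (the function name and inputs here are illustrative, not the paper's API):

    ```python
    import numpy as np

    def weighted_quality_pooling(local_quality, local_weights, eps=1e-6):
        """Combine patchwise quality estimates into one global score.

        Hypothetical illustration of weighted-average pooling: each patch i
        contributes quality q_i with a learned non-negative importance w_i,
        and the global estimate is sum(w_i * q_i) / sum(w_i).
        """
        q = np.asarray(local_quality, dtype=float)
        # Importance weights are constrained to be non-negative.
        w = np.maximum(np.asarray(local_weights, dtype=float), 0.0)
        return float(np.sum(w * q) / (np.sum(w) + eps))

    # Patches with higher weight dominate the global score.
    score = weighted_quality_pooling([0.9, 0.2, 0.5], [1.0, 0.1, 0.5])
    ```

    In the paper's setting both `q` and `w` come from the same network heads, so backpropagation through this ratio lets the model learn which image regions matter most for perceived quality.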


    Image quality assessment based on harmonics gain/loss information

    We present an objective reduced-reference image quality assessment method based on harmonic gain/loss information, obtained through a discriminative analysis of the local harmonic strength (LHS). The LHS is computed from the gradient of an image, and its value represents the relative degree of blockiness in the image when it corresponds to energy gain. Furthermore, comparing LHS values from an original, distortion-free image with those from a degraded, processed, or compressed version shows that the LHS can also indicate other types of degradation, such as blurriness, which corresponds to energy loss. Our simulations show that a single metric based on this gain/loss information can rate the quality of images encoded by various encoders, such as DCT-based JPEG and wavelet-based JPEG 2000, as well as of various processed images. We show that our method can overcome some limitations of the traditional PSNR.
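    The core idea can be sketched as follows: compute the gradient of an image block, take its spectrum, and measure how much energy sits at the harmonics of the 8-pixel block period where JPEG blockiness concentrates. This is a crude, hypothetical illustration of the LHS concept, not the paper's exact formula:

    ```python
    import numpy as np

    def local_harmonic_strength(block):
        """Rough sketch (assumed formulation): fraction of gradient-spectrum
        energy at harmonics of the 8-pixel block period.

        For an N-point FFT, an 8-pixel blocking period places its harmonics
        at every (N // 8)-th frequency bin, so we sample those bins and
        normalize by the total spectral energy.
        """
        gy, gx = np.gradient(block.astype(float))
        grad_mag = np.hypot(gx, gy)               # gradient magnitude
        spectrum = np.abs(np.fft.fft2(grad_mag))  # gradient spectrum
        h, w = spectrum.shape
        harmonics = spectrum[::h // 8 or 1, ::w // 8 or 1]
        return float(harmonics.sum() / (spectrum.sum() + 1e-9))
    ```

    A reduced-reference metric along these lines would compare this value between the reference and the distorted image: an increase suggests blocking artifacts (energy gain), while a decrease suggests blurring (energy loss).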