3 research outputs found

    Stereoscopic image quality assessment by deep convolutional neural network

    The final publication is available at Elsevier via https://doi.org/10.1016/j.jvcir.2018.12.006. © 2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
    In this paper, we propose a no-reference (NR) quality assessment method for stereoscopic images based on a deep convolutional neural network (DCNN). The method is inspired by the internal generative mechanism (IGM) of the human brain, which suggests that the brain first analyzes perceptual information and then extracts effective visual information. To simulate the interaction process in the human visual system (HVS) when perceiving the visual quality of stereoscopic images, we construct a two-channel DCNN. First, we design a Siamese network to extract high-level semantic features from the left- and right-view images, simulating the process of information extraction in the brain. Second, to imitate the information interaction process in the HVS, we combine the high-level features of the left- and right-view images by convolutional operations. Finally, the interactively processed information is used to estimate the visual quality of the stereoscopic image. Experimental results show that the proposed method estimates the visual quality of stereoscopic images accurately, which also demonstrates the effectiveness of the proposed two-channel convolutional neural network in simulating the perception mechanism of the HVS.
    This work was supported in part by the Natural Science Foundation of China under Grants 61822109 and 61571212, by the Fok Ying Tung Education Foundation under Grant 161061, and by the Natural Science Foundation of Jiangxi under Grant 20181BBH80002.
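The two-channel architecture described in the abstract (a shared Siamese branch per view, convolutional fusion, then quality regression) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual network: the layer sizes, PyTorch framework choice, and class name `TwoChannelIQA` are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class TwoChannelIQA(nn.Module):
    """Illustrative sketch (not the paper's exact model): a shared
    Siamese convolutional branch extracts features from the left and
    right views, the feature maps are fused by a convolution to mimic
    the interaction step, and a small head regresses a quality score."""

    def __init__(self):
        super().__init__()
        # Shared branch: the same weights process both views (Siamese).
        self.branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Fusion: concatenate left/right features and mix them with a
        # 1x1 convolution, imitating the HVS interaction process.
        self.fuse = nn.Conv2d(64, 32, 1)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, left, right):
        fl, fr = self.branch(left), self.branch(right)
        fused = torch.relu(self.fuse(torch.cat([fl, fr], dim=1)))
        return self.head(fused).squeeze(1)  # one score per stereo pair

model = TwoChannelIQA()
left = torch.randn(2, 3, 64, 64)   # batch of 2 left-view images
right = torch.randn(2, 3, 64, 64)  # matching right-view images
scores = model(left, right)
print(scores.shape)  # torch.Size([2])
```

Sharing one branch for both views keeps the parameter count down and encodes the assumption that the same low-level feature extraction applies to either eye's image.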

    TTL-IQA: transitive transfer learning based no-reference image quality assessment

    Image quality assessment (IQA) based on deep learning faces an overfitting problem due to the limited training samples available in existing IQA databases. Transfer learning is a plausible solution, in which shared features derived from the large-scale ImageNet source domain could be transferred from the original recognition task to the intended IQA task. However, the ImageNet source domain and the IQA target domain, as well as their corresponding tasks, are not directly related. In this paper, we propose a new transitive transfer learning method for no-reference image quality assessment (TTL-IQA). First, the architecture of multi-domain transitive transfer learning for IQA is developed to transfer the ImageNet source domain to an auxiliary domain, and then to the IQA target domain. Second, the auxiliary domain and the auxiliary task are constructed by a new generative adversarial network based on distortion translation (DT-GAN). Furthermore, a TTL network for semantic feature transfer (SFTnet) is proposed to optimize the shared features for TTL-IQA. Experiments are conducted to evaluate the performance of the proposed method on various IQA databases, including LIVE, TID2013, CSIQ, LIVE Multiply Distorted, and LIVE Challenge. The results show that the proposed method significantly outperforms the state-of-the-art methods. In addition, our proposed method demonstrates strong generalization ability.
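The core transfer-learning idea in the abstract (reusing features learned on a large source task and retraining only a task-specific head for quality regression) can be sketched as below. This is a simplified sketch of plain feature transfer, not the paper's full transitive pipeline through the auxiliary DT-GAN domain; the tiny stand-in backbone and PyTorch framework choice are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Stand-in for an ImageNet-pretrained trunk; in practice this would be
# a real pretrained network whose convolutional features are reused.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in backbone.parameters():
    p.requires_grad = False  # freeze shared source-domain features

# IQA-specific regression head, trained from scratch on the target task.
head = nn.Linear(8, 1)

x = torch.randn(4, 3, 32, 32)          # batch of 4 target-domain images
scores = head(backbone(x))             # predicted quality scores
print(scores.shape)  # torch.Size([4, 1])
```

Freezing the backbone and training only the head is the simplest form of the feature-sharing step; the paper's transitive scheme inserts a distortion-translation auxiliary domain between the source and target to bridge the domain gap.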