3 research outputs found

    A deep neural network for image quality assessment

    This paper presents a no-reference (NR) image quality assessment (IQA) method based on a deep convolutional neural network (CNN). The CNN takes unpreprocessed image patches as input and estimates the quality without employing any domain knowledge. Features and natural scene statistics are thereby learned in a purely data-driven manner and combined with pooling and regression in a single framework. We evaluate the network on the LIVE database and achieve a linear Pearson correlation superior to state-of-the-art NR IQA methods. We also apply the network to the image forensics task of decoder-sided quantization parameter estimation, where we achieve a correlation of r = 0.989.
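    For illustration only, a minimal Python/PyTorch sketch of the patch-based NR IQA idea described above: a small CNN maps raw 32x32 patches to scalar quality scores, which are averaged into an image-level score. The class name PatchNRIQA, the patch size and all layer sizes are assumptions made for this sketch, not the authors' architecture.

    # Minimal sketch (assumed sizes, not the paper's exact network): a CNN that maps
    # raw 32x32 image patches to scalar quality scores; the per-image quality is the
    # mean over patch scores, i.e. pooling and regression in one framework.
    import torch
    import torch.nn as nn

    class PatchNRIQA(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(                    # learned features, no hand-crafted NSS
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                              # 32x32 -> 16x16
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                              # 16x16 -> 8x8
            )
            self.regressor = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
                nn.Linear(256, 1),                            # scalar patch quality estimate
            )

        def forward(self, patches):                           # patches: (N, 3, 32, 32)
            scores = self.regressor(self.features(patches))
            return scores.mean()                              # image score = mean over patches

    # Usage: score a set of unpreprocessed patches cropped from one image.
    model = PatchNRIQA()
    patches = torch.rand(16, 3, 32, 32)                       # 16 random patches (placeholder data)
    print(model(patches).item())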

    Neural network-based full-reference image quality assessment

    This paper presents a full-reference (FR) image quality assessment (IQA) method based on a deep convolutional neural network (CNN). The CNN extracts features from distorted and reference image patches and estimates the perceived quality of the distorted patches by combining and regressing the feature vectors with two fully connected layers. The CNN consists of 12 convolution and max-pooling layers; activations are rectified linear units (ReLU). The overall IQA score is computed by aggregating the patch quality estimates. Three different feature combination methods and two aggregation approaches are proposed and evaluated. Experiments are performed on the LIVE and TID2013 databases; on both, linear Pearson correlations superior to state-of-the-art IQA methods are achieved.
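    Again purely illustrative, a sketch of the FR IQA scheme under stated assumptions: a shared (Siamese) backbone extracts feature vectors from reference and distorted patches, the vectors are fused (here by concatenating f_ref, f_dist and their difference, one plausible choice rather than the paper's specific combination methods), and two fully connected layers regress patch quality, with patch scores averaged into the image score. The name FRPatchIQA and all layer sizes are assumptions, not the paper's 12-layer network.

    # Hedged sketch of the FR IQA idea with assumed layer sizes.
    import torch
    import torch.nn as nn

    class FRPatchIQA(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(                              # shared feature extractor
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 32 -> 16
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16 -> 8
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
                nn.Flatten(),
            )
            feat_dim = 128 * 4 * 4
            self.head = nn.Sequential(                                  # two fully connected layers
                nn.Linear(3 * feat_dim, 512), nn.ReLU(),
                nn.Linear(512, 1),
            )

        def forward(self, ref, dist):                                   # (N, 3, 32, 32) each
            f_ref, f_dist = self.backbone(ref), self.backbone(dist)
            fused = torch.cat([f_ref, f_dist, f_ref - f_dist], dim=1)   # one possible combination
            patch_scores = self.head(fused)
            return patch_scores.mean()                                  # simple average aggregation

    # Usage: reference and distorted patches taken from corresponding positions.
    model = FRPatchIQA()
    ref = torch.rand(8, 3, 32, 32)
    dist = torch.rand(8, 3, 32, 32)
    print(model(ref, dist).item())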

    Neural network based intra prediction for video coding

    Today’s hybrid video coding systems typically perform intra-picture prediction, whereby blocks of samples are predicted from previously decoded samples of the same picture. For example, HEVC uses a set of angular prediction patterns to exploit directional sample correlations. In this paper, we propose new intra-picture prediction modes whose construction consists of two steps: first, a set of features is extracted from the decoded samples; second, these features are used to select a predefined image pattern as the prediction signal. Since several intra prediction modes are proposed for each block shape, a specific signaling scheme is also proposed. Our intra prediction modes lead to significant coding gains over state-of-the-art video coding technology.
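    A toy Python sketch of the two-step construction described above, under heavy assumptions: simple statistics of the already-decoded neighbouring samples act as the features, and they select one of a few predefined 8x8 patterns as the prediction signal. The feature definitions, the pattern dictionary and the selection rule are invented for illustration and are not the proposed modes.

    # Toy illustration of feature-based selection of a predefined prediction pattern.
    import numpy as np

    BLOCK = 8
    # A small dictionary of predefined 8x8 patterns: flat, horizontal and vertical ramps.
    PATTERNS = np.stack([
        np.full((BLOCK, BLOCK), 128.0),
        np.tile(np.linspace(64, 192, BLOCK), (BLOCK, 1)),           # horizontal gradient
        np.tile(np.linspace(64, 192, BLOCK)[:, None], (1, BLOCK)),  # vertical gradient
    ])

    def extract_features(top_row, left_col):
        """Toy features from decoded neighbours: means and first-order differences."""
        return np.array([top_row.mean(), left_col.mean(),
                         np.abs(np.diff(top_row)).mean(),
                         np.abs(np.diff(left_col)).mean()])

    def predict_block(top_row, left_col):
        """Select the predefined pattern whose DC best matches the neighbour statistics.

        The selection depends only on already-decoded samples, so encoder and decoder
        can derive the same pattern; only the choice of intra mode would be signaled.
        """
        feats = extract_features(top_row, left_col)
        target_dc = 0.5 * (feats[0] + feats[1])
        idx = int(np.argmin(np.abs(PATTERNS.mean(axis=(1, 2)) - target_dc)))
        return PATTERNS[idx], idx

    # Usage with placeholder reference samples above and to the left of the block.
    top = np.linspace(100, 160, BLOCK)
    left = np.linspace(110, 150, BLOCK)
    pred, pattern_index = predict_block(top, left)
    print(pattern_index, pred.shape)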