
    No-reference image quality assessment through the von Mises distribution

    An innovative way of calculating the von Mises distribution (VMD) of image entropy is introduced in this paper. The VMD's concentration parameter, together with a fitness parameter defined later in the paper, is analyzed experimentally to determine its suitability as an image quality assessment measure under particular distortions such as Gaussian blur or additive Gaussian noise. To obtain this measure, the local Rényi entropy is calculated in four equally spaced orientations and used to determine the parameters of the von Mises distribution of the image entropy. For contextual images, experimental results show that the best-in-focus, noise-free images are associated with the highest values of the von Mises concentration parameter and the closest fit of the image data to the von Mises distribution model. The proposed von Mises fitness parameter also appears experimentally to be a suitable no-reference image quality assessment indicator for non-contextual images.
    Comment: 29 pages, 11 figures
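    As a rough illustration of the pipeline the abstract describes, the sketch below estimates orientation-dependent Rényi entropies and fits a von Mises distribution to them. It is a minimal approximation under stated assumptions, not the authors' method: the abstract does not say how the local Rényi entropy is computed, so the entropy of simple directional pixel differences stands in for it, and the closed-form kappa estimate and the `fitness` score are hypothetical stand-ins for the parameters the paper defines.

```python
import numpy as np

def renyi_entropy(samples, alpha=3.0, bins=64):
    """Order-alpha Renyi entropy estimated from a histogram of the samples."""
    hist, _ = np.histogram(samples, bins=bins)
    p = hist[hist > 0] / hist.sum()
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))           # Shannon limit
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def directional_entropies(img, alpha=3.0):
    """Entropy of pixel differences along 0, 45, 90 and 135 degrees.

    A stand-in for the paper's local Renyi entropy: any orientation-selective
    entropy estimate could be plugged in here instead.
    """
    d0   = np.diff(img, axis=1).ravel()                # 0 degrees (horizontal)
    d45  = (img[1:, 1:] - img[:-1, :-1]).ravel()       # 45 degrees
    d90  = np.diff(img, axis=0).ravel()                # 90 degrees (vertical)
    d135 = (img[1:, :-1] - img[:-1, 1:]).ravel()       # 135 degrees
    angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
    ents = np.array([renyi_entropy(d, alpha) for d in (d0, d45, d90, d135)])
    return angles, ents

def von_mises_summary(angles, weights):
    """Weighted von Mises fit over orientations (axial data, angles doubled)."""
    w = weights / (weights.sum() + 1e-12)
    C = np.sum(w * np.cos(2.0 * angles))
    S = np.sum(w * np.sin(2.0 * angles))
    R = np.hypot(C, S)                                 # mean resultant length
    mu = 0.5 * np.arctan2(S, C)
    kappa = R * (2.0 - R**2) / (1.0 - R**2 + 1e-12)    # standard approximation
    # hypothetical goodness-of-fit: fitted density vs. normalized entropies
    dens = np.exp(kappa * np.cos(2.0 * (angles - mu)))
    fitness = 1.0 - float(np.mean((dens / dens.sum() - w) ** 2))
    return float(kappa), float(mu), fitness

# Hypothetical usage: higher kappa and fitness would indicate better quality
# angles, ents = directional_entropies(gray_image.astype(float))
# kappa, mu, fitness = von_mises_summary(angles, ents)
```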

    Learned Perceptual Image Enhancement

    Learning a typical image enhancement pipeline involves minimization of a loss function between enhanced and reference images. While L1 and L2 losses are perhaps the most widely used functions for this purpose, they do not necessarily lead to perceptually compelling results. In this paper, we show that adding a learned no-reference image quality metric to the loss can significantly improve enhancement operators. This metric is implemented using a CNN (convolutional neural network) trained on a large-scale dataset labelled with aesthetic preferences of human raters. This loss allows us to conveniently perform back-propagation in our learning framework to simultaneously optimize for similarity to a given ground-truth reference and for perceptual quality. The perceptual loss is used only to train the parameters of the image processing operators and does not impose any extra complexity at inference time. Our experiments demonstrate that this loss can be effective for tuning a variety of operators such as local tone mapping and dehazing.
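    The training setup described in the abstract amounts to a composite loss: a pixel-wise fidelity term toward the ground-truth reference plus the negated score of a frozen, pre-trained quality CNN as the perceptual term. The PyTorch sketch below is a hedged illustration of that idea, not the paper's implementation; `quality_net` and the blending `weight` are hypothetical placeholders for the aesthetics-trained CNN and the weighting the authors actually use.

```python
import torch
import torch.nn as nn

class QualityAwareLoss(nn.Module):
    """L1 fidelity to the reference plus a learned no-reference quality term.

    `quality_net` is assumed to be a CNN pre-trained to predict human
    aesthetic ratings (higher = better). Its weights stay frozen; only the
    enhancement operator being trained receives gradients through it.
    """

    def __init__(self, quality_net, weight=0.1):
        super().__init__()
        self.quality_net = quality_net.eval()
        for p in self.quality_net.parameters():
            p.requires_grad_(False)          # the metric itself is not updated
        self.l1 = nn.L1Loss()
        self.weight = weight

    def forward(self, enhanced, reference):
        fidelity = self.l1(enhanced, reference)
        # maximizing predicted quality == minimizing its negated mean score
        quality = -self.quality_net(enhanced).mean()
        return fidelity + self.weight * quality

# Hypothetical usage with an enhancement operator `operator`:
# loss_fn = QualityAwareLoss(pretrained_quality_cnn, weight=0.05)
# loss = loss_fn(operator(raw_batch), reference_batch)
# loss.backward()   # gradients reach the operator only; inference is unchanged
```

    Because the quality network appears only in the loss, the trained operator runs at its original cost at inference time, consistent with the abstract's claim of no added complexity.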