Image Aesthetics Assessment Using Composite Features from off-the-Shelf Deep Models
Deep convolutional neural networks have recently achieved great success on the
image aesthetics assessment task. In this paper, we propose an efficient method
that takes the global, local, and scene-aware information of images into
consideration and classifies the composite features extracted from the
corresponding pretrained deep learning models with a support vector
machine. Contrary to popular methods that require fine-tuning or
training a new model from scratch, our training-free method directly takes the
deep features generated by off-the-shelf models for image classification and
scene recognition. We also analyze the factors that influence performance from
two aspects: the architecture of the deep neural network and the contribution
of local and scene-aware information. We find that deep residual networks
produce more aesthetics-aware image representations and that composite
features improve overall performance. Experiments
on common large-scale aesthetics assessment benchmarks demonstrate that our
method outperforms the state-of-the-art results in photo aesthetics assessment.
Comment: Accepted by ICIP 201
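The pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: random arrays stand in for the deep features that would come from frozen image-classification and scene-recognition models, and the feature dimensions and label rule are made up for the example.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical stand-ins for features from off-the-shelf deep models
# (dimensions are illustrative, not from the paper).
rng = np.random.default_rng(0)
n = 200
global_feats = rng.normal(size=(n, 64))  # e.g. pooled classification-net activations
local_feats = rng.normal(size=(n, 32))   # e.g. features from local image crops
scene_feats = rng.normal(size=(n, 16))   # e.g. scene-recognition features

# Composite feature: concatenate the three descriptors per image.
X = np.concatenate([global_feats, local_feats, scene_feats], axis=1)
# Toy binary labels (high/low aesthetic quality), linearly separable
# by construction, purely for illustration.
y = (X @ rng.normal(size=X.shape[1]) > 0).astype(int)

# The deep models stay frozen ("training-free"); only this lightweight
# SVM classifier is fit on the extracted features.
clf = LinearSVC(max_iter=5000).fit(X, y)
train_acc = clf.score(X, y)
```

The key design point is that no gradients ever flow into the feature extractors; swapping the backbone (e.g. a deeper residual network) only changes the feature vectors fed to the SVM.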
Learned Perceptual Image Enhancement
Training a typical image enhancement pipeline involves minimizing a loss
function between enhanced and reference images. While L1 and L2 losses are
perhaps the most widely used functions for this purpose, they do not
necessarily lead to perceptually compelling results. In this paper, we show
that adding a learned no-reference image quality metric to the loss can
significantly improve enhancement operators. This metric is implemented using a
CNN (convolutional neural network) trained on a large-scale dataset labelled
with aesthetic preferences of human raters. This loss allows us to conveniently
perform back-propagation in our learning framework to simultaneously optimize
for similarity to a given ground truth reference and perceptual quality. This
perceptual loss is only used to train parameters of image processing operators,
and does not impose any extra complexity at inference time. Our experiments
demonstrate that this loss can be effective for tuning a variety of operators
such as local tone mapping and dehazing.
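The combined objective can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's model: a fixed linear scorer (`quality`) stands in for the pretrained no-reference quality CNN, and gradient descent is run directly on a flattened image rather than on enhancement-operator parameters.

```python
import numpy as np

# Hypothetical setup: an L1 fidelity term to the reference plus a frozen,
# "learned" quality score (fixed linear weights stand in for the CNN).
rng = np.random.default_rng(0)
ref = rng.uniform(size=64)   # ground-truth reference image (flattened)
w_q = rng.normal(size=64)    # frozen weights of the quality scorer

def quality(img):
    return float(w_q @ img)  # higher = judged more pleasing

def loss(img, lam=0.05):
    # Similarity to the reference minus weighted perceptual quality.
    return float(np.abs(img - ref).mean() - lam * quality(img))

# Gradient descent on the image; the quality term shapes training only,
# so it adds no extra cost at inference time.
img = rng.uniform(size=64)
initial = loss(img)
for _ in range(300):
    grad = np.sign(img - ref) / img.size - 0.05 * w_q
    img -= 0.02 * grad
final = loss(img)
```

Because the quality weights are frozen, the perceptual term only steers the optimization of the enhancement operator, matching the abstract's point that inference complexity is unchanged.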