Multi-Modal Aesthetic Assessment for Mobile Gaming Image
With the proliferation of gaming technologies, services, game styles, and
platforms, multi-dimensional aesthetic assessment of gaming content is becoming
more and more important for the gaming industry. To meet the diverse needs of
game players, game designers, graphics developers, etc. under particular
conditions, multi-modal aesthetic assessment is required to consider different
aesthetic dimensions/perspectives. Since there are underlying relationships
between different aesthetic dimensions, e.g., between `Colorfulness' and `Color
Harmony', it can be advantageous to leverage information shared across multiple
relevant dimensions. To this end, we address this problem via multi-task
learning. Our aim is to learn the correlations between different aesthetically
relevant dimensions to further boost the generalization performance in
predicting all of them. In this way, the `bottleneck' of obtaining good
predictions with limited labeled data for one individual dimension can be
relieved by harnessing complementary sources from other dimensions, i.e.,
augmenting the training data indirectly by sharing training information across
dimensions. Experimental results show that the proposed model significantly
outperforms state-of-the-art aesthetic metrics in predicting four gaming
aesthetic dimensions. Comment: 5 pages
Revisiting Image Aesthetic Assessment via Self-Supervised Feature Learning
Visual aesthetic assessment has been an active research field for decades.
Although the latest methods have achieved promising performance on benchmark
datasets, they typically rely on a large number of manual annotations including
both aesthetic labels and related image attributes. In this paper, we revisit
the problem of image aesthetic assessment from the self-supervised feature
learning perspective. Our motivation is that a suitable feature representation
for image aesthetic assessment should be able to distinguish different
expert-designed image manipulations, which have close relationships with
negative aesthetic effects. To this end, we design two novel pretext tasks to
identify the types and parameters of editing operations applied to synthetic
instances. The features from our pretext tasks are then adapted for a one-layer
linear classifier to evaluate the performance in terms of binary aesthetic
classification. We conduct extensive quantitative experiments on three
benchmark datasets and demonstrate that our approach can faithfully extract
aesthetics-aware features and outperform alternative pretext schemes. Moreover,
we achieve comparable results to state-of-the-art supervised methods that use
10 million labels from ImageNet. Comment: AAAI Conference on Artificial Intelligence, 2020, accepted
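The pretext-task construction described above can be sketched as follows: apply
a randomly chosen editing operation with a randomly chosen parameter to an
image, and keep the operation type and parameter index as free supervision
labels. A minimal numpy sketch under stated assumptions; the two operations
here are illustrative stand-ins for the paper's expert-designed manipulations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical editing operations; the paper uses a larger set of
# expert-designed manipulations tied to negative aesthetic effects.
def brighten(img, amount):
    return np.clip(img + amount, 0.0, 1.0)

def add_noise(img, sigma):
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

OPS = [("brighten", brighten, [0.1, 0.3]),
       ("noise", add_noise, [0.05, 0.15])]

def make_pretext_instance(img):
    """Apply one random operation with one random parameter.
    The pretext labels (op index, parameter index) come for free,
    so no human aesthetic annotation is needed."""
    op_idx = int(rng.integers(len(OPS)))
    _, fn, params = OPS[op_idx]
    p_idx = int(rng.integers(len(params)))
    return fn(img, params[p_idx]), op_idx, p_idx

img = rng.random((8, 8))
edited, op_label, param_label = make_pretext_instance(img)
```

A network trained to recover `op_label` and `param_label` from `edited` must
become sensitive to exactly the kinds of degradations that hurt aesthetics;
its features can then feed the one-layer linear classifier the abstract
mentions.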
Critical analysis on the reproducibility of visual quality assessment using deep features
Data used to train supervised machine learning models are commonly split into
independent training, validation, and test sets. In this paper we illustrate
that intricate cases of data leakage have occurred in the no-reference video
and image quality assessment literature. We show that the performance results
of several recently published journal papers, which are well above the best
performances in related works, cannot be reproduced. Our analysis shows that
information from the test set was inappropriately used in the training process
in different ways. When correcting for the data leakage, the performances of
the approaches drop below the state-of-the-art by a large margin. Additionally,
we investigate end-to-end variations to the discussed approaches, which do not
improve upon the originals. Comment: 20 pages, 7 figures, PLOS ONE journal. arXiv admin note: substantial
text overlap with arXiv:2005.0440
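One frequent form of the leakage discussed above occurs when distorted versions
of the same reference image land in both the training and the test set. The fix
is to split by source content rather than by individual sample, so every
version of a reference image stays on one side. A minimal numpy sketch, with
hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(2)

def split_by_content(content_ids, test_frac=0.2):
    """Split so all samples sharing a source/reference image stay on the
    same side of the split; mixing them leaks test content into training."""
    unique = np.unique(content_ids)
    rng.shuffle(unique)
    n_test = int(len(unique) * test_frac)
    test_contents = set(unique[:n_test].tolist())
    test_mask = np.array([c in test_contents for c in content_ids])
    return ~test_mask, test_mask

# Toy dataset: 10 reference images, 5 distorted versions each.
content_ids = np.repeat(np.arange(10), 5)
train_mask, test_mask = split_by_content(content_ids)

# No reference image appears on both sides of the split.
leak = set(content_ids[train_mask]) & set(content_ids[test_mask])
```

A naive per-sample shuffle over the 50 items would almost certainly place
versions of the same reference image on both sides, inflating test scores in
exactly the way the paper criticizes.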
DeepFL-IQA: Weak Supervision for Deep IQA Feature Learning
Multi-level deep-features have been driving state-of-the-art methods for
aesthetics and image quality assessment (IQA). However, most IQA benchmarks are
comprised of artificially distorted images, for which features derived from
ImageNet under-perform. We propose a new IQA dataset and a weakly supervised
feature learning approach to train features more suitable for IQA of
artificially distorted images. The dataset, KADIS-700k, is far more extensive
than similar works, consisting of 140,000 pristine images and 25 distortion
types, totaling 700k distorted versions. Our weakly supervised feature learning
is designed as multi-task learning, using eleven existing full-reference IQA
metrics as proxies for differential mean opinion scores. We
also introduce a benchmark database, KADID-10k, of artificially degraded
images, each subjectively annotated by 30 crowd workers. We make use of our
derived image feature vectors for (no-reference) image quality assessment by
training and testing a shallow regression network on this database and five
other benchmark IQA databases. Our method, termed DeepFL-IQA, performs better
than other feature-based no-reference IQA methods and also better than all
tested full-reference IQA methods on KADID-10k. For the other five benchmark
IQA databases, DeepFL-IQA matches the performance of the best existing
end-to-end deep learning-based methods on average. Comment: dataset url: http://database.mmsp-kn.d
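The weak-supervision idea above, using scores from existing full-reference
metrics as proxy labels instead of human opinion scores, can be illustrated
with a simple multi-output regression: one shared feature space is fit jointly
against all eleven proxy targets. This is a linear, numpy-only stand-in for
the paper's feature-learning network, with all data randomly generated:

```python
import numpy as np

rng = np.random.default_rng(3)

N_METRICS = 11  # the paper uses eleven full-reference IQA metrics as proxies

# Hypothetical data: 200 distorted images with 64-d deep features, and one
# proxy quality score per FR metric. No human annotation is required, since
# the FR metrics can score every distorted/pristine image pair automatically.
features = rng.normal(size=(200, 64))
proxy_scores = rng.normal(size=(200, N_METRICS))

# Multi-task least squares: all eleven regression targets are fit jointly
# from the same features, a linear analogue of multi-task feature learning.
w, *_ = np.linalg.lstsq(features, proxy_scores, rcond=None)
preds = features @ w
```

Once features are shaped this way, the abstract's shallow no-reference
regression head only needs the small subjectively annotated database
(KADID-10k) to map features to human opinion scores.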