1,090 research outputs found
A2-RL: Aesthetics Aware Reinforcement Learning for Image Cropping
Image cropping aims at improving the aesthetic quality of images by adjusting
their composition. Most weakly supervised cropping methods (those without bounding-box
supervision) rely on a sliding-window mechanism, which requires fixed aspect
ratios and cannot produce cropping regions of arbitrary size. Moreover, the
sliding-window approach usually produces tens of thousands of candidate windows
per input image, which is very time-consuming. Motivated by these challenges,
we first formulate aesthetic image cropping as a sequential decision-making
process and propose a weakly supervised Aesthetics Aware Reinforcement Learning
(A2-RL) framework to address this problem. In particular, the proposed method
develops an aesthetics-aware reward function tailored to image cropping.
Similar to human decision making, we use a comprehensive state representation
that includes both the current observation and historical experience. We train
the agent with an actor-critic architecture in an end-to-end manner and
evaluate it on several popular unseen cropping datasets. Experimental results
show that our method achieves state-of-the-art performance with far fewer
candidate windows and much less time than previous weakly supervised methods.
Comment: Accepted by CVPR 201
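The sequential decision-making loop described above can be sketched as follows. Everything here is illustrative and assumed, not the authors' code: the 4-pixel step size, the stand-in aesthetic scorer, and the greedy policy (which stands in for the trained actor-critic agent).

```python
# Illustrative sketch of sequential crop adjustment; the step size,
# scorer, and greedy policy are stand-ins, not the paper's agent.

ACTIONS = {
    "shrink_left":   lambda b: (b[0] + 4, b[1], b[2] - 4, b[3]),
    "shrink_top":    lambda b: (b[0], b[1] + 4, b[2], b[3] - 4),
    "shrink_right":  lambda b: (b[0], b[1], b[2] - 4, b[3]),
    "shrink_bottom": lambda b: (b[0], b[1], b[2], b[3] - 4),
    "stop":          lambda b: b,
}

def aesthetic_score(box):
    # Stand-in scorer; in A2-RL this role is played by a learned
    # aesthetics network. Prefers crops centred near (100, 75) with
    # an area near 12000 pixels.
    x, y, w, h = box
    return -abs(x + w / 2 - 100) - abs(y + h / 2 - 75) - 0.01 * abs(w * h - 12000)

def crop_episode(box, policy, max_steps=20):
    # Sequential decision making: adjust the crop window step by step.
    history = [box]
    for _ in range(max_steps):
        action = policy(box, history)
        new_box = ACTIONS[action](box)
        # The aesthetics-aware step reward a trainer would use:
        reward = aesthetic_score(new_box) - aesthetic_score(box)
        if action == "stop" or new_box[2] <= 8 or new_box[3] <= 8:
            break
        box = new_box
        history.append(box)  # state = current observation + history
    return box

def greedy_policy(box, history):
    # Greedy stand-in for the learned actor: take the action with the
    # best one-step score; "stop" when nothing improves.
    best = max(ACTIONS, key=lambda a: aesthetic_score(ACTIONS[a](box)))
    return best if aesthetic_score(ACTIONS[best](box)) > aesthetic_score(box) else "stop"

final = crop_episode((0, 0, 200, 150), greedy_policy)
```

Because each step only nudges one window edge, an episode evaluates a handful of candidates rather than the tens of thousands a sliding window would generate.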
Supervised Deep Learning for Content-Aware Image Retargeting with Fourier Convolutions
Image retargeting aims to alter the size of an image while attending to its
contents. One of the main obstacles to training deep learning models for image
retargeting is the need for a vast labeled dataset, and no such datasets are
available for this task. We therefore present a new supervised approach for
training deep learning models: we use the original images as ground truth and
create inputs for the model by resizing and cropping the original images. A
second challenge is generating images of different sizes at inference time,
since regular convolutional neural networks cannot produce outputs whose size
differs from the input. To address this issue, we introduce a new method for
supervised learning in which a mask is generated to indicate the desired size
and location of the object; the mask and the input image are then fed to the
network together. A comparison between existing image retargeting methods and
our proposed method demonstrates the model's ability to produce high-quality
retargeted images. We further compute an image quality assessment score for
each output image using several techniques, illustrating the effectiveness of
our approach.
Comment: 18 pages, 5 figures
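The self-supervised pair construction described above can be sketched as follows. The helper name, the plain nested-list "images", and the centred placement are all assumptions for illustration; the real pipeline operates on image tensors.

```python
import random

# Illustrative sketch of the paper's training-pair construction: the
# original image is the target, the input is a cropped copy, and a
# binary mask encodes the desired size and placement of the content.
# Names, dimensions, and centred placement are stand-ins.

def make_training_pair(image, out_w, out_h, rng=random.Random(0)):
    h, w = len(image), len(image[0])
    # 1) crop a random sub-window of the original
    cw, ch = rng.randint(w // 2, w), rng.randint(h // 2, h)
    cx, cy = rng.randint(0, w - cw), rng.randint(0, h - ch)
    crop = [row[cx:cx + cw] for row in image[cy:cy + ch]]
    # 2) build a mask of the output size marking the desired placement
    mask = [[0] * out_w for _ in range(out_h)]
    px, py = (out_w - cw) // 2, (out_h - ch) // 2
    for y in range(ch):
        for x in range(cw):
            mask[py + y][px + x] = 1
    # target is the original image; the model input is (crop, mask)
    return (crop, mask), image

# Toy 10x8 "image" of pixel ids, retargeted toward a 12x10 output
image = [[y * 10 + x for x in range(10)] for y in range(8)]
(crop, mask), target = make_training_pair(image, out_w=12, out_h=10)
```

The mask lets the same fully convolutional network be queried for different output sizes, which is the role it plays in the paper.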
Learning Visual Importance for Graphic Designs and Data Visualizations
Knowing where people look and click on visual designs can provide clues about
how the designs are perceived, and where the most important or relevant content
lies. The most important content of a visual design can be used for effective
summarization or to facilitate retrieval from a database. We present automated
models that predict the relative importance of different elements in data
visualizations and graphic designs. Our models are neural networks trained on
human clicks and importance annotations on hundreds of designs. We collected a
new dataset of crowdsourced importance, and analyzed the predictions of our
models with respect to ground truth importance and human eye movements. We
demonstrate how such predictions of importance can be used for automatic design
retargeting and thumbnailing. User studies with hundreds of MTurk participants
validate that, with limited post-processing, our importance-driven applications
are on par with, or outperform, current state-of-the-art methods, including
natural image saliency. We also provide a demonstration of how our importance
predictions can be built into interactive design tools to offer immediate
feedback during the design process.
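One way importance predictions drive thumbnailing is to keep the region that captures the most predicted importance; a minimal sketch, with a plain 2-D list standing in for the model's importance map:

```python
# Illustrative sketch of importance-driven thumbnailing: choose the
# fixed-size window covering the most total predicted importance.
# The map below is a toy stand-in for the model's output.

def best_thumbnail_window(importance, win_w, win_h):
    # Exhaustively score every placement of the window and keep the best.
    h, w = len(importance), len(importance[0])
    best_box, best_sum = None, float("-inf")
    for y in range(h - win_h + 1):
        for x in range(w - win_w + 1):
            s = sum(importance[y + dy][x + dx]
                    for dy in range(win_h) for dx in range(win_w))
            if s > best_sum:
                best_sum, best_box = s, (x, y, win_w, win_h)
    return best_box

# Toy 8x6 map with a small hot spot around column 4, row 2
imp = [[0.0] * 8 for _ in range(6)]
imp[2][4] = imp[2][5] = imp[3][4] = 1.0
box = best_thumbnail_window(imp, 3, 3)
```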
Representations and representation learning for image aesthetics prediction and image enhancement
With continual improvements in cell-phone cameras and in the connectivity of mobile devices, we have seen an exponential increase in the number of images captured, stored, and shared on social media. For example, as of July 1st, 2017, Instagram had over 715 million registered users, who had posted just shy of 35 billion images. This represents approximately seven- and nine-fold increases, respectively, in the number of users and photos on Instagram since 2012. Whether the images are stored on personal computers or reside on social networks (e.g., Instagram, Flickr), the sheer number of images calls for methods to determine various image properties, such as object presence or appeal, for the purpose of automatic image management and curation. One of the central problems in consumer photography is determining the aesthetic appeal of an image; this motivates us to explore questions related to understanding aesthetic preferences, image enhancement, and the possibility of using such models on devices with constrained resources.
In this dissertation, we present our work on exploring representations and representation-learning approaches for aesthetic inference, composition ranking, and their application to image enhancement. Firstly, we discuss early representations that mainly consisted of expert features, and their potential to enhance Convolutional Neural Networks (CNNs). Secondly, we discuss the ability of resource-constrained CNNs, and the different architecture choices (input size and layer depth), to solve various aesthetic inference tasks: binary classification, regression, and image cropping. We show that, if trained for fine-grained aesthetics inference, such models can rival the cropping performance of other aesthetics-based croppers; however, they fall short in comparison to models trained for composition ranking. Lastly, we discuss our work on exploring and identifying the design choices involved in training composition-ranking functions, with the goal of using them for image composition enhancement.
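A composition-ranking function of the kind discussed above is typically trained with a pairwise objective; a minimal sketch, where the margin value, feature names, and linear scorer are all illustrative assumptions rather than the thesis's models:

```python
# Illustrative sketch of a pairwise composition-ranking objective:
# the ranker should score a better-composed crop above a worse one by
# at least a margin. Scorer, features, and margin are stand-ins.

def margin_ranking_loss(score_better, score_worse, margin=1.0):
    # hinge on the score gap: zero loss once the gap exceeds the margin
    return max(0.0, margin - (score_better - score_worse))

def compose_score(features, weights):
    # tiny linear scorer over hand-picked composition features
    return sum(f * w for f, w in zip(features, weights))

weights = [0.8, -0.3]                        # e.g. thirds fit, clutter
good = compose_score([0.9, 0.1], weights)    # well-composed crop
bad = compose_score([0.2, 0.7], weights)     # poorly composed crop
loss = margin_ranking_loss(good, bad)
```

Training on ranked pairs of crops, rather than absolute aesthetic labels, is what distinguishes the composition-ranking models the dissertation favours for cropping.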
Image Cropping under Design Constraints
Image cropping is essential in image editing for obtaining a compositionally
enhanced image. In display media, image cropping is a promising technique for
automatically creating media content. However, image cropping for media
content is often required to satisfy various constraints, such as an aspect
ratio and blank regions for placing text or objects. We call this problem
image cropping under design constraints. To solve it, we propose a
score-function-based approach, which computes a score for each cropped result
based on whether it is aesthetically plausible and satisfies the design
constraints. We explore two derived approaches, a proposal-based approach and
a heatmap-based approach, and we construct a dataset for evaluating their
performance on image cropping under design constraints. In experiments, we
demonstrate that the proposed approaches outperform a baseline, and we observe
that the proposal-based approach is better than the heatmap-based approach at
the same computation cost, while the heatmap-based approach achieves better
scores as the computation cost increases. The experimental results indicate
that balancing aesthetically plausible regions against design constraints is
not a trivial problem and requires careful tuning, and that both proposed
approaches are reasonable alternatives.
Comment: ACMMM Asia accepted
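The score-function idea, in the proposal-based flavour, can be sketched as follows. The aesthetic scorer, constraint checks, and tolerance are stand-ins I have assumed for illustration; the paper's scorers are learned.

```python
# Illustrative sketch: score candidate crops, rejecting any that
# violate the design constraints (target aspect ratio and a blank
# region reserved for text). Scorer and tolerances are stand-ins.

def satisfies_constraints(box, aspect, blank, tol=0.05):
    x, y, w, h = box
    bx, by, bw, bh = blank
    aspect_ok = abs(w / h - aspect) <= tol
    # the reserved blank region must lie fully inside the crop
    blank_ok = x <= bx and y <= by and bx + bw <= x + w and by + bh <= y + h
    return aspect_ok and blank_ok

def best_crop(candidates, aesthetic, aspect, blank):
    valid = [b for b in candidates if satisfies_constraints(b, aspect, blank)]
    return max(valid, key=aesthetic) if valid else None

def aesthetic(box):
    # stand-in aesthetics: prefer larger crops centred near (60, 40)
    x, y, w, h = box
    return w * h - 5 * (abs(x + w / 2 - 60) + abs(y + h / 2 - 40))

candidates = [(0, 0, 120, 80), (10, 10, 80, 40), (20, 0, 80, 60), (0, 0, 60, 40)]
blank = (30, 10, 40, 20)   # area reserved for a text overlay
chosen = best_crop(candidates, aesthetic, aspect=1.5, blank=blank)
```

Note how the hard constraint filter can discard the aesthetically best crop, which is exactly the tension between aesthetics and design constraints the abstract describes.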
Self-Play Reinforcement Learning for Fast Image Retargeting
In this study, we address image retargeting, the task of adjusting input
images to arbitrary sizes. In one of the best-performing methods, MULTIOP,
multiple retargeting operators are combined, and retargeted images are
generated at each stage to find the optimal sequence of operators that
minimizes the distance between the original and retargeted images. The
limitation of this method is its tremendous processing time, which severely
limits its practical use. The purpose of this study is therefore to find the
optimal combination of operators within a reasonable processing time; we
propose a method that predicts the optimal operator for each step using a
reinforcement learning agent. The technical contributions of this study are as
follows. Firstly, we propose a reward based on self-play, which is insensitive
to the large variance in the content-dependent distance measured in MULTIOP.
Secondly, we propose to dynamically change the loss weight for each action to
prevent the algorithm from falling into a local optimum and from choosing only
the most frequently used operator during training. Our experiments show that
we achieve multi-operator image retargeting with processing time reduced by
three orders of magnitude and the same quality as the original
multi-operator method, which was the best-performing algorithm in retargeting
tasks.
Comment: Accepted to ACM Multimedia 202
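The per-step operator prediction can be sketched as follows. The operator set behaviour, per-operator width reduction, and the rule-based policy are all assumptions standing in for the trained RL agent and real retargeting operators.

```python
# Illustrative sketch of per-step operator selection: instead of
# exhaustively evaluating every operator sequence (as MULTIOP does), a
# policy picks one operator per step. The policy and the toy width
# reductions are stand-ins for the trained agent and real operators.

def retarget(width, target, policy):
    ops = []
    while width > target:
        op = policy(width, target)
        ops.append(op)
        width -= 10 if op == "seam_carve" else 20  # toy width reduction
    return ops, width

def toy_policy(width, target):
    # Stand-in rule: coarse operators far from the target width,
    # finer seam carving close to it.
    return "seam_carve" if width - target <= 20 else "scale"

ops, final_width = retarget(200, 120, toy_policy)
```

A single pass like this needs one policy call per step, which is where the three-orders-of-magnitude speedup over exhaustive sequence search comes from.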
Enhancing Perceptual Attributes with Bayesian Style Generation
Deep learning has brought unprecedented progress in computer vision, and
significant advances have been made in predicting subjective properties
inherent to visual data (e.g., memorability, aesthetic quality, evoked
emotions). Recently, some research works have even proposed deep learning
approaches to modify images so as to alter these properties. Following this
research line, this paper introduces a novel deep learning framework for
synthesizing images in order to enhance a predefined perceptual attribute. Our
approach takes a natural image as input and exploits recent models for deep
style transfer and generative adversarial networks to change its style in
order to modify a specific high-level attribute. Unlike previous works that
focus on enhancing a specific property of a visual content, we propose a
general framework and demonstrate its effectiveness in two use cases:
increasing image memorability and generating scary pictures. We evaluate the
proposed approach on publicly available benchmarks, demonstrating its
advantages over state-of-the-art methods.
Comment: ACCV-201
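The selection step at the core of such a framework, picking the style that most increases the target attribute, can be sketched as follows. The style bank, the numeric "transfer", and the clamped predictor are all stand-ins; the paper uses deep style transfer and a learned attribute predictor with a Bayesian treatment of style selection.

```python
# Illustrative sketch: choose, from a bank of candidate styles, the one
# whose stylised result maximises a learned attribute predictor
# (memorability, scariness, ...). All components here are numeric
# stand-ins for the deep models used in the paper.

STYLE_BANK = {"noir": 0.30, "pastel": -0.10, "grunge": 0.55}

def stylize(image_attr, style_shift):
    # stand-in for style transfer: shift the image's attribute score
    return image_attr + style_shift

def predict_attribute(stylized_attr):
    # stand-in for the learned attribute predictor, clamped to [0, 1]
    return max(0.0, min(1.0, stylized_attr))

def pick_style(image_attr, bank):
    return max(bank, key=lambda s: predict_attribute(stylize(image_attr, bank[s])))

best = pick_style(0.4, STYLE_BANK)
```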