5,809 research outputs found
Aesthetic-Driven Image Enhancement by Adversarial Learning
We introduce EnhanceGAN, an adversarial-learning-based model that performs
automatic image enhancement. Traditional image enhancement frameworks typically
train models in a fully supervised manner, which requires expensive
annotations in the form of aligned image pairs. In contrast to these
approaches, our proposed EnhanceGAN only requires weak supervision (binary
labels on image aesthetic quality) and is able to learn enhancement operators
for the task of aesthetic-based image enhancement. In particular, we show the
effectiveness of a piecewise color enhancement module trained with weak
supervision, and extend the proposed EnhanceGAN framework to learning a deep
filtering-based aesthetic enhancer. The full differentiability of our image
enhancement operators enables the training of EnhanceGAN in an end-to-end
manner. We further demonstrate the capability of EnhanceGAN in learning
aesthetic-based image cropping without any ground-truth cropping pairs. Our
weakly-supervised EnhanceGAN reports competitive quantitative results on
aesthetic-based color enhancement as well as automatic image cropping, and a
user study confirms that our image enhancement results are on par with or even
preferred over professional enhancement.
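The piecewise color enhancement module described above can be pictured as a learnable, differentiable tone curve. As a rough illustration (the function name and parameterization here are my own, not taken from the paper), a piecewise-linear curve over [0, 1] whose segment slopes are the learnable parameters could look like:

```python
def piecewise_curve(x, slopes):
    """Map an intensity x in [0, 1] through a piecewise-linear tone curve.

    slopes: K positive per-segment slopes (the learnable parameters);
    they are normalized so the curve maps [0, 1] onto [0, 1].
    """
    k = len(slopes)
    total = sum(slopes)
    norm = [s / total for s in slopes]       # each segment covers 1/k of the input range
    seg = min(int(x * k), k - 1)             # which segment x falls into
    base = sum(norm[:seg])                   # curve value at the segment's left knot
    within = (x - seg / k) * k               # position inside the segment, in [0, 1]
    return base + norm[seg] * within
```

Because the output is a smooth function of the slope parameters almost everywhere, gradients from a binary aesthetic discriminator can flow back into them, which is what makes the end-to-end adversarial training described in the abstract possible.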
Time-Efficient Hybrid Approach for Facial Expression Recognition
Facial expression recognition is an emerging research area for improving human-computer interaction. It plays a significant role in social communication, commercial enterprise, law enforcement, and other computer interactions. In this paper, we propose a time-efficient hybrid design for facial expression recognition that combines image pre-processing steps with different Convolutional Neural Network (CNN) structures, providing better accuracy and greatly improved training time. We predict seven basic emotions of human faces: sadness, happiness, disgust, anger, fear, surprise, and neutral. The model performs well on challenging cases where the expressed emotion could be one of several with quite similar facial characteristics, such as anger, disgust, and sadness. The model was tested across multiple databases and different facial orientations; to the best of our knowledge, it achieves an accuracy of about 89.58% on the KDEF dataset, 100% on the JAFFE dataset, and 71.975% on the combined (KDEF + JAFFE + SFEW) dataset across these scenarios. Performance evaluation was done by cross-validation to avoid bias towards a specific set of images from a database.
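The cross-validation protocol mentioned at the end of the abstract rests on partitioning the data into disjoint folds. A minimal sketch of the index splitting (my own illustration, not the authors' code) is:

```python
def kfold_indices(n, k):
    """Split n sample indices into k contiguous folds of near-equal size.

    Each fold serves once as the held-out test set while the rest train,
    so no fixed subset of a database biases the reported accuracy.
    """
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds
```

In practice the indices would be shuffled before splitting; the contiguous version above just shows the fold bookkeeping.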
Fast-AT: Fast Automatic Thumbnail Generation using Deep Neural Networks
Fast-AT is an automatic thumbnail generation system based on deep neural
networks. It is a fully-convolutional deep neural network, which learns
specific filters for thumbnails of different sizes and aspect ratios. During
inference, the appropriate filter is selected depending on the dimensions of
the target thumbnail. Unlike most previous work, Fast-AT does not utilize
saliency but addresses the problem directly. In addition, it eliminates the
need to conduct region search on the saliency map. The model generalizes to
thumbnails of different sizes including those with extreme aspect ratios and
can generate thumbnails in real time. A data set of more than 70,000 thumbnail
annotations was collected to train Fast-AT. We show competitive results in
comparison to existing techniques.
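Fast-AT's inference-time step of selecting the filter that matches the target thumbnail's dimensions can be sketched as a nearest-aspect-ratio lookup. The bucket values and function below are hypothetical, for illustration only; the paper's actual filter bank is learned per size and aspect ratio:

```python
def select_filter(target_w, target_h, ratio_buckets):
    """Pick the index of the filter head whose training aspect ratio
    is closest to the requested thumbnail's aspect ratio."""
    ratio = target_w / target_h
    return min(range(len(ratio_buckets)),
               key=lambda i: abs(ratio_buckets[i] - ratio))
```

This keeps inference cheap: the network runs once, and only the output head changes with the requested thumbnail dimensions.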
User Constrained Thumbnail Generation using Adaptive Convolutions
Thumbnails are widely used as previews for digital images. In this work we
propose a deep neural framework to generate thumbnails
of any size and aspect ratio, even for unseen values during training, with high
accuracy and precision. We use Global Context Aggregation (GCA) and a modified
Region Proposal Network (RPN) with adaptive convolutions to generate thumbnails
in real time. GCA is used to selectively attend and aggregate the global
context information from the entire image while the RPN is used to predict
candidate bounding boxes for the thumbnail image. Adaptive convolutions
handle thumbnails of various aspect ratios by using filter weights
dynamically generated from the aspect-ratio information.
The experimental results indicate the superior performance of the proposed
model over existing state-of-the-art techniques.
Comment: International Conference on Acoustics, Speech, and Signal
Processing (ICASSP), 201
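The idea of generating convolution weights from the aspect ratio can be sketched with a tiny weight-generator network. The layer sizes, kernel size, and names below are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical generator: a two-layer network mapping the scalar
# aspect ratio to the 9 weights of a 3x3 convolution kernel.
W1 = rng.standard_normal((8, 1)) * 0.1
W2 = rng.standard_normal((9, 8)) * 0.1

def generate_kernel(aspect_ratio):
    """Produce a 3x3 kernel conditioned on the requested aspect ratio."""
    h = np.maximum(W1 @ np.array([[aspect_ratio]]), 0.0)  # ReLU hidden layer
    return (W2 @ h).reshape(3, 3)

def conv2d_valid(image, kernel):
    """Plain 'valid' cross-correlation, enough to show how the
    dynamically generated kernel is applied to a feature map."""
    H, W = image.shape
    out = np.empty((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out
```

Because the kernel is a function of the aspect ratio rather than a fixed tensor, a single model can serve aspect ratios never seen during training, which is the property the abstract highlights.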