Attend to You: Personalized Image Captioning with Context Sequence Memory Networks
We address the personalization of image captioning, which previous research
has not yet discussed. For a query image, we aim to generate a
descriptive sentence, accounting for prior knowledge such as the user's active
vocabularies in previous documents. As applications of personalized image
captioning, we tackle two post automation tasks: hashtag prediction and post
generation, on our newly collected Instagram dataset, consisting of 1.1M posts
from 6.3K users. We propose a novel captioning model named Context Sequence
Memory Network (CSMN). Its unique updates over previous memory network models
include (i) exploiting memory as a repository for multiple types of context
information, (ii) appending previously generated words into memory to capture
long-term information without suffering from the vanishing gradient problem,
and (iii) adopting CNN memory structure to jointly represent nearby ordered
memory slots for better context understanding. With quantitative evaluation and
user studies via Amazon Mechanical Turk, we show the effectiveness of the three
novel features of CSMN and its performance enhancement for personalized image
captioning over state-of-the-art captioning models.
Comment: Accepted paper at CVPR 201
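The CNN memory structure in (iii) can be pictured as a 1-D convolution across ordered memory slots, so that adjacent slots are represented jointly. The sketch below is only illustrative of that idea; the slot dimension, kernel width, and values are assumptions, not the paper's configuration.

```python
import numpy as np

def conv_over_memory(memory, kernel):
    """1-D convolution across ordered memory slots.

    memory: (num_slots, dim) array of slot vectors
    kernel: (width, dim) filter jointly weighting `width` adjacent slots
    """
    num_slots, _ = memory.shape
    width = kernel.shape[0]
    pad = width // 2
    # Zero-pad so the output has one score per original slot
    padded = np.pad(memory, ((pad, pad), (0, 0)))
    out = np.empty(num_slots)
    for t in range(num_slots):
        window = padded[t:t + width]       # nearby ordered slots
        out[t] = np.sum(window * kernel)   # joint representation of the window
    return out

# Toy memory: 5 slots of dimension 4, and a width-3 kernel
mem = np.ones((5, 4))
ker = np.full((3, 4), 0.5)
scores = conv_over_memory(mem, ker)
```

Interior slots aggregate three neighbors (score 6.0 here), while edge slots see zero padding (score 4.0), which is the usual behavior of same-length 1-D convolution.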
Joint Regression and Ranking for Image Enhancement
Research on automated image enhancement has gained momentum in recent years,
partially due to the need for easy-to-use tools for enhancing pictures captured
by ubiquitous cameras on mobile devices. Many of the existing leading methods
employ machine-learning-based techniques, by which some enhancement parameters
for a given image are found by relating the image to the training images with
known enhancement parameters. While knowing the structure of the parameter
space can facilitate search for the optimal solution, none of the existing
methods has explicitly modeled and learned that structure. This paper presents
an end-to-end, novel joint regression and ranking approach to model the
interaction between desired enhancement parameters and images to be processed,
employing a Gaussian process (GP). GP allows searching for ideal parameters
using only the image features. The model naturally leads to a ranking technique
for comparing images in the induced feature space. Comparative evaluation
against ground truth from the MIT-Adobe FiveK dataset, together with
subjective tests on an additional dataset, demonstrates the effectiveness of
the proposed approach.
Comment: WACV 201
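The core mechanism the abstract describes, predicting enhancement parameters from image features with a Gaussian process, can be sketched with a minimal GP regressor. The RBF kernel, toy features, and the single "exposure" parameter below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between two sets of feature vectors."""
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2 * A @ B.T)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_query, noise=1e-6):
    """Posterior mean of a GP: relate a query image's features to the
    training images with known enhancement parameters."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_query, X_train)
    return K_star @ np.linalg.solve(K, y_train)

# Toy 1-D "image features" with known enhancement parameters
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.1, 0.5, 0.9])
pred = gp_predict(X, y, np.array([[1.0]]))
```

With near-zero observation noise the GP interpolates the training targets, so querying at a training point recovers its known parameter.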
Video Data Visualization System: Semantic Classification And Personalization
We present in this paper an intelligent video data visualization tool, based
on semantic classification, for retrieving and exploring a large-scale corpus
of videos. Our work builds on the semantic classes obtained from semantic
analysis of the videos, which are projected into the visualization space as a
graph: the nodes are the keyframes of the video documents, and the edges
represent the relations between documents and document classes. Finally, we
construct a user profile, based on the user's interaction with the system, to
adapt the system to the user's preferences.
Comment: graphic
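The graph described above, keyframe nodes linked by edges to the classes their documents belong to, can be sketched with plain adjacency sets. The keyframe identifiers and class labels are illustrative assumptions.

```python
def build_graph(doc_classes):
    """Build the visualization graph.

    doc_classes: {keyframe_id: [class labels]} mapping each video
    document's keyframe to its semantic classes.
    Returns (nodes, edges) where edges encode the document-to-class relation.
    """
    nodes, edges = set(), set()
    for keyframe, classes in doc_classes.items():
        nodes.add(keyframe)
        for cls in classes:
            nodes.add(cls)
            edges.add((keyframe, cls))  # edge: keyframe belongs to class
    return nodes, edges

# Hypothetical corpus: two keyframes, two semantic classes
nodes, edges = build_graph({"kf_news_01": ["news"],
                            "kf_sport_07": ["sport", "news"]})
```

A real system would hand these node and edge sets to a graph-layout library for projection into the visualization space.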
Multimodal person recognition for human-vehicle interaction
Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience. Today's technology prevents such systems from operating satisfactorily under adverse conditions. A proposed framework for achieving person recognition successfully combines different biometric modalities, borne out in two case studies.
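One common way to combine biometric modalities, as the framework above proposes, is score-level fusion: each modality scores every candidate identity, and a weighted sum picks the best match. The modalities, weights, and scores below are illustrative assumptions, not the paper's actual fusion rule.

```python
def fuse_scores(modality_scores, weights):
    """Weighted score-level fusion across biometric modalities.

    modality_scores: {modality: {identity: match score}}
    weights: {modality: fusion weight}
    Returns {identity: fused score}.
    """
    fused = {}
    # Iterate over the identities listed under any one modality
    for identity in next(iter(modality_scores.values())):
        fused[identity] = sum(w * modality_scores[m][identity]
                              for m, w in weights.items())
    return fused

# Hypothetical face and voice match scores for two candidate drivers
scores = {"face":  {"alice": 0.9, "bob": 0.4},
          "voice": {"alice": 0.6, "bob": 0.7}}
fused = fuse_scores(scores, {"face": 0.7, "voice": 0.3})
best = max(fused, key=fused.get)
```

Down-weighting a modality degraded by adverse conditions (e.g. voice in a noisy cabin) is what lets the fused system keep working when a single modality would fail.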
Taste and the algorithm
Today, a considerable part of our everyday interaction with art and aesthetic artefacts occurs through digital media, and our preferences and choices are systematically tracked and analyzed by algorithms in ways that are far from transparent. Our consumption is constantly documented, and we are then fed tailored information in return. We are therefore witnessing the emergence of a complex interrelation among our aesthetic choices, their digital elaboration, the production of content, and the dynamics of creative processes, all involved in a process of mutual influence and partially determined by the invisible guiding hand of algorithms.
With regard to this topic, this paper introduces some key issues concerning the role of algorithms in aesthetic domains, such as taste detection and formation and cultural consumption and production, and shows how aesthetics can contribute to the ongoing debate about the impact of today’s “algorithmic culture”.
Enhancement by Your Aesthetic: An Intelligible Unsupervised Personalized Enhancer for Low-Light Images
Low-light image enhancement is an inherently subjective process whose targets
vary with the user's aesthetic. Motivated by this, several personalized
enhancement methods have been investigated. However, the enhancement process
based on user preferences in these techniques is invisible, i.e., a "black
box". In this work, we propose an intelligible unsupervised personalized
enhancer (iUP-Enhancer) for low-light images, which establishes correlations
between the low-light image and unpaired reference images with regard to three
user-friendly attributes (brightness, chromaticity, and noise). The proposed
iUP-Enhancer is trained under the guidance of these correlations and the
corresponding unsupervised loss functions. Rather than a "black box" process,
our iUP-Enhancer presents an intelligible enhancement process built on these
attributes. Extensive experiments demonstrate that the proposed algorithm
produces competitive qualitative and quantitative results while maintaining
excellent flexibility and scalability, as validated by personalization with
single or multiple references, cross-attribute references, or by merely
adjusting parameters.
Comment: Accepted to ACM MM 202
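The idea of guiding enhancement toward an unpaired reference through a user-friendly attribute can be sketched for the brightness attribute alone. The mean-luma statistic (Rec. 601 weights), the squared-error loss, and the trivial global-gain "enhancer" below are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def brightness(img):
    """Mean luma of an RGB image in [0, 1], using Rec. 601 weights."""
    luma = img @ np.array([0.299, 0.587, 0.114])
    return float(luma.mean())

def brightness_loss(enhanced, reference):
    """Unsupervised loss: penalize brightness mismatch between the
    enhanced output and an unpaired reference image."""
    return (brightness(enhanced) - brightness(reference)) ** 2

# Toy data: a dark image and a brighter unpaired reference
dark = np.full((4, 4, 3), 0.1)
ref = np.full((4, 4, 3), 0.6)

gain = 6.0  # a trivial stand-in "enhancer": one global gain parameter
loss = brightness_loss(np.clip(dark * gain, 0.0, 1.0), ref)
```

In the same spirit, per-attribute losses for chromaticity and noise would be added alongside this one, which is what makes the enhancement process interpretable: each loss term corresponds to one named attribute a user can reason about.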