Beyond Classification: Latent User Interests Profiling from Visual Contents Analysis
User preference profiling is an important task in modern online social
networks (OSN). With the proliferation of image-centric social platforms, such
as Pinterest, visual content has become one of the most informative data
streams for understanding user preferences. Traditional approaches usually
treat visual content analysis as a general classification problem where one or
more labels are assigned to each image. Although such an approach simplifies
the process of image analysis, it misses the rich context and visual cues that
play an important role in people's perception of images. In this paper, we
explore the possibility of learning a user's latent visual preferences
directly from image contents. We propose a distance metric learning method
based on Deep Convolutional Neural Networks (CNN) to directly extract
similarity information from visual contents and use the derived distance metric
to mine individual users' fine-grained visual preferences. Through our
preliminary experiments using data from 5,790 Pinterest users, we show that
even for images within the same category, each user possesses distinct and
individually identifiable visual preferences that are consistent over their
lifetime. Our results underscore the untapped potential of finer-grained visual
preference profiling in understanding users' preferences.
Comment: 2015 IEEE 15th International Conference on Data Mining Workshop
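As a rough illustration of the CNN-based distance metric learning this abstract describes, here is a minimal PyTorch sketch using a triplet loss. The ResNet backbone, embedding size, and the same-user/other-user sampling scheme are illustrative assumptions, not the authors' 2015 setup.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class EmbeddingNet(nn.Module):
    """CNN that maps images to L2-normalized embeddings for metric learning."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # stand-in backbone
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, dim)

    def forward(self, x):
        return nn.functional.normalize(self.backbone(x), dim=1)

net = EmbeddingNet()
loss_fn = nn.TripletMarginLoss(margin=0.2)

# Assumed sampling: anchor/positive from one user's pins, negative from another.
anchor, positive, negative = (torch.randn(8, 3, 224, 224) for _ in range(3))
loss = loss_fn(net(anchor), net(positive), net(negative))
loss.backward()  # distances between same-user images shrink, others grow
```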
LiveSketch: Query Perturbations for Guided Sketch-based Visual Search
LiveSketch is a novel algorithm for searching large image collections using
hand-sketched queries. LiveSketch tackles the inherent ambiguity of sketch
search by creating visual suggestions that augment the query as it is drawn,
making query specification an iterative rather than one-shot process that helps
disambiguate users' search intent. Our technical contributions are: a triplet
convnet architecture that incorporates an RNN-based variational autoencoder to
search for images using vector (stroke-based) queries; real-time clustering to
identify likely search intents (and so, targets within the search embedding);
and the use of backpropagation from those targets to perturb the input stroke
sequence, thereby suggesting alterations to the query that guide the search.
We show improvements in accuracy and time-to-task over contemporary baselines
using a 67M image corpus.
Comment: Accepted to CVPR 2019
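The core mechanism, backpropagating from inferred search targets into the input stroke sequence, can be sketched in a few lines. In this hedged PyTorch toy, a frozen linear map stands in for the paper's triplet convnet with its RNN-based VAE, and all dimensions are invented:

```python
import torch

# Stand-in for the frozen sketch encoder (the paper uses an RNN-VAE + convnet).
encoder = torch.nn.Linear(256, 128).requires_grad_(False)

strokes = torch.randn(1, 256, requires_grad=True)  # flattened vector query
target = torch.randn(1, 128)  # centroid of one clustered search intent

opt = torch.optim.SGD([strokes], lr=0.1)
for _ in range(25):
    opt.zero_grad()
    # Pull the query's embedding toward the chosen intent cluster.
    loss = torch.nn.functional.mse_loss(encoder(strokes), target)
    loss.backward()  # gradients flow into the input strokes, not the weights
    opt.step()
# 'strokes' now holds a perturbed query, i.e. a suggested sketch alteration.
```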
Towards Task Understanding in Visual Settings
We consider the problem of understanding real world tasks depicted in visual
images. While most existing image captioning methods excel in producing natural
language descriptions of visual scenes involving human tasks, there is often
the need for an understanding of the exact task being undertaken rather than a
literal description of the scene. We leverage insights from real world task
understanding systems, and propose a framework composed of convolutional neural
networks and an external hierarchical task ontology to produce task
descriptions from input images. Detailed experiments highlight the efficacy of
the extracted descriptions, which could potentially find their way into many
applications, including image alt-text generation.
Comment: Accepted as a Student Abstract at the 33rd AAAI Conference on Artificial
Intelligence, 2019
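The student abstract gives no implementation detail; the toy Python sketch below shows one plausible way CNN label scores could be mapped through an external hierarchical ontology to a task description. The ontology entries and the untrained stand-in CNN are entirely hypothetical.

```python
import torch

# Hypothetical two-level ontology: primitive action -> overall task.
ONTOLOGY = {
    "chopping": "preparing food",
    "stirring": "preparing food",
    "hammering": "assembling furniture",
}
LABELS = list(ONTOLOGY)

# Untrained stand-in for the paper's convolutional network.
cnn = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, len(LABELS)))

def describe_task(image: torch.Tensor) -> str:
    """Name the overall task by lifting the top primitive label up the ontology."""
    probs = cnn(image.unsqueeze(0)).softmax(dim=1)[0]
    primitive = LABELS[int(probs.argmax())]
    return f"{ONTOLOGY[primitive]} ({primitive})"

print(describe_task(torch.randn(3, 64, 64)))  # e.g. "preparing food (stirring)"
```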
Context-Aware Embeddings for Automatic Art Analysis
Automatic art analysis aims to classify and retrieve artistic representations
from a collection of images by using computer vision and machine learning
techniques. In this work, we propose to enhance visual representations from
neural networks with contextual artistic information. Whereas visual
representations are able to capture information about the content and the style
of an artwork, our proposed context-aware embeddings additionally encode
relationships between different artistic attributes, such as author, school, or
historical period. We design two different approaches for using context in
automatic art analysis. In the first one, contextual data is obtained through a
multi-task learning model, in which several attributes are trained together to
find visual relationships between elements. In the second approach, context is
obtained through an art-specific knowledge graph, which encodes relationships
between artistic attributes. An exhaustive evaluation of both of our models on
several art analysis problems, such as author identification, type
classification, and cross-modal retrieval, shows that performance improves by
up to 7.3% in art classification and 37.24% in retrieval when context-aware
embeddings are used.
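A minimal PyTorch sketch of the first approach, multi-task training of several artistic attributes over a shared encoder, follows; the toy encoder, the attribute heads, and the class counts are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskArtNet(nn.Module):
    """Shared visual encoder with one classification head per artistic attribute."""
    def __init__(self, feat_dim=512, n_authors=100, n_types=10, n_schools=25):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.heads = nn.ModuleDict({
            "author": nn.Linear(feat_dim, n_authors),
            "type": nn.Linear(feat_dim, n_types),
            "school": nn.Linear(feat_dim, n_schools),
        })

    def forward(self, x):
        z = self.encoder(x)  # this shared z is the context-aware embedding
        return z, {k: head(z) for k, head in self.heads.items()}

model = MultiTaskArtNet()
z, logits = model(torch.randn(4, 3, 64, 64))
targets = {k: torch.randint(v.shape[1], (4,)) for k, v in logits.items()}
# Summing the per-attribute losses ties the shared embedding to every attribute.
loss = sum(nn.functional.cross_entropy(v, targets[k]) for k, v in logits.items())
loss.backward()
```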
VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining
Assessing the aesthetics of an image is challenging, as it is influenced by
multiple factors including composition, color, style, and high-level semantics.
Existing image aesthetic assessment (IAA) methods primarily rely on
human-labeled rating scores, which oversimplify the visual aesthetic
information that humans perceive. Conversely, user comments offer more
comprehensive information and are a more natural way to express human opinions
and preferences regarding image aesthetics. In light of this, we propose
learning image aesthetics from user comments, and exploring vision-language
pretraining methods to learn multimodal aesthetic representations.
Specifically, we pretrain an image-text encoder-decoder model with
image-comment pairs, using contrastive and generative objectives to learn rich
and generic aesthetic semantics without human labels. To efficiently adapt the
pretrained model for downstream IAA tasks, we further propose a lightweight
rank-based adapter that employs text as an anchor to learn the aesthetic
ranking concept. Our results show that our pretrained aesthetic vision-language
model outperforms prior work on image aesthetic captioning on the
AVA-Captions dataset and has powerful zero-shot capability for aesthetic
tasks such as zero-shot style classification and zero-shot IAA, surpassing many
supervised baselines. Finetuning only the small number of parameters in the
proposed adapter module, our model achieves state-of-the-art IAA performance
over the AVA dataset.
Comment: CVPR 2023,
https://github.com/google-research/google-research/tree/master/vil
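The contrastive half of the pretraining objective is, in spirit, the standard symmetric InfoNCE loss over a batch of image-comment pairs; a minimal sketch follows, with random tensors standing in for the encoder outputs (the generative captioning objective and the rank-based adapter are omitted):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched image-comment embeddings."""
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(len(logits))  # matched pairs lie on the diagonal
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.t(), labels)) / 2

loss = contrastive_loss(torch.randn(16, 512), torch.randn(16, 512))
```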