From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction
Visual multimedia have become an inseparable part of our digital social lives, and they often capture moments tied with deep affections. Automated visual sentiment analysis tools can provide a means of extracting the rich feelings and latent dispositions embedded in these media. In this work, we explore how Convolutional Neural Networks (CNNs), by now a de facto machine learning tool, particularly in Computer Vision, can be applied to the task of visual sentiment prediction. We accomplish this through fine-tuning experiments with a state-of-the-art CNN and, via rigorous architecture analysis, present several modifications that lead to accuracy improvements over prior art on a dataset of images from a popular social media platform. We additionally present visualizations of local patterns that the network learned to associate with image sentiment, for insight into how visual positivity (or negativity) is perceived by the model.

Comment: Accepted for publication in Image and Vision Computing. Models and source code available at https://github.com/imatge-upc/sentiment-201
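The fine-tuning recipe the abstract describes can be sketched in miniature: the pretrained convolutional layers are frozen and serve as a fixed feature extractor, while only a new sentiment head is trained. Everything below (the random "frozen" projection, the toy image clusters, the function names) is a made-up stand-in for illustration, not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_cnn_features(images):
    # Stand-in for the frozen pretrained layers: a fixed (seeded) random
    # projection of the flattened images into a 64-d feature space.
    proj = np.random.default_rng(42).standard_normal((images.shape[1], 64))
    return np.tanh(images @ proj)

def train_sentiment_head(feats, labels, lr=0.1, epochs=200):
    # Train only the new head (logistic-regression layer) by gradient
    # descent on the logistic loss; the frozen layers get no updates.
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
        grad = p - labels
        w -= lr * feats.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Toy "images": two noisy clusters standing in for positive/negative scenes.
pos = rng.standard_normal((50, 100)) + 0.5
neg = rng.standard_normal((50, 100)) - 0.5
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [0] * 50)

feats = frozen_cnn_features(X)
w, b = train_sentiment_head(feats, y)
acc = ((feats @ w + b > 0).astype(int) == y).mean()
print(f"training accuracy of the new head: {acc:.2f}")
```

In a real setup the head (and optionally the top convolutional blocks) would be fine-tuned with a small learning rate on the sentiment dataset, which is the experiment the paper runs.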
CentralNet: a Multilayer Approach for Multimodal Fusion
This paper proposes a novel multimodal fusion approach, aiming to produce the best possible decisions by integrating information coming from multiple media. While most past multimodal approaches either project the features of different modalities into the same space or coordinate the representations of each modality through the use of constraints, our approach borrows from both visions. More specifically, assuming each modality can be processed by a separate deep convolutional network, allowing decisions to be taken independently from each modality, we introduce a central network linking the modality-specific networks. This central network not only provides a common feature embedding but also regularizes the modality-specific networks through the use of multi-task learning. The proposed approach is validated on four different computer vision tasks, on which it consistently improves the accuracy of existing multimodal fusion approaches.
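The central-network idea can be illustrated with a minimal forward pass: each modality keeps its own network, and a central path combines, at every layer, its previous hidden state with the corresponding hidden states of the modality networks via learnable scalar weights. All sizes, weights, and names below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0)

def layer(dim_in, dim_out):
    # Small random weight matrix standing in for a trained layer.
    return rng.standard_normal((dim_in, dim_out)) * 0.1

# Two modality-specific networks with two hidden layers each.
Wa = [layer(16, 8), layer(8, 8)]   # modality A (e.g. image features)
Wb = [layer(12, 8), layer(8, 8)]   # modality B (e.g. audio features)
Wc = [layer(8, 8), layer(8, 8)]    # central-network layers
# Per-layer fusion weights (central, modality A, modality B); in the
# real model these are learned jointly with the networks.
alphas = [(0.5, 0.3, 0.3), (0.5, 0.3, 0.3)]
W_out = layer(8, 4)                # central classification head (4 classes)

def central_forward(xa, xb):
    ha, hb = xa, xb
    hc = np.zeros(8)               # central state starts empty
    for i in range(2):
        ha = relu(ha @ Wa[i])      # modality A keeps its own path
        hb = relu(hb @ Wb[i])      # modality B keeps its own path
        ac, aa, ab = alphas[i]
        # Fusion step: weighted sum of the central path and both modalities.
        hc = relu(ac * (hc @ Wc[i]) + aa * ha + ab * hb)
    return hc @ W_out              # joint decision from the central path

logits = central_forward(rng.standard_normal(16), rng.standard_normal(12))
print("joint logits shape:", logits.shape)
```

Because the modality networks also produce their own decisions during training, the central path acts as both a fusion mechanism and a multi-task regularizer, which is the dual role the abstract highlights.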
Using Word Embeddings in Twitter Election Classification
Word embeddings and convolutional neural networks (CNNs) have attracted extensive attention in various classification tasks for Twitter, e.g. sentiment classification. However, the effect of the configuration used to train and generate the word embeddings on the classification performance has not been studied in the existing literature. In this paper, using a Twitter election classification task that aims to detect election-related tweets, we investigate the impact of the background dataset used to train the embedding models, the context window size, and the dimensionality of word embeddings on the classification performance. By comparing the classification results of two word embedding models trained on different background corpora (e.g. Wikipedia articles and Twitter microposts), we show that the background data type should align with the Twitter classification dataset to achieve better performance. Moreover, by evaluating word embedding models trained with various context window sizes and dimensionalities, we find that larger context windows and dimensionalities are preferable for improving performance. Our experimental results also show that using word embeddings and a CNN leads to statistically significant improvements over various baselines such as random, SVM with TF-IDF, and SVM with word embeddings.
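The two hyperparameters the paper varies, context window size and embedding dimensionality, can be made concrete with a simple count-based embedding: co-occurrences within a window, reduced to the target dimension by truncated SVD. The tiny corpus below is made up for the sketch; the paper's actual setup trains word2vec-style models on Wikipedia or Twitter text.

```python
import numpy as np

corpus = [
    "vote in the election today".split(),
    "the election results are in".split(),
    "cast your vote today".split(),
]

def train_embeddings(sentences, window=2, dim=3):
    # Build the vocabulary and a word -> index map.
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            # Count every context word within `window` positions: this is
            # where the context-window hyperparameter enters.
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    counts[idx[w], idx[s[j]]] += 1
    # Truncated SVD keeps the top `dim` directions: this is where the
    # dimensionality hyperparameter enters.
    U, S, _ = np.linalg.svd(counts)
    return {w: U[idx[w], :dim] * S[:dim] for w in vocab}

emb = train_embeddings(corpus, window=2, dim=3)
print(len(emb["election"]))  # → 3
```

Re-running `train_embeddings` with different `window` and `dim` values, then feeding the resulting vectors to a downstream classifier, mirrors (in miniature) the configuration study the abstract describes.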