Robust Image Sentiment Analysis Using Progressively Trained and Domain Transferred Deep Networks
Sentiment analysis of online user-generated content is important for many social media analytics tasks. Researchers have largely relied on textual sentiment analysis to develop systems to predict political elections, measure economic indicators, and so on. Recently, however, social media users have increasingly been using images and videos to express their opinions and share their experiences. Sentiment analysis of such large-scale visual content can help better extract user sentiment toward events or topics, such as those in image tweets, so that sentiment prediction from visual content complements textual sentiment analysis. Motivated by the need to leverage large-scale yet noisy training data to solve the extremely challenging problem of image sentiment analysis, we employ Convolutional Neural Networks (CNNs). We first design a suitable CNN architecture for image sentiment analysis. We obtain half a million training samples by using a baseline sentiment algorithm to label Flickr images. To make use of such noisy machine-labeled data, we employ a progressive strategy to fine-tune the deep network. Furthermore, we improve the performance on Twitter images by inducing domain transfer with a small number of manually labeled Twitter images. We have conducted extensive experiments on manually labeled Twitter images. The results show that the proposed CNN achieves better performance in image sentiment analysis than competing algorithms. Comment: 9 pages, 5 figures, AAAI 2015.
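A minimal sketch of the progressive fine-tuning idea described in this abstract: a CNN is first trained on machine-labeled (noisy) images, and training then continues on the subset of samples where the model's own predictions agree most confidently with the noisy labels. The tiny architecture, the confidence threshold, and the dataset format below are illustrative assumptions, not the authors' actual setup.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset

# Illustrative CNN only; the paper's actual architecture differs.
class TinySentimentCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_epoch(model, loader, optimizer, loss_fn):
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

def select_confident(model, dataset, threshold=0.9):
    """Keep samples whose predicted probability for the noisy label is high.

    Assumes dataset[i] returns (image tensor of shape (3, H, W), noisy label int).
    """
    model.eval()
    keep = []
    with torch.no_grad():
        for i in range(len(dataset)):
            image, noisy_label = dataset[i]
            prob = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, noisy_label]
            if prob.item() >= threshold:
                keep.append(i)
    return Subset(dataset, keep)

def progressive_train(model, noisy_dataset, epochs=2, rounds=3):
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    current = noisy_dataset
    for _ in range(rounds):
        loader = DataLoader(current, batch_size=64, shuffle=True)
        for _ in range(epochs):
            train_epoch(model, loader, optimizer, loss_fn)
        # Progressively shrink to the samples the model and the noisy labels agree on.
        current = select_confident(model, current, threshold=0.9)
    return model
```

Under the same reading, the domain transfer step would amount to one final round of fine-tuning on the small set of manually labeled Twitter images.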
A comparative study of Bayesian models for unsupervised sentiment detection
This paper presents a comparative study of three closely related Bayesian models for unsupervised document-level sentiment classification, namely the latent sentiment model (LSM), the joint sentiment-topic (JST) model, and the Reverse-JST model. Extensive experiments have been conducted on two corpora, the movie review dataset and the multi-domain sentiment dataset. It has been found that, while all three models achieve better or comparable performance on these two corpora compared to existing unsupervised sentiment classification approaches, both JST and Reverse-JST are able to extract sentiment-oriented topics. In addition, Reverse-JST consistently performs worse than JST, suggesting that the JST model is more appropriate for joint sentiment-topic detection.
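The distinction between JST and Reverse-JST is easiest to see in their generative stories: JST draws a sentiment label first and then a topic conditioned on it, whereas Reverse-JST reverses that order. The toy simulation below follows the JST ordering with assumed dimensions and symmetric Dirichlet priors; it illustrates the generative process only, not the paper's inference procedure.

```python
import numpy as np

# Toy dimensions (assumptions, not the paper's experimental settings):
# vocabulary size, sentiment labels, topics, documents.
V, S, T, D = 1000, 2, 5, 10

rng = np.random.default_rng(0)

# JST generative story: per-document sentiment proportions, per-(document, sentiment)
# topic proportions, and per-(sentiment, topic) word distributions.
pi = rng.dirichlet(np.ones(S), size=D)            # shape (D, S)
theta = rng.dirichlet(np.ones(T), size=(D, S))    # shape (D, S, T)
phi = rng.dirichlet(np.ones(V), size=(S, T))      # shape (S, T, V)

def generate_document(d, length=50):
    """Generate word ids for document d under the JST ordering."""
    words = []
    for _ in range(length):
        s = rng.choice(S, p=pi[d])         # sentiment first (JST)
        z = rng.choice(T, p=theta[d, s])   # topic conditioned on sentiment
        w = rng.choice(V, p=phi[s, z])     # word from the (sentiment, topic) distribution
        words.append(w)
    return words

print(generate_document(0, length=8))  # a toy document as a list of word ids
```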
Sentiment Lexicon Adaptation with Context and Semantics for the Social Web
Sentiment analysis over social streams offers governments and organisations a fast and effective way to monitor the public's feelings towards policies, brands, businesses, etc. General-purpose sentiment lexicons have been used to compute sentiment from social streams, since they are simple and effective. They calculate the overall sentiment of texts by using a general collection of words with predetermined sentiment orientation and strength. However, a word's sentiment often varies with the context in which it appears, and new words may be encountered that are not covered by the lexicon, particularly in social media environments where content emerges and changes rapidly and constantly. In this paper, we propose a lexicon adaptation approach that uses contextual as well as semantic information extracted from DBPedia to update the words' weighted sentiment orientations and to add new words to the lexicon. We evaluate our approach on three different Twitter datasets and show that enriching the lexicon with contextual and semantic information improves sentiment computation by 3.4% in average accuracy and by 2.8% in average F1 measure.
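A minimal sketch of the contextual half of such an adaptation: each lexicon word's prior score is nudged toward the average sentiment of the words it co-occurs with in the stream, and previously unseen words receive a score derived purely from their contexts. The update rule, the mixing weight alpha, and the window size are assumptions for illustration; the semantic (DBPedia-based) component of the paper's approach is not shown.

```python
from collections import defaultdict

def adapt_lexicon(lexicon, tweets, alpha=0.3, window=2):
    """Shift each word's sentiment toward the mean sentiment of its contexts.

    lexicon: dict mapping word -> sentiment score in [-1, 1]
    tweets:  iterable of tokenized tweets (lists of lowercased tokens)
    alpha:   mixing weight between the prior score and the contextual estimate
    """
    context_scores = defaultdict(list)
    for tokens in tweets:
        for i, word in enumerate(tokens):
            neighbours = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            scored = [lexicon[n] for n in neighbours if n in lexicon]
            if scored:
                context_scores[word].append(sum(scored) / len(scored))

    adapted = dict(lexicon)
    for word, scores in context_scores.items():
        contextual = sum(scores) / len(scores)
        if word in lexicon:
            adapted[word] = (1 - alpha) * lexicon[word] + alpha * contextual
        else:
            adapted[word] = alpha * contextual  # new word discovered from the stream
    return adapted

# Example usage with a toy lexicon and two tweets.
lexicon = {"love": 0.8, "awful": -0.7, "service": 0.0}
tweets = [["love", "the", "new", "update"], ["awful", "customer", "service"]]
print(adapt_lexicon(lexicon, tweets))
```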
Bilingual Sentiment Embeddings: Joint Projection of Sentiment Across Languages
Sentiment analysis in low-resource languages suffers from a lack of annotated corpora for estimating high-performing models. Machine translation and bilingual word embeddings provide some relief through cross-lingual sentiment approaches. However, they either require large amounts of parallel data or do not sufficiently capture sentiment information. We introduce Bilingual Sentiment Embeddings (BLSE), which jointly represent sentiment information in a source and target language. This model requires only a small bilingual lexicon, a source-language corpus annotated for sentiment, and monolingual word embeddings for each language. We perform experiments on three language combinations (Spanish, Catalan, Basque) for sentence-level cross-lingual sentiment classification and find that our model significantly outperforms state-of-the-art methods on four out of six experimental setups and captures information complementary to machine translation. Our analysis of the resulting embedding space provides evidence that it represents sentiment information in the resource-poor target language without any annotated data in that language. Comment: Accepted to ACL 2018 (Long Papers).
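A rough sketch of the joint objective suggested by this abstract: two linear projections map source- and target-language embeddings into a shared space, trained with a translation loss that pulls bilingual lexicon pairs together and a sentiment classification loss on projected source sentences. The dimensions, the loss weighting, and the use of pre-averaged sentence vectors are illustrative assumptions rather than the published model's exact configuration.

```python
import torch
import torch.nn as nn

class BLSESketch(nn.Module):
    def __init__(self, dim_src, dim_trg, dim_joint=300, num_classes=2):
        super().__init__()
        self.proj_src = nn.Linear(dim_src, dim_joint, bias=False)  # source projection
        self.proj_trg = nn.Linear(dim_trg, dim_joint, bias=False)  # target projection
        self.classifier = nn.Linear(dim_joint, num_classes)

    def translation_loss(self, src_lex_vecs, trg_lex_vecs):
        # Pull projected translation pairs from the small bilingual lexicon together.
        diff = self.proj_src(src_lex_vecs) - self.proj_trg(trg_lex_vecs)
        return (diff ** 2).sum(dim=1).mean()

    def sentiment_loss(self, src_sentence_vecs, labels):
        # Classify projected source sentences (here: pre-averaged word embeddings).
        logits = self.classifier(self.proj_src(src_sentence_vecs))
        return nn.functional.cross_entropy(logits, labels)

    def joint_loss(self, src_lex, trg_lex, src_sents, labels, alpha=0.5):
        return (alpha * self.sentiment_loss(src_sents, labels)
                + (1 - alpha) * self.translation_loss(src_lex, trg_lex))

# At prediction time, target-language sentences are projected with proj_trg and fed
# to the same classifier, so no sentiment annotation is needed in the target language.
```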
Semi-supervised latent variable models for sentence-level sentiment analysis
We derive two variants of a semi-supervised model for fine-grained sentiment analysis. Both models leverage abundant natural supervision in the form of review ratings, as well as a small amount of manually crafted sentence labels, to learn sentence-level sentiment classifiers. The proposed model is a fusion of a fully supervised structured conditional model and its partially supervised counterpart. This allows for highly efficient estimation and inference algorithms with rich feature definitions. We describe the two variants as well as their component models and verify experimentally that both variants give significantly improved results for sentence-level sentiment analysis compared to all baselines.
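One way to read this training setup is as an interpolated objective: a standard supervised loss on the few manually labeled sentences plus a loss on rated reviews in which sentence labels are latent. The sketch below uses independent per-sentence classifiers and approximates the document-level positive probability by the mean of the per-sentence probabilities, which is a deliberate simplification of the structured conditional models the abstract refers to.

```python
import torch
import torch.nn as nn

class SentenceSentimentSketch(nn.Module):
    """Toy sentence classifier over pre-computed sentence feature vectors."""
    def __init__(self, dim, num_classes=2):
        super().__init__()
        self.linear = nn.Linear(dim, num_classes)

    def forward(self, sentence_feats):       # (num_sentences, dim)
        return self.linear(sentence_feats)   # (num_sentences, num_classes)

def combined_loss(model, labeled_sents, sent_labels, review_sents, review_label, lam=0.5):
    """labeled_sents: features of manually labeled sentences, sent_labels: their classes;
    review_sents: features of one rated review's sentences; review_label: 0-d float
    tensor in {0., 1.} from the binarized rating."""
    # (1) Supervised loss on the small set of manually labeled sentences.
    sent_loss = nn.functional.cross_entropy(model(labeled_sents), sent_labels)

    # (2) Weak supervision from the review rating: sentence labels are latent, and the
    #     document-level positive probability is approximated by the mean of the
    #     per-sentence positive probabilities (a deliberate simplification).
    sent_probs = torch.softmax(model(review_sents), dim=1)[:, 1]
    doc_loss = nn.functional.binary_cross_entropy(sent_probs.mean(), review_label)

    return sent_loss + lam * doc_loss
```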
