Robust Image Sentiment Analysis Using Progressively Trained and Domain Transferred Deep Networks
Sentiment analysis of online user generated content is important for many
social media analytics tasks. Researchers have largely relied on textual
sentiment analysis to develop systems to predict political elections, measure
economic indicators, and so on. Recently, however, social media users have
increasingly turned to images and videos to express their opinions and share
their experiences.
Sentiment analysis of such large scale visual content can help better extract
user sentiments toward events or topics, such as those in image tweets, so that
prediction of sentiment from visual content is complementary to textual
sentiment analysis. Motivated by the need to leverage large-scale yet noisy
training data to solve the extremely challenging problem of image sentiment
analysis, we employ Convolutional Neural Networks (CNNs). We first design a
suitable CNN architecture for image sentiment analysis. We obtain half a
million training samples by using a baseline sentiment algorithm to label
Flickr images. To make use of such noisy machine labeled data, we employ a
progressive strategy to fine-tune the deep network. Furthermore, we improve the
performance on Twitter images by inducing domain transfer with a small number
of manually labeled Twitter images. We have conducted extensive experiments on
manually labeled Twitter images. The results show that the proposed CNN can
achieve better performance in image sentiment analysis than competing
algorithms.
Comment: 9 pages, 5 figures, AAAI 201
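The progressive training strategy described above, keeping only the machine-labeled samples that the current model supports most confidently and raising the bar over successive passes, can be sketched without any deep-learning library. The 1-D features, the toy sigmoid classifier, and the threshold schedule below are illustrative assumptions, not the paper's CNN architecture.

```python
import math

def predict_proba(weights, x):
    """Toy stand-in for the CNN: sigmoid of a dot product."""
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def progressive_filter(samples, weights, threshold):
    """Keep samples whose noisy label the model supports with prob > threshold."""
    kept = []
    for x, y in samples:
        p = predict_proba(weights, x)
        p_label = p if y == 1 else 1.0 - p
        if p_label > threshold:
            kept.append((x, y))
    return kept

# Noisy machine-labeled data: (features, sentiment label in {0, 1}).
data = [([2.0, 1.0], 1), ([-1.5, -2.0], 0), ([0.1, -0.1], 1), ([-2.0, 1.0], 0)]
weights = [1.0, 1.0]  # stand-in for a model fine-tuned on the previous pass

# Progressively raise the confidence bar between fine-tuning passes.
for threshold in (0.5, 0.7, 0.9):
    data = progressive_filter(data, weights, threshold)

print(len(data))  # samples surviving all passes
```

In the paper the surviving samples would be used to fine-tune the network again before the next filtering round; here the model is held fixed purely to keep the sketch short.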
Evaluation datasets for Twitter sentiment analysis: a survey and a new dataset, the STS-Gold
Sentiment analysis over Twitter offers organisations and individuals a fast and effective way to monitor the public's feelings towards them and their competitors. To assess the performance of sentiment analysis methods over Twitter, a small set of evaluation datasets has been released in the last few years. In this paper, we present an overview of eight publicly available and manually annotated evaluation datasets for Twitter sentiment analysis. Based on this review, we show that a common limitation of most of these datasets, when assessing sentiment analysis at target (entity) level, is the lack of distinctive sentiment annotations among the tweets and the entities contained in them. For example, the tweet "I love iPhone, but I hate iPad" can be annotated with a mixed sentiment label, but the entity iPhone within this tweet should be annotated with a positive sentiment label. Aiming to overcome this limitation, and to complement current evaluation datasets, we present STS-Gold, a new evaluation dataset where tweets and targets (entities) are annotated individually and therefore may present different sentiment labels. This paper also provides a comparative study of the various datasets along several dimensions, including total number of tweets, vocabulary size, and sparsity. We also investigate the pair-wise correlation among these dimensions as well as their correlations to the sentiment classification performance on different datasets.
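The STS-Gold design, one sentiment label for the whole tweet plus a separate label per target entity, can be represented with a small data structure. The field names below are illustrative, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedTweet:
    """A tweet labeled at two levels, as in the STS-Gold design:
    one sentiment for the whole tweet, one per target entity.
    Field names are illustrative, not the dataset's actual schema."""
    text: str
    tweet_sentiment: str                       # e.g. "mixed"
    entity_sentiments: dict = field(default_factory=dict)

# The paper's own example: a mixed tweet with opposing entity labels.
t = AnnotatedTweet(
    text="I love iPhone, but I hate iPad",
    tweet_sentiment="mixed",
    entity_sentiments={"iPhone": "positive", "iPad": "negative"},
)

# Tweet-level and entity-level labels can disagree, which is the point.
print(t.tweet_sentiment, t.entity_sentiments["iPhone"])
```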
The Royal Birth of 2013: Analysing and Visualising Public Sentiment in the UK Using Twitter
Analysis of information retrieved from microblogging services such as Twitter
can provide valuable insight into public sentiment in a geographic region. This
insight can be enriched by visualising information in its geographic context.
Two underlying approaches for sentiment analysis are dictionary-based and
machine learning. The former is popular for public sentiment analysis, and the
latter has found limited use for aggregating public sentiment from Twitter
data. The research presented in this paper aims to extend the machine learning
approach for aggregating public sentiment. To this end, a framework for
analysing and visualising public sentiment from a Twitter corpus is developed.
A dictionary-based approach and a machine learning approach are implemented
within the framework and compared using one UK case study, namely the royal
birth of 2013. The case study validates the feasibility of the framework for
analysis and rapid visualisation. One observation is that there is good
correlation between the results produced by the popular dictionary-based
approach and the machine learning approach when large volumes of tweets are
analysed. However, for rapid analysis to be possible, faster methods need to be
developed using big data techniques and parallel methods.
Comment: http://www.blessonv.com/research/publicsentiment/ 9 pages. Submitted
to IEEE BigData 2013: Workshop on Big Humanities, October 201
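The dictionary-based approach that the paper compares against reduces to a lexicon lookup plus aggregation. The tiny hand-made lexicon and the sign-based polarity rule below are assumptions for illustration; real systems use large scored lexicons, but the aggregation idea is the same.

```python
# Minimal sketch of a dictionary-based sentiment scorer.
# The lexicon and scores are illustrative, not from the paper.
LEXICON = {"great": 1, "love": 1, "happy": 1, "bad": -1, "sad": -1, "hate": -1}

def score_tweet(text):
    """Sum per-word lexicon scores; the sign gives the tweet's polarity."""
    total = sum(LEXICON.get(w, 0) for w in text.lower().split())
    if total > 0:
        return "positive"
    if total < 0:
        return "negative"
    return "neutral"

print(score_tweet("What a great happy day"))
print(score_tweet("sad news and a bad day"))
```

Because each tweet is scored independently, this approach parallelises trivially, which is one reason it suits the rapid, large-volume analysis the paper calls for.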
Analyzing Disproportionate Reaction via Comparative Multilingual Targeted Sentiment in Twitter
Global events such as terrorist attacks are commented upon in social media, such as Twitter, in different languages and from different parts of the world. Most prior studies have focused on monolingual sentiment analysis, and have therefore excluded an extensive proportion of the Twitter userbase. In this paper, we perform a multilingual comparative sentiment analysis study on the terrorist attack in Paris in November 2015. In particular, we look at targeted sentiment, investigating opinions on specific entities rather than simply the general sentiment of each tweet. Given the potentially inflammatory and polarizing effect that these types of tweets may have on attitudes, we examine the sentiments expressed about different targets and explore whether disproportionate reaction was expressed about such targets across different languages. Specifically, we assess whether the sentiment of French-speaking Twitter users during the Paris attack differs from that of English-speaking ones. We identify disproportionately negative attitudes in the English dataset over the French one towards some entities and, via a crowdsourcing experiment, illustrate that this also extends to forming an annotator bias.
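Comparing targeted sentiment across language-specific datasets comes down to aggregating per-entity scores within each language and inspecting the gap. The records, entity name, and score scale below are illustrative assumptions, not the paper's data or method.

```python
from collections import defaultdict

def mean_entity_sentiment(records):
    """records: (language, entity, score) triples with score in [-1, 1].
    Returns a mapping {(language, entity): mean score}."""
    sums = defaultdict(lambda: [0.0, 0])
    for lang, entity, score in records:
        s = sums[(lang, entity)]
        s[0] += score
        s[1] += 1
    return {k: total / n for k, (total, n) in sums.items()}

# Illustrative toy records: the English tweets about a hypothetical
# entity "TargetX" skew more negative than the French ones.
records = [
    ("en", "TargetX", -0.8), ("en", "TargetX", -0.6),
    ("fr", "TargetX", -0.2), ("fr", "TargetX", -0.4),
]
means = mean_entity_sentiment(records)

# A negative gap means the English dataset is more negative on this entity.
gap = round(means[("en", "TargetX")] - means[("fr", "TargetX")], 2)
print(gap)
```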
The Effects of Twitter Sentiment on Stock Price Returns
Social media increasingly reflect and influence the behavior of other
complex systems. In this paper, we investigate the relations between a
well-known micro-blogging platform, Twitter, and financial markets. In
particular, we
consider, in a period of 15 months, the Twitter volume and sentiment about the
30 stock companies that form the Dow Jones Industrial Average (DJIA) index. We
find a relatively low Pearson correlation and Granger causality between the
corresponding time series over the entire time period. However, we find a
significant dependence between the Twitter sentiment and abnormal returns
during the peaks of Twitter volume. This is valid not only for the expected
Twitter volume peaks (e.g., quarterly announcements), but also for peaks
corresponding to less obvious events. We formalize the procedure by adapting
the well-known "event study" from economics and finance to the analysis of
Twitter data. The procedure allows us to automatically identify events as Twitter
volume peaks, to compute the prevailing sentiment (positive or negative)
expressed in tweets at these peaks, and finally to apply the "event study"
methodology to relate them to stock returns. We show that sentiment polarity of
Twitter peaks implies the direction of cumulative abnormal returns. The amount
of cumulative abnormal returns is relatively low (about 1-2%), but the
dependence is statistically significant for several days after the events.
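The adapted event-study procedure, identifying Twitter-volume peaks, reading off the prevailing sentiment at each peak, and cumulating the abnormal returns that follow, can be sketched as below. The peak rule (volume above twice the series mean), the window length, and the toy daily series are illustrative assumptions, not the paper's exact method or data.

```python
def volume_peaks(volume, factor=2.0):
    """Days where tweet volume exceeds factor x the series mean (toy peak rule)."""
    mean = sum(volume) / len(volume)
    return [t for t, v in enumerate(volume) if v > factor * mean]

def event_study(volume, sentiment, abnormal_returns, window=2):
    """Pair each volume peak's prevailing sentiment sign with the cumulative
    abnormal return (CAR) over the next `window` days."""
    results = []
    for t in volume_peaks(volume):
        polarity = "positive" if sentiment[t] > 0 else "negative"
        car = round(sum(abnormal_returns[t + 1:t + 1 + window]), 4)
        results.append((t, polarity, car))
    return results

# Toy daily series with one obvious volume peak at day 3.
volume = [10, 12, 11, 60, 9, 10, 11]
sentiment = [0.1, 0.0, -0.1, 0.6, 0.0, 0.1, 0.0]            # net daily sentiment
abnormal = [0.0, 0.001, -0.002, 0.0, 0.008, 0.007, -0.001]  # daily abnormal returns

print(event_study(volume, sentiment, abnormal))
```

The sketch mirrors the paper's finding at toy scale: a positive-sentiment peak is followed by a small positive cumulative abnormal return.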
Sentiment analysis of health care tweets: review of the methods used.
BACKGROUND: Twitter is a microblogging service where users can send and read short 140-character messages called "tweets." Many unstructured, free-text tweets relating to health care are shared on Twitter, which is becoming a popular area for health care research. Sentiment is a metric commonly used to investigate the positive or negative opinion within these messages. Exploring the methods used for sentiment analysis in Twitter health care research may allow us to better understand the options available for future research in this growing field. OBJECTIVE: The first objective of this study was to understand which tools are available for sentiment analysis in Twitter health care research, by reviewing existing studies in this area and the methods they used. The second objective was to determine which method works best in the health care setting, by analyzing how the methods were used to answer specific health care questions, how they were produced, and how their accuracy was analyzed. METHODS: A review of the literature was conducted pertaining to Twitter and health care research that used a quantitative method of sentiment analysis for the free-text messages (tweets). The study compared the types of tools used in each case and examined methods for tool production, tool training, and analysis of accuracy. RESULTS: A total of 12 papers studying the quantitative measurement of sentiment in the health care setting were found. More than half of these studies produced tools specifically for their research, 4 used freely available open source tools, and 2 used commercially available software. Moreover, 4 out of the 12 tools were trained using a smaller sample of the study's final data. The sentiment method was trained against, on average, 0.45% (2816/627,024) of the total sample data. Only 1 of the 12 papers commented on the analysis of the accuracy of the tool used.
CONCLUSIONS: Multiple methods are used for sentiment analysis of tweets in the health care setting. These range from self-produced basic categorizations to more complex and expensive commercial software. The open source and commercial methods were developed on product reviews and generic social media messages, and none of them has been extensively tested against a corpus of health care messages to check its accuracy. This study suggests that there is a need for an accurate and tested tool for sentiment analysis of tweets, trained on a health care-specific corpus of manually annotated tweets.