8,260 research outputs found
G-Tweet: a tool for detecting incidents on Twitter
In this paper we show how we tackled the problem of negative opinions about a waste-management company's services on Twitter [1]. We classified the tweets into two types: incidents, i.e. tweets reporting problems the company can fix, and opinions [2].
In the first phase, our research consisted of developing computer algorithms capable of detecting incidents published by citizens on Twitter; in the second phase, we used the collected information to improve the service and to monitor citizen perception.
Universidad de Málaga. Campus de excelencia Internacional Andalucía Tec
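The abstract does not describe the detection algorithm itself. As a purely illustrative sketch of the incident-vs-opinion split (the cue words and logic below are hypothetical, not the paper's method):

```python
# Illustrative only: a hypothetical rule-based classifier showing the
# incident-vs-opinion distinction described in the abstract.
INCIDENT_CUES = {"overflowing", "missed", "broken", "not collected", "smell"}

def classify_tweet(text: str) -> str:
    """Label a tweet 'incident' (actionable report) or 'opinion'."""
    lowered = text.lower()
    if any(cue in lowered for cue in INCIDENT_CUES):
        return "incident"
    return "opinion"

print(classify_tweet("The bin on Main St has been overflowing for days"))  # incident
print(classify_tweet("I think the waste service is great"))                # opinion
```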
Identifying Purpose Behind Electoral Tweets
Tweets pertaining to a single event, such as a national election, can number
in the hundreds of millions. Automatically analyzing them is beneficial in many
downstream natural language applications such as question answering and
summarization. In this paper, we propose a new task: identifying the purpose
behind electoral tweets: why do people post election-oriented tweets? We show
that identifying purpose is correlated with the related phenomena of sentiment
and emotion detection, yet significantly different. Detecting purpose has a
number of applications including detecting the mood of the electorate,
estimating the popularity of policies, identifying key issues of contention,
and predicting the course of events. We create a large dataset of electoral
tweets and annotate a few thousand tweets for purpose. We develop a system that
automatically classifies electoral tweets according to their purpose, obtaining an
accuracy of 43.56% on an 11-class task and an accuracy of 73.91% on a 3-class
task (both accuracies well above the most-frequent-class baseline). Finally, we
show that resources developed for emotion detection are also helpful for
detecting purpose.
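The reported accuracies are compared against a most-frequent-class baseline. Such a baseline is simple to compute as a sanity check (the label distribution below is made up for illustration, not the paper's data):

```python
from collections import Counter

def most_frequent_class_baseline(labels):
    """Accuracy obtained by always predicting the majority class."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# Hypothetical label distribution, for illustration only:
labels = ["support"] * 50 + ["oppose"] * 30 + ["inform"] * 20
print(most_frequent_class_baseline(labels))  # 0.5
```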
Viewpoint Discovery and Understanding in Social Networks
The Web has evolved to a dominant platform where everyone has the opportunity
to express their opinions, to interact with other users, and to debate on
emerging events happening around the world. On the one hand, this has enabled
the presence of different viewpoints and opinions about a usually
controversial topic (like Brexit), but at the same time it has led to
phenomena like media bias, echo chambers and filter bubbles, where users are
exposed to only one point of view on the same topic. Therefore, there is the
need for methods that are able to detect and explain the different viewpoints.
In this paper, we propose a graph partitioning method that exploits social
interactions to enable the discovery of different communities (representing
different viewpoints) discussing a controversial topic in a social
network like Twitter. To explain the discovered viewpoints, we describe a
method, called Iterative Rank Difference (IRD), which detects descriptive
terms that characterize the different viewpoints and explains how a specific
term is related to a viewpoint (by detecting other related descriptive
terms). The results of an experimental evaluation showed
that our approach outperforms state-of-the-art methods on viewpoint discovery,
while a qualitative analysis of the proposed IRD method on three different
controversial topics showed that IRD provides comprehensive and deep
representations of the different viewpoints.
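The abstract does not spell out the IRD formulation, but the core idea of scoring terms by the difference of their frequency ranks between two communities can be sketched as follows (toy tokens and a hypothetical scoring rule, not the paper's exact method):

```python
from collections import Counter

def rank_map(term_counts):
    """Map each term to its frequency rank (0 = most frequent)."""
    ordered = sorted(term_counts, key=term_counts.get, reverse=True)
    return {term: rank for rank, term in enumerate(ordered)}

def rank_difference(community_a, community_b):
    """Score terms by how much higher they rank in community A than in B.
    Terms absent from a community get the worst possible rank."""
    ranks_a = rank_map(Counter(community_a))
    ranks_b = rank_map(Counter(community_b))
    vocab = set(ranks_a) | set(ranks_b)
    worst = len(vocab)
    return {t: ranks_b.get(t, worst) - ranks_a.get(t, worst) for t in vocab}

# Toy example (hypothetical tokens, not the paper's data):
leave = ["sovereignty", "borders", "sovereignty", "economy"]
remain = ["economy", "trade", "trade", "economy"]
scores = rank_difference(leave, remain)
# "sovereignty" gets the highest score for the first community
```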
$1.00 per RT #BostonMarathon #PrayForBoston: analyzing fake content on Twitter
Online social media has emerged as one of the prominent channels for the dissemination of information during real-world events. Malicious content posted online during events can result in damage, chaos and monetary losses in the real world. We analyzed one such medium, Twitter, for content generated during the Boston Marathon blasts, which occurred on April 15, 2013. A lot of fake content and malicious profiles originated on the Twitter network during this event. The aim of this work is to perform an in-depth characterization of the factors that influenced malicious content and profiles becoming viral. Our results showed that 29% of the most viral content on Twitter during the Boston crisis was rumors and fake content, while 51% was generic opinions and comments, and the rest was true information. We found that a large number of users with high social reputation and verified accounts were responsible for spreading the fake content. Next, we used a regression prediction model to verify that the overall impact of all users who propagate the fake content at a given time can be used to estimate the growth of that content in the future. Many malicious accounts created on Twitter during the Boston event were later suspended by Twitter. We identified over six thousand such user profiles and observed that the creation of such profiles surged considerably right after the blasts occurred. We identified closed community structures and star formations in the interaction network of these suspended profiles amongst themselves.
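The abstract mentions a regression prediction model relating the spreaders' overall impact at a given time to the future growth of the fake content; the model details are not given here, so the following is a minimal one-variable least-squares sketch with made-up numbers:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one predictor)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Hypothetical points: cumulative "impact" of spreaders at time t vs.
# retweet volume of the fake content shortly after t (not the study's data).
impact = [10, 20, 30, 40]
growth = [15, 25, 35, 45]
a, b = fit_line(impact, growth)
print(a * 50 + b)  # extrapolated growth at impact 50 -> 55.0
```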
A Framework Based on Semantic Spaces and Glyphs for Social Sensing on Twitter
In this paper we present a framework aimed at detecting emotions and sentiments in a Twitter stream. The approach uses the well-founded Latent Semantic Analysis technique, which can be seen as a bio-inspired cognitive architecture, to induce a semantic space where tweets are mapped and analysed by soft sensors. The measurements of the soft sensors are then used by a visualisation module, which exploits glyphs to present them graphically. The result is an interactive map that makes it easy to explore reactions and opinions across the globe regarding tweets retrieved from specific queries.
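An LSA-induced semantic space is conventionally built with a truncated SVD of a term-by-document matrix; a minimal sketch under that assumption (toy documents, not the paper's pipeline):

```python
import numpy as np

# Minimal LSA sketch (assumed setup, not the paper's implementation):
# rows = terms, columns = tweets; a truncated SVD induces the semantic space.
docs = ["happy great day", "sad terrible news", "great happy news"]
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                    # keep the top-k latent dimensions
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # each tweet as a k-dim vector

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tweets 0 and 2 share words, so they sit closer in the semantic space
# than tweets 0 and 1, which share none.
print(cos(doc_vecs[0], doc_vecs[2]) > cos(doc_vecs[0], doc_vecs[1]))  # True
```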
Detecting East Asian Prejudice on Social Media
The outbreak of COVID-19 has transformed societies across the world as
governments tackle the health, economic and social costs of the pandemic. It
has also raised concerns about the spread of hateful language and prejudice
online, especially hostility directed against East Asia. In this paper we
report on the creation of a classifier that detects and categorizes social
media posts from Twitter into four classes: Hostility against East Asia,
Criticism of East Asia, Meta-discussions of East Asian prejudice and a neutral
class. The classifier achieves an F1 score of 0.83 across all four classes. We
provide our final model (coded in Python), as well as a new 20,000 tweet
training dataset used to make the classifier, two analyses of hashtags
associated with East Asian prejudice and the annotation codebook. The
classifier can be implemented by other researchers, assisting with both online
content moderation processes and further research into the dynamics, prevalence
and impact of East Asian prejudice online during this global pandemic.
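The reported F1 of 0.83 across the four classes is presumably a macro-average; a minimal sketch of computing that metric (the gold and predicted labels below are hypothetical, not the paper's outputs):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1 over all classes present in the gold labels."""
    f1s = []
    for c in sorted(set(y_true)):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical labels over the four classes (not the paper's data):
gold = ["hostility", "criticism", "meta", "neutral", "neutral", "hostility"]
pred = ["hostility", "criticism", "neutral", "neutral", "neutral", "hostility"]
print(round(macro_f1(gold, pred), 3))  # 0.7
```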
- …