Empowering NGOs in Countering Online Hate Messages
Studies on online hate speech have mostly focused on the automated detection
of harmful messages. Little attention has been devoted so far to the
development of effective strategies to fight hate speech, in particular through
the creation of counter-messages. While existing manual scrutiny and
intervention strategies are time-consuming and not scalable, advances in
natural language processing have the potential to provide a systematic approach
to hatred management. In this paper, we introduce a novel ICT platform that NGO
operators can use to monitor and analyze social media data, along with a
counter-narrative suggestion tool. Our platform aims at increasing the
efficiency and effectiveness of operators' activities against Islamophobia. We
test the platform with more than one hundred NGO operators in three countries
through qualitative and quantitative evaluation. Results show that NGOs favor
the platform solution with the suggestion tool, and that the time required to
produce counter-narratives significantly decreases.
Comment: Preprint of the paper published in the Online Social Networks and Media Journal (OSNEM).
Multi-class machine classification of suicide-related communication on Twitter
The World Wide Web, and online social networks in particular, have increased connectivity between people such that information can spread to millions of people in a matter of minutes. This form of online collective contagion has provided many benefits to society, such as providing reassurance and emergency management in the immediate aftermath of natural disasters. However, it also poses a potential risk to vulnerable Web users who receive this information and could subsequently come to harm. One example of this would be the spread of suicidal ideation in online social networks, about which concerns have been raised. In this paper we report the results of a number of machine classifiers built with the aim of classifying text relating to suicide on Twitter. The classifier distinguishes between the more worrying content, such as suicidal ideation, and other suicide-related topics such as reporting of a suicide, memorial, campaigning and support. It also aims to identify flippant references to suicide. We built a set of baseline classifiers using lexical, structural, emotive and psychological features extracted from Twitter posts. We then improved on the baseline classifiers by building an ensemble classifier using the Rotation Forest algorithm and a Maximum Probability voting classification decision method, based on the outcomes of the base classifiers. This achieved an F-measure of 0.728 overall (for 7 classes, including suicidal ideation) and 0.69 for the suicidal ideation class. We summarise the results by reflecting on the most significant predictive principal components of the suicidal ideation class to provide insight into the language used on Twitter to express suicidal ideation. Finally, we perform a 12-month case study of suicide-related posts where we further evaluate the classification approach, showing sustained classification performance and providing anonymous insights into the trends and demographic profile of Twitter users posting content of this type.
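The Maximum Probability voting step described above can be sketched with standard scikit-learn classifiers standing in for the paper's Rotation Forest base learners (Rotation Forest is not in scikit-learn); the texts, labels, and feature choices below are illustrative assumptions, not the authors' data or pipeline.

```python
# Minimal sketch of Maximum Probability voting over an ensemble:
# each base classifier outputs class probabilities, and the final label
# is taken from the single (classifier, class) pair with the highest one.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in data (hypothetical, four of the paper's classes)
texts = ["feeling hopeless tonight", "rip, such sad news",
         "this exam is killing me lol", "call this helpline for support"]
labels = ["ideation", "memorial", "flippant", "support"]

# Three heterogeneous base classifiers over the same tf-idf features
base = [make_pipeline(TfidfVectorizer(), m) for m in
        (LogisticRegression(max_iter=1000), MultinomialNB(),
         DecisionTreeClassifier(random_state=0))]
for clf in base:
    clf.fit(texts, labels)

def predict_max_prob(text):
    """Return the class with the highest probability across all base models."""
    best_label, best_p = None, -1.0
    for clf in base:
        probs = clf.predict_proba([text])[0]
        i = int(np.argmax(probs))
        if probs[i] > best_p:
            best_p, best_label = probs[i], clf.classes_[i]
    return best_label
```

In practice the base learners would be Rotation Forest members trained on rotated feature subsets; the decision rule above is the part the abstract names.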
The strength of weak bots
Some fear that social bots, automated accounts on online social networks, propagate falsehoods that can harm public opinion formation and democratic decision-making. Empirical research, however, has resulted in puzzling findings. On the one hand, the content emitted by bots tends to spread very quickly in the networks. On the other hand, it turned out that bots' ability to contact human users tends to be very limited. Here we analyze an agent-based model of social influence in networks explaining this inconsistency. We show that bots may be successful in spreading falsehoods not despite their limited direct impact on human users, but because of this limitation. Our model suggests that bots with limited direct impact on humans may be more, not less, effective in spreading their views in the social network, because their direct contacts keep exerting influence on users that the bot does not reach directly. Highly active and well-connected bots, in contrast, may have a strong impact on their direct contacts, but these contacts grow too dissimilar from their network neighbors to further spread the bot's content. To demonstrate this effect, we included bots in Axelrod's seminal model of the dissemination of cultures and conducted simulation experiments demonstrating the strength of weak bots. A series of sensitivity analyses show that the finding is robust, in particular when the model is tailored to the context of online social networks. We discuss implications for future empirical research and developers of approaches to detect bots and misinformation.
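The mechanism the abstract describes, a fixed-opinion bot embedded in Axelrod's dissemination-of-cultures model, can be sketched as a minimal simulation. The ring topology, parameter values, and adopter count below are illustrative assumptions, not the authors' exact experimental setup.

```python
import random

def axelrod_with_bot(n=20, features=5, traits=10, steps=20000, seed=1):
    """Axelrod cultural dissemination on a ring plus one fixed 'bot' agent.

    Each agent holds a vector of cultural features. At each step a random
    agent picks a ring neighbour and, with probability equal to their
    similarity, copies one trait on which they differ. The bot (site 0)
    influences its neighbours but never updates its own vector.
    """
    rng = random.Random(seed)
    bot_culture = [0] * features  # the view the bot promotes
    culture = [[rng.randrange(traits) for _ in range(features)]
               for _ in range(n)]
    culture[0] = list(bot_culture)

    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + rng.choice([-1, 1])) % n  # random ring neighbour
        same = sum(a == b for a, b in zip(culture[i], culture[j]))
        # interaction requires partial overlap; probability = similarity
        if 0 < same < features and rng.random() < same / features:
            if i != 0:  # the bot never changes its own culture
                diff = [k for k in range(features)
                        if culture[i][k] != culture[j][k]]
                k = rng.choice(diff)
                culture[i][k] = culture[j][k]

    # number of human agents that fully adopted the bot's culture
    return sum(c == bot_culture for c in culture) - 1
```

Varying the bot's activity (e.g. how often site 0 is selected) or its degree would let one probe the weak-bot effect the paper reports.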
Dancing to the Partisan Beat: A First Analysis of Political Communication on TikTok
TikTok is a video-sharing social networking service, whose popularity is
increasing rapidly. It was the world's second-most downloaded app in 2019.
Although the platform is known for having users posting videos of themselves
dancing, lip-syncing, or showcasing other talents, user-videos expressing
political views have seen a recent spurt. This study aims to perform a primary
evaluation of political communication on TikTok. We collect a set of US
partisan Republican and Democratic videos to investigate how users communicated
with each other about political issues. With the help of computer vision,
natural language processing, and statistical tools, we illustrate that
political communication on TikTok is much more interactive in comparison to
other social media platforms, with users combining multiple information
channels to spread their messages. We show that political communication takes
place in the form of communication trees since users generate branches of
responses to existing content. In terms of user demographics, we find that
users belonging to both US parties are young and behave similarly on the
platform. However, Republican users generated more political content and their
videos received more responses; on the other hand, Democratic users engaged
significantly more in cross-partisan discussions.
Comment: Accepted as a full paper at the 12th International ACM Web Science Conference (WebSci 2020). Please cite the WebSci version; the second version corrects a typo.
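The "communication trees" structure described above, branches of responses growing from an original video, can be sketched from parent links. The record layout and field names below are hypothetical, not the study's actual TikTok data schema.

```python
from collections import defaultdict

# Hypothetical (video_id, reply_to) records; None marks an original video.
replies = [("v1", None), ("v2", "v1"), ("v3", "v1"),
           ("v4", "v2"), ("v6", "v4")]

# Build the tree: map each video to the responses branching off it
children = defaultdict(list)
roots = []
for vid, parent in replies:
    if parent is None:
        roots.append(vid)
    else:
        children[parent].append(vid)

def tree_depth(node):
    """Length of the longest chain of responses below `node` (inclusive)."""
    return 1 + max((tree_depth(c) for c in children[node]), default=0)
```

Depth and branching factor of such trees are natural measures of how interactive the communication is compared with flat comment threads.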