POISED: Spotting Twitter Spam Off the Beaten Paths
Cybercriminals have found in online social networks a propitious medium to
spread spam and malicious content. Existing techniques for detecting spam
include predicting the trustworthiness of accounts and analyzing the content of
these messages. However, advanced attackers can still successfully evade these
defenses.
Online social networks bring together people who have personal connections or
share common interests, forming communities. In this paper, we first show that users
within a networked community share some topics of interest. Moreover, content
shared on these social networks tends to propagate according to the interests of
people. Dissemination paths may emerge where some communities post similar
messages, based on the interests of those communities. Spam and other malicious
content, on the other hand, follow different spreading patterns.
In this paper, we follow this insight and present POISED, a system that
leverages the differences in propagation between benign and malicious messages
on social networks to identify spam and other unwanted content. We test our
system on a dataset of 1.3M tweets collected from 64K users, and we show that
our approach is effective in detecting malicious messages, reaching 91%
precision and 93% recall. We also show that POISED's detection is more
comprehensive than previous systems, by comparing it to three state-of-the-art
spam detection systems that have been proposed by the research community in the
past. POISED significantly outperforms each of these systems. Moreover, through
simulations, we show how POISED is effective in the early detection of spam
messages and how it is resilient against two well-known adversarial machine
learning attacks.
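The core intuition of the abstract above can be sketched in code: benign content tends to match the interests of the communities it spreads through, while spam does not. The following is a minimal illustrative sketch of that idea, not the authors' actual system; the community interest sets, topic representation, and threshold are all assumptions made for the example.

```python
# Hedged sketch of the propagation-vs-interest intuition behind POISED.
# All data (interest sets, threshold) is illustrative, not from the paper.

def jaccard(a, b):
    """Jaccard similarity between two topic sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def propagation_score(message_topics, path_communities, interests):
    """Average topic overlap between a message and the communities
    it propagates through along its dissemination path."""
    sims = [jaccard(message_topics, interests[c]) for c in path_communities]
    return sum(sims) / len(sims)

def is_spam_like(message_topics, path_communities, interests, threshold=0.2):
    # Benign content tends to follow community interests; spam does not.
    return propagation_score(message_topics, path_communities, interests) < threshold

interests = {
    "gamers": {"games", "esports", "streaming"},
    "foodies": {"recipes", "restaurants"},
}

# An on-topic message spreading through the gamer community: benign-looking.
print(is_spam_like({"games", "esports"}, ["gamers"], interests))              # False
# An off-topic pharma message hitting both communities: spam-like.
print(is_spam_like({"pharmacy", "discount"}, ["gamers", "foodies"], interests))  # True
```

A real system would learn community structure and topics from data; this sketch only shows why divergent propagation paths are a usable signal.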
Hate is not Binary: Studying Abusive Behavior of #GamerGate on Twitter
Over the past few years, online bullying and aggression have become
increasingly prominent, and manifested in many different forms on social media.
However, there is little work analyzing the characteristics of abusive users
and what distinguishes them from typical social media users. In this paper, we
start addressing this gap by analyzing tweets containing a large amount of
abusive content. We focus on a Twitter dataset revolving around the Gamergate
controversy, which led to many incidents of cyberbullying and cyberaggression
on various gaming and social media platforms. We study the properties of the
users tweeting about Gamergate, the content they post, and the differences in
their behavior compared to typical Twitter users.
We find that while their tweets are often seemingly about aggressive and
hateful subjects, "Gamergaters" do not exhibit common expressions of online
anger, and in fact primarily differ from typical users in that their tweets are
less joyful. They are also more engaged than typical Twitter users, which is an
indication as to how and why this controversy is still ongoing. Surprisingly,
we find that Gamergaters are less likely to be suspended by Twitter; we
therefore analyze the properties of suspended users to identify what
distinguishes them from typical users and what may have led to their
suspension. We perform an unsupervised machine learning
analysis to detect clusters of users who, though currently active, could be
considered for suspension since they exhibit behaviors similar to those of
suspended users. Finally, we confirm the usefulness of our analyzed features by emulating
the Twitter suspension mechanism with a supervised learning method, achieving
very good precision and recall.
Comment: In 28th ACM Conference on Hypertext and Social Media (ACM HyperText 2017).
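The unsupervised step described above, finding active users whose behavior resembles that of suspended users, can be sketched as a centroid-distance check. This is a hedged illustration, not the paper's method: the feature vectors and distance threshold are invented for the example, and the paper's actual feature set and clustering algorithm may differ.

```python
# Hedged sketch: flag active users whose behavioral feature vectors sit close
# to the centroid of already-suspended users. Features and threshold are
# illustrative assumptions, not the paper's actual feature set.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    dims = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dims)]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def suspension_candidates(active, suspended, threshold=1.0):
    """Active users closer to the suspended-user centroid than threshold."""
    c = centroid(list(suspended.values()))
    return [u for u, v in active.items() if euclidean(v, c) < threshold]

# Toy feature vectors: (hashtags per tweet, fraction of aggressive words).
suspended = {"s1": [5.0, 0.8], "s2": [6.0, 0.9]}
active = {"a1": [5.5, 0.85], "a2": [0.5, 0.05]}
print(suspension_candidates(active, suspended))  # ['a1']
```

User a1 behaves like the suspended cohort and is flagged; a2, with typical-user features, is not.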
Solutions to Detect and Analyze Online Radicalization: A Survey
Online Radicalization (also called Cyber-Terrorism, Extremism, Cyber-Racism,
or Cyber-Hate) is widespread and has become a major and growing
concern to the society, governments and law enforcement agencies around the
world. Research shows that various platforms on the Internet (offering a low
barrier to publishing content, anonymity, exposure to millions of users, and
the potential for very quick and widespread diffusion of a message), such as
YouTube (a popular video-sharing website), Twitter (an online micro-blogging
service), Facebook (a popular social networking website), online discussion
forums, and the blogosphere, are being misused for malicious intent. Such platforms are being
used to form hate groups, racist communities, spread extremist agenda, incite
anger or violence, promote radicalization, recruit members, and create virtual
organizations and communities. Automatic detection of online radicalization
is a technically challenging problem because of the vast amount of the data,
unstructured and noisy user-generated content, dynamically changing content and
adversary behavior. There are several solutions proposed in the literature
aiming to combat and counter cyber-hate and cyber-extremism. In this survey, we
review solutions to detect and analyze online radicalization. We review 40
papers published at 12 venues from June 2003 to November 2011. We present a
novel classification scheme to classify these papers. We analyze these
techniques, perform trend analysis, discuss limitations of existing techniques
and identify research gaps.
Misinformation Detection in Social Media
The pervasive use of social media gives it a crucial role in helping the public perceive reliable information. Meanwhile, the openness and timeliness of social networking sites also allow for the rapid creation and dissemination of misinformation. It becomes increasingly difficult for online users to find accurate and trustworthy information. As witnessed in recent incidents, misinformation escalates quickly, can impact social media users with undesirable consequences, and can wreak havoc instantaneously. Unlike existing research on misinformation in psychology and the social sciences, social media platforms pose unprecedented challenges for misinformation detection. First, intentional spreaders of misinformation actively disguise themselves. Second, the content of misinformation may be manipulated to avoid detection, while abundant contextual information may play a vital role in detecting it. Third, not only accuracy but also earliness of a detection method is important in keeping misinformation from going viral. Fourth, social media platforms have been used as a fundamental data source for various disciplines, and such research may have been conducted in the presence of misinformation. To tackle these challenges, we focus on developing machine learning algorithms that are robust to adversarial manipulation and data scarcity.
The main objective of this dissertation is to provide a systematic study of misinformation detection in social media. To tackle the challenges of adversarial attacks, I propose adaptive detection algorithms to deal with the active manipulations of misinformation spreaders via content and networks. To facilitate content-based approaches, I analyze the contextual data of misinformation and propose to incorporate the specific contextual patterns of misinformation into a principled detection framework. Considering its rapidly growing nature, I study how misinformation can be detected at an early stage. In particular, I focus on the challenge of data scarcity and propose a novel framework to enable historical data to be utilized for emerging incidents that are seemingly irrelevant. With misinformation being viral, applications that rely on social media data face the challenge of corrupted data. To this end, I present robust statistical relational learning and personalization algorithms to minimize the negative effect of misinformation.
Doctoral Dissertation, Computer Science, 201
Multi-Modal Embeddings for Isolating Cross-Platform Coordinated Information Campaigns on Social Media
Coordinated multi-platform information operations are implemented in a
variety of contexts on social media, including state-run disinformation
campaigns, marketing strategies, and social activism. Characterized by the
promotion of messages via multi-platform coordination, in which multiple user
accounts, within a short time, post content advancing a shared informational
agenda on multiple platforms, they contribute to an already confusing and
manipulated information ecosystem. To make things worse, reliable datasets that
contain ground truth information about such operations are virtually
nonexistent. This paper presents a multi-modal approach that identifies the
social media messages potentially engaged in a coordinated information campaign
across multiple platforms. Our approach incorporates textual content, temporal
information, and the underlying network of users and the messages they post to identify
groups of messages with unusual coordination patterns across multiple social
media platforms. We apply our approach to content posted on four platforms
related to the Syrian Civil Defence organization known as the White Helmets:
Twitter, Facebook, Reddit, and YouTube. Results show that our approach
identifies social media posts that link to news YouTube channels with similar
factuality scores, which is often an indication of coordinated operations.
Comment: To appear in the 5th Multidisciplinary International Symposium on
Disinformation in Open Online Media (MISDOOM 2023).
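The multi-modal idea described above, combining textual, temporal, and network signals to surface unusually similar cross-platform posts, can be sketched as follows. This is an illustrative toy, not the paper's model: the features (text length, posting hour, link count), the concatenation-as-embedding step, and the distance threshold are all assumptions made for the example.

```python
# Hedged sketch of multi-modal coordination detection: concatenate simple
# per-modality features into one vector per message, then flag pairs of
# messages on different platforms whose vectors are near-identical.
# All features and the threshold are illustrative assumptions.
import math
from itertools import combinations

def embed(message):
    """Concatenate toy textual, temporal, and network features."""
    text = [len(message["text"]) / 100.0]       # crude textual feature
    temporal = [message["hour"] / 24.0]         # posting time of day
    network = [message["shared_links"] / 10.0]  # outbound link count
    return text + temporal + network

def coordinated_pairs(messages, threshold=0.05):
    """Pairs of messages on different platforms with near-identical embeddings."""
    pairs = []
    for a, b in combinations(messages, 2):
        if a["platform"] == b["platform"]:
            continue  # coordination here means cross-platform similarity
        if math.dist(embed(a), embed(b)) < threshold:
            pairs.append((a["id"], b["id"]))
    return pairs

posts = [
    {"id": "t1", "platform": "twitter", "text": "x" * 80, "hour": 9, "shared_links": 3},
    {"id": "f1", "platform": "facebook", "text": "x" * 80, "hour": 9, "shared_links": 3},
    {"id": "r1", "platform": "reddit", "text": "y" * 20, "hour": 22, "shared_links": 0},
]
print(coordinated_pairs(posts))  # [('t1', 'f1')]
```

The Twitter and Facebook posts share content, timing, and linking behavior, so they are paired; the unrelated Reddit post is not. A real system would use learned embeddings rather than these hand-picked features.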