Subtle Censorship via Adversarial Fakeness in Kyrgyzstan
With the shift of public discourse to social media, we see two simultaneous
developments: an expansion of civic engagement, as the bar to enter the
conversation is lowered, and a reaction by both state and non-state
adversaries of free speech who seek to silence these voices. Traditional
forms of censorship struggle in this new environment to enforce the preferred
narrative of those in power. Consequently, these actors have developed new
methods for controlling the conversation that use the social media platform
itself.
Using the Central Asian republic of Kyrgyzstan as a main case study, this
talk explores how this new form of "subtle" censorship relies on pretence and
imitation, and why interdisciplinary methods of research are needed to grapple
with it. We examine how "fakeness", in the form of fake news and fake
profiles, is used as a method of subtle censorship. Comment: Accepted HotPETs talk, 201
Thinking Taxonomically about Fake Accounts: Classification, False Dichotomies, and the Need for Nuance
It is often said that war creates a fog in which it becomes difficult to
discern friend from foe on the battlefield. In the ongoing war on fake
accounts, conscious development of taxonomies of the phenomenon has yet to
occur, resulting in much confusion on the digital battlefield about what,
exactly, a fake account is. This paper intends to address this problem not by
proposing a taxonomy of fake accounts, but by proposing a systematic way to
think taxonomically about the phenomenon. Specifically, we examine fake
accounts through a combined philosophical and computer science-based
perspective. Through these lenses, we deconstruct narrow binary thinking about
fake accounts, both in the form of general false dichotomies and specifically
in relation to Facebook's conceptual framework of "Coordinated Inauthentic
Behavior" (CIB). We then address the false dichotomies by constructing a more
complex way of thinking taxonomically about fake accounts.