Politics of Adversarial Machine Learning
In addition to their security properties, adversarial machine-learning
attacks and defenses have political dimensions. They enable or foreclose
certain options for both the subjects of the machine learning systems and for
those who deploy them, creating risks for civil liberties and human rights. In
this paper, we draw on insights from science and technology studies,
anthropology, and human rights literature, to inform how defenses against
adversarial attacks can be used to suppress dissent and limit attempts to
investigate machine learning systems. To make this concrete, we use real-world
examples of how attacks such as perturbation, model inversion, or membership
inference can be used for socially desirable ends. Although the predictions of
this analysis may seem dire, there is hope. Efforts to address human rights
concerns in the commercial spyware industry provide guidance for similar
measures to ensure ML systems serve democratic, not authoritarian, ends.
Comment: Authors ordered alphabetically; 4 pages
Ethical Challenges in Data-Driven Dialogue Systems
The use of dialogue systems as a medium for human-machine interaction is an
increasingly prevalent paradigm. A growing number of dialogue systems use
conversation strategies that are learned from large datasets. There are
well-documented instances where interactions with these systems have resulted in
biased or even offensive conversations due to the data-driven training process.
Here, we highlight potential ethical issues that arise in dialogue systems
research, including: implicit biases in data-driven systems, the rise of
adversarial examples, potential sources of privacy violations, safety concerns,
special considerations for reinforcement learning systems, and reproducibility
concerns. We also suggest areas stemming from these issues that deserve further
investigation. Through this initial survey, we hope to spur research leading to
robust, safe, and ethically sound dialogue systems.
Comment: In submission to the AAAI/ACM Conference on Artificial Intelligence,
Ethics, and Society
Towards Measuring Adversarial Twitter Interactions against Candidates in the US Midterm Elections
Adversarial interactions against politicians on social media such as Twitter
have a significant impact on society. In particular, they disrupt substantive
political discussions online and may discourage people from seeking public
office. In this study, we measure the adversarial interactions against
candidates for the US House of Representatives during the run-up to the 2018 US
general election. We gather a new dataset consisting of 1.7 million tweets
involving candidates, one of the largest corpora focusing on political
discourse. We then develop a new technique for detecting tweets with toxic
content that are directed at any specific candidate. This technique allows us to
more accurately quantify adversarial interactions towards political candidates.
Further, we introduce an algorithm to induce candidate-specific adversarial
terms to capture more nuanced adversarial interactions that previous techniques
may not consider toxic. Finally, we use these techniques to outline the breadth
of adversarial interactions seen in the election, including offensive
name-calling, threats of violence, posting discrediting information, attacks on
identity, and adversarial message repetition.
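As a rough illustration only (not the paper's actual method), the two ideas in this abstract can be sketched as follows: flag a tweet as adversarial when it both mentions a candidate and contains a term from a toxicity lexicon, and "induce" candidate-specific adversarial terms by ranking words that appear far more often in candidate-directed tweets than in a background corpus. The lexicon, handles, and relative-frequency heuristic here are all invented for illustration.

```python
from collections import Counter

# Placeholder lexicon; the paper induces richer, candidate-specific terms.
TOXIC_TERMS = {"liar", "crook", "disgrace"}

def mentions_candidate(tweet: str, handle: str) -> bool:
    """Crude directedness test: the tweet mentions the candidate's handle."""
    return handle.lower() in tweet.lower()

def is_adversarial(tweet: str, handle: str, lexicon=TOXIC_TERMS) -> bool:
    """Flag tweets that mention the candidate AND contain a lexicon term."""
    if not mentions_candidate(tweet, handle):
        return False
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return bool(words & lexicon)

def induce_candidate_terms(tweets, handle, background_counts, top_k=5):
    """Toy term induction: rank words by how over-represented they are in
    candidate-directed tweets relative to a background corpus (a crude
    relative-frequency heuristic, not the paper's algorithm)."""
    counts = Counter(
        w.strip(".,!?").lower()
        for t in tweets if mentions_candidate(t, handle)
        for w in t.split()
    )
    scored = {
        w: c / (background_counts.get(w, 0) + 1)  # add-one smoothing
        for w, c in counts.items()
        if not w.startswith("@")  # drop mention tokens themselves
    }
    return sorted(scored, key=scored.get, reverse=True)[:top_k]
```

In practice a real pipeline would replace the lexicon match with a trained toxicity classifier and the frequency ratio with a statistically grounded keyword-extraction method; the sketch only shows the overall shape of the two steps.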
- …