Adversarial Cheap Talk
Adversarial attacks in reinforcement learning (RL) often assume
highly-privileged access to the victim's parameters, environment, or data.
Instead, this paper proposes a novel adversarial setting called a Cheap Talk
MDP in which an Adversary can merely append deterministic messages to the
Victim's observation, resulting in a minimal range of influence. The Adversary
cannot occlude ground truth, influence underlying environment dynamics or
reward signals, introduce non-stationarity, add stochasticity, see the Victim's
actions, or access their parameters. Additionally, we present a simple
meta-learning algorithm called Adversarial Cheap Talk (ACT) to train
Adversaries in this setting. We demonstrate that an Adversary trained with ACT
can still significantly influence the Victim's training and testing
performance, despite the highly constrained setting. Affecting train-time
performance reveals a new attack vector and provides insight into the success
and failure modes of existing RL algorithms. More specifically, we show that an
ACT Adversary is capable of harming performance by interfering with the
learner's function approximation, or instead helping the Victim's performance
by outputting useful features. Finally, we show that an ACT Adversary can
manipulate messages during train-time to directly and arbitrarily control the
Victim at test-time.
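The Cheap Talk MDP described above can be sketched as an observation wrapper: the Adversary's deterministic message is appended to the Victim's observation, and nothing else (dynamics, reward, ground truth) is touched. The gym-style interface and the `adversary_fn` name below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of a Cheap Talk MDP wrapper, assuming a gym-style
# environment API. `adversary_fn` stands in for the Adversary's
# deterministic message policy (a hypothetical name, not from the paper).
class CheapTalkWrapper:
    def __init__(self, env, adversary_fn, msg_dim):
        self.env = env
        self.adversary_fn = adversary_fn  # deterministic: obs -> message
        self.msg_dim = msg_dim

    def _augment(self, obs):
        # The Adversary can only append a message; the ground-truth
        # observation is passed through unmodified (no occlusion).
        msg = self.adversary_fn(obs)
        assert msg.shape == (self.msg_dim,)
        return np.concatenate([obs, msg])

    def reset(self):
        return self._augment(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # Reward and environment dynamics are untouched -- only the
        # Victim's observation vector grows by msg_dim entries.
        return self._augment(obs), reward, done, info
```

Because the message is a deterministic function of the observation, the wrapper introduces no non-stationarity or stochasticity, matching the constraints listed in the abstract.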
ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System
Deep neural network (DNN)-powered electrocardiogram (ECG) diagnosis systems
have recently made promising progress toward taking over tedious examinations
by cardiologists. However, their vulnerability to adversarial attacks still
lacks comprehensive investigation. Existing attacks from the image domain are
not directly applicable because of the distinct visual and dynamic properties
of ECGs. This paper therefore takes a step toward thoroughly exploring
adversarial attacks on DNN-powered ECG diagnosis systems. We analyze the
properties of ECGs to design effective attack schemes under two attack models
respectively. Our results demonstrate the blind spots of DNN-powered diagnosis
systems under adversarial attacks, calling attention to adequate
countermeasures.
Comment: Accepted by AAAI 202
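As context for the kind of attack the abstract refers to, the sketch below shows a generic FGSM-style perturbation of a 1-D signal. This is NOT the paper's ECGadv method (which is designed around ECG-specific visual and dynamic properties); the logistic "classifier" and all names here are toy assumptions for illustration only.

```python
import numpy as np

# Generic single-step adversarial perturbation (FGSM-style) on a 1-D
# signal against a toy logistic classifier p = sigmoid(w . x).
# Illustrative stand-in only -- not the ECGadv attack from the paper.
def fgsm_1d(signal, weights, label, eps=0.01):
    # Forward pass of the toy classifier.
    p = 1.0 / (1.0 + np.exp(-weights @ signal))
    # Gradient of cross-entropy loss w.r.t. the input: (p - label) * w.
    grad = (p - label) * weights
    # Step each sample in the sign of the gradient, bounded by eps,
    # so the loss on the true label increases.
    return signal + eps * np.sign(grad)
```

A real ECG attack would additionally need the perturbation to stay smooth and physiologically plausible, which is exactly the gap between image-domain attacks and the ECG domain that the paper highlights.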
Rational Multiparty Computation
The field of rational cryptography considers the design of cryptographic protocols in the presence of rational agents seeking to maximize local utility functions. This departs from the standard secure multiparty computation setting, where players are assumed to be either honest or malicious.

We detail the construction of both a two-party and a multiparty game theoretic framework for constructing rational cryptographic protocols. Our framework specifies the utility function assumptions necessary to realize the privacy, correctness, and fairness guarantees for protocols. We demonstrate that our framework correctly models cryptographic protocols, such as rational secret sharing, where existing work considers equilibrium concepts that yield unreasonable equilibria. Similarly, we demonstrate that cryptography may be applied to the game theoretic domain, constructing an auction market not realizable in the original formulation. Additionally, we demonstrate that modeling players as rational agents allows us to design a protocol that destabilizes coalitions. Thus, we establish a mutual benefit from combining the two fields, while demonstrating the applicability of our framework to real-world market environments.

We also give an application of game theory to adversarial interactions where cryptography is not necessary. Specifically, we consider adversarial machine learning, where the adversary is rational and reacts to the presence of a data miner. We give a general extension to classification algorithms that returns greater expected utility for the data miner than existing classification methods.
Adversarial Language Games for Advanced Natural Language Intelligence
We study the problem of adversarial language games, in which multiple agents
with conflicting goals compete with each other via natural language
interactions. While adversarial language games are ubiquitous in human
activities, little attention has been devoted to this field in natural language
processing. In this work, we propose a challenging adversarial language game
called Adversarial Taboo as an example, in which an attacker and a defender
compete around a target word. The attacker is tasked with inducing the defender
to utter the target word, which is kept invisible to the defender, while the
defender is tasked with detecting the target word before being induced to say
it. In
Adversarial Taboo, a successful attacker must hide its intention and subtly
induce the defender, while a competitive defender must be cautious with its
utterances and infer the intention of the attacker. Such language abilities can
facilitate many important downstream NLP tasks. To instantiate the game, we
create a game environment and a competition platform. Comprehensive experiments
and empirical studies on several baseline attack and defense strategies show
promising and interesting results. Based on analyses of the game and the
experiments, we discuss multiple promising directions for future research.
Comment: Accepted by AAAI 202
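The Adversarial Taboo rules described above can be sketched as a simple turn-based loop. The callable-agent interface, function names, and win conditions below are illustrative assumptions, not the competition platform's exact rules.

```python
# Minimal sketch of an Adversarial Taboo game loop, assuming attacker
# and defender are simple callables over the dialogue history. Names
# and termination conditions are illustrative, not the paper's spec.
def play_taboo(attacker, defender, target_word, max_turns=10):
    history = []
    for _ in range(max_turns):
        utterance = attacker(history)        # attacker speaks first
        history.append(("attacker", utterance))
        reply, guess = defender(history)     # defender replies, may guess
        history.append(("defender", reply))
        if target_word in reply.lower().split():
            return "attacker"   # defender was induced to utter the word
        if guess == target_word:
            return "defender"   # defender detected the word in time
    return "draw"
```

The loop makes the strategic tension in the abstract concrete: the attacker must steer the conversation without revealing the word, while the defender must both filter its own utterances and infer the attacker's intention.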
InfoSwarms: Drone Swarms and Information Warfare
Drone swarms, which can be used at sea, on land, in the air, and even in space, are fundamentally information-dependent weapons. No study to date has examined drone swarms in the context of information warfare writ large. This article explores the dependence of these swarms on information and the resultant connections with areas of information warfare—electronic, cyber, space, and psychological—drawing on open-source research and qualitative reasoning. Overall, the article offers insights into how this important emerging technology fits into the broader defense ecosystem and outlines practical approaches to strengthening related information warfare capabilities