Sophisticated Attacks on Decoy Ballots: The Devil's Menu and the Market for Lemons
Decoy ballots do not count in election outcomes, but otherwise they are
indistinguishable from real ballots. By means of a game-theoretical model, we
show that decoy ballots may not provide effective protection against a
malevolent adversary trying to buy real ballots. If the citizenry is divided
into subgroups (or districts), the adversary can construct a so-called "Devil's
Menu" consisting of several prices. In equilibrium, the adversary can buy the
real ballots of any strict subset of districts at a price corresponding to the
willingness to sell on the part of the citizens holding such ballots. By
contrast, decoy voters are trapped into selling their ballots at a low, or even
negligible, price. Blowing up the adversary's budget by introducing decoy
ballots may thus turn out to be futile. The Devil's Menu can also be applied to
the well-known "Lemons Problem".
Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition
Neural models enjoy widespread use across a variety of tasks and have grown
to become crucial components of many industrial systems. Despite their
effectiveness and extensive popularity, they are not without their exploitable
flaws. Initially applied to computer vision systems, the generation of
adversarial examples is a process in which seemingly imperceptible
perturbations are made to an image, with the purpose of inducing a deep
learning based classifier to misclassify the image. Due to recent trends in
speech processing, this has become a noticeable issue in speech recognition
models. In late 2017, an attack was shown to be quite effective against the
Speech Commands classification model. Limited-vocabulary speech classifiers,
such as the Speech Commands model, are used quite frequently in a variety of
applications, particularly in managing automated attendants in telephony
contexts. As such, adversarial examples produced by this attack could have
real-world consequences. While previous work in defending against these
adversarial examples has investigated using audio preprocessing to reduce or
distort adversarial noise, this work explores the idea of flooding particular
frequency bands of an audio signal with random noise in order to detect
adversarial examples. This technique of flooding, which does not require
retraining or modifying the model, is inspired by work done in computer vision
and builds on the idea that speech classifiers are relatively robust to natural
noise. A combined defense incorporating 5 different frequency bands for
flooding the signal with noise outperformed other existing defenses in the
audio space, detecting adversarial examples with 91.8% precision and 93.5%
recall.
Comment: Orally presented at the 18th IEEE International Symposium on Signal
Processing and Information Technology (ISSPIT) in Louisville, Kentucky, USA,
December 2018. 5 pages, 2 figures.
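The flooding idea described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the Gaussian noise model, the FFT-based band masking, and the flip-ratio decision rule are all assumptions standing in for whatever the paper actually uses.

```python
import numpy as np

def band_flood(signal, sample_rate, band, noise_std, rng):
    """Add Gaussian noise confined to one frequency band via FFT masking."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= band[0]) & (freqs < band[1])
    spectrum[mask] += rng.normal(0.0, noise_std, size=int(mask.sum()))
    return np.fft.irfft(spectrum, n=len(signal))

def looks_adversarial(classify, signal, sample_rate, bands,
                      noise_std=5.0, flip_ratio=0.5):
    """Flag the input if flooding enough bands changes the predicted label.

    `classify` maps a waveform to a label; `bands` is a list of (low, high)
    pairs in Hz. The intuition from the abstract: benign speech is robust to
    natural noise, while adversarial perturbations are fragile under it.
    """
    rng = np.random.default_rng(0)
    original = classify(signal)
    flips = sum(
        classify(band_flood(signal, sample_rate, b, noise_std, rng)) != original
        for b in bands
    )
    return flips / len(bands) >= flip_ratio
```

Because the defense only queries the classifier, it needs no retraining or model modification, which matches the abstract's claim.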
SENATUS: An Approach to Joint Traffic Anomaly Detection and Root Cause Analysis
In this paper, we propose a novel approach, called SENATUS, for joint traffic
anomaly detection and root-cause analysis. Inspired by the concept of a
senate, the key idea of the proposed approach is divided into three stages:
election, voting, and decision. At the election stage, a small number of
traffic flows (termed senator flows) are chosen to approximately
represent the total (usually huge) set of traffic flows.
traffic flows. In the voting stage, anomaly detection is applied on the senator
flows and the detected anomalies are correlated to identify the most possible
anomalous time bins. Finally in the decision stage, a machine learning
technique is applied to the senator flows of each anomalous time bin to find
the root cause of the anomalies. We evaluate SENATUS using traffic traces
collected from the Pan European network, GEANT, and compare against another
approach which detects anomalies using lossless compression of traffic
histograms. We show the effectiveness of SENATUS in diagnosing anomaly types:
network scans and DoS/DDoS attacks.
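The election/voting/decision pipeline can be sketched minimally, assuming flows arrive as a flow-by-time-bin volume matrix. The per-stage heuristics here (volume-based election, z-score voting) are illustrative stand-ins for the paper's actual methods, and the machine-learning root-cause step is abstracted to selecting the time bins it would analyze.

```python
import numpy as np

def elect_senators(flow_matrix, k):
    """Election: pick the k flows with the largest total volume as senators.

    flow_matrix has shape (n_flows, n_time_bins); the selection criterion is
    an assumption -- any representative-sampling rule could be used here.
    """
    totals = flow_matrix.sum(axis=1)
    return np.argsort(totals)[-k:]

def vote(flow_matrix, senators, z_thresh=3.0):
    """Voting: each senator flags time bins where its volume deviates strongly."""
    votes = np.zeros(flow_matrix.shape[1], dtype=int)
    for i in senators:
        series = flow_matrix[i]
        mu, sigma = series.mean(), series.std()
        if sigma == 0:
            continue
        votes += (np.abs(series - mu) / sigma > z_thresh).astype(int)
    return votes

def anomalous_bins(votes, min_votes):
    """Decision input: time bins flagged by at least min_votes senators.

    In the paper, a learning technique then inspects the senator flows of
    each such bin to find the root cause; that step is omitted here.
    """
    return np.flatnonzero(votes >= min_votes)
```

Correlating votes across senators, rather than trusting any single flow's detector, is what narrows a huge flow set down to a few suspect time bins.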
STAND: Sanitization Tool for ANomaly Detection
The efficacy of Anomaly Detection (AD) sensors depends heavily on the quality of the data used to train them. Artificial or contrived training data may not provide a realistic view of the deployment environment. Most realistic data sets are dirty; that is, they contain a number of attacks or anomalous events. The size of these high-quality training data sets makes manual removal or labeling of attack data infeasible. As a result, sensors trained on this data can miss attacks and their variations. We propose extending the training phase of AD sensors (in a manner agnostic to the underlying AD algorithm) to include a sanitization phase. This phase generates multiple models conditioned on small slices of the training data. We use these "micro-models" to produce provisional labels for each training input, and we combine the micro-models in a voting scheme to determine which parts of the training data may represent attacks. Our results suggest that this phase automatically and significantly improves the quality of unlabeled training data by making it as "attack-free" and "regular" as possible in the absence of absolute ground truth. We also show how a collaborative approach that combines models from different networks or domains can further refine the sanitization process to thwart targeted training or mimicry attacks against a single site.
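The micro-model voting scheme can be sketched as below. The sketch is algorithm-agnostic in spirit, but for a runnable example each micro-model is reduced to a per-feature mean/standard-deviation profile; the slice count, voting threshold, and z-score test are all illustrative assumptions, not the paper's actual sensors or parameters.

```python
import numpy as np

def sanitize(train, n_slices=5, vote_thresh=0.5, z_thresh=3.0):
    """Drop training inputs that a majority of micro-models consider anomalous.

    train has shape (n_samples, n_features). Each micro-model is trained on
    one time-ordered slice of the data; an attack confined to a few slices
    is still flagged by the models built from the other, cleaner slices.
    """
    slices = np.array_split(train, n_slices)
    # Micro-model = (mean, std) per feature -- a stand-in for whatever AD
    # sensor is actually deployed (the scheme is agnostic to this choice).
    models = [(s.mean(axis=0), s.std(axis=0) + 1e-9) for s in slices]
    votes = np.zeros(len(train))
    for mu, sigma in models:
        z = np.abs((train - mu) / sigma).max(axis=1)
        votes += (z > z_thresh)  # provisional "attack" label from this model
    keep = votes / n_slices < vote_thresh
    return train[keep]
```

The voting step is what makes the scheme robust: a poisoned slice corrupts only its own micro-model, which is then outvoted by the models built from the remaining slices.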