Towards Debugging and Improving Adversarial Robustness Evaluations
Despite exhibiting unprecedented success in many application domains, machine-learning models have been shown to be vulnerable to adversarial examples, i.e., maliciously perturbed inputs that are able to subvert their predictions at test time. Rigorously certifying robustness against such perturbations would require enumerating all possible outputs for all possible inputs, and despite impressive progress in this field, such formal methods remain difficult to scale to modern deep learning systems. For this reason, empirical methods are often used instead: adversarial perturbations are optimized via gradient descent, minimizing a loss function designed to increase the probability of misleading the model's predictions. To understand the sensitivity of a model to such attacks, and to counter their effects, machine-learning model designers craft worst-case adversarial perturbations and test them against the model under evaluation. However, many of the proposed defenses have been shown to provide a false sense of security, due to failures of the attacks rather than actual improvements in the models' robustness, and have indeed been broken under more rigorous evaluations. Although guidelines and best practices have been suggested to improve current adversarial robustness evaluations, the lack of automatic testing and debugging tools makes it difficult to apply these recommendations systematically.
To this end, we tackle three different challenges: (1) we investigate how adversarial robustness evaluations can be performed efficiently, by proposing a novel attack that finds minimum-norm adversarial perturbations; (2) we propose a framework for debugging adversarial robustness evaluations, by defining metrics that reveal faulty evaluations as well as mitigations to patch the detected problems; and (3) we show how to employ a surrogate model to improve the success of transfer-based attacks, which are useful when gradient-based attacks fail due to problems in the gradient information.
To improve the quality of robustness evaluations, we propose a novel attack, referred to as the Fast Minimum-Norm (FMN) attack, which competes with state-of-the-art attacks in terms of solution quality while outperforming them in computational cost and robustness to sub-optimal configurations of the attack hyperparameters. These are all desirable characteristics of attacks used in robustness evaluations, as the aforementioned problems often arise from the use of sub-optimal attack hyperparameters, including, e.g., the number of attack iterations, the step size, and the choice of an inappropriate loss function. The correct tuning of these variables is often neglected; hence, we design a novel framework that helps debug the optimization of adversarial examples, by means of quantitative indicators that unveil common problems and failures during the attack optimization process, e.g., in the configuration of the hyperparameters. Commonly accepted best practices further suggest validating the target model with alternative strategies, among which the use of a surrogate model to craft adversarial examples that are then transferred to the model being evaluated is useful to check for gradient obfuscation. However, creating transferable adversarial examples effectively is not easy, as many factors influence the success of this strategy.
In the context of this research, we use a first-order model to identify the main phenomena that affect transferability, and we suggest best practices to create adversarial examples that transfer well to the target models.
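As a rough illustration of the gradient-based optimization this abstract refers to, the sketch below implements a generic PGD-style attack in PyTorch. It is a minimal sketch of the general procedure, not the FMN attack itself; `model`, `x`, and `y` are assumed to be a differentiable classifier and a labeled input batch.

```python
# Minimal sketch of a gradient-based evasion attack (PGD-style), assuming a
# differentiable PyTorch classifier `model` and an input batch (x, y) in [0, 1].
# This is NOT the FMN attack, only the generic optimization it builds on.
import torch


def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, n_iter=10):
    """Maximize the cross-entropy loss within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()            # ascend the loss
            x_adv = torch.clamp(x_adv, x - eps, x + eps)  # project onto the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)          # keep a valid input range
    return x_adv.detach()
```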
Explaining Machine Learning DGA Detectors from DNS Traffic Data
One of the most common causes of outages in online systems is a widely popular cyber attack known as Distributed Denial of Service (DDoS), in which a network of infected devices (a botnet) is exploited to flood the computational capacity of services at the command of an attacker. The attack is carried out by leveraging the Domain Name System (DNS) through Domain Generation Algorithms (DGAs), a stealthy connection strategy that nonetheless leaves suspicious data patterns. Most recent approaches to detecting such threats rely on Machine Learning (ML), which can be highly effective in analyzing and classifying massive amounts of data. Although they perform strongly, ML models retain a certain degree of obscurity in their decision-making process. To cope with this problem, a branch of ML known as Explainable ML tries to break down the black-box nature of classifiers and make them interpretable and human-readable. This work addresses Explainable ML in the context of botnet and DGA detection and, to the best of our knowledge, is the first to concretely break down the decisions of ML classifiers devised for botnet/DGA detection, providing both global and local explanations.
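As a rough illustration of the detection-plus-explanation pipeline described above, the sketch below trains a toy classifier on simple lexical features of domain names and inspects its global feature importances. The feature set, the tiny dataset, and its labels are illustrative assumptions, not the features or data used in this work.

```python
# Minimal sketch: classify domain names as benign vs. DGA-generated from simple
# lexical features, then inspect which features drive the decisions globally.
import math
from collections import Counter

import numpy as np
from sklearn.ensemble import RandomForestClassifier


def lexical_features(domain):
    counts = Counter(domain)
    entropy = -sum((c / len(domain)) * math.log2(c / len(domain)) for c in counts.values())
    digit_ratio = sum(ch.isdigit() for ch in domain) / len(domain)
    return [len(domain), entropy, digit_ratio]


domains = ["google", "wikipedia", "x9k2qzv7pl", "qwrtzpl0d8a3"]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = DGA-like (toy labels)

X = np.array([lexical_features(d) for d in domains])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# Global explanation: which features matter overall for the classifier.
for name, imp in zip(["length", "entropy", "digit_ratio"], clf.feature_importances_):
    print(f"{name}: {imp:.2f}")
# Local (per-domain) explanations would additionally attribute each single
# prediction to its features, e.g., via SHAP-style attribution methods.
```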
Robust Machine Learning for Malware Detection over Time
The presence and persistence of Android malware is an ongoing threat in this information era, and machine-learning technologies are now extensively used to deploy more effective detectors that can block the majority of these malicious programs. However, these algorithms were not designed to track the natural evolution of malware, and their performance degrades significantly over time because of this concept drift.
Currently, state-of-the-art techniques only focus on detecting the presence of such drift, or address it by relying on frequent model updates. Hence, there is a lack of knowledge about what causes the concept drift, and ad hoc solutions that can counter the passing of time remain under-investigated.
In this work, we begin to address these issues by proposing (i) a drift-analysis framework to identify which characteristics of the data are causing the drift, and (ii) SVM-CB, a time-aware classifier that leverages the drift-analysis information to slow down the performance drop. We highlight the efficacy of our contribution by comparing its degradation over time with that of a state-of-the-art classifier, and we show that SVM-CB better withstands the distribution changes that naturally characterize the malware domain.
We conclude by discussing the limitations of our approach and how our contribution can be taken as a first step towards more time-resistant classifiers that not only tackle, but also understand, the concept drift that affects the data.
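A minimal sketch of the kind of time-aware evaluation this abstract alludes to, assuming a feature matrix `X`, labels `y`, and per-sample `timestamps` are available. It only illustrates how performance degradation over time can be measured; it is not the SVM-CB classifier or the drift-analysis framework.

```python
# Minimal sketch of a time-aware evaluation protocol: train on the oldest samples
# and measure how accuracy degrades on later time windows (concept drift).
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC


def temporal_degradation(X, y, timestamps, n_windows=4):
    order = np.argsort(timestamps)
    X, y = X[order], y[order]
    splits = np.array_split(np.arange(len(y)), n_windows + 1)

    clf = LinearSVC().fit(X[splits[0]], y[splits[0]])  # train on the oldest window
    # Accuracy on each subsequent window; a drop over windows indicates drift.
    return [accuracy_score(y[idx], clf.predict(X[idx])) for idx in splits[1:]]
```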
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Transferability captures the ability of an attack against a machine-learning
model to be effective against a different, potentially unknown, model.
Empirical evidence for transferability has been shown in previous work, but the
underlying reasons why an attack transfers or not are not yet well understood.
In this paper, we present a comprehensive analysis aimed at investigating the
transferability of both test-time evasion and training-time poisoning attacks.
We provide a unifying optimization framework for evasion and poisoning attacks,
and a formal definition of transferability of such attacks. We highlight two
main factors contributing to attack transferability: the intrinsic adversarial
vulnerability of the target model, and the complexity of the surrogate model
used to optimize the attack. Based on these insights, we define three metrics
that impact an attack's transferability. Interestingly, our results derived
from theoretical analysis hold for both evasion and poisoning attacks, and are
confirmed experimentally using a wide range of linear and non-linear
classifiers and datasets.
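As an illustration of one plausible transferability indicator in the spirit of this analysis, the sketch below measures how well the input gradients of a surrogate and a target model align at the same inputs; higher alignment suggests attacks optimized on the surrogate are more likely to transfer. The models and data are assumed to be given, and this is not necessarily one of the exact metrics defined in the paper.

```python
# Minimal sketch: cosine alignment between the loss gradients of a surrogate
# and a target PyTorch classifier, averaged over a batch (x, y).
import torch
import torch.nn.functional as F


def gradient_alignment(surrogate, target, x, y):
    grads = []
    for model in (surrogate, target):
        xg = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(xg), y)
        grads.append(torch.autograd.grad(loss, xg)[0].flatten(1))  # one gradient per sample
    return F.cosine_similarity(grads[0], grads[1], dim=1).mean()
```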
Robustness-Congruent Adversarial Training for Secure Machine Learning Model Updates
Machine-learning models demand periodic updates to improve their average
accuracy, exploiting novel architectures and additional data. However, a
newly-updated model may commit mistakes that the previous model did not make.
Such misclassifications are referred to as negative flips, and are experienced
by users as a regression of performance. In this work, we show that this problem
also affects robustness to adversarial examples, thereby hindering the
development of secure model update practices. In particular, when updating a
model to improve its adversarial robustness, some previously-ineffective
adversarial examples may become misclassified, causing a regression in the
perceived security of the system. We propose a novel technique, named
robustness-congruent adversarial training, to address this issue. It amounts to
fine-tuning a model with adversarial training, while constraining it to retain
higher robustness on the adversarial examples that were correctly classified
before the update. We show that our algorithm and, more generally, learning
with non-regression constraints, provides a theoretically-grounded framework to
train consistent estimators. Our experiments on robust models for computer
vision confirm that (i) both accuracy and robustness, even if improved after
model update, can be affected by negative flips, and (ii) our
robustness-congruent adversarial training can mitigate the problem,
outperforming competing baseline methods.
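The schematic sketch below illustrates the general idea of learning with a non-regression constraint during adversarial fine-tuning: adversarial examples that the old model classified correctly receive an extra penalty if the new model misclassifies them. It is a simplified stand-in, not the exact robustness-congruent adversarial training objective; `new_model`, `old_model`, `x_adv`, and `y` are assumed inputs.

```python
# Schematic sketch of a non-regression penalty during adversarial fine-tuning.
import torch
import torch.nn.functional as F


def non_regression_loss(new_model, old_model, x_adv, y, beta=1.0):
    logits_new = new_model(x_adv)
    base_loss = F.cross_entropy(logits_new, y)  # standard adversarial training loss

    with torch.no_grad():
        # Mask of adversarial examples the OLD model already classified correctly.
        old_correct = old_model(x_adv).argmax(dim=1).eq(y)

    if old_correct.any():
        # Extra penalty where a negative flip in robustness would occur.
        penalty = F.cross_entropy(logits_new[old_correct], y[old_correct])
        return base_loss + beta * penalty
    return base_loss
```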
Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization
Evaluating the adversarial robustness of machine-learning models using gradient-based attacks is challenging. In this work, we show that hyperparameter optimization can improve fast minimum-norm attacks by automating the selection of the loss function, the optimizer, and the step-size scheduler, along with the corresponding hyperparameters. Our extensive evaluation, involving several robust models, demonstrates the improved efficacy of fast minimum-norm attacks when coupled with hyperparameter optimization. We release our open-source code at https://github.com/pralab/HO-FMN.
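A minimal sketch of what such a hyperparameter search can look like: enumerate candidate loss functions and step sizes and keep the configuration that lowers robust accuracy the most. `run_attack` and `robust_accuracy` are hypothetical helpers standing in for the attack and its evaluation; the released repository at https://github.com/pralab/HO-FMN implements the actual pipeline.

```python
# Minimal sketch of hyperparameter search for a gradient-based attack.
from itertools import product


def tune_attack(model, x, y, run_attack, robust_accuracy):
    losses = ["cross_entropy", "logit_difference"]  # candidate loss functions
    step_sizes = [0.1, 0.05, 0.01]                  # candidate step sizes

    best = None
    for loss_name, step in product(losses, step_sizes):
        x_adv = run_attack(model, x, y, loss=loss_name, step_size=step)
        acc = robust_accuracy(model, x_adv, y)      # lower means a stronger attack
        if best is None or acc < best[0]:
            best = (acc, {"loss": loss_name, "step_size": step})
    return best
```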
Detecting Attacks Against Deep Reinforcement Learning for Autonomous Driving
With the advent of deep reinforcement learning, we witness the spread of novel autonomous driving agents that learn how to drive safely among humans. However, skilled attackers might steer the decision-making process of these agents through minimal perturbations applied to the readings of their hardware sensors. These perturbations force the behavior of the victim agent to change unexpectedly, increasing the likelihood of crashes by inhibiting its braking capability or coercing it into constantly changing lanes. To counter these phenomena, we propose a detector that can be mounted on autonomous driving cars to spot the presence of ongoing attacks. The detector first profiles the agent's behavior in the absence of attacks by looking at the representation learned during training. Once deployed, the detector discards all decisions that deviate from the regular driving pattern. We empirically highlight the detection capabilities of our work by testing it against unseen attacks deployed with increasing strength.
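A minimal sketch of the profiling idea described above, assuming access to an `encoder` that exposes the agent's learned state representation and to a set of attack-free observations. The nearest-neighbor distance and percentile threshold are illustrative choices, not the paper's detector.

```python
# Minimal sketch: profile clean-state embeddings, then flag observations whose
# embedding lies unusually far from the profile at deployment time.
import numpy as np


class RepresentationDetector:
    def __init__(self, encoder, clean_states, percentile=99):
        self.encoder = encoder
        self.profile = np.stack([encoder(s) for s in clean_states])
        # Calibrate a threshold from leave-one-out nearest-neighbor distances.
        dists = []
        for i, e in enumerate(self.profile):
            others = np.delete(self.profile, i, axis=0)
            dists.append(np.min(np.linalg.norm(others - e, axis=1)))
        self.threshold = np.percentile(dists, percentile)

    def _distance(self, embedding):
        # Distance to the closest embedding observed during clean driving.
        return np.min(np.linalg.norm(self.profile - embedding, axis=1))

    def is_anomalous(self, state):
        return self._distance(self.encoder(state)) > self.threshold
```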
Stateful Detection of Adversarial Reprogramming
Adversarial reprogramming allows stealing computational resources by
repurposing machine learning models to perform a different task chosen by the
attacker. For example, a model trained to recognize images of animals can be
reprogrammed to recognize medical images by embedding an adversarial program in
the images provided as inputs. This attack can be perpetrated even if the
target model is a black box, provided that the machine-learning model is
provided as a service and the attacker can query the model and collect its
outputs. So far, no defense has been demonstrated effective in this scenario.
We show for the first time that this attack is detectable using stateful
defenses, which store the queries made to the classifier and detect the
abnormal cases in which they are similar. Once a malicious query is detected,
the account of the user who made it can be blocked. Thus, the attacker must
create many accounts to perpetrate the attack. To decrease this number, the
attacker could create the adversarial program against a surrogate classifier
and then fine-tune it by making a few queries to the target model. In this
scenario, the effectiveness of the stateful defense is reduced, but we show
that it is still effective.
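A minimal sketch of a stateful defense of this kind: keep a per-account history of query embeddings and flag an account whose queries are suspiciously similar to one another. The embedding function and the similarity and count thresholds are illustrative assumptions.

```python
# Minimal sketch: detect accounts issuing many mutually similar queries,
# the anomaly exploited to spot adversarial reprogramming attempts.
import numpy as np


class StatefulDetector:
    def __init__(self, embed, sim_threshold=0.95, max_similar=10):
        self.embed = embed                # maps a query to a feature vector
        self.sim_threshold = sim_threshold
        self.max_similar = max_similar
        self.history = {}                 # account id -> list of unit embeddings

    def check(self, account_id, query):
        e = self.embed(query)
        e = e / np.linalg.norm(e)
        past = self.history.setdefault(account_id, [])
        # Count previous queries from this account that are near-duplicates.
        n_similar = sum(float(np.dot(e, p)) > self.sim_threshold for p in past)
        past.append(e)
        return n_similar >= self.max_similar  # True -> block the account
```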
Why Adversarial Reprogramming Works, When It Fails, and How to Tell the Difference
Adversarial reprogramming allows repurposing a machine-learning model to
perform a different task. For example, a model trained to recognize animals can
be reprogrammed to recognize digits by embedding an adversarial program in the
digit images provided as input. Recent work has shown that adversarial
reprogramming may not only be used to abuse machine-learning models provided as
a service, but also beneficially, to improve transfer learning when training
data is scarce. However, the factors affecting its success are still largely
unexplained. In this work, we develop a first-order linear model of adversarial
reprogramming to show that its success inherently depends on the size of the
average input gradient, which grows when input gradients are more aligned, and
when inputs have higher dimensionality. The results of our experimental
analysis, involving fourteen distinct reprogramming tasks, show that the above
factors are correlated with the success and the failure of adversarial
reprogramming.
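A minimal sketch of the two first-order quantities mentioned above: the norm of the average input gradient over a batch, and a crude alignment score measuring how much of the individual gradient mass survives averaging. The exact definitions used in the paper may differ; this is only an assumption-laden illustration for a PyTorch classifier `model` and a batch `(x, y)`.

```python
# Minimal sketch: size of the average input gradient and a simple alignment score.
import torch
import torch.nn.functional as F


def avg_gradient_statistics(model, x, y):
    xg = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(xg), y, reduction="sum")
    grads = torch.autograd.grad(loss, xg)[0].flatten(1)     # one gradient per sample

    avg_grad_norm = grads.mean(dim=0).norm()                # size of the average gradient
    alignment = avg_grad_norm / grads.norm(dim=1).mean()    # close to 1 when gradients align
    return avg_grad_norm.item(), alignment.item()
```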