    Adversarial behaviours knowledge area

    The technological advancements witnessed by our society in recent decades have brought improvements in our quality of life, but they have also created a number of opportunities for attackers to cause harm. Before the Internet revolution, most crime and malicious activity generally required a victim and a perpetrator to come into physical contact, and this limited the reach that malicious parties had. Technology has removed the need for physical contact to perform many types of crime, and now attackers can reach victims anywhere in the world, as long as they are connected to the Internet. This has revolutionised the characteristics of crime and warfare, allowing operations that would not have been possible before. In this document, we provide an overview of the malicious operations that are happening on the Internet today. We first provide a taxonomy of malicious activities based on the attacker’s motivations and capabilities, and then move on to the technological and human elements that adversaries require to run a successful operation. We then discuss a number of frameworks that have been proposed to model malicious operations. Since adversarial behaviours are not a purely technical topic, we draw from research in a number of fields (computer science, criminology, war studies). While doing this, we discuss how these frameworks can be used by researchers and practitioners to develop effective mitigations against malicious online operations.

    Adversarial Machine Learning in Network Intrusion Detection Systems

    Adversarial examples are inputs to a machine learning system intentionally crafted by an attacker to fool the model into producing an incorrect output. These examples have achieved a great deal of success in several domains such as image recognition, speech recognition and spam detection. In this paper, we study the nature of the adversarial problem in Network Intrusion Detection Systems (NIDS). We focus on the attack perspective, which includes techniques to generate adversarial examples capable of evading a variety of machine learning models. More specifically, we explore the use of evolutionary computation (particle swarm optimization and genetic algorithm) and deep learning (generative adversarial networks) as tools for adversarial example generation. To assess the performance of these algorithms in evading a NIDS, we apply them to two publicly available data sets, namely the NSL-KDD and UNSW-NB15, and we contrast them to a baseline perturbation method: Monte Carlo simulation. The results show that our adversarial example generation techniques cause high misclassification rates in eleven different machine learning models, along with a voting classifier. Our work highlights the vulnerability of machine learning-based NIDS in the face of adversarial perturbation.
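The abstract names particle swarm optimization among the adversarial example generators. Below is a minimal sketch, assuming a synthetic tabular dataset and a RandomForestClassifier standing in for the NIDS, of how a PSO loop could search for a bounded perturbation that flips a prediction; the swarm size, iteration count, and budget eps are illustrative choices, not the paper's setup on NSL-KDD and UNSW-NB15.

```python
# Minimal PSO-based adversarial perturbation sketch (illustrative assumptions:
# synthetic data, RandomForest surrogate model, hand-picked hyper-parameters).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x0, y0 = X[0], y[0]                      # a sample whose prediction we try to flip
n_particles, n_iter, eps = 30, 50, 1.0   # swarm size, iterations, L_inf budget

def fitness(delta):
    """Probability the model still assigns to the true class: lower is better."""
    return model.predict_proba((x0 + delta).reshape(1, -1))[0, y0]

# Particles are candidate perturbations, clipped to the budget at every step.
pos = rng.uniform(-eps, eps, size=(n_particles, x0.size))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.full(n_particles, np.inf)
gbest, gbest_fit = pos[0].copy(), np.inf

for _ in range(n_iter):
    for i in range(n_particles):
        f = fitness(pos[i])
        if f < pbest_fit[i]:
            pbest[i], pbest_fit[i] = pos[i].copy(), f
        if f < gbest_fit:
            gbest, gbest_fit = pos[i].copy(), f
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -eps, eps)

print("original prediction:  ", model.predict(x0.reshape(1, -1))[0])
print("perturbed prediction: ", model.predict((x0 + gbest).reshape(1, -1))[0])
```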

    Efficient Energy Distribution in a Smart Grid using Multi-Player Games

    Algorithms and models based on game theory have become prominent techniques for the design of digital controllers for critical systems. Indeed, such techniques enable automatic synthesis: given a model of the environment and a property that the controller must enforce, they automatically produce a correct controller, when one exists. In the present paper, we consider a class of concurrent, weighted, multi-player games that are well suited to model and study the interactions of several agents who are competing for measurable resources such as energy. We prove that a subclass of those games always admits a Nash equilibrium, i.e. a situation in which all players play in such a way that they have no incentive to deviate. Moreover, the strategies yielding those Nash equilibria have a special structure: when one of the agents deviates from the equilibrium, all the others form a coalition that enforces a retaliation mechanism punishing the deviant agent. We apply those results to a real-life case study in which several smart houses that produce their own energy with solar panels, and can share this energy among themselves over a micro-grid, must distribute the use of this energy throughout the day in order to avoid consuming electricity that must be bought from the global grid. We demonstrate that our theory allows one to synthesise an efficient controller for these houses: using penalties to be paid in the utility bill as an incentive, we force the houses to follow a pre-computed schedule that maximises the proportion of the locally produced energy that is consumed.
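A minimal sketch of the retaliation intuition, with invented numbers: if leaving the pre-computed schedule costs a house both the shared micro-grid energy and a utility-bill penalty, following the schedule is always the cheaper choice, which is what makes the schedule stable. The grid price, penalty, and kWh figures below are assumptions made for illustration; the paper works with a far richer concurrent weighted multi-player game model.

```python
# Toy deviation/retaliation comparison (all numbers are illustrative assumptions).

GRID_PRICE = 0.30    # cost per kWh bought from the global grid
PENALTY    = 2.00    # utility-bill penalty for deviating from the schedule

def daily_cost(bought_kwh, penalised):
    """One house's cost for the day: grid purchases plus an optional penalty."""
    return bought_kwh * GRID_PRICE + (PENALTY if penalised else 0.0)

# Following the schedule: the micro-grid shares locally produced solar energy,
# so the house only needs to buy 1 kWh from the global grid.
on_schedule = daily_cost(bought_kwh=1.0, penalised=False)

# Deviating: the other houses retaliate by withholding their surplus, so the
# deviant house buys 4 kWh from the grid and also pays the penalty.
deviating = daily_cost(bought_kwh=4.0, penalised=True)

print(f"cost when following the schedule: {on_schedule:.2f}")
print(f"cost after deviating (with retaliation): {deviating:.2f}")
assert deviating > on_schedule  # no profitable deviation -> the schedule is stable
```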

    Ensuring the resilience of wireless sensor networks to malicious data injections through measurements inspection

    Malicious data injections pose a severe threat to systems based on Wireless Sensor Networks (WSNs), since they give the attacker control over the measurements and, in turn, over the system's status and response. Malicious measurements are particularly threatening when used to spoof or mask events of interest, thus eliciting or preventing desirable responses. Spoofing and masking attacks are particularly difficult to detect since they depict plausible behaviours, especially if multiple sensors have been compromised and collude to inject a coherent set of malicious measurements. Previous work has tackled the problem through measurements inspection, which analyses the inter-measurement correlations induced by the physical phenomena. However, these techniques consider simplistic attacks and are not robust to collusion. Moreover, they assume highly predictable patterns in the measurement distribution, which are invalidated by the unpredictability of events. We design a set of techniques that effectively detect malicious data injections in the presence of sophisticated collusion strategies, when one or more events manifest. Moreover, we build a methodology to characterise the likely compromised sensors. We also design diagnosis criteria that allow us to distinguish anomalies arising from malicious interference from those caused by faults. In contrast with previous work, we test the robustness of our methodology with automated and sophisticated attacks, where the attacker aims to evade detection. We conclude that our approach outperforms state-of-the-art approaches. Moreover, we quantitatively estimate the WSN's degree of resilience and provide a methodology to give a WSN owner an assured degree of resilience by automatically designing the WSN deployment. To deal with the extreme scenario where the attacker has compromised most of the WSN, we also propose a combination with software attestation techniques, which are more reliable when malicious data originates from compromised software, but also more expensive, achieving an excellent trade-off between cost and resilience.
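One concrete reading of measurements inspection, under simplifying assumptions: compare each sensor against a robust consensus of the network and flag sensors whose correlation with it drops. The scalar phenomenon, median consensus, injected drift, and fixed threshold below are illustrative stand-ins, not the detection, characterisation, or diagnosis criteria developed in the work.

```python
# Hedged sketch of measurements inspection: flag sensors whose readings no longer
# correlate with the rest of the network. Event model, offsets and threshold are
# illustrative assumptions; real collusion-aware detection is more sophisticated.
import numpy as np

rng = np.random.default_rng(1)
T, n_sensors = 500, 8
phenomenon = np.sin(np.linspace(0, 6 * np.pi, T)) * 10 + 25   # e.g. temperature

# Honest sensors observe the phenomenon plus noise; two colluding sensors inject
# a coherent slow drift that masks the real signal.
readings = phenomenon[None, :] + rng.normal(0, 0.5, size=(n_sensors, T))
readings[6] = 25 + np.linspace(0, 5, T) + rng.normal(0, 0.5, T)
readings[7] = 25 + np.linspace(0, 5, T) + rng.normal(0, 0.5, T)

consensus = np.median(readings, axis=0)   # robust estimate of the true signal
scores = np.array([np.corrcoef(readings[i], consensus)[0, 1]
                   for i in range(n_sensors)])

THRESHOLD = 0.8
suspected = np.where(scores < THRESHOLD)[0]
print("correlation with consensus:", np.round(scores, 2))
print("suspected compromised sensors:", suspected)
```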

    Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks

    As the will to deploy neural network models on embedded systems grows, and considering the related memory footprint and energy consumption issues, lighter ways to store neural networks, such as weight quantization, and more efficient inference methods have become major research topics. In parallel, adversarial machine learning has recently attracted significant attention, unveiling critical flaws of machine learning models, especially neural networks. In particular, perturbed inputs called adversarial examples have been shown to fool a model into making incorrect predictions. In this article, we investigate the adversarial robustness of quantized neural networks under different threat models for a classical supervised image classification task. We show that quantization does not offer any robust protection, that it results in a severe form of gradient masking, and we advance some hypotheses to explain this. However, we experimentally observe poor transferability, which we explain by a quantization value-shift phenomenon and gradient misalignment, and we explore how these results can be exploited with an ensemble-based defense.
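As a rough sketch of the setting, the code below applies uniform post-training quantization to a toy linear model and evaluates an FGSM-style perturbation, crafted on the full-precision copy, against both the full-precision and the quantized copies. The logistic-regression model, synthetic data, 4-bit uniform quantization, and eps value are assumptions made for illustration; the article studies quantized image classifiers under several threat models.

```python
# Hedged sketch: post-training weight quantization plus an FGSM-style attack on a
# tiny logistic-regression "network" (all choices illustrative, not the article's).
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 50
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

# Train a full-precision model with plain gradient descent on the logistic loss.
w = np.zeros(d)
for _ in range(500):
    z = np.clip(X @ w, -30, 30)          # avoid overflow in exp
    p = 1 / (1 + np.exp(-z))
    w -= 0.1 * X.T @ (p - y) / n

def quantize(weights, bits):
    """Uniform symmetric quantization of the weights to the given bitwidth."""
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1)
    return np.round(weights / scale) * scale

w_q = quantize(w, bits=4)

def predict(weights, x):
    return (x @ weights > 0).astype(float)

# FGSM-style attack crafted against the full-precision model: the loss gradient
# w.r.t. the input is proportional to w, so step along sign(w) for class-0
# samples and against it for class-1 samples.
eps = 0.5
X_adv = X + eps * np.sign(w) * np.where(y[:, None] == 0, 1, -1)

for name, weights in [("full precision", w), ("4-bit quantized", w_q)]:
    clean_acc = (predict(weights, X) == y).mean()
    adv_acc = (predict(weights, X_adv) == y).mean()
    print(f"{name}: clean acc {clean_acc:.2f}, acc under transferred attack {adv_acc:.2f}")
```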

    Explaining Explanations in AI

    Recent work on interpretability in machine learning and AI has focused on building simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system and, most importantly, how the system might break. However, when considering any such model it is important to remember Box’s maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although this is a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
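A minimal sketch of the kind of simplified model the abstract refers to: a small decision tree fitted to a black-box model's predictions, whose printed rules a practitioner could consult for "what if" questions. The GradientBoostingClassifier black box, synthetic data, and tree depth are illustrative assumptions; the article's point is about what such surrogates can and cannot deliver as explanations.

```python
# Hedged sketch of a global surrogate model (illustrative data and models).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=3000, n_features=8, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not the true labels,
# so it approximates the criteria the complex system actually uses.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate))   # human-readable rules: the "do it yourself kit"
```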

    Characterological formulations of persons in neighbourhood complaint sequences

    This article shows how speakers mobilise characterological formulations of others and, particularly, ‘types’ of persons, in social action. We extend previous work in discursive psychology, in which notions of self or others’ identity have been well-studied as categorial practices, by focusing specifically on the occasioned use of “[descriptor] person” formulations which index the characteristics of people. Drawing on a British corpus of 315 telephone calls about neighbour problems (e.g., noise, verbal abuse) to environmental health and mediation services, we show that callers build in-situ descriptions of self and neighbour for the practical activity of complaining or defending against accusations - as types of people that are, for instance, reasonable (e.g., “I’m an extremely tolerant person”) in contrast to their neighbours’ shortcomings (e.g., “He’s a rather obnoxious person”). Our findings demonstrate that psychological predicates of self and other, indexed through characterological formulations, are recipient designed (i.e., formulated to display an orientation to co-present others) in ways that shape the institutional relevance for service provision. We conclude that, like many other aspects of the psychological thesaurus, ‘character types’ are not just the preserve of psychologists, but a routine resource for ordinary social interaction.