71 research outputs found

    Evaluating Adversarial Robustness of Detection-based Defenses against Adversarial Examples

    Get PDF
    Machine Learning algorithms provide astonishing performance in a wide range of tasks, including sensitive and critical applications. On the other hand, they have been shown to be vulnerable to adversarial attacks, a set of techniques that violate the integrity, confidentiality, or availability of such systems. One of the most studied phenomena is that of adversarial examples, i.e., input samples that are carefully manipulated to alter the model output. Over the last decade, the research community has put considerable effort into this field, proposing new evasion attacks and methods to defend against them. In this thesis, we propose different approaches that can be applied to Deep Neural Networks to detect and reject adversarial examples whose distribution is anomalous with respect to the training data. The first approach leverages domain knowledge of the relationships among the considered classes: first-order logic knowledge is converted into constraints and injected into a semi-supervised learning problem, so that the classifier can reject samples that violate the domain-knowledge constraints. This approach applies to both single-label and multi-label classification settings. The second approach is a Deep Neural Rejection (DNR) mechanism that detects adversarial examples by rejecting samples whose feature representations are anomalous at different network layers. To this end, we exploit RBF SVM classifiers, which provide decreasing confidence values as samples move away from the training data distribution. Despite technical differences, this approach shares a common backbone with other proposed methods, which we formalize in a unifying framework. As all of them require comparing input samples against a large number of reference prototypes, possibly at different representation layers, they suffer from the same drawback: high computational overhead and memory usage, which makes them impractical in real applications. To overcome this limitation, we introduce FADER (Fast Adversarial Example Rejection), a technique that speeds up detection-based methods by employing RBF networks as detectors: by fixing the number of required prototypes, their runtime complexity can be controlled. All proposed methods are evaluated in both black-box and white-box settings, i.e., against an attacker unaware of the defense mechanism and against an attacker who knows the defense and adapts the attack algorithm to bypass it, respectively. Our experimental evaluation shows that the proposed methods increase the robustness of the defended models and help detect adversarial examples effectively, especially when the attacker does not know the underlying detection system.
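    The layer-wise rejection idea can be illustrated with a minimal sketch, assuming hypothetical per-layer feature matrices and a hand-picked rejection threshold (none of this is the thesis code): one RBF SVM is fitted per network layer, the per-layer class probabilities are averaged, and any input whose top score falls below the threshold is rejected.

        # Minimal sketch of layer-wise rejection with RBF SVMs (illustrative only,
        # not the exact DNR implementation described in the thesis).
        import numpy as np
        from sklearn.svm import SVC

        def train_layer_detectors(layer_features, labels):
            """Fit one RBF SVM per network layer on that layer's feature matrix."""
            detectors = []
            for feats in layer_features:  # feats: (n_samples, n_features) for one layer
                clf = SVC(kernel="rbf", probability=True)
                clf.fit(feats, labels)
                detectors.append(clf)
            return detectors

        def predict_with_reject(detectors, layer_features, threshold=0.5):
            """Average per-layer class probabilities; mark low-confidence inputs as rejected (-1)."""
            probs = np.mean(
                [clf.predict_proba(feats) for clf, feats in zip(detectors, layer_features)],
                axis=0,
            )
            preds = probs.argmax(axis=1)
            preds[probs.max(axis=1) < threshold] = -1  # -1 marks rejected samples
            return preds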

    Proc. 33. Workshop Computational Intelligence, Berlin, 23.-24.11.2023

    Get PDF
    The workshop proceedings contain the contributions to the 33rd workshop "Computational Intelligence", held on 23-24 November 2023 in Berlin. The focus is on methods, applications, and tools for fuzzy systems, artificial neural networks, evolutionary algorithms, and data-mining methods, as well as on the comparison of methods on industrial and benchmark problems.

    EMShepherd: Detecting Adversarial Samples via Side-channel Leakage

    Full text link
    Deep Neural Networks (DNNs) are vulnerable to adversarial perturbations: small changes crafted deliberately on the input to mislead the model into wrong predictions. Adversarial attacks have disastrous consequences for deep learning-empowered critical applications. Existing defense and detection techniques both require extensive knowledge of the model, testing inputs, and even execution details. They are not viable for general deep learning implementations where the model internals are unknown, a common 'black-box' scenario for model users. Inspired by the fact that the electromagnetic (EM) emanations of a model inference depend on both operations and data and may contain footprints of different input classes, we propose a framework, EMShepherd, to capture EM traces of model execution, process the traces, and exploit them for adversarial detection. Only benign samples and their EM traces are used to train the adversarial detector: a set of EM classifiers and class-specific unsupervised anomaly detectors. When the victim model is under attack by an adversarial example, its execution differs from the executions for the known classes, and so does the EM trace. We demonstrate that our air-gapped EMShepherd can effectively detect different adversarial attacks on a commonly used FPGA deep learning accelerator for both the Fashion MNIST and CIFAR-10 datasets. It achieves a 100% detection rate on most types of adversarial samples, comparable to state-of-the-art 'white-box' software-based detectors.
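    A minimal sketch of the class-conditional anomaly check described above, under assumed trace features and model choices (a RandomForest classifier plus per-class IsolationForest detectors stand in for the paper's EM classifiers and unsupervised anomaly detectors):

        # Sketch of class-conditional anomaly detection on preprocessed EM traces
        # (illustrative assumptions, not EMShepherd's exact pipeline).
        import numpy as np
        from sklearn.ensemble import IsolationForest, RandomForestClassifier

        def fit_detector(benign_traces, benign_labels):
            """Train a classifier on benign traces and one anomaly detector per class."""
            clf = RandomForestClassifier(n_estimators=100).fit(benign_traces, benign_labels)
            per_class = {
                c: IsolationForest().fit(benign_traces[benign_labels == c])
                for c in np.unique(benign_labels)
            }
            return clf, per_class

        def is_adversarial(clf, per_class, trace):
            """Flag a trace as adversarial if it is anomalous for its predicted class."""
            trace = np.asarray(trace).reshape(1, -1)
            predicted_class = clf.predict(trace)[0]
            # IsolationForest.predict returns -1 for anomalies and +1 for inliers.
            return per_class[predicted_class].predict(trace)[0] == -1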

    Realistic adversarial machine learning to improve network intrusion detection

    Get PDF
    Modern organizations can significantly benefit from the use of Artificial Intelligence (AI), and more specifically Machine Learning (ML), to tackle the growing number and increasing sophistication of cyber-attacks targeting their business processes. However, several technological and ethical challenges undermine the trustworthiness of AI. One of the main challenges is the lack of robustness, an essential property to ensure that ML is used in a secure way. Improving robustness is no easy task because ML is inherently susceptible to adversarial examples: data samples with subtle perturbations that cause unexpected behaviors in ML models. ML engineers and security practitioners still lack the knowledge and tools to prevent such disruptions, so adversarial examples pose a major threat to ML and to the intelligent Network Intrusion Detection (NID) systems that rely on it. This thesis presents a methodology for a trustworthy adversarial robustness analysis of multiple ML models, and an intelligent method for the generation of realistic adversarial examples in complex tabular data domains like the NID domain: the Adaptative Perturbation Pattern Method (A2PM). It is demonstrated that a successful adversarial attack is not guaranteed to be a successful cyber-attack, and that adversarial data perturbations can only be realistic if they are simultaneously valid and coherent, complying with the domain constraints of a real communication network and the class-specific constraints of a certain cyber-attack class. A2PM can be used for adversarial attacks, to iteratively cause misclassifications, and for adversarial training, to perform data augmentation with slightly perturbed data samples. Two case studies were conducted to evaluate its suitability for the NID domain. The first verified that the generated perturbations preserved both validity and coherence in Enterprise and Internet-of-Things (IoT) network scenarios, achieving realism. The second verified that adversarial training with simple perturbations enables the models to retain good generalization to regular IoT network traffic flows, in addition to being more robust to adversarial examples. The key takeaway of this thesis is that ML models can be incredibly valuable for improving a cybersecurity system, but their own vulnerabilities must not be disregarded. It is essential to continue the research efforts to improve the security and trustworthiness of ML and of the intelligent systems that rely on it.
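    As a rough illustration of validity-constrained perturbation of tabular samples (a generic sketch, not A2PM's adaptative pattern logic; the per-feature bounds array is an assumed stand-in for the domain constraints):

        # Generic sketch: perturb numeric tabular features while keeping them
        # inside per-feature domain bounds (not the A2PM algorithm itself).
        import numpy as np

        def perturb_within_domain(x, bounds, scale=0.05, rng=None):
            """Add small noise to a sample and clip it to the allowed [min, max] per feature.

            x      : (n_features,) sample
            bounds : (n_features, 2) array of [min, max] values allowed by the domain
            """
            if rng is None:
                rng = np.random.default_rng()
            noise = rng.normal(0.0, scale, size=x.shape) * (bounds[:, 1] - bounds[:, 0])
            return np.clip(x + noise, bounds[:, 0], bounds[:, 1])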

    Jornadas Nacionales de Investigación en Ciberseguridad: actas de las VIII Jornadas Nacionales de Investigación en ciberseguridad: Vigo, 21 a 23 de junio de 2023

    Get PDF
    Jornadas Nacionales de Investigación en Ciberseguridad (8ª. 2023. Vigo); atlanTTic; AMTEGA: Axencia para a modernización tecnolóxica de Galicia; INCIBE: Instituto Nacional de Ciberseguridad

    Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses

    Full text link
    The ongoing deployment of fifth generation (5G) wireless networks constantly reveals limitations of their original concept as a key driver of Internet of Everything (IoE) applications. These 5G challenges are behind worldwide efforts to enable future networks, such as sixth generation (6G) networks, to efficiently support sophisticated applications ranging from autonomous driving capabilities to the Metaverse. Edge learning is a new and powerful approach to training models across distributed clients while protecting the privacy of their data. This approach is expected to be embedded within future network infrastructures, including 6G, to solve challenging problems such as resource management and behavior prediction. This survey article provides a holistic review of the most recent research on edge learning vulnerabilities and defenses for 6G-enabled IoT. We summarize the existing surveys on machine learning for 6G IoT security and on machine learning-associated threats in three different learning modes: centralized, federated, and distributed. Then, we provide an overview of enabling emerging technologies for 6G IoT intelligence. Moreover, we provide a holistic survey of existing research on attacks against machine learning and classify threat models into eight categories: backdoor attacks, adversarial examples, combined attacks, poisoning attacks, Sybil attacks, Byzantine attacks, inference attacks, and dropping attacks. In addition, we provide a comprehensive and detailed taxonomy and a side-by-side comparison of state-of-the-art defense methods against edge learning vulnerabilities. Finally, as new attacks and defense technologies emerge, we discuss new research directions and overall future prospects for 6G-enabled IoT.

    Réglage fin par ensemble convergent de divers modèles transformers pour l’analyse du sentiment

    Get PDF
    Fine-tuning techniques for pre-trained transformer models such as BERT (Devlin et al., 2018 [22]) have gained considerable popularity since 2018. Compared with the feature-extraction approach, fine-tuning yields state-of-the-art results on many downstream natural language processing tasks (Ma et al., 2019 [65]). In sentiment analysis, these advances help overcome several important technical challenges, such as handling negation terms, anaphora, and more subtle phrase-level references (Birjali et al., 2021 [10]). Nevertheless, some challenges remain unresolved, notably the difficulty of obtaining training data for specialized task domains (Xu et al., 2019 [113]). This problem stems from the cost and difficulty of creating datasets manually annotated by humans. In this thesis, a new programmatic annotation solution is proposed, building on the work of Ghashiya and Okamura (2021) [34] with lexicon-based models for sentiment analysis of news headlines during Covid-19. As an alternative to lexicon-based models, the proposed technique uses an ensemble fine-tuning model of transformer models called MDLsoft. This technique combines aspects of two fine-tuning strategies, namely self-ensembles with voting (Xu et al., 2020 [115]) and multi-task learning (Liu et al., 2019 [62]). Based on the predictions of the final model, a sentiment analysis is presented on the same topic as Ghashiya and Okamura, with several technical improvements.
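    A minimal sketch of prediction-level voting across several fine-tuned transformer models, using the Hugging Face pipeline API; the checkpoint names are placeholders, and MDLsoft's self-ensembling and multi-task training are not reproduced here:

        # Majority voting over several sentiment classifiers (placeholder checkpoints;
        # all models are assumed to emit the same label set).
        from collections import Counter
        from transformers import pipeline

        MODEL_NAMES = [
            "my-org/sentiment-bert-seed1",        # hypothetical fine-tuned checkpoints
            "my-org/sentiment-roberta-seed2",
            "my-org/sentiment-distilbert-seed3",
        ]

        classifiers = [pipeline("text-classification", model=name) for name in MODEL_NAMES]

        def vote(text):
            """Return the majority label predicted by the ensemble for one text."""
            labels = [clf(text)[0]["label"] for clf in classifiers]
            return Counter(labels).most_common(1)[0][0]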

    Trusted Artificial Intelligence in Manufacturing

    Get PDF
    The successful deployment of AI solutions in manufacturing environments hinges on their security, safety, and reliability, which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems on the shop floor, many challenges must be addressed in the scope of complex, heterogeneous, dynamic, and unpredictable environments. Specifically, data reliability, human-machine interaction, security, transparency, and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural network security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems, provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the above-listed challenges, fifteen European organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission under its H2020 programme (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques, and systems for trusted, ethical, and secure AI in manufacturing. The chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, and automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art, towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open access publication, making it broadly and freely available to the AI and smart manufacturing communities.