322 research outputs found

    Artificial Intelligence-based Cybersecurity for Connected and Automated Vehicles

    The damaging effects of cyberattacks on an industry such as Cooperative, Connected and Automated Mobility (CCAM) can be tremendous. Ranging from the least to the most severe, these include damage to the reputation of vehicle manufacturers, growing reluctance of customers to adopt CCAM, lost working hours (with a direct impact on the European GDP), material damage, increased environmental pollution caused, for example, by traffic jams or malicious modifications to sensors’ firmware, and, ultimately, grave danger to human lives, whether drivers, passengers or pedestrians. Connected vehicles will soon become a reality on our roads, bringing along new services and capabilities, but also technical challenges and security threats. To overcome these risks, the CARAMEL project has developed several anti-hacking solutions for the new generation of vehicles. CARAMEL (Artificial Intelligence-based Cybersecurity for Connected and Automated Vehicles), a research project co-funded by the European Union under the Horizon 2020 framework programme, brings together a consortium of 15 organizations from 8 European countries and 3 Korean partners. The project applies a proactive approach based on Artificial Intelligence and Machine Learning techniques to detect and prevent potential cybersecurity threats to autonomous and connected vehicles. The approach is organized around four fundamental pillars: Autonomous Mobility, Connected Mobility, Electromobility, and Remote Control Vehicle. This book presents theory and results from each of these technical directions.

    Realistic adversarial machine learning to improve network intrusion detection

    Modern organizations can significantly benefit from the use of Artificial Intelligence (AI), and more specifically Machine Learning (ML), to tackle the growing number and increasing sophistication of cyber-attacks targeting their business processes. However, there are several technological and ethical challenges that undermine the trustworthiness of AI. One of the main challenges is the lack of robustness, which is an essential property to ensure that ML is used in a secure way. Improving robustness is no easy task because ML is inherently susceptible to adversarial examples: data samples with subtle perturbations that cause unexpected behaviors in ML models. ML engineers and security practitioners still lack the knowledge and tools to prevent such disruptions, so adversarial examples pose a major threat to ML and to the intelligent Network Intrusion Detection (NID) systems that rely on it. This thesis presents a methodology for a trustworthy adversarial robustness analysis of multiple ML models, and an intelligent method for the generation of realistic adversarial examples in complex tabular data domains like the NID domain: the Adaptative Perturbation Pattern Method (A2PM). It is demonstrated that a successful adversarial attack is not guaranteed to be a successful cyber-attack, and that adversarial data perturbations can only be realistic if they are simultaneously valid and coherent, complying with the domain constraints of a real communication network and the class-specific constraints of a certain cyber-attack class. A2PM can be used for adversarial attacks, to iteratively cause misclassifications, and for adversarial training, to perform data augmentation with slightly perturbed data samples. Two case studies were conducted to evaluate its suitability for the NID domain. The first verified that the generated perturbations preserved both validity and coherence in Enterprise and Internet-of-Things (IoT) network scenarios, achieving realism. The second verified that adversarial training with simple perturbations enables the models to retain a good generalization to regular IoT network traffic flows, in addition to being more robust to adversarial examples. The key takeaway of this thesis is that ML models can be incredibly valuable to improve a cybersecurity system, but their own vulnerabilities must not be disregarded. It is essential to continue the research efforts to improve the security and trustworthiness of ML and of the intelligent systems that rely on it.
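    The requirement described above, that tabular adversarial perturbations must stay valid (within feature bounds) and coherent (respecting inter-feature constraints) to be realistic, can be illustrated with a short sketch. This is a minimal, hypothetical example and not the A2PM implementation: the feature names, bounds, coherence rule, perturbation step, and the scikit-learn-style classifier passed in as `model` are all assumptions made for illustration.

```python
import numpy as np

# Illustrative feature bounds for network-flow features (assumed values).
FEATURE_BOUNDS = {
    "duration": (0.0, 3600.0),   # seconds
    "packets":  (1.0, 1e6),      # packet count (integer-valued)
    "bytes":    (40.0, 1e9),     # total bytes
}
FEATURES = list(FEATURE_BOUNDS)

def is_coherent(x):
    """Coherence check: a flow cannot carry fewer bytes than a
    minimal header (assumed 40 bytes) per packet."""
    return x[FEATURES.index("bytes")] >= 40.0 * x[FEATURES.index("packets")]

def perturb(x, model, step=0.02, max_iter=20):
    """Iteratively apply small relative perturbations until the model's
    prediction flips, discarding any candidate that violates validity
    (feature bounds) or coherence (inter-feature constraints)."""
    original_label = model.predict(x.reshape(1, -1))[0]
    adv = x.copy()
    rng = np.random.default_rng(0)
    for _ in range(max_iter):
        candidate = adv * (1.0 + rng.uniform(-step, step, size=adv.shape))
        # Validity: clip each feature to its allowed range.
        for i, name in enumerate(FEATURES):
            lo, hi = FEATURE_BOUNDS[name]
            candidate[i] = np.clip(candidate[i], lo, hi)
        candidate[FEATURES.index("packets")] = np.round(
            candidate[FEATURES.index("packets")])
        if not is_coherent(candidate):
            continue  # reject incoherent samples; keep perturbations realistic
        adv = candidate
        if model.predict(adv.reshape(1, -1))[0] != original_label:
            break  # misclassification achieved with a realistic sample
    return adv
```

    The same loop can also support adversarial training: the perturbed samples are kept with their original labels and added to the training data as augmentation.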

    Towards Optimization of Anomaly Detection Using Autonomous Monitors in DevOps

    Continuous practices, including continuous integration, continuous testing, and continuous deployment, are foundations of many software development initiatives. Another very popular industrial concept, DevOps, promotes automation, collaboration, and monitoring to further empower development processes. The scope of this thesis is continuous monitoring and the data collected through continuous measurement in operations, as it may carry very valuable details on the health of the software system. Aim: We aim to explore and improve existing solutions for managing monitoring data in operations, instantiated in a specific industry context. Specifically, we collaborated with a Swedish company responsible for ticket management and sales in public transportation to identify challenges in the information flow from operations to development and to explore approaches for improved data management inspired by state-of-the-art machine learning (ML) solutions. Research approach: Our research activities span from practice to theory and from the problem to the solution domain, including problem conceptualization, solution design, instantiation, and empirical validation. This follows the main principles of the design science paradigm, which is mainly used to frame problem-driven studies aiming to improve specific areas of practice. Results: We present problem instances identified in the case company with respect to the general goal of better incorporating feedback from operations into development, and a corresponding solution design that reduces information overflow, e.g. alert flooding, by introducing a new element, a smart filter, into the feedback loop. We propose both a simpler version of the solution design based on ML decision rules and a more advanced deep learning (DL) alternative. We have implemented and partially evaluated the former, and we present the plan for implementation and optimization of the DL version of the smart filter, a kind of autonomous monitor; a minimal sketch of such a filter follows this abstract. Conclusion: We propose using a smart filter to tighten and improve feedback from operations to development. The smart filter uses operations data to discover anomalies and report timely alerts on strange and unusual system behavior. Full-scale implementation and empirical evaluation of the smart filter based on the DL solution will be carried out in future work.
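    The rule-based variant of the smart filter described above can be sketched as a small component that suppresses routine monitoring readings and forwards only anomalous ones to developers. The windowed z-score rule, the threshold, and the example metric values are illustrative assumptions, not the design evaluated in the thesis.

```python
from collections import deque
import statistics

class SmartFilter:
    """Minimal sketch of a rule-based 'smart filter' between operations
    monitoring and development: it suppresses routine readings and raises
    an alert only when a metric deviates strongly from its recent history.
    Window size and threshold are illustrative choices."""

    def __init__(self, window=100, z_threshold=3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value):
        """Return True if this reading should be alerted on."""
        alert = False
        if len(self.history) >= 10:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            z = abs(value - mean) / stdev
            alert = z > self.z_threshold  # unusual behavior -> alert
        self.history.append(value)
        return alert

# Usage: stream response-time measurements through the filter so that
# only anomalous readings reach the development team.
flt = SmartFilter()
for reading in [120, 118, 125, 119, 122, 121, 117, 123, 120, 119, 950]:
    if flt.observe(reading):
        print(f"ALERT: unusual reading {reading} ms")
```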

    Adversarial machine learning for cyber security

    This master's thesis leverages the state of the art and the tools developed in Adversarial Machine Learning (AML) and related research branches to strengthen Machine Learning (ML) models used in cyber security. First, it collects, organizes and summarizes the most recent and promising AML techniques, since this is a fast-moving research branch with a great diversity of proposals that are hard to compare and are quickly superseded by attacks or defenses with greater potential. Such a summary is important because the AML literature is still far from producing defensive techniques that effectively protect an ML model against all possible attacks, so they must be analyzed in detail and with clear criteria before being applied in practice. It is also useful for identifying biases in the state of the art concerning how attack or defense effectiveness is measured, which can be addressed by proposing methodologies and metrics that mitigate them. Furthermore, AML should not be analyzed in isolation: the robustness of an ML model to adversarial attacks is closely related to its generalization to in-distribution cases, its robustness to out-of-distribution cases, and the possibility of overinterpretation, where spurious (but statistically valid) patterns give a false sense of high performance. Therefore, this thesis proposes a methodology to first evaluate a model's exposure to these issues, improving it in progressive order of priority at each stage, so as to achieve satisfactory overall robustness. Based on this methodology, two case studies are explored in depth to evaluate their robustness to adversarial attacks, perform attacks that reveal their strengths and weaknesses, and finally propose improvements. Throughout this process, different approaches are used depending on the type of problem and its assumptions: exploratory analysis, AML attacks and a discussion of their implications, proposed improvements and implementation of defenses such as Adversarial Training, and finally a methodology to correctly evaluate the effectiveness of a defense while avoiding the biases found in the state of the art. For each case study, efficient adversarial attacks are created and the strengths of each model are analyzed, and in the second case study the adversarial robustness of a classification Convolutional Neural Network is increased using Adversarial Training. This also has other positive effects on the model, such as a better representation of the data, easier implementation of techniques to detect adversarial cases through anomaly analysis, and insights about its performance that help reinforce the model from other viewpoints.
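    The defense highlighted in the second case study, Adversarial Training of a classification Convolutional Neural Network, follows a standard pattern that can be sketched generically. The FGSM perturbation, the epsilon value, and the combined clean-plus-adversarial loss below are common illustrative choices, not necessarily the exact setup used in the thesis.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Generate adversarial examples with the Fast Gradient Sign Method:
    one gradient step on the input, clipped to the valid pixel range."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """One epoch of adversarial training: each batch is trained on both
    clean and adversarially perturbed inputs with their original labels."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```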