168 research outputs found

    Data-driven methods to improve resource utilization, fraud detection, and cyber-resilience in smart grids

    Get PDF
    This dissertation demonstrates that empirical models of generation and consumption, constructed using machine learning and statistical methods, improve resource utilization, fraud detection, and cyber-resilience in smart grids. The modern power grid, known as the smart grid, uses computer communication networks to improve efficiency by transporting control and monitoring messages between devices. At a high level, those messages aid in ensuring that power generation meets the constantly changing power demand in a manner that minimizes costs to the stakeholders. In buildings, or nanogrids, communications between loads and centralized controls allow for more efficient electricity use. Ultimately, all efficiency improvements are enabled by data, and it is vital to protect the integrity of the data because compromised data could undermine those improvements. Furthermore, such compromise could have both economic consequences, such as power theft, and safety-critical consequences, such as blackouts. This dissertation addresses three concerns related to the smart grid: resource utilization, fraud detection, and cyber-resilience. We describe energy resource utilization benefits that can be achieved by using machine learning for renewable energy integration and also for energy management of building loads. In the context of fraud detection, we present a framework for identifying attacks that aim to make fraudulent monetary gains by compromising consumption and generation readings taken by meters. We then present machine learning, signal processing, and information-theoretic approaches for mitigating those attacks. Finally, we explore attacks that seek to undermine the resilience of the grid to faults by compromising generators' ability to compensate for lost generation elsewhere in the grid. Redundant sources of measurements are used to detect such attacks by identifying mismatches between expected and measured behavior
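    The detection step described above, flagging mismatches between expected and measured behavior, can be illustrated with a short sketch. This is not the dissertation's actual pipeline; the features (hour of day, outdoor temperature), the gradient-boosting model, and the 3-sigma under-reporting threshold are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): flag meter readings whose deviation
# from a learned consumption model exceeds a threshold, suggesting tampering.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: hour-of-day and outdoor temperature -> kWh consumed.
X_train = np.column_stack([rng.integers(0, 24, 1000), rng.normal(20, 5, 1000)])
y_train = 0.5 * X_train[:, 0] + 0.2 * X_train[:, 1] + rng.normal(0, 1, 1000)

model = GradientBoostingRegressor().fit(X_train, y_train)

def flag_suspicious(features: np.ndarray, reported_kwh: np.ndarray, k: float = 3.0):
    """Return a boolean mask of readings that disagree with expected behavior."""
    expected = model.predict(features)
    residual = reported_kwh - expected
    # A reading is suspicious if it falls more than k standard deviations below
    # the model's expectation (under-reporting is consistent with power theft).
    sigma = np.std(y_train - model.predict(X_train))
    return residual < -k * sigma

# Example: a meter reporting far less than the model expects is flagged.
print(flag_suspicious(np.array([[18, 22.0]]), np.array([0.5])))
```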

    The future of Cybersecurity in Italy: Strategic focus area

    Get PDF
    This volume has been created as a continuation of the previous one, with the aim of outlining a set of focus areas and actions that the Italian national research community considers essential. The book touches on many aspects of cyber security, ranging from the definition of the infrastructure and controls needed to organize cyberdefence, to the actions and technologies to be developed for better protection, and from the identification of the main technologies to be defended to the proposal of a set of horizontal actions for training, awareness raising, and risk management

    An Approach to Guide Users Towards Less Revealing Internet Browsers

    Get PDF
    When browsing the Internet, HTTP headers enable both clients and servers to send extra data in their requests or responses, such as the User-Agent string. This string contains information related to the sender’s device, browser, and operating system. Previous research has shown that numerous privacy and security risks result from exposing sensitive information in the User-Agent string. For example, it enables device and browser fingerprinting as well as user tracking and identification. Our large-scale analysis of thousands of User-Agent strings shows that browsers differ tremendously in the amount of information they include in their User-Agent strings. As such, our work aims at guiding users towards using less revealing browsers. In doing so, we propose to assign an exposure score to browsers based on the information they expose and on vulnerability records. Thus, our contribution in this work is as follows: first, we provide a full implementation that is ready to be deployed and used; second, we conduct a user study to identify the effectiveness and limitations of our proposed approach. Our implementation is based on more than 52 thousand unique browsers. Our performance and validation analysis shows that our solution is accurate and efficient. The source code and data set are publicly available, and the solution has been deployed
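    A minimal sketch of how an exposure score of this kind might be computed follows. The field weights, the 0.7/0.3 combination, and the CVE saturation point are assumptions for illustration, not the authors' published scoring formula.

```python
# Illustrative sketch only: score a browser by how much its User-Agent exposes
# and by how many vulnerability records match the exposed version.
from dataclasses import dataclass

# Hypothetical weights for how much each exposed field aids fingerprinting.
FIELD_WEIGHTS = {
    "browser_version": 0.3,
    "os_version": 0.25,
    "device_model": 0.3,
    "build_id": 0.15,
}

@dataclass
class UserAgentInfo:
    browser_version: bool  # True if the exact version is exposed
    os_version: bool
    device_model: bool
    build_id: bool
    known_cves: int        # vulnerability records matching the exposed version

def exposure_score(ua: UserAgentInfo) -> float:
    """Combine exposed-field weight with a penalty for known vulnerabilities (0..1)."""
    exposed = sum(w for field, w in FIELD_WEIGHTS.items() if getattr(ua, field))
    cve_penalty = min(ua.known_cves / 10.0, 1.0)  # saturate at 10 CVEs
    return 0.7 * exposed + 0.3 * cve_penalty

# A verbose User-Agent with several known CVEs scores higher (more revealing).
print(exposure_score(UserAgentInfo(True, True, True, False, known_cves=4)))
```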

    Realistic adversarial machine learning to improve network intrusion detection

    Get PDF
    Modern organizations can significantly benefit from the use of Artificial Intelligence (AI), and more specifically Machine Learning (ML), to tackle the growing number and increasing sophistication of cyber-attacks targeting their business processes. However, there are several technological and ethical challenges that undermine the trustworthiness of AI. One of the main challenges is the lack of robustness, which is an essential property to ensure that ML is used in a secure way. Improving robustness is no easy task because ML is inherently susceptible to adversarial examples: data samples with subtle perturbations that cause unexpected behaviors in ML models. ML engineers and security practitioners still lack the knowledge and tools to prevent such disruptions, so adversarial examples pose a major threat to ML and to the intelligent Network Intrusion Detection (NID) systems that rely on it. This thesis presents a methodology for a trustworthy adversarial robustness analysis of multiple ML models, and an intelligent method for the generation of realistic adversarial examples in complex tabular data domains like the NID domain: the Adaptative Perturbation Pattern Method (A2PM). It is demonstrated that a successful adversarial attack is not guaranteed to be a successful cyber-attack, and that adversarial data perturbations can only be realistic if they are simultaneously valid and coherent, complying with the domain constraints of a real communication network and the class-specific constraints of a certain cyber-attack class. A2PM can be used for adversarial attacks, to iteratively cause misclassifications, and for adversarial training, to perform data augmentation with slightly perturbed data samples. Two case studies were conducted to evaluate its suitability for the NID domain. The first verified that the generated perturbations preserved both validity and coherence in Enterprise and Internet-of-Things (IoT) network scenarios, achieving realism. The second verified that adversarial training with simple perturbations enables the models to retain a good generalization to regular IoT network traffic flows, in addition to being more robust to adversarial examples. The key takeaway of this thesis is: ML models can be incredibly valuable to improve a cybersecurity system, but their own vulnerabilities must not be disregarded. It is essential to continue the research efforts to improve the security and trustworthiness of ML and of the intelligent systems that rely on it
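    A minimal sketch of the constraint idea behind A2PM-style perturbations follows: validity enforced through feature bounds, coherence through a class-specific rule. The flow features, bounds, and bytes-per-packet rule are assumed for illustration and are not the thesis' actual implementation.

```python
# Minimal sketch (assumptions throughout): perturb a tabular network-flow sample
# while keeping it valid (within feature bounds) and coherent (domain rule holds).
import numpy as np

FEATURE_BOUNDS = {          # hypothetical valid ranges for flow features
    "duration_s": (0.0, 3600.0),
    "packets":    (1.0, 1e6),
    "bytes":      (40.0, 1e9),
}

def perturb(sample: dict, step: float = 0.05, rng=np.random.default_rng(0)) -> dict:
    """Apply a small multiplicative perturbation, then project back onto the constraints."""
    adv = {}
    for name, value in sample.items():
        lo, hi = FEATURE_BOUNDS[name]
        noisy = value * (1.0 + rng.uniform(-step, step))
        adv[name] = float(np.clip(noisy, lo, hi))  # validity: stay in range
    # Coherence (class-specific, assumed here): a flow cannot carry fewer than
    # 40 bytes per packet, so repair the sample if the perturbation broke that.
    adv["bytes"] = max(adv["bytes"], 40.0 * adv["packets"])
    return adv

flow = {"duration_s": 12.3, "packets": 50.0, "bytes": 4000.0}
print(perturb(flow))
```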

    Security during the Construction of New Nuclear Power Plants: Technical Basis for Access Authorization and Fitness-For-Duty Requirements

    Full text link

    Reinforcement Learning and Game Theory for Smart Grid Security

    Get PDF
    This dissertation focuses on one of the most critical and complicated challenges facing electric power transmission and distribution systems: their vulnerability to failures and attacks. Large-scale power outages in Australia (2016), Ukraine (2015), India (2013), Nigeria (2018), and the United States (2011, 2003) have demonstrated the vulnerability of power grids to cyber and physical attacks and failures. These incidents clearly indicate the necessity of extensive research efforts to protect the power system from external intrusion and to reduce the damage from post-attack effects. We analyze the vulnerability of smart power grids to cyber and physical attacks and failures, design different game-theoretic approaches to identify the critical components vulnerable to attack and propose their associated defense strategies, and utilize machine learning techniques to solve the game-theoretic problems in adversarial and collaborative adversarial power grid environments. Our contributions can be divided into three major parts.

    Vulnerability identification: Power grid outages have disastrous impacts on almost every aspect of modern life. Despite their inevitability, the effects of failures on power grids’ performance can be limited if the system operator can predict and identify the vulnerable elements of power grids. To enable these capabilities, we study machine learning algorithms to identify critical power system elements, adopting a cascaded failure simulator as a threat and attack model. We use generation loss, time to reach a certain percentage of line outage/generation loss, number of line outages, etc. as evaluation metrics to evaluate the consequences of threats and attacks on the smart power grid.

    Adversarial gaming in the power system: With the advancement of technology, smart attackers are deploying different techniques to supersede existing protection schemes. In order to defend the power grid from these smart attackers, we introduce an adversarial gaming environment using machine learning techniques that is capable of replicating the complex interaction between the attacker and the power system operators. The numerical results show that a learned defender successfully narrows down the attackers’ attack window and reduces damages. The results also show that, by considering some crucial factors, the players can independently execute actions without detailed information about each other.

    Deep learning for adversarial gaming: The learning and gaming techniques used to identify vulnerable components in the power grid become computationally expensive for large-scale power systems, and the power system operator needs advanced skills to deal with the large dimensionality of the problem. In order to aid the power system operator in finding and analyzing vulnerabilities in large-scale power systems, we study a deep learning technique for the adversarial game that is capable of dealing with a high-dimensional power system state space with less computational time and increased computational efficiency.

    Overall, the results provided in this dissertation advance power grids’ resilience and security by providing a better understanding of the systems’ vulnerability and by developing efficient algorithms to identify vulnerable components and appropriate defensive strategies to reduce the damages of an attack
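    As a toy illustration of the attacker-defender view described above, the sketch below solves a one-shot game over a made-up payoff matrix of generation loss. The dissertation uses cascading-failure simulation and learning on realistic grid models instead, so every line name, number, and strategy here is an assumption.

```python
# Toy sketch (assumptions throughout): a one-shot attacker-defender game over
# grid lines, where the payoff is the generation loss caused by an attack.
import numpy as np

lines = ["L1", "L2", "L3"]
# loss[i, j] = generation loss (MW) if the attacker trips line i while the
# defender has hardened line j (hardening the attacked line prevents the loss).
loss = np.array([
    [0.0,   120.0, 120.0],
    [80.0,  0.0,   80.0],
    [200.0, 200.0, 0.0],
])

# Defender plays minimax over pure strategies: minimize the worst-case loss.
worst_case_per_defense = loss.max(axis=0)
best_defense = int(np.argmin(worst_case_per_defense))

# Attacker best-responds to that defense by maximizing the remaining loss.
best_attack = int(np.argmax(loss[:, best_defense]))

print(f"Defend {lines[best_defense]}; worst-case attack trips {lines[best_attack]} "
      f"for {loss[best_attack, best_defense]:.0f} MW lost")
```

    In the dissertation's setting the payoff matrix would be far too large to enumerate, which is what motivates the reinforcement learning and deep learning approaches it studies.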

    Cyber Threat Intelligence based Holistic Risk Quantification and Management

    Get PDF