831 research outputs found

    Approaches to the Security Analysis of Power Systems: Defence Strategies Against Malicious Threats

    This report provides a conceptual framework for assessing the security risk to power system assets and operations arising from malicious attacks. The problem is analysed with reference to all the actors involved and the possible targets. The specific nature of malicious attacks is discussed, and representations in terms of strategic interaction are proposed. Models based on Game Theory and Multi-Agent Systems techniques, developed specifically for representing malicious attacks against power systems, are presented and illustrated with applications to small-scale test systems.
    JRC.G.6 - Sensors, radar technologies and cybersecurity
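    As a rough illustration of the strategic-interaction view taken in the report, the sketch below sets up a toy zero-sum attacker-defender game on a three-component test system and computes the defender's pure-strategy security (minimax) choice. The components, load-loss payoffs, and units are invented for illustration and are not taken from the report.

```python
# A minimal sketch (not the report's models) of an attacker-defender game:
# the defender hardens one component, the attacker strikes one, and the
# payoff is the hypothetical load lost. All numbers are illustrative only.
import numpy as np

# Rows: defender hardens component i; columns: attacker strikes component j.
load_loss = np.array([
    [ 5., 80., 60.],   # defend component 0
    [70., 10., 60.],   # defend component 1
    [70., 80., 15.],   # defend component 2
])

# Defender's security (minimax) strategy over pure actions:
worst_case = load_loss.max(axis=1)        # attacker best-responds to each defense
best_defense = int(worst_case.argmin())   # defender limits the worst case
print(f"harden component {best_defense}, worst-case loss {worst_case[best_defense]:.0f} MW")
```

    A mixed-strategy equilibrium could instead be computed with a small linear program; the pure-strategy minimax above keeps the example dependency-free apart from NumPy.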

    Strategic Learning for Active, Adaptive, and Autonomous Cyber Defense

    The increasing number of advanced attacks calls for a new defense paradigm that is active, autonomous, and adaptive, termed the '3A' defense paradigm. This chapter introduces three defense schemes that actively interact with attackers to increase the attack cost and gather threat information: defensive deception for detection and counter-deception, feedback-driven Moving Target Defense (MTD), and adaptive honeypot engagement. Due to cyber deception, external noise, and the lack of knowledge about the other players' behaviors and goals, these schemes face three progressively stricter information restrictions: parameter uncertainty, payoff uncertainty, and environmental uncertainty. To estimate the unknowns and reduce uncertainty, we adopt three different strategic learning schemes that fit the associated information restrictions. All three learning schemes share the same feedback structure of sensing, estimation, and action, so that the most rewarding policies are reinforced and converge to the optimal ones in an autonomous and adaptive fashion. This work aims to shed light on proactive defense strategies, lay a solid foundation for strategic learning under incomplete information, and quantify the tradeoff between security and cost.
    Comment: arXiv admin note: text overlap with arXiv:1906.1218
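    As a rough illustration of the feedback loop of sensing, estimation, and action described above, the sketch below runs tabular Q-learning on a toy honeypot-engagement decision. The states, actions, transition probabilities, and rewards are invented for illustration; the chapter's schemes address much richer forms of uncertainty than this.

```python
# A minimal sketch, not the chapter's algorithms: tabular Q-learning for a toy
# honeypot-engagement MDP with invented states, dynamics, and rewards.
import random

states = ["low_intel", "high_intel"]        # how much threat info has been gathered
actions = ["engage", "eject"]               # keep the attacker in the honeypot or not
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Toy dynamics: engaging gathers intel but risks a breakout; ejecting ends the episode."""
    if action == "eject":
        return None, 1.0 if state == "high_intel" else 0.2
    if random.random() < 0.1:
        return None, -1.0                   # attacker escapes the honeypot
    return ("high_intel" if random.random() < 0.5 else state), 0.1

for _ in range(5000):
    s = "low_intel"
    while s is not None:
        a = random.choice(actions) if random.random() < eps else \
            max(actions, key=lambda act: Q[(s, act)])
        s_next, r = step(s, a)
        target = r if s_next is None else r + gamma * max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # reinforce rewarding policies
        s = s_next

print({k: round(v, 2) for k, v in Q.items()})
```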

    Reinforcement Learning and Game Theory for Smart Grid Security

    This dissertation focuses on one of the most critical and complicated challenges facing electric power transmission and distribution systems: their vulnerability to failures and attacks. Large-scale power outages in Australia (2016), Ukraine (2015), India (2013), Nigeria (2018), and the United States (2011, 2003) have demonstrated the vulnerability of power grids to cyber and physical attacks and failures. These incidents clearly indicate the need for extensive research to protect the power system from external intrusion and to reduce the damage from post-attack effects. We analyze the vulnerability of smart power grids to cyber and physical attacks and failures, design game-theoretic approaches to identify the critical components vulnerable to attack and propose their associated defense strategies, and utilize machine learning techniques to solve the game-theoretic problems in adversarial and collaborative adversarial power grid environments. Our contributions can be divided into three major parts.
    Vulnerability identification: Power grid outages have disastrous impacts on almost every aspect of modern life. Despite their inevitability, the effects of failures on power grids' performance can be limited if the system operator can predict and identify the vulnerable elements of the grid. To enable these capabilities, we study machine learning algorithms that identify critical power system elements, adopting a cascading failure simulator as the threat and attack model. We use generation loss, time to reach a certain percentage of line outages or generation loss, number of line outages, and similar measures to evaluate the consequences of threats and attacks on the smart power grid.
    Adversarial gaming in power systems: As technology advances, smart attackers deploy different techniques to defeat existing protection schemes. To defend the power grid against these attackers, we introduce a machine-learning-based adversarial gaming environment that replicates the complex interaction between the attacker and the power system operators. The numerical results show that a learned defender successfully narrows the attacker's attack window and reduces damage. The results also show that, by considering some crucial factors, the players can execute actions independently, without detailed information about each other.
    Deep learning for adversarial gaming: The learning and gaming techniques used to identify vulnerable components become computationally expensive for large-scale power systems, and the operator needs advanced skills to deal with the large dimensionality of the problem. To aid the power system operator in finding and analyzing vulnerabilities in large-scale power systems, we study a deep learning technique for adversarial games that handles high-dimensional power system state spaces with reduced computation time. Overall, the results provided in this dissertation advance power grids' resilience and security by providing a better understanding of the systems' vulnerability and by developing efficient algorithms to identify vulnerable components and appropriate defensive strategies to reduce the damage of attacks.
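    As a rough illustration of using a cascading-failure model to rank critical elements, the sketch below removes one line at a time, redistributes its flow evenly among the surviving lines, trips any line that exceeds its limit, and ranks lines by the total number of resulting outages. The flows, capacities, and redistribution rule are toy assumptions, not the dissertation's simulator.

```python
# A minimal sketch, not the dissertation's simulator: a crude cascading-outage
# screen that ranks lines by how many outages follow their loss.
def cascade(flows, caps, initial_outage):
    """Return the set of tripped lines after removing `initial_outage`."""
    alive = {i for i in flows if i != initial_outage}
    flows = dict(flows)
    overflow = flows[initial_outage]
    tripped = {initial_outage}
    while overflow > 0 and alive:
        share = overflow / len(alive)           # spread the lost flow over survivors
        overflow = 0.0
        for i in list(alive):
            flows[i] += share
            if flows[i] > caps[i]:              # line trips on overload
                alive.remove(i)
                tripped.add(i)
                overflow += flows[i]
    return tripped

flows = {0: 60, 1: 50, 2: 40, 3: 30}            # MW carried by each line (toy data)
caps  = {0: 70, 1: 80, 2: 65, 3: 90}            # thermal limits (toy data)

ranking = sorted(flows, key=lambda i: len(cascade(flows, caps, i)), reverse=True)
for line in ranking:
    print(f"line {line}: {len(cascade(flows, caps, line))} total outages")
```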

    Autonomy and Intelligence in the Computing Continuum: Challenges, Enablers, and Future Directions for Orchestration

    Future AI applications require performance, reliability, and privacy that existing, cloud-dependent system architectures cannot provide. In this article, we study orchestration in the device-edge-cloud continuum and focus on AI for the edge, that is, the AI methods used in resource orchestration. We claim that, to support the constantly growing requirements of intelligent applications in the device-edge-cloud computing continuum, resource orchestration needs to embrace edge AI and emphasize local autonomy and intelligence. To justify the claim, we provide a general definition of continuum orchestration and examine how well current and emerging orchestration paradigms suit the computing continuum. We describe major emerging research themes that may affect future orchestration and provide an early vision of an orchestration paradigm that embraces those themes. Finally, we survey current key edge AI methods and look at how they may contribute to fulfilling the vision of future continuum orchestration.
    Comment: 50 pages, 8 figures (revised content in all sections, added figures and a new section)
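    As a rough illustration of one orchestration decision in the device-edge-cloud continuum, the sketch below scores placing a task on the device, an edge node, or the cloud using a weighted combination of estimated latency and device energy. The tiers, numbers, and weights are assumptions for illustration, not the article's method.

```python
# A minimal sketch (not from the article) of a tier-selection step an
# orchestrator might perform in the device-edge-cloud continuum.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    compute_ms: float     # estimated execution time on this tier
    network_ms: float     # round-trip transfer time to reach the tier
    energy_mj: float      # energy drawn from the device's battery

def place(tiers, latency_weight=1.0, energy_weight=0.5):
    """Pick the tier minimising a weighted latency/energy score."""
    def score(t):
        return latency_weight * (t.compute_ms + t.network_ms) + energy_weight * t.energy_mj
    return min(tiers, key=score)

tiers = [
    Tier("device", compute_ms=120.0, network_ms=0.0,  energy_mj=40.0),
    Tier("edge",   compute_ms=30.0,  network_ms=15.0, energy_mj=8.0),
    Tier("cloud",  compute_ms=10.0,  network_ms=80.0, energy_mj=8.0),
]
print("place on:", place(tiers).name)       # -> edge, under these toy numbers
```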

    ENHANCING PRIVACY IN MULTI-AGENT SYSTEMS

    The loss of privacy is becoming one of the biggest problems in computing. Indeed, most Internet users (who today number around two billion worldwide) are concerned about their privacy. These concerns also carry over to the new branches of computing that have emerged in recent years. In particular, this thesis focuses on privacy in Multi-Agent Systems. In these systems, several agents (which may be intelligent and/or autonomous) interact to solve problems. These agents usually encapsulate personal information about the users they represent (names, preferences, credit cards, roles, etc.), and they usually exchange this information when they interact with each other. All of this can result in a loss of privacy for the users and, therefore, make them reluctant to use these technologies. In this thesis we focus on preventing the collection and processing of personal information in Multi-Agent Systems. To prevent information collection, we propose a model that allows an agent to decide which attributes (of the personal information it holds about the user it represents) to disclose to other agents. In addition, we provide a secure agent infrastructure so that, once an agent decides to disclose an attribute to another agent, only the latter can access that attribute, preventing third parties from accessing it. To prevent the processing of personal information, we propose an agent identity management model that allows agents to use different identities to reduce the risk of information processing. We also describe the implementation of this model on an agent platform.
    Such Aparicio, JM. (2011). ENHANCING PRIVACY IN MULTI-AGENT SYSTEMS [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/13023
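    As a rough illustration of the two ideas in the thesis (deciding which attributes to disclose, and using different identities to hinder profiling), the sketch below shows an agent that reveals only requested attributes whose sensitivity stays below a threshold and that uses a distinct pseudonym per counterpart. The attribute names, sensitivity scores, and threshold are invented for illustration and are not the thesis's model.

```python
# A minimal sketch, not the thesis's model: attribute-disclosure decisions plus
# per-counterpart pseudonyms for a privacy-aware agent.
import uuid

class PrivacyAwareAgent:
    def __init__(self, attributes, sensitivity, threshold=0.5):
        self.attributes = attributes          # attribute name -> value
        self.sensitivity = sensitivity        # attribute name -> 0..1 score
        self.threshold = threshold
        self._pseudonyms = {}                 # counterpart -> partial identity

    def identity_for(self, counterpart):
        """Return a distinct pseudonym per counterpart (identity management)."""
        return self._pseudonyms.setdefault(counterpart, f"agent-{uuid.uuid4().hex[:8]}")

    def disclose(self, counterpart, requested):
        """Reveal only requested attributes below the sensitivity threshold."""
        return {
            name: self.attributes[name]
            for name in requested
            if name in self.attributes and self.sensitivity.get(name, 1.0) <= self.threshold
        }

agent = PrivacyAwareAgent(
    attributes={"role": "buyer", "credit_card": "…", "preference": "express"},
    sensitivity={"role": 0.2, "credit_card": 0.9, "preference": 0.4},
)
print(agent.identity_for("seller-1"), agent.disclose("seller-1", ["role", "credit_card"]))
```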