
    Optimizing Interconnectivity among Networks under Attacks

    Get PDF
    Networks may need to be interconnected for various reasons, such as inter-organizational communication, redundant connectivity, increasing data rate, and minimizing delay or packet loss. However, the trustworthiness of an added interconnection link cannot be taken for granted due to the presence of attackers who may compromise the security of an interconnected network by intercepting the interconnections: an intercepted interconnection link may not be secure, because attackers can manipulate its data. In the first part of this dissertation, the number of interconnections between two networks is optimized to maximize the data rate and minimize the packet loss under the threat of security attacks. The optimization of interconnectivity under security attacks is formulated in a rate-distortion setting, as originally introduced by Claude E. Shannon in information theory. In particular, each intercepted interconnection is modeled as a noisy communication channel in which attackers may manipulate the data by flipping and erasing data bits, and the total capacity for any given number of interconnections is then calculated. By exploiting this formulation, the optimal number of interconnections between two networks is found under the network administrator's data-rate and packet-loss requirements and, most importantly, without compromising data security. It is concluded analytically, and verified by simulations, that under certain conditions increasing interconnections beyond an optimal number is not beneficial with respect to data rate and packet loss. In the second part of this dissertation, the vulnerability of the interconnected network is analyzed with a probabilistic model that maps the intensity of physical attacks to network component failure distributions. In addition, assuming the network is susceptible to attack propagation, the resiliency of the network is modeled with the influence model and the epidemic model.
Finally, a stochastic model is proposed to track node-failure dynamics in a network, accounting for dependency on power failures. In addition, cascading failure in the power grid is analyzed with a data-driven model that reproduces the evolution of transmission-line failures in power grids. To summarize, the optimal interconnectivity among networks is analyzed under security attacks, and the dynamic interactions in an interconnected network are investigated under various physical and logical attacks. Applied properly, this work would add the minimum number of inter-network connections between two networks without compromising data security. The optimal number of interconnections would meet the network administrator's requirements while minimizing the costs (both security-related and monetary) associated with unnecessary connections. This work can also be used to estimate the reliability of a communication network under different types of physical attacks, both independently and by incorporating the dynamics of power failures.
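The core idea of the first part — that an intercepted link behaves like a noisy channel, so aggregate secure capacity peaks at a finite number of links — can be sketched numerically. The attack model below (per-link survival probability `(1 - q)**n`, with `q = 0.15` and a residual bit-flip rate of `0.02`) is an illustrative assumption, not the dissertation's actual formulation:

```python
import math

def h2(p):
    """Binary entropy (bits)."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(flip):
    """Capacity of a binary symmetric channel with crossover probability `flip`."""
    return 1.0 - h2(flip)

def expected_secure_capacity(n, q=0.15, residual_flip=0.02):
    """Expected total secure capacity of n parallel interconnections.

    Assumed toy attack model: adding links gives the attacker more entry
    points, so each link stays uncompromised with probability (1 - q)**n;
    only uncompromised links carry secure data, each at the capacity of a
    mildly noisy binary symmetric channel.
    """
    return n * (1 - q) ** n * bsc_capacity(residual_flip)

# Sweep n to find the optimum: beyond it, additional links lose more to
# interception than they add in raw capacity.
best_n = max(range(1, 13), key=expected_secure_capacity)
```

Under these assumed numbers the secure capacity is unimodal in the number of links, echoing the abstract's conclusion that adding interconnections beyond an optimum stops paying off.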

    Optimization Techniques for Modern Power Systems Planning, Operation and Control

    Get PDF
    Recent developments in computing and communication, together with improvements in optimization techniques, have piqued interest in improving current operational practices and in addressing the challenges of future power grids. This dissertation leverages these developments for improved quasi-static analysis of power systems, with applications in power system planning, operation, and control. Much of the work presented in this dissertation centers on developing better mathematical models for optimization problems, which are then used to solve current and future challenges of the power grid. To this end, the models developed in this research contribute to the areas of renewable integration, demand response, power grid resilience, and constrained contiguous and non-contiguous partitioning of power networks. The emphasis of this dissertation is on solving system-operator-level problems in real time. For instance, multi-period mixed-integer linear programming problems for demand response schemes involving more than a million variables are solved to optimality in less than 20 seconds of computation time through a tighter formulation. A balanced, constrained, contiguous partitioning scheme capable of partitioning a 20,000-bus power system in under one minute is developed for use in time-sensitive applications such as controlled islanding.
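The contiguous-partitioning idea can be illustrated with a toy sketch: grow k connected regions by simultaneous breadth-first expansion from k seed buses. The seeding rule and the example graph are assumptions for illustration only; the dissertation's scheme additionally enforces balance and power-system constraints:

```python
from collections import deque

def contiguous_partition(adj, k):
    """Grow k connected regions by round-robin BFS from k seed nodes.
    Every region stays contiguous because each node is labeled from an
    already-labeled neighbor."""
    nodes = list(adj)
    step = max(1, len(nodes) // k)
    seeds = nodes[::step][:k]          # naive, evenly spaced seeds
    label = {s: i for i, s in enumerate(seeds)}
    frontier = deque(seeds)
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in label:
                label[v] = label[u]
                frontier.append(v)
    return label

# Toy example: a 12-bus ring split into 3 connected regions.
ring = {i: [(i - 1) % 12, (i + 1) % 12] for i in range(12)}
parts = contiguous_partition(ring, 3)
```

Because every assignment propagates from a labeled neighbor, contiguity holds by construction; balance is only approximate here, whereas the dissertation's formulation enforces it explicitly.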

    Advancements in Enhancing Resilience of Electrical Distribution Systems: A Review on Frameworks, Metrics, and Technological Innovations

    Full text link
    This comprehensive review paper explores power system resilience, emphasizing its evolution, comparing it with reliability, and thoroughly analyzing the definition and characteristics of resilience. The paper presents resilience frameworks and the application of quantitative power system resilience metrics to assess and quantify resilience. Additionally, it investigates the relevance of complex network theory in the context of power system resilience. An integral part of this review examines the incorporation of data-driven techniques, including their role in resilience enhancement and predictive analytics. Further, the paper explores recent techniques employed for resilience enhancement, covering both planning and operational measures. A detailed explanation is also provided of microgrid (MG) deployment, renewable energy integration, and peer-to-peer (P2P) energy trading in fortifying power systems against disruptions. Existing research gaps and challenges are analyzed to suggest future directions for improving power system resilience. Thus, the paper provides a comprehensive understanding of power system resilience, which helps improve the ability of distribution systems to withstand and recover from extreme events and disruptions.

    Reliability and resilience evaluation of distribution automation.

    Get PDF
    Modern distribution grid utilities are steadily adapting to the concepts of smart grids by augmenting distribution grids with Distribution Automation (DA) to enhance visibility and control for the purpose of improved system availability. Existing methods for placing and evaluating DA overlook the important enhancement it provides to the resilience of a system. Resilience, a much-discussed but poorly defined measure for power systems, represents a system's ability to withstand and recover from High-Impact Low-Probability (HILP) events such as storms and earthquakes. This thesis argues that existing resilience quantification methods do not capture the direct contribution which DA can make to enhancing system resilience. It develops a novel model and methodology to analyse distribution grid resilience using the formalisms of Reliability Graphs (RGs) and Stochastic Reward Nets (SRNs). These two models capture the different parts of the complex recovery process which distribution grids perform to recover from faults using DA. There are three novel contributions in this thesis. Firstly, a three-tier hierarchical model containing an RG is developed to assess the enhancement which DA equipment provides to load point and feeder availability and resilience. Next, an SRN is used to develop a load point (LP) model which incorporates the dependence of feeder assets during the fault isolation phase of the recovery process. Finally, the SRN model is augmented with a phased recovery model to represent the complex recovery process of distribution grids. Utilising these models, the placement of switch automation and fault indicators is evaluated, and the contribution they make to resilience is demonstrated. Collectively, these models give a novel means of assessing the availability, sensitivity and resilience of distribution grids which utilise DA.
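For series/parallel structures, the reliability-graph part of such an analysis reduces to combining component availabilities. A minimal sketch — the component availabilities and the feeder layout are assumed values for illustration, not figures from the thesis:

```python
def series(*avail):
    """A series path is up only if every component is up."""
    a = 1.0
    for x in avail:
        a *= x
    return a

def parallel(*avail):
    """A parallel group is up if at least one redundant path is up."""
    down = 1.0
    for x in avail:
        down *= 1.0 - x
    return 1.0 - down

# Load point fed by a breaker in series with two redundant laterals,
# each lateral being a line in series with an automated switch.
breaker, line, switch = 0.999, 0.995, 0.998
lp_availability = series(breaker,
                         parallel(series(line, switch),
                                  series(line, switch)))
```

Automating a switch raises its effective availability by shortening fault isolation; quantifying that effect, including asset dependencies the simple formula above ignores, is what the thesis's RG/SRN models are for.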

    A Quantitative Research Study on Probability Risk Assessments in Critical Infrastructure and Homeland Security

    Get PDF
    This dissertation encompassed quantitative research on probabilistic risk assessment (PRA) elements in homeland security and their impact on critical infrastructure and key resources. There are 16 critical infrastructure sectors in homeland security, representing assets, system networks, virtual and physical environments, roads and bridges, transportation, and air travel. The design included Bayes' theorem, a process used in PRAs to determine potential or probable events, causes, outcomes, and risks. The goal is to mitigate the effects of domestic terrorism and natural and man-made disasters, respond to events related to critical infrastructure that can impact the United States, and help protect and secure natural gas pipelines and electrical grid systems. This study provides data on current risk assessment trends in PRAs that can be applied to elements of homeland security and the criminal justice system to help protect critical infrastructure. The dissertation highlights aspects of the U.S. Department of Homeland Security National Infrastructure Protection Plan (NIPP). In addition, this framework was employed to examine the criminal justice triangle, explore crime problems and emergency preparedness solutions for protecting critical infrastructure, and analyze data relevant to risk assessment procedures for each critical infrastructure sector identified. Finally, the study addressed the drivers and gaps in research related to protecting and securing natural gas pipelines and electrical grid systems.
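Bayes' theorem, the backbone of the PRA design described above, updates a base-rate risk estimate when new evidence arrives. The numbers below are assumed purely for illustration:

```python
def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) = P(E | H) P(H) / P(E), with P(E) by total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# Assumed: 1% base rate of an attack on a pipeline segment; an intrusion
# alarm fires for 90% of real attacks and for 5% of benign events.
posterior = bayes_posterior(0.01, 0.90, 0.05)
```

Even a fairly accurate alarm leaves the posterior attack probability modest (about 15% here) because the base rate is low — the classic reason PRAs combine multiple evidence sources rather than reacting to single indicators.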

    Beyond The Cloud, How Should Next Generation Utility Computing Infrastructures Be Designed?

    Get PDF
    To accommodate the ever-increasing demand for Utility Computing (UC) resources while taking into account both energy and economic issues, the current trend consists in building larger and larger data centers in a few strategic locations. Although such an approach makes it possible to cope with the current demand while continuing to operate UC resources through centralized software systems, it is far from delivering sustainable and efficient UC infrastructures. We claim that a disruptive change in UC infrastructures is required: UC resources should be managed differently, with locality as a primary concern. We propose to leverage any facilities available through the Internet in order to deliver widely distributed UC platforms that can better match the geographical dispersal of users as well as the unending demand. Critical to the emergence of such locality-based UC (LUC) platforms is the availability of appropriate operating mechanisms. In this paper, we advocate the implementation of a unified system driving the use of resources at an unprecedented scale by turning a complex and diverse infrastructure into a collection of abstracted computing facilities that is both easy to operate and reliable. By deploying and using such a LUC Operating System on backbones, our ultimate vision is to make it possible to host and operate a large part of the Internet within its own internal structure: a scalable and nearly infinite set of resources delivered by any computing facility forming the Internet, from the larger hubs operated by ISPs, governments, and academic institutions down to any idle resources that may be provided by end users. Unlike previous research on distributed operating systems, we propose to consider virtual machines (VMs) instead of processes as the basic element.
System virtualization offers several capabilities that increase the flexibility of resource management, allowing novel decentralized schemes to be investigated.

    Vertical Industries Requirements Analysis & Targeted KPIs for Advanced 5G Trials

    Full text link
    Just before the commercial roll-out of European 5G networks, 5G trials in realistic environments have recently been initiated all around Europe as part of the Phase 3 projects of the 5GPPP H2020 program [1]. The goal is to showcase 5G's capabilities and to convince stakeholders of its value-adding business potential. The approach is to offer advanced 5G connectivity to real vertical industries and showcase how it enables them to overcome existing 4G network limitations and other long-standing issues. The 5G EVE H2020 5GPPP project [2] offers cutting-edge 5G end-to-end facilities (in 4 countries) to diversified vertical-industry experimenters. The objective is to understand the needs of prominent industries across Europe and to offer a tailor-made 5G experience to each and every one of them. This paper contributes to the understanding of vertical services' needs by offering a thorough and concise vertical requirements analysis methodology, including an examination of 4G limitations. It also provides real-life values for the targeted KPIs of three vertical sectors, namely Smart Industry (4.0), Smart Cities/Health, and Smart Energy, while assisting market roll-out by prioritizing their connectivity needs. Comment: EuCNC 201

    Sustainable Infrastructure and South Mountain Village: Building Energy Use

    Get PDF
    This report examines the energy infrastructure in the South Mountain Village of Phoenix, AZ. The report supports the Rio Grande 2.0 project being implemented by the City of Phoenix in conjunction with Arizona State University. The report focuses on a small section of the village, for which we create energy demand profiles, solar generation profiles, and solar-plus-storage generation profiles. We use these profiles to demonstrate the impact that neighborhood solar will have on the grid. We additionally research SRP's deployment of smart grid technologies and SRP's plans for the future of its power system. The report examines the benefits and challenges of microgrid development in South Mountain Village. We undertake this study to identify strategies that increase energy efficiency, implement resilient and redundant systems in the existing energy grid, and provide flexibility and adaptability to the community's energy systems. Deploying these strategies will help ensure the sustained provision of energy to the community in the event of catastrophic events. We demonstrate that installing rooftop solar photovoltaics on residential buildings in conjunction with battery storage systems is more than sufficient to provide power to the residents of South Mountain Village. We explore the benefits and challenges of developing smart grid infrastructure and microgrid networks in the village. We determine that the implementation of a smart grid and a parallel microgrid improves the resiliency of the Village's energy systems. While SRP has made progressive steps in implementing smart grid technologies, it can continue this progression by developing a unified communication system, secured through cybersecurity measures, to allow for reliable energy service to its customers. A hybrid development of smart grid and microgrid technologies in the village that employs rooftop solar photovoltaics and battery storage will provide community members with the resilient energy infrastructure they require in a future that entails multiplied risks of catastrophic events such as increased heat waves and cyber attacks.
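The solar-plus-storage profiles described above amount to a dispatch calculation: excess midday solar charges the battery, which later covers evening deficits. A greedy hourly sketch with assumed demand, solar, and battery parameters (not the report's measured profiles):

```python
def grid_imports(demand_kw, solar_kw, battery_kwh, battery_kw):
    """Greedy dispatch over hourly profiles: charge the battery from
    surplus solar, discharge it to cover deficits, import the rest."""
    soc = 0.0       # battery state of charge, kWh
    imports = []
    for d, s in zip(demand_kw, solar_kw):
        balance = d - s
        if balance < 0:  # surplus hour: charge the battery
            soc += min(-balance, battery_kw, battery_kwh - soc)
            imports.append(0.0)
        else:            # deficit hour: discharge the battery first
            discharge = min(balance, battery_kw, soc)
            soc -= discharge
            imports.append(balance - discharge)
    return imports

# Assumed flat 2 kW household demand and a midday solar peak.
demand = [2.0] * 24
solar = [0.0] * 6 + [1, 3, 5, 6, 6, 5, 3, 1] + [0.0] * 10
imports = grid_imports(demand, solar, battery_kwh=10.0, battery_kw=3.0)
```

With these assumed numbers the battery shifts the midday surplus into the evening, cutting daily grid imports from 34 kWh (solar only) to 24 kWh; the overnight hours still rely on the grid, which is where the microgrid and smart grid measures discussed above come in.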

    Geodesic vulnerability approach for identification of critical buses in power systems

    Get PDF
    One of the most critical issues in the evaluation of power systems is the identification of critical buses. To this end, this paper proposes a new methodology that evaluates replacing the power flow technique with the geodesic vulnerability index to identify critical nodes in power grids. Both methods are applied comparatively to demonstrate the scope of the proposed approach. The applicability of the methodology is illustrated using the IEEE 118-bus test system as a case study. To identify the critical components, a node is first disconnected, and the performance of the resulting topology is evaluated through simulations of multiple cascading faults. Cascading events are simulated by randomly removing assets from a system that continually changes its structure with the elimination of each component. Thus, the ranking of the critical nodes is determined by evaluating the resulting performance of 118 different topologies and calculating the damage area under each of the cascading-failure disintegration curves. In summary, the feasibility and suitability of complex network theory for identifying critical nodes in power systems are demonstrated.
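The ranking procedure above can be mimicked on a toy graph: disconnect one node, then repeatedly remove random nodes while tracking the largest connected component; the smaller the area under this disintegration curve, the more critical the initially removed node. Everything below (the graph, trial count, seed) is an illustrative assumption, not the paper's IEEE 118-bus setup:

```python
import random
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component among surviving nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def damage_area(adj, node, trials=100, seed=0):
    """Mean normalized area under the disintegration curve after first
    disconnecting `node`; a smaller area means a more critical node."""
    rng, n, total = random.Random(seed), len(adj), 0.0
    for _ in range(trials):
        removed = {node}
        order = [u for u in adj if u != node]
        rng.shuffle(order)
        for u in order:
            total += largest_component(adj, removed) / n
            removed.add(u)
    return total / trials

# Toy star network: the hub (bus 0) should rank as more critical
# (smaller damage area) than any leaf bus.
star = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
```

On this star, `damage_area(star, 0)` comes out smaller than `damage_area(star, 1)`, matching the intuition that disconnecting the hub shatters the network while losing a leaf barely matters.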