124 research outputs found

    The N-K Problem in Power Grids: New Models, Formulations and Numerical Experiments (extended version)

    Given a power grid modeled by a network together with equations describing the power flows, power generation and consumption, and the laws of physics, the so-called N-k problem asks whether there exists a set of k or fewer arcs whose removal will cause the system to fail. The case where k is small is of practical interest. We present theoretical and computational results involving a mixed-integer model and a continuous nonlinear model related to this question.
    Comment: 40 pages, 3 figures
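
    As a point of reference only (this is a generic textbook-style sketch, not the authors' formulation), the N-k question can be phrased as an attacker's max-min problem under a DC power-flow approximation, with binary interdiction variables z_a per arc and load shed s_i as the failure measure:

    \begin{align*}
    \max_{z}\ \min_{f,\theta,g,s}\quad & \sum_{i \in N} s_i && \text{(total load shed caused by the attack)}\\
    \text{s.t.}\quad & \sum_{a \in A} z_a \le k, \quad z_a \in \{0,1\} && \text{(remove at most $k$ arcs)}\\
    & f_a = B_a \bigl(\theta_{o(a)} - \theta_{d(a)}\bigr)(1 - z_a) && \text{(DC flow on surviving arcs)}\\
    & |f_a| \le \bar f_a (1 - z_a) && \text{(line limits)}\\
    & \sum_{a \in \delta^+(i)} f_a - \sum_{a \in \delta^-(i)} f_a = g_i + s_i - d_i && \text{(nodal balance at bus $i$)}\\
    & 0 \le g_i \le \bar g_i, \qquad 0 \le s_i \le d_i.
    \end{align*}

    In this toy reading, the grid "fails" for a given k when the optimal load shed exceeds a prescribed threshold; the bilinear products of z and theta would normally be linearized with big-M constraints before solving, and the paper's mixed-integer and continuous nonlinear models are more refined than this sketch.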

    Evaluating Resilience of Electricity Distribution Networks via A Modification of Generalized Benders Decomposition Method

    This paper presents a computational approach to evaluate the resilience of electricity Distribution Networks (DNs) to cyber-physical failures. In our model, we consider an attacker who targets multiple DN components to maximize the loss of the DN operator. We consider two types of operator response: (i) coordinated emergency response; (ii) uncoordinated autonomous disconnects, which may lead to cascading failures. To evaluate resilience under response (i), we solve a Bilevel Mixed-Integer Second-Order Cone Program, which is computationally challenging due to the mixed-integer variables in the inner problem and the non-convex constraints. Our solution approach is based on the Generalized Benders Decomposition method, which achieves a reasonable tradeoff between computational time and solution accuracy. Our approach involves modifying the Benders cut based on structural insights on power flow over radial DNs. We evaluate DN resilience under response (ii) by sequentially computing autonomous component disconnects due to operating-bound violations resulting from the initial attack and the potential cascading failures. Our approach helps estimate the gain in resilience under response (i) relative to response (ii).
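
    To make the "uncoordinated autonomous disconnects" response (ii) concrete, the toy simulation below knocks out an attacked component, recomputes flows with a deliberately simple rule (flow on a line of a radial feeder equals the net demand downstream of it), trips any overloaded lines, and repeats until no operating bound is violated. The feeder topology, demands, generator sizes, capacities, and attacked components are all hypothetical, the flow rule stands in for the paper's power-flow model, and the Benders-based evaluation of response (i) is not reproduced here.

    import networkx as nx

    def edge_flows(g: nx.DiGraph, net_demand: dict) -> dict:
        """Flow on each line of a radial feeder = total net demand downstream of it."""
        return {
            (u, v): sum(net_demand[n] for n in nx.descendants(g, v) | {v})
            for u, v in g.edges()
        }

    def simulate_cascade(feeder: nx.DiGraph, demand: dict, gen: dict, capacity: dict, attacked_gens):
        """Knock out the attacked generators, then trip overloaded lines until flows are within bounds."""
        g = feeder.copy()
        live_gen = {n: (0.0 if n in attacked_gens else gen.get(n, 0.0)) for n in g.nodes()}
        while True:
            served = nx.descendants(g, 0) | {0}          # nodes still fed by substation 0
            net = {n: (demand.get(n, 0.0) - live_gen[n]) if n in served else 0.0 for n in g.nodes()}
            overloaded = [e for e, f in edge_flows(g, net).items() if abs(f) > capacity[e] + 1e-9]
            if not overloaded:
                return sum(demand.get(n, 0.0) for n in g.nodes() if n not in served)  # lost load
            g.remove_edges_from(overloaded)              # autonomous disconnects

    # Hypothetical 5-node feeder rooted at substation 0, with local generation at node 4.
    feeder = nx.DiGraph([(0, 1), (1, 2), (1, 3), (3, 4)])
    demand = {1: 1.0, 2: 2.0, 3: 1.5, 4: 2.5}
    gen = {4: 2.0}
    capacity = {(0, 1): 8.0, (1, 2): 3.0, (1, 3): 4.5, (3, 4): 1.0}
    print(simulate_cascade(feeder, demand, gen, capacity, attacked_gens={4}))  # lost load: 2.5

    In this toy run, losing the generator at node 4 overloads line (3, 4), which trips autonomously and strands the load at node 4; the attack itself removes no line, yet load is lost through the cascade.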

    The distance-based critical node detection problem: models and algorithms

    In the wake of terrorism and natural disasters, assessing networked systems for vulnerability to failures that arise from these events is essential to maintaining the operations of the systems. This is crucial given the heavy dependence of daily social and economic activities on networked systems such as transport, telecommunication and energy networks, as well as the interdependence of these networks. In this thesis, we explore methods to assess the vulnerability of networked systems to element failures which employ connectivity as the performance measure for vulnerability. The associated optimisation problem, termed the critical node (edge) detection problem, seeks to identify a subset of nodes (edges) of a network whose deletion (failure) optimises a network connectivity objective. Traditional connectivity measures employed in most studies of the critical node detection problem overlook the internal cohesiveness of networks and the extent of connectivity in the network. This limits the effectiveness of the developed methods in uncovering vulnerability with regard to network connectivity. Our work therefore focuses on distance-based connectivity, a fairly new class of connectivity introduced for studying the critical node detection problem to overcome the limitations of traditional connectivity measures. In Chapter 1, we provide an introduction outlining the motivations and the methods related to our study. In Chapter 2, we review the literature on the critical node detection problem as well as its application areas and related problems. Following this, we formally introduce the distance-based critical node detection problem in Chapter 3, where we propose new integer programming models for the case of hop-based distances and an efficient algorithm for the separation problems associated with the models. We also propose two families of valid inequalities. In Chapter 4, we consider the distance-based critical node detection problem using a heuristic approach in which we propose a centrality-based heuristic that employs a backbone crossover and a centrality-based neighbourhood search. In Chapter 5, we present generalisations of the methods proposed in Chapter 3 to edge-weighted graphs. We also introduce the edge-deletion version of the problem, which we term the distance-based critical edge detection problem. Throughout Chapters 3, 4 and 5, we provide computational experiments. Finally, in Chapter 6 we present conclusions as well as future research directions.
    Keywords: Network Vulnerability, Critical Node Detection Problem, Distance-based Connectivity, Integer Programming, Lazy Constraints, Branch-and-cut, Heuristics.
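
    The hop-based objective studied in the thesis can be stated compactly in code. The sketch below is only an illustrative baseline, not the integer programming models or the centrality-based metaheuristic described above: it counts node pairs within a hop threshold D and greedily deletes the node whose removal reduces that count the most. The example graph, threshold and budget are hypothetical.

    import networkx as nx

    def hop_limited_connectivity(g: nx.Graph, D: int) -> int:
        """Number of unordered node pairs whose shortest-path (hop) distance is at most D."""
        total = 0
        for u in g.nodes():
            reached = nx.single_source_shortest_path_length(g, u, cutoff=D)
            total += len(reached) - 1          # exclude u itself
        return total // 2                      # each pair is counted from both endpoints

    def greedy_distance_cnd(g: nx.Graph, D: int, budget: int):
        """Greedy baseline: repeatedly delete the node whose removal hurts connectivity most."""
        g = g.copy()
        removed = []
        for _ in range(budget):
            best = min(g.nodes(),
                       key=lambda v: hop_limited_connectivity(nx.restricted_view(g, [v], []), D))
            removed.append(best)
            g.remove_node(best)
        return removed, hop_limited_connectivity(g, D)

    toy = nx.petersen_graph()
    print(greedy_distance_cnd(toy, D=2, budget=2))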

    Casting Light on the Hidden Bilevel Combinatorial Structure of the Capacitated Vertex Separator Problem

    Given an undirected graph, we study the capacitated vertex separator problem that asks to find a subset of vertices of minimum cardinality, the removal of which induces a graph having a bounded number of pairwise disconnected shores (subsets of vertices) of limited cardinality. The problem is of great importance in the analysis and protection of communication or social networks against possible viral attacks and for matrix decomposition algorithms. In this article, we provide a new bilevel interpretation of the problem and model it as a two-player Stackelberg game in which the leader interdicts the vertices (i.e., decides on the subset of vertices to remove), and the follower solves a combinatorial optimization problem on the resulting graph. This approach allows us to develop a computational framework based on an integer programming formulation in the natural space of the variables. Thanks to this bilevel interpretation, we derive three different families of strengthening inequalities and show that they can be separated in polynomial time. We also show how to extend these results to a min-max version of the problem. Our extensive computational study conducted on available benchmark instances from the literature reveals that our new exact method is competitive against the state-of-the-art algorithms for the capacitated vertex separator problem and is able to improve the best-known results for several difficult classes of instances. The ideas exploited in our framework can also be extended to other vertex/edge deletion/insertion problems or graph partitioning problems by modeling them as two-player Stackelberg games and solving them through bilevel optimization.
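
    For orientation, a compact assignment-type integer program is often used as a baseline for this problem (it is not the article's natural-space bilevel formulation, whose point is precisely to avoid the shore-assignment variables). With x_v = 1 if vertex v is removed, y_v^i = 1 if v is placed in shore i, at most p shores and shore capacity Q:

    \begin{align*}
    \min\quad & \sum_{v \in V} x_v \\
    \text{s.t.}\quad & x_v + \sum_{i=1}^{p} y_v^{i} = 1 && \forall v \in V && \text{(every vertex is removed or assigned)}\\
    & \sum_{v \in V} y_v^{i} \le Q && i = 1,\dots,p && \text{(shore capacity)}\\
    & y_u^{i} + y_v^{j} \le 1 && \forall \{u,v\} \in E,\ i \ne j && \text{(surviving neighbours share a shore)}\\
    & x_v,\ y_v^{i} \in \{0,1\}.
    \end{align*}

    The symmetry among the shore indices and the sheer number of y variables are what make such a baseline weak in practice, which is one motivation for working in the natural space of the x variables as the article does.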

    Mathematical Programming Formulations for the Collapsed k-Core Problem

    In social network analysis, the size of the k-core, i.e., the maximal induced subgraph of the network with minimum degree at least k, is frequently adopted as a typical metric to evaluate the cohesiveness of a community. We address the Collapsed k-Core Problem, which seeks to find a subset of b users, namely the most critical users of the network, the removal of which results in the smallest possible k-core. For the first time, both the problem of finding the k-core of a network and the Collapsed k-Core Problem are formulated using mathematical programming. On the one hand, we model the Collapsed k-Core Problem as a natural deletion-round-indexed Integer Linear formulation. On the other hand, we provide two bilevel programs for the problem, which differ in the way in which the k-core identification problem is formulated at the lower level. The first bilevel formulation is reformulated as a single-level sparse model, exploiting a Benders-like decomposition approach. To derive the second bilevel model, we provide a linear formulation for finding the k-core and use it to state the lower-level problem. We then dualize the lower level and obtain a compact Mixed-Integer Nonlinear single-level problem reformulation. We additionally derive a combinatorial lower bound on the value of the optimal solution and describe some pre-processing procedures and valid inequalities for the three formulations. The performance of the proposed formulations is compared on a set of benchmark instances with the existing state-of-the-art solver for mixed-integer bilevel problems proposed in (Fischetti et al., A New General-Purpose Algorithm for Mixed-Integer Bilevel Linear Programs, Operations Research 65(6), 2017).
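
    As background for what the formulations above are modelling, the k-core itself is classically computed by peeling low-degree vertices, and the Collapsed k-Core objective can be checked by brute force on small graphs. The sketch below is purely illustrative (it is not one of the paper's formulations, and brute force is exponential in b); the graph, k and b are hypothetical.

    import itertools
    import networkx as nx

    def k_core_size(g: nx.Graph, k: int) -> int:
        """Size of the k-core: repeatedly peel vertices of degree < k until none remain."""
        g = g.copy()
        while True:
            low = [v for v, d in g.degree() if d < k]
            if not low:
                return g.number_of_nodes()
            g.remove_nodes_from(low)

    def best_collapsers(g: nx.Graph, k: int, b: int):
        """Brute force over all b-subsets: the deletion set leaving the smallest k-core."""
        return min(itertools.combinations(g.nodes(), b),
                   key=lambda s: k_core_size(nx.restricted_view(g, s, []), k))

    toy = nx.karate_club_graph()
    chosen = best_collapsers(toy, k=3, b=2)
    print(chosen, k_core_size(nx.restricted_view(toy, chosen, []), k=3))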

    Contingency-Constrained Unit Commitment With Intervening Time for System Adjustments

    The N-1-1 contingency criterion considers the consecutive loss of two components in a power system, with intervening time for system adjustments. In this paper, we consider the problem of optimizing generation unit commitment (UC) while ensuring N-1-1 security. Due to the coupling of time periods associated with consecutive component losses, the resulting problem is a very large-scale mixed-integer linear optimization model. For efficient solution, we introduce a novel branch-and-cut algorithm using a temporally decomposed bilevel separation oracle. The model and algorithm are assessed using multiple IEEE test systems, and a comprehensive analysis is performed to compare system performance across different contingency criteria. Computational results demonstrate the value of considering intervening time for system adjustments in terms of total cost and system robustness.
    Comment: 8 pages, 5 figures
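
    Schematically (and only schematically; the paper's model is a full time-coupled UC), N-1-1 security with intervening time asks the commitment x to admit a chain of feasible dispatches, where each post-contingency dispatch may move away from the previous one only as far as the adjustment period allows:

    \begin{align*}
    \min_{x \in X}\quad & \text{commitment cost}(x) + \text{dispatch cost}(p^{0}) \\
    \text{s.t.}\quad & p^{0} \in P^{0}(x) && \text{(base-case feasibility)}\\
    & \forall c_1:\ \exists\, p^{c_1} \in P^{c_1}(x) \ \text{with}\ \lVert p^{c_1} - p^{0} \rVert_{\infty} \le \Delta && \text{(adjustment after the first outage)}\\
    & \forall c_1, c_2:\ \exists\, p^{c_1 c_2} \in P^{c_1 c_2}(x) \ \text{with}\ \lVert p^{c_1 c_2} - p^{c_1} \rVert_{\infty} \le \Delta && \text{(adjustment after the second outage)}
    \end{align*}

    Here Delta is a stand-in for ramp-type limits over the intervening time, and the exponential number of (c_1, c_2) pairs is what makes generating such conditions as cuts during branch-and-cut attractive compared with enumerating them all upfront.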

    Synthesis, Interdiction, and Protection of Layered Networks

    This research developed the foundation, theory, and framework for a set of analysis techniques to assist decision makers in analyzing questions regarding the synthesis, interdiction, and protection of infrastructure networks. This includes an extension of traditional network interdiction to directly model nodal interdiction; new techniques to identify potential targets in social networks based on extensions of shortest-path network interdiction; an extension of traditional network interdiction to include layered network formulations; and models and techniques to design robust layered networks while considering trade-offs with cost. These approaches identify the maximum protection/disruption possible across layered networks with limited resources, find the most robust layered network design possible given the budget limitations while ensuring that demands are met, include traditional social network analysis, and incorporate new techniques to model the interdiction of nodes and edges throughout the formulations. In addition, the importance and effects of multiple optimal solutions for these (and similar) models are investigated. All the models developed are demonstrated on notional examples and tested on a range of sample problem sets.
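
    As a reference point for the interdiction building block mentioned above, the classical shortest-path interdiction model (which the dissertation extends to nodal and layered settings; this sketch is not the layered formulation itself) reads:

    \begin{align*}
    \max_{z}\ \min_{y}\quad & \sum_{a \in A} (c_a + d_a z_a)\, y_a && \text{(length of the evader's best $s$-$t$ path)}\\
    \text{s.t.}\quad & N y = b_{st} && \text{(one unit of flow from $s$ to $t$)}\\
    & \sum_{a \in A} r_a z_a \le R && \text{(interdiction budget)}\\
    & z_a \in \{0,1\}, \quad y_a \ge 0,
    \end{align*}

    where interdicting arc a (setting z_a = 1) adds a delay d_a to its nominal length c_a at resource cost r_a, N is the node-arc incidence matrix, and b_{st} routes one unit of flow from source s to sink t. Dualizing the inner shortest-path problem yields the single-level MIP that such interdiction studies typically solve.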