
    Network recovery after massive failures

    This paper addresses the problem of efficiently restoring sufficient resources in a communications network to support the demand of mission-critical services after a large-scale disruption. We give a formulation of the problem as an MILP and show that it is NP-hard. We propose a polynomial time heuristic, called Iterative Split and Prune (ISP), that decomposes the original problem recursively into smaller problems, until it determines the set of network components to be restored. We performed extensive simulations by varying the topologies, the demand intensity, the number of critical services, and the disruption model. Compared to several greedy approaches, ISP performs better in terms of the number of repaired components and does not result in any demand loss. It performs very close to the optimal when the demand is low with respect to the supply network capacities, thanks to the ability of the algorithm to maximize sharing of repaired resources.
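    A minimal sketch of how such a restoration problem can be written as an MILP, using the PuLP modeling library. The toy graph, capacities, and single demand pair below are invented for illustration, and the model is a simplified stand-in for the paper's formulation, which handles many concurrent critical demands.

```python
# Hypothetical toy instance: choose which failed links to repair so that one
# critical demand can be routed, minimizing the number of repairs. This is an
# illustrative MILP in the spirit of the abstract, not the paper's exact model.
import pulp

links = {("s", "a"): 10, ("a", "t"): 10, ("s", "b"): 10, ("b", "t"): 10}  # capacities
failed = {("s", "a"), ("a", "t"), ("s", "b")}        # links that need repair
demand = {("s", "t"): 8}                             # one mission-critical demand pair

prob = pulp.LpProblem("network_recovery", pulp.LpMinimize)
repair = {e: pulp.LpVariable(f"repair_{e[0]}_{e[1]}", cat="Binary") for e in failed}
flow = {e: pulp.LpVariable(f"flow_{e[0]}_{e[1]}", lowBound=0) for e in links}

# Objective: minimize the number of repaired components
prob += pulp.lpSum(repair.values())

# Capacity is only available on working or repaired links
for e, cap in links.items():
    prob += flow[e] <= cap * (repair[e] if e in failed else 1)

# Flow conservation for the single commodity
nodes = {n for e in links for n in e}
for n in nodes:
    out_f = pulp.lpSum(flow[e] for e in links if e[0] == n)
    in_f = pulp.lpSum(flow[e] for e in links if e[1] == n)
    net = sum(d for (s, t), d in demand.items() if s == n) - \
          sum(d for (s, t), d in demand.items() if t == n)
    prob += out_f - in_f == net

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([e for e in failed if repair[e].value() > 0.5])   # links chosen for repair
```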

    On critical service recovery after massive network failures

    This paper addresses the problem of efficiently restoring sufficient resources in a communications network to support the demand of mission-critical services after a large-scale disruption. We give a formulation of the problem as a mixed integer linear program and show that it is NP-hard. We propose a polynomial time heuristic, called Iterative Split and Prune (ISP), that decomposes the original problem recursively into smaller problems, until it determines the set of network components to be restored. ISP's decisions are guided by a new notion of demand-based centrality of nodes. We performed extensive simulations by varying the topologies, the demand intensity, the number of critical services, and the disruption model. Compared with several greedy approaches, ISP performs better in terms of the total cost of repaired components and does not result in any demand loss. It performs very close to the optimal when the demand is low with respect to the supply network capacities, thanks to the ability of the algorithm to maximize sharing of repaired resources.
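    The abstract does not define demand-based centrality in detail; the sketch below shows one plausible reading of the idea, ranking nodes by how much critical demand would traverse them on shortest paths in a toy topology (all names and numbers are assumptions, not the paper's definition).

```python
# Hypothetical sketch: rank nodes by the volume of critical demand that would
# pass through them on shortest paths -- one possible reading of a
# demand-based centrality, used here only for illustration.
import networkx as nx
from collections import defaultdict

G = nx.Graph([("s1", "a"), ("a", "b"), ("b", "t1"), ("s2", "a"), ("b", "t2")])
demands = {("s1", "t1"): 5.0, ("s2", "t2"): 3.0}   # critical demand pairs

centrality = defaultdict(float)
for (src, dst), volume in demands.items():
    path = nx.shortest_path(G, src, dst)
    for node in path[1:-1]:                        # interior nodes carry the demand
        centrality[node] += volume

# Nodes with the highest demand-based centrality would be restored first
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
```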

    Finding Streams in Knowledge Graphs to Support Fact Checking

    The volume and velocity of information that gets generated online limits current journalistic practices to fact-check claims at the same rate. Computational approaches for fact checking may be the key to help mitigate the risks of massive misinformation spread. Such approaches can be designed not only to be scalable and effective at assessing the veracity of dubious claims, but also to boost a human fact checker's productivity by surfacing relevant facts and patterns to aid their analysis. To this end, we present a novel, unsupervised network-flow based approach to determine the truthfulness of a statement of fact expressed in the form of a (subject, predicate, object) triple. We view a knowledge graph of background information about real-world entities as a flow network, and knowledge as a fluid, abstract commodity. We show that computational fact checking of such a triple then amounts to finding a "knowledge stream" that emanates from the subject node and flows toward the object node through paths connecting them. Evaluation on a range of real-world and hand-crafted datasets of facts related to entertainment, business, sports, geography and more reveals that this network-flow model can be very effective in discerning true statements from false ones, outperforming existing algorithms on many test cases. Moreover, the model is expressive in its ability to automatically discover several useful path patterns and surface relevant facts that may help a human fact checker corroborate or refute a claim.
    Comment: Extended version of the paper in proceedings of ICDM 201
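    A small sketch of the flow-network view of fact checking: a toy knowledge graph is built from triples, edge capacities are set with a simple inverse-degree proxy (an assumption for illustration, not the paper's weighting scheme), and a claim is scored by the maximum flow between its subject and object.

```python
# Illustrative "knowledge stream" scoring on a toy knowledge graph.
import networkx as nx

triples = [
    ("Barack_Obama", "bornIn", "Honolulu"),
    ("Honolulu", "locatedIn", "Hawaii"),
    ("Hawaii", "partOf", "United_States"),
    ("Barack_Obama", "presidentOf", "United_States"),
]

G = nx.DiGraph()
for s, p, o in triples:
    # knowledge may be traversed in either direction
    G.add_edge(s, o, predicate=p)
    G.add_edge(o, s, predicate=p)

# Capacity decays with endpoint degree so that generic hub nodes contribute
# less specific knowledge (a simple proxy chosen for this sketch).
for u, v in G.edges():
    G[u][v]["capacity"] = 1.0 / (1.0 + max(G.out_degree(u), G.out_degree(v)))

def knowledge_stream(subject, obj):
    flow_value, _ = nx.maximum_flow(G, subject, obj, capacity="capacity")
    return flow_value

# Score for the claim (Barack_Obama, bornIn, Hawaii); in practice a direct
# edge for the claimed predicate, if present, would be removed first.
print(knowledge_stream("Barack_Obama", "Hawaii"))   # higher => more support
```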

    Dynamic Energy Management

    We present a unified method, based on convex optimization, for managing the power produced and consumed by a network of devices over time. We start with the simple setting of optimizing power flows in a static network, and then proceed to the case of optimizing dynamic power flows, i.e., power flows that change with time over a horizon. We leverage this to develop a real-time control strategy, model predictive control, which at each time step solves a dynamic power flow optimization problem, using forecasts of future quantities such as demands, capacities, or prices, to choose the current power flow values. Finally, we consider a useful extension of model predictive control that explicitly accounts for uncertainty in the forecasts. We mirror our framework with an object-oriented software implementation, an open-source Python library for planning and controlling power flows at any scale. We demonstrate our method with various examples. Appendices give more detail about the package, and describe some basic but very effective methods for constructing forecasts from historical data.
    Comment: 63 pages, 15 figures, accompanying open-source library
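    The sketch below illustrates the model predictive control loop with CVXPY directly rather than the paper's accompanying library: at each step a small storage-plus-grid system is scheduled against forecast demand and prices over a receding horizon, and only the first decision is applied. The data and the single-battery model are invented for illustration.

```python
# Minimal MPC sketch: re-solve a convex dynamic power flow problem at every
# step using a forecast, then apply only the first decision.
import numpy as np
import cvxpy as cp

T, horizon = 24, 8
rng = np.random.default_rng(0)
true_demand = 2.0 + rng.uniform(-0.5, 0.5, T)          # kW
true_price = 0.10 + 0.05 * np.sin(np.arange(T) / 3)    # $/kWh

soc, soc_max, rate_max = 2.0, 5.0, 1.5                 # battery state and limits
applied = []

for t in range(T - horizon):
    # Forecasts: here just the true values plus noise, as a stand-in.
    d = true_demand[t:t + horizon] + rng.normal(0, 0.1, horizon)
    p = true_price[t:t + horizon]

    grid = cp.Variable(horizon, nonneg=True)   # power bought from the grid
    charge = cp.Variable(horizon)              # battery charge (+) / discharge (-)
    s = cp.Variable(horizon + 1)               # state of charge trajectory

    constraints = [
        s[0] == soc,
        s[1:] == s[:-1] + charge,
        s >= 0, s <= soc_max,
        cp.abs(charge) <= rate_max,
        grid == d + charge,                    # grid serves demand plus charging
    ]
    cp.Problem(cp.Minimize(p @ grid), constraints).solve()

    applied.append(float(grid.value[0]))       # apply only the first step
    soc = float(s.value[1])

print(np.round(applied, 2))
```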

    Integrated network flow model for a reliability assessment of the national electric energy system

    Electric energy availability and price depend not only on the electric generation and transmission facilities, but also on the infrastructure associated with the production, transportation, and storage of coal and natural gas. As the U.S. energy system has grown more complex and interdependent, failure or degradation in the performance of one or more of its components may result in more severe consequences for overall system performance. The effects of a contingency in one or more facilities may propagate and affect the operation, in terms of availability and energy price, of other facilities in the energy grid. In this dissertation, a novel approach for analyzing the different energy subsystems in an integrated analytical framework is presented, using a simplified representation of the energy infrastructure structured as an integrated, generalized, multi-period network flow model. The model is capable of simulating the energy system operation in terms of bulk energy movements between the different facilities and prices at different locations under different scenarios. Assessment of reliability and congestion in the grid is performed through the introduction and development of nodal price-based metrics, which prove to be especially valuable for assessing conditions related to changes in the capacity of one or more of the facilities. Nodal price-based metrics are developed with the specific objectives of evaluating the impact of disruptions and of assessing capacity expansion projects. These metrics are supported by studying the relationship between nodal prices and congestion using duality theory. Techniques aimed at identifying system vulnerabilities and conditions that may significantly impact the availability and price of electrical energy are also developed. The techniques introduced and developed through this work are tested using 2005 data, and special effort is devoted to modeling and studying the effects of hurricanes Katrina and Rita on the energy system. In summary, this research is a step forward in the direction of an integrated analysis of the electric subsystem and the fossil fuel production and transportation networks, presenting a set of tools for a more comprehensive assessment of congestion, reliability, and the effects of disruptions in the U.S. energy grid.
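    A tiny illustration of the nodal-price idea: in the linear program below, a gas source feeds a power plant that serves an electric load, and the duals of the nodal balance constraints play the role of nodal prices. The three-node topology and all numbers are made up; the dissertation's multi-period, multi-fuel model is far richer.

```python
# Nodal prices as LP duals in a toy generalized network flow model.
import numpy as np
from scipy.optimize import linprog

# arcs: gas_well->plant (x1), plant->load (x2), backup_import->load (x3)
cost = np.array([2.0, 1.0, 30.0])      # $/unit moved on each arc
upper = np.array([80.0, 60.0, 100.0])  # arc capacities (80 = gas availability)

# nodal balance rows (inflow - outflow = demand): plant, electric load
A_eq = np.array([
    [1.0, -1.0, 0.0],   # plant: gas in minus electricity out = 0
    [0.0,  1.0, 1.0],   # load: plant output plus backup import = demand
])
b_eq = np.array([0.0, 70.0])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq,
              bounds=list(zip(np.zeros(3), upper)), method="highs")
print("flows:", res.x)
print("nodal prices:", res.eqlin.marginals)  # duals of the balance constraints
```

    With the plant-to-load arc congested at 60 units, the marginal unit at the load comes from the expensive backup import, which is exactly the kind of congestion-driven price separation the nodal metrics are meant to expose.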

    Flexible Transmission Network Planning Considering the Impacts of Distributed Generation

    The restructuring of global power industries has introduced a number of challenges, such as conflicting planning objectives and increasing uncertainties, to transmission network planners. During the recent past, a number of distributed generation technologies have also reached a stage allowing large-scale implementation, which will profoundly influence the power industry, as well as the practice of transmission network expansion. In the new market environment, new approaches are needed to meet the above challenges. In this paper, a market simulation based method is employed to assess the economic attractiveness of different generation technologies, based on which future scenarios of generation expansion can be formed. A multi-objective optimization model for transmission expansion planning is then presented. A novel approach is proposed to select transmission expansion plans that are flexible given the uncertainties of generation expansion, system load, and other market variables. Comprehensive case studies are conducted to investigate the performance of our approach. In addition, the proposed method is employed to study the impacts of distributed generation, especially on transmission expansion planning.
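    One simple way to operationalize "flexible" plan selection under uncertain generation futures is minimax regret across scenarios, sketched below with an invented cost table; the paper combines this kind of screening with market simulation and multi-objective optimization rather than this exact rule.

```python
# Illustrative minimax-regret screening of candidate expansion plans.
candidate_plans = ["corridor_A", "corridor_B", "A_plus_B"]
scenarios = ["high_DG", "central_gas", "renewables"]

# total cost (investment + expected congestion) of each plan per scenario
cost = {
    "corridor_A": {"high_DG": 120, "central_gas": 210, "renewables": 160},
    "corridor_B": {"high_DG": 150, "central_gas": 170, "renewables": 190},
    "A_plus_B":   {"high_DG": 140, "central_gas": 180, "renewables": 150},
}

best_in_scenario = {s: min(cost[p][s] for p in candidate_plans) for s in scenarios}
max_regret = {
    p: max(cost[p][s] - best_in_scenario[s] for s in scenarios)
    for p in candidate_plans
}
flexible_plan = min(max_regret, key=max_regret.get)   # smallest worst-case regret
print(flexible_plan, max_regret)
```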

    Competent genetic-evolutionary optimization of water distribution systems

    A genetic algorithm has been applied to the optimal design and rehabilitation of a water distribution system. Many of the previous applications have been limited to small water distribution systems, where the computer time used for solving the problem has been relatively small. In order to apply genetic and evolutionary optimization techniques to a large-scale water distribution system, this paper employs one of the competent genetic-evolutionary algorithms, a messy genetic algorithm, to enhance the efficiency of the optimization procedure. Maximum flexibility is ensured by the formulation of a string and solution representation scheme, a fitness definition, and the integration of a well-developed hydraulic network solver, which together facilitate the application of a genetic algorithm to the optimization of a water distribution system. Two benchmark problems of water pipeline design and a real water distribution system are presented to demonstrate the application of the improved technique. The results obtained show that the number of design trials required by the messy genetic algorithm is consistently smaller than that required by the other genetic algorithms.
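    For illustration, the sketch below shows a plain generational genetic algorithm sizing a handful of pipes, with a crude head-loss proxy standing in for a hydraulic network solver; the paper instead uses a messy genetic algorithm coupled to a full solver, so treat this only as a picture of the encoding and fitness idea, with all data invented.

```python
# Toy GA for pipe sizing: genes are diameter choices, fitness is pipe cost
# plus a penalty when a simple head-loss proxy exceeds the allowed value.
import random

random.seed(1)
DIAMETERS = [0.10, 0.15, 0.20, 0.25, 0.30]                          # m
UNIT_COST = {0.10: 50, 0.15: 90, 0.20: 140, 0.25: 200, 0.30: 270}   # $/m
PIPE_LENGTHS = [300, 200, 400, 250]                                  # m
FLOWS = [0.02, 0.015, 0.03, 0.01]                                    # m^3/s
HEAD_LIMIT = 25.0                                                    # m total head loss

def head_loss(d, q, length):
    # Hazen-Williams style proxy (C = 130), enough for a toy fitness term
    return 10.67 * length * (q / 130.0) ** 1.852 / d ** 4.87

def fitness(genes):
    cost = sum(UNIT_COST[d] * L for d, L in zip(genes, PIPE_LENGTHS))
    loss = sum(head_loss(d, q, L) for d, q, L in zip(genes, FLOWS, PIPE_LENGTHS))
    return cost + 1e6 * max(0.0, loss - HEAD_LIMIT)    # punish infeasible designs

def evolve(pop_size=40, generations=60, mutation=0.1):
    pop = [[random.choice(DIAMETERS) for _ in PIPE_LENGTHS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(PIPE_LENGTHS))
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < mutation:
                child[random.randrange(len(child))] = random.choice(DIAMETERS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, round(fitness(best)))
```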

    Heuristics with Performance Guarantees for the Minimum Number of Matches Problem in Heat Recovery Network Design

    Heat exchanger network synthesis exploits excess heat by integrating process hot and cold streams and improves energy efficiency by reducing utility usage. Determining provably good solutions to the minimum number of matches problem is a bottleneck of designing a heat recovery network using the sequential method. This subproblem is an NP-hard mixed-integer linear program exhibiting combinatorial explosion in the possible hot and cold stream configurations. We explore this challenging optimization problem from a graph theoretic perspective and relate it to other special optimization problems such as cost flow networks and packing problems. In the case of a single temperature interval, we develop a new optimization formulation without problematic big-M parameters. We develop heuristic methods with performance guarantees using three approaches: (i) relaxation rounding, (ii) water filling, and (iii) greedy packing. Numerical results from a collection of 51 instances substantiate the strength of the methods.
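    The greedy idea for a single temperature interval can be pictured with a few lines of code: repeatedly match the hot stream with the most remaining heat against the cold stream with the most remaining demand. The heat loads are invented, and this is only an illustration of the flavor of a packing-style heuristic, not necessarily the paper's exact greedy packing rule or its guarantee.

```python
# Toy greedy matching for one heat-balanced temperature interval.
hot = {"H1": 90.0, "H2": 60.0, "H3": 30.0}    # residual heat loads (kW)
cold = {"C1": 100.0, "C2": 80.0}              # residual cold demands (kW)
# Note: total hot load equals total cold demand, so the loop terminates.

matches = []
while any(q > 1e-9 for q in hot.values()):
    h = max(hot, key=hot.get)                 # hot stream with most remaining heat
    c = max(cold, key=cold.get)               # cold stream with most remaining demand
    q = min(hot[h], cold[c])                  # exchange as much as possible
    matches.append((h, c, q))
    hot[h] -= q
    cold[c] -= q

print(len(matches), "matches:", matches)
```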