20,012 research outputs found

    Contingency-Constrained Unit Commitment with Post-Contingency Corrective Recourse

    We consider the problem of minimizing costs in the generation unit commitment problem, a cornerstone of electric power system operations, while enforcing an N-k-e reliability criterion. This reliability criterion is a generalization of the well-known N-k criterion, and dictates that at least a (1 - e_j) fraction of the total system demand must be met following the failure of k or fewer system components. We refer to this problem as the Contingency-Constrained Unit Commitment problem, or CCUC. We present a mixed-integer programming formulation of the CCUC that accounts for both transmission and generation element failures. We propose novel cutting plane algorithms that avoid the need to explicitly consider an exponential number of contingencies. Computational studies on several IEEE test systems and a simplified model of the Western US interconnection network demonstrate the effectiveness of our proposed methods relative to the current state of the art.
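    The paper's specific cutting planes are not reproduced here, but the delayed-constraint pattern they rely on can be sketched. In the toy loop below, all capacities, costs, and demand figures are made up, and brute-force enumeration stands in for a real MIP solver; the point is only that contingencies enter the model lazily, via a separation oracle, rather than all at once.

```python
# Toy delayed-cut loop in the spirit of contingency-constrained
# commitment: solve a relaxed master problem, ask a separation oracle
# for a violated contingency, add it as a cut, and repeat.
from itertools import combinations

CAP = [30, 25, 20, 15]        # generator capacities (hypothetical)
COST = [2, 3, 4, 6]           # commitment costs (hypothetical)
DEMAND, K, E = 60, 1, 0.1     # serve >= (1 - E)*DEMAND after <= K failures

def survives(units, failed):
    """Remaining capacity after `failed` trip must cover (1 - E)*DEMAND."""
    return sum(CAP[i] for i in units - failed) >= (1 - E) * DEMAND

def separation(units):
    """Return one violated contingency (<= K failed units), or None."""
    for k in range(1, K + 1):
        for failed in map(set, combinations(units, k)):
            if not survives(units, failed):
                return failed
    return None

cuts = []
while True:
    # Master problem: cheapest commitment meeting demand and all cuts so far
    # (brute force over subsets; a real implementation would use a MIP solver).
    candidates = [
        set(u)
        for r in range(1, len(CAP) + 1)
        for u in combinations(range(len(CAP)), r)
        if sum(CAP[i] for i in u) >= DEMAND
        and all(survives(set(u), c) for c in cuts)
    ]
    best = min(candidates, key=lambda u: sum(COST[i] for i in u))
    violated = separation(best)
    if violated is None:
        break                  # best survives every k-failure contingency
    cuts.append(violated)      # add the violated contingency and re-solve

print("committed units:", sorted(best), "| cuts added:", len(cuts))
```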

    Measuring multivariate redundant information with pointwise common change in surprisal

    The problem of how to properly quantify redundant information is an open question that has been the subject of much recent research. Redundant information refers to information about a target variable S that is common to two or more predictor variables X_i. It can be thought of as quantifying overlapping information content or similarities in the representation of S between the X_i. We present a new measure of redundancy which measures the common change in surprisal shared between variables at the local or pointwise level. We provide a game-theoretic operational definition of unique information, and use this to derive constraints which are used to obtain a maximum entropy distribution. Redundancy is then calculated from this maximum entropy distribution by counting only those local co-information terms which admit an unambiguous interpretation as redundant information. We show how this redundancy measure can be used within the framework of the Partial Information Decomposition (PID) to give an intuitive decomposition of the multivariate mutual information into redundant, unique and synergistic contributions. We compare our new measure to existing approaches over a range of example systems, including continuous Gaussian variables. Matlab code for the measure is provided, including all considered examples.
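    The paper supplies Matlab code; purely as an illustration of the pointwise idea, the Python toy below computes local co-information terms directly from a joint distribution and keeps only those whose sign agrees with all the local mutual information terms. This is a simplified stand-in for the paper's sign-matching rule, and it skips the game-theoretically constrained maximum-entropy construction entirely.

```python
# Toy illustration of redundancy as pointwise common change in surprisal:
# sum local co-information only over outcomes where all local quantities
# agree in sign (an "unambiguous" term). Uses the raw joint distribution,
# not the paper's maximum-entropy distribution.
from math import log2
from collections import defaultdict

# p(x1, x2, s) for s = x1 AND x2, uniform inputs (a standard PID example)
p = {(x1, x2, x1 & x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}

def marginal(dist, axes):
    m = defaultdict(float)
    for outcome, pr in dist.items():
        m[tuple(outcome[a] for a in axes)] += pr
    return m

p1s, p2s, p12s = marginal(p, (0, 2)), marginal(p, (1, 2)), marginal(p, (0, 1))
p1, p2, ps = marginal(p, (0,)), marginal(p, (1,)), marginal(p, (2,))

redundancy = 0.0
for (x1, x2, s), pr in p.items():
    i1 = log2(p1s[(x1, s)] / (p1[(x1,)] * ps[(s,)]))    # local mi i(x1; s)
    i2 = log2(p2s[(x2, s)] / (p2[(x2,)] * ps[(s,)]))    # local mi i(x2; s)
    i12 = log2(pr / (p12s[(x1, x2)] * ps[(s,)]))        # local mi i(x1,x2; s)
    coi = i1 + i2 - i12                                 # local co-information
    # keep only unambiguous terms: every local quantity has the same sign
    if coi and (coi > 0) == (i1 > 0) == (i2 > 0) == (i12 > 0):
        redundancy += pr * coi

print(f"redundancy estimate: {redundancy:.4f} bits")
```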

    Exploiting Anonymity in Approximate Linear Programming: Scaling to Large Multiagent MDPs (Extended Version)

    Get PDF
    Many exact and approximate solution methods for Markov Decision Processes (MDPs) attempt to exploit structure in the problem and are based on factorization of the value function. Multiagent settings in particular, however, are known to suffer from an exponential increase in value component sizes as interactions become denser, meaning that approximation architectures are restricted in the problem sizes and types they can handle. We present an approach to mitigate this limitation for certain types of multiagent systems, exploiting a property that can be thought of as "anonymous influence" in the factored MDP. Anonymous influence summarizes joint variable effects efficiently whenever the explicit representation of variable identity in the problem can be avoided. We show how representational benefits from anonymity translate into computational efficiencies, both for general variable elimination in a factor graph and in particular for the approximate linear programming solution to factored MDPs. The latter allows linear programming to scale to factored MDPs that were previously unsolvable. Our results are shown for the control of a stochastic disease process over a densely connected graph with 50 nodes and 25 agents.
    Comment: Extended version of the AAAI 2016 paper.
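    As a rough illustration of the anonymity idea (not the paper's ALP machinery): if a factor depends only on how many of its binary neighbours are active, rather than which ones, expectations over it can be computed from the n + 1 possible counts instead of the 2^n joint assignments. The infection-style factor below is hypothetical.

```python
# Toy illustration of "anonymous influence": a factor over n binary
# variables that depends only on the COUNT of active variables can be
# summarized over counts, weighted binomially, in O(n) terms instead
# of enumerating all 2^n joint assignments.
from math import comb

n, p_on = 50, 0.3             # 50 binary neighbours, each active w.p. 0.3

def factor(count):
    """Hypothetical count-based factor: risk grows with active neighbours."""
    return 1 - (1 - 0.05) ** count

# Anonymous (count-based) expectation: n + 1 terms instead of 2^50.
expected = sum(
    comb(n, k) * p_on**k * (1 - p_on)**(n - k) * factor(k)
    for k in range(n + 1)
)
print(f"expected factor value over 2^{n} assignments: {expected:.4f}")
```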

    Improved Optimal and Approximate Power Graph Compression for Clearer Visualisation of Dense Graphs

    Drawings of highly connected (dense) graphs can be very difficult to read. Power Graph Analysis offers an alternative way to draw a graph in which sets of nodes with common neighbours are shown grouped into modules. An edge connected to the module then implies a connection to each member of the module. Thus, the entire graph may be represented with much less clutter and without loss of detail. A recent experimental study has shown that such lossless compression of dense graphs makes it easier to follow paths. However, computing optimal power graphs is difficult. In this paper, we show that computing the optimal power graph with only one module is NP-hard, and therefore likely NP-hard in the general case. We give an ILP model for power graph computation and discuss why ILP and CP techniques are poorly suited to the problem. Instead, we are able to find optimal solutions much more quickly using a custom search method. We also show how to restrict this type of search to allow only limited back-tracking, providing a heuristic that has better speed and better results than previously known heuristics.
    Comment: Extended technical report accompanying the PacificVis 2013 paper of the same name.
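    While the optimal computation is NP-hard, the basic compression step is easy to state: nodes with identical neighbourhoods can always be grouped losslessly into a module whose single module edge stands for all their individual edges. The sketch below shows only that greedy grouping step on a made-up graph; it is not the authors' optimal search method.

```python
# Toy power-graph step: group nodes sharing an identical neighbourhood
# into one module, collapsing their edges into a single module edge
# with no loss of information. NOT the paper's optimal search method.
from collections import defaultdict

edges = {("a", "x"), ("a", "y"), ("b", "x"), ("b", "y"), ("c", "y")}
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Nodes with the same neighbour set form a module (lossless by construction).
modules = defaultdict(list)
for node, nbrs in adj.items():
    modules[frozenset(nbrs)].append(node)

for nbrs, members in modules.items():
    if len(members) > 1:
        print(f"module {sorted(members)}: one edge to each of {sorted(nbrs)}")
```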

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords and a classification are provided. In some cases our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication) and the language of the document. After a description of the scope of the review, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.

    A Sound and Complete Axiomatization of Majority-n Logic

    Manipulating logic functions via majority operators has recently drawn the attention of researchers in computer science. For example, circuit optimization based on majority operators enables superior results compared to traditional logic systems. Also, the Boolean satisfiability problem finds new solving approaches when described in terms of majority decisions. To support computer logic applications based on majority operators, a sound and complete set of axioms is required. Most of the recent advances in majority logic deal only with ternary majority (MAJ-3) operators, because the axiomatization with solely MAJ-3 and complementation operators is well understood. However, it is of interest to extend such an axiomatization to n-ary majority operators (MAJ-n) from both the theoretical and practical perspectives. In this work, we address this issue by introducing a sound and complete axiomatization of MAJ-n logic. Our axiomatization naturally includes existing majority logic systems. Based on this general set of axioms, computer applications can now fully exploit the expressive power of majority logic.
    Comment: Accepted by the IEEE Transactions on Computers.
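    For intuition about the kind of identities such axioms must capture, the familiar MAJ-3 behaviour can be verified exhaustively on Boolean inputs. The check below is illustrative only; it is not the paper's sound-and-complete MAJ-n axiom system.

```python
# Brute-force sanity check of ternary majority (MAJ-3) identities on
# Boolean inputs: illustrative properties only, not the paper's axioms.
from itertools import product, permutations

def maj3(x, y, z):
    return (x + y + z) >= 2        # true iff a majority of inputs is true

for x, y, z in product((0, 1), repeat=3):
    # commutativity: order of arguments never matters
    assert len({maj3(*perm) for perm in permutations((x, y, z))}) == 1
    # majority (duplicated input dominates): MAJ(x, x, z) = x
    assert maj3(x, x, z) == x
    # complement cancellation: MAJ(x, not x, z) = z
    assert maj3(x, 1 - x, z) == z

print("MAJ-3 identities verified over all Boolean inputs")
```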

    An Optimized Resource Allocation Approach to Identify and Mitigate Supply Chain Risks using Fault Tree Analysis

    Low volume high value (LVHV) supply chains such as airline manufacturing, power plant construction, and shipbuilding are especially susceptible to risks. These industries are characterized by long lead times and a limited number of suppliers that have both the technical know-how and manufacturing capabilities to deliver the requisite goods and services. Disruptions within the supply chain are common and can cause significant and costly delays. Although supply chain risk management and supply chain reliability are topics that have been studied extensively, most research in these areas focuses on high volume supply chains and few studies proactively identify risks. In this research, we develop methodologies to proactively and quantitatively identify and mitigate supply chain risks within LVHV supply chains. First, we propose a framework to model the supply chain system using fault-tree analysis based on the bill of material of the product being sourced. Next, we put forward a set of mathematical optimization models to proactively identify, mitigate, and allocate resources to at-risk suppliers in an LVHV supply chain with consideration for a firm's budgetary constraints. Lastly, we propose a machine learning methodology to quantify the risk of an individual procurement using multiple logistic regression and industry-available data, which can be used as the primary input to the fault tree when analyzing overall supply chain system risk. Altogether, the novel approaches proposed within this dissertation provide a set of tools for industry practitioners to predict supply chain risks, optimally choose which risks to mitigate, and make better-informed decisions with respect to supplier selection and risk mitigation while avoiding costly delays due to disruptions in LVHV supply chains.
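    The fault-tree layer of such a framework can be sketched compactly: with independent basic events and AND/OR gates, the top-event probability follows from the standard gate formulas. The tree structure and failure probabilities below are hypothetical; the dissertation builds its tree from the product's bill of material.

```python
# Minimal fault-tree evaluation over independent basic events, in the
# spirit of the proposed framework. Gates and numbers are hypothetical.
def p_and(children):                 # all children must fail
    out = 1.0
    for c in children:
        out *= prob(c)
    return out

def p_or(children):                  # at least one child fails
    out = 1.0
    for c in children:
        out *= 1 - prob(c)
    return 1 - out

def prob(node):
    if isinstance(node, float):      # leaf: basic-event failure probability
        return node
    gate, children = node
    return p_and(children) if gate == "AND" else p_or(children)

# Top event: delivery fails if the sole engine supplier fails, OR both
# redundant avionics suppliers fail (made-up numbers for illustration).
tree = ("OR", [0.02, ("AND", [0.10, 0.15])])
print(f"top-event probability: {prob(tree):.4f}")
```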