
    A Policy Switching Approach to Consolidating Load Shedding and Islanding Protection Schemes

    In recent years there have been many improvements in the reliability of critical infrastructure systems. Despite these improvements, the power systems industry has seen relatively small advances in this regard. For instance, power quality deficiencies, a high number of localized contingencies, and large cascading outages are still too widespread. Although progress has been made in improving generation, transmission, and distribution infrastructure, remedial action schemes (RAS) remain non-standardized and are often not uniformly implemented across different utilities, ISOs, and RTOs. Traditionally, load shedding and islanding have been successful protection measures for restraining the propagation of contingencies and large cascading outages. This paper proposes a novel, algorithmic approach to selecting RAS policies to optimize the operation of the power network during and after a contingency. Specifically, we use policy switching to consolidate traditional load shedding and islanding schemes. To model and simulate the functionality of the proposed power systems protection algorithm, we conduct Monte-Carlo time-domain simulations using Siemens PSS/E. The algorithm is tested via experiments on the IEEE 39-bus topology to demonstrate that the proposed approach achieves optimal power system performance during emergency situations, given a specific set of RAS policies.
    Comment: Full paper accepted to PSCC 2014 (IEEE co-sponsored conference). 7 pages, 2 figures, 2 tables.
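
    The abstract describes a selection step in which each candidate remedial action scheme is scored by simulation from the current network state and the best-scoring one is applied. The sketch below illustrates only that generic selection loop under assumed interfaces; it is not the paper's PSS/E-based implementation, and the simulator callable, the policy objects, and the performance score are all hypothetical placeholders.

```python
# Minimal sketch of a policy-switching selection step, assuming a generic
# stochastic simulator; NOT the paper's PSS/E-based implementation.
# All names here are hypothetical placeholders.
from typing import Any, Callable, List

def expected_performance(policy: Callable, state: Any,
                         simulate: Callable[[Any, Callable], float],
                         n_runs: int = 50) -> float:
    """Monte-Carlo estimate of post-contingency performance for one RAS policy,
    averaged over repeated rollouts of a stochastic time-domain simulator."""
    return sum(simulate(state, policy) for _ in range(n_runs)) / n_runs

def select_ras_policy(policies: List[Callable], state: Any,
                      simulate: Callable[[Any, Callable], float]) -> Callable:
    """Policy switching: from a fixed set of candidate schemes (e.g. load-shedding
    and islanding policies), pick the one whose simulated performance from the
    current contingency state is best."""
    return max(policies, key=lambda p: expected_performance(p, state, simulate))
```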

    On Adversarial Policy Switching with Experiments in Real-Time Strategy Games

    Given a Markov game, it is often possible to hand-code or learn a set of policies that capture a diversity of possible strategies. It is also often possible to hand-code or learn an abstract simulator of the game that can estimate the outcome of playing two strategies against one another from any state. We consider how to use such policy sets and simulators to make decisions in large Markov games such as real-time strategy (RTS) games. Prior work has considered the problem using an approach we call minimax policy switching. At each decision epoch, all policy pairs are simulated against each other from the current state, and the minimax policy is chosen and used to select actions until the next decision epoch. While intuitively appealing, our first contribution is to show that this switching policy can have arbitrarily poor worst-case performance. Our second contribution is to describe a simple modification whose worst-case performance is provably no worse than that of the minimax fixed policy in the set. Our final contribution is to conduct experiments with these algorithms in the domain of RTS games, using both an abstract game engine that we can simulate exactly and a real game engine that we can only simulate approximately. The results show the effectiveness of policy switching when the simulator is accurate, and highlight challenges in the face of inaccurate simulations.
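
    The baseline minimax switching rule described in the abstract can be stated compactly. The sketch below is a hedged illustration rather than the authors' code: it assumes an abstract simulator sim(state, pi, psi) that estimates the value to the maximizing player of playing pi against psi, and it shows only the baseline rule whose worst-case failure motivates the paper's modification (not reproduced here).

```python
# Minimal sketch of the baseline "minimax policy switching" rule, assuming a
# hypothetical simulator sim(state, pi, psi) that returns the estimated value
# to the maximizing player. The paper's safer modified rule is not shown.
from typing import Any, Callable, List

def minimax_switch(state: Any,
                   our_policies: List[Callable],
                   opp_policies: List[Callable],
                   sim: Callable[[Any, Callable, Callable], float]) -> Callable:
    """At a decision epoch, simulate every policy pair from the current state
    and return our policy with the best worst-case (minimax) simulated value;
    that policy then selects actions until the next decision epoch."""
    return max(our_policies,
               key=lambda pi: min(sim(state, pi, psi) for psi in opp_policies))
```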