
    Vulnerability Assessment of Large-scale Power Systems to False Data Injection Attacks

    This paper studies the vulnerability of large-scale power systems to false data injection (FDI) attacks through their physical consequences. Prior work has shown that an attacker-defender bi-level linear program (ADBLP) can be used to determine the worst-case consequences of FDI attacks aiming to maximize the physical power flow on a target line. This ADBLP can be transformed into a single-level mixed-integer linear program, but it is hard to solve on large power systems due to numerical difficulties. In this paper, four computationally efficient algorithms are presented to solve the attack optimization problem on large power systems. These algorithms are applied to the IEEE 118-bus system and the Polish system with 2383 buses to conduct vulnerability assessments, and they provide feasible attacks that cause line overflows, as well as upper bounds on the maximal power flow resulting from any attack. Comment: 6 pages, 5 figures
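
    To make the attack objective concrete, here is a heavily simplified sketch in Python. It solves only a toy single-level LP relaxation (maximize the estimated flow change on one target line under a bounded state perturbation), not the paper's attacker-defender bi-level program or its MILP reformulation; all names and data below are hypothetical.

```python
# Toy relaxation of the FDI attack objective: maximize the flow change on one
# target line caused by a bounded state perturbation. This is NOT the ADBLP
# from the abstract; it is a hypothetical single-level LP with made-up data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n_states = 5                             # hypothetical number of bus states
ptdf_row = rng.normal(size=n_states)     # stand-in sensitivity of the target line flow
tau = 0.1                                # per-component attack magnitude limit

# Variable: state perturbation c with |c_i| <= tau.
# linprog minimizes, so negate the objective to maximize ptdf_row @ c.
res = linprog(
    c=-ptdf_row,
    bounds=[(-tau, tau)] * n_states,
    method="highs",
)

print("worst-case flow increase (toy model):", -res.fun)
print("attack vector c:", res.x)
```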

    A bibliography on parallel and vector numerical algorithms

    This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are listed also

    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the Generalized Minimal RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a $\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations for an order-of-magnitude reduction in iterations, despite a worst-case bound of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like $O(1/k^{2})$, instead of $O(1/k)$, where $k$ is the iteration index. Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT)
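
    The key observation is that when the ADMM iteration reduces to an affine map, its fixed point solves a linear system, and full GMRES applied to that system is residual-optimal over the same Krylov subspace the plain iterates live in. The Python sketch below demonstrates this generic idea on a random stand-in affine map; it is not the paper's specific ADMM splitting or its SeDuMi embedding.

```python
# Generic sketch: if an ADMM sweep is the affine map u_{k+1} = M u_k + d, the
# fixed point solves (I - M) u = d. Full (unrestarted) GMRES on that system is
# never slower than the plain sweep in residual norm, since the plain iterates
# lie in the same Krylov subspace. M and d are random stand-ins.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n = 200

M = rng.normal(size=(n, n)) / np.sqrt(n)
M *= 0.99 / np.max(np.abs(np.linalg.eigvals(M)))   # spectral radius 0.99
d = rng.normal(size=n)
u_star = np.linalg.solve(np.eye(n) - M, d)         # reference fixed point

# Plain fixed-point (ADMM-like) sweeps.
u = np.zeros(n)
for _ in range(200):
    u = M @ u + d
print("fixed-point error after 200 sweeps:", np.linalg.norm(u - u_star))

# Full GMRES on (I - M) u = d, using only matrix-vector products.
matvecs = [0]
def apply_ImM(v):
    matvecs[0] += 1
    return v - M @ v

A = LinearOperator((n, n), matvec=apply_ImM, dtype=np.float64)
u_gmres, info = gmres(A, d, restart=n)
print("GMRES matvecs:", matvecs[0],
      "error:", np.linalg.norm(u_gmres - u_star), "info:", info)
```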

    Symmetry groups, semidefinite programs, and sums of squares

    We investigate the representation of symmetric polynomials as a sum of squares. Since this task is solved using semidefinite programming tools, we explore the geometric, algebraic, and computational implications of the presence of discrete symmetries in semidefinite programs. It is shown that symmetry exploitation allows a significant reduction in both matrix size and the number of decision variables. This result is applied to semidefinite programs arising from the computation of sum of squares decompositions for multivariate polynomials. The results, reinterpreted from an invariant-theoretic viewpoint, provide a novel representation of a class of nonnegative symmetric polynomials. The main theorem states that an invariant sum of squares polynomial is a sum of inner products of pairs of matrices, whose entries are invariant polynomials. In these pairs, one of the matrices is computed based on the real irreducible representations of the group, and the other is a sum of squares matrix. The reduction techniques enable the numerical solution of large-scale instances, otherwise computationally infeasible to solve. Comment: 38 pages, submitted
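
    A minimal numerical illustration of the reduction: a Gram matrix that commutes with every matrix in a finite symmetry group becomes block diagonal in a symmetry-adapted basis, so the single large PSD constraint splits into smaller blocks. The sketch below uses the two-element group that swaps x and y, acting on the monomial basis (x^2, y^2, xy); the matrix data are hypothetical and no SOS solver is involved.

```python
# Toy illustration of symmetry reduction in SOS/SDP: a PSD "Gram" matrix that
# commutes with the group {I, S}, where S swaps x and y on the monomial basis
# (x^2, y^2, xy), block-diagonalizes in a symmetry-adapted basis.
import numpy as np

S = np.array([[0, 1, 0],           # x^2 <-> y^2
              [1, 0, 0],
              [0, 0, 1]])          # xy is fixed

rng = np.random.default_rng(2)
G = rng.normal(size=(3, 3))
Q = G @ G.T                        # a generic PSD matrix

# Reynolds averaging projects Q onto the group-invariant (commuting) matrices.
Q_inv = 0.5 * (Q + S @ Q @ S.T)
assert np.allclose(S @ Q_inv, Q_inv @ S)

# Symmetry-adapted basis: symmetric and antisymmetric combinations of x^2, y^2.
r = 1 / np.sqrt(2)
T = np.array([[r,  r, 0],          # (x^2 + y^2)/sqrt(2)
              [0,  0, 1],          # xy
              [r, -r, 0]])         # (x^2 - y^2)/sqrt(2)

Q_block = T @ Q_inv @ T.T          # 2x2 block (trivial rep) + 1x1 block (sign rep)
np.set_printoptions(precision=3, suppress=True)
print(Q_block)
```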

    Lagrangean decomposition for large-scale two-stage stochastic mixed 0-1 problems

    In this paper we study solution methods for the dual problem corresponding to the Lagrangean Decomposition of two-stage stochastic mixed 0-1 models. We represent the two-stage stochastic mixed 0-1 problem by a splitting-variable representation of the deterministic equivalent model, where 0-1 and continuous variables appear at any stage. Lagrangean Decomposition is proposed for satisfying both the integrality constraints for the 0-1 variables and the non-anticipativity constraints. We compare the performance of four iterative algorithms based on dual Lagrangean Decomposition schemes: the Subgradient method, the Volume algorithm, the Progressive Hedging algorithm, and the Dynamic Constrained Cutting Plane scheme. We test the conditions and properties of convergence on medium- and large-scale stochastic problems. Computational results are reported. Keywords: Progressive Hedging algorithm, Volume algorithm, Lagrangean decomposition, Subgradient method
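
    As a rough sketch of one of the compared schemes, the Python snippet below runs a plain subgradient method on the Lagrangean dual obtained by dualizing the non-anticipativity constraint between two scenario copies of binary first-stage variables. The instance is tiny and entirely hypothetical (no second-stage variables, scenario subproblems solved by enumeration), so it only illustrates the multiplier update, not the paper's full decomposition schemes.

```python
# Subgradient method on the Lagrangean dual of a tiny two-scenario 0-1 model:
# dualize the nonanticipativity constraint x1 = x2 with multipliers lam and
# update lam along the constraint residual. Data are hypothetical.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 4                               # number of first-stage 0-1 variables
prob = np.array([0.5, 0.5])         # scenario probabilities
cost = rng.normal(size=(2, n))      # scenario-dependent objective coefficients
weight = rng.uniform(1, 3, size=n)  # a shared knapsack-type constraint w @ x <= cap
cap = 0.5 * weight.sum()

def scenario_min(c_eff):
    """Enumerate feasible 0-1 vectors and return the minimizer of c_eff @ x."""
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        if weight @ x <= cap and c_eff @ x < best_val:
            best_x, best_val = x, c_eff @ x
    return best_x, best_val

lam = np.zeros(n)                   # multipliers for x1 - x2 = 0
for k in range(1, 51):
    # Lagrangean subproblems: scenario 1 sees +lam, scenario 2 sees -lam.
    x1, v1 = scenario_min(prob[0] * cost[0] + lam)
    x2, v2 = scenario_min(prob[1] * cost[1] - lam)
    dual_val = v1 + v2              # lower bound on the original minimum
    subgrad = x1 - x2               # residual of the dualized constraint
    if np.all(subgrad == 0):
        break
    lam = lam + (1.0 / k) * subgrad # diminishing step, ascent on the concave dual

print("dual lower bound:", dual_val, "nonanticipativity residual:", subgrad)
```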
