
    Matrix Minor Reformulation and SOCP-based Spatial Branch-and-Cut Method for the AC Optimal Power Flow Problem

    Alternating current optimal power flow (AC OPF) is one of the most fundamental optimization problems in electrical power systems. It can be formulated as a semidefinite program (SDP) with rank constraints. Solving AC OPF, that is, obtaining near-optimal primal solutions as well as high-quality dual bounds for this non-convex program, presents a major computational challenge to today's power industry for the real-time operation of large-scale power grids. In this paper, we propose a new technique for reformulating the rank constraints using both principal and non-principal 2-by-2 minors of the Hermitian matrix variable and characterize all such minors into three types. We show that these minor constraints are equivalent to the physical constraints that voltage angle differences sum to zero over three- and four-cycles in the power network. We study second-order conic programming (SOCP) relaxations of this minor reformulation and propose strong cutting planes, convex envelopes, and bound tightening techniques to strengthen the resulting SOCP relaxations. We then propose an SOCP-based spatial branch-and-cut method to obtain the global optimum of AC OPF. Extensive computational experiments show that the proposed algorithm significantly outperforms the state-of-the-art SDP-based OPF solver and, on a simple personal computer, obtains an average optimality gap of 0.71% in no more than 720 seconds for the most challenging power system instances in the literature.
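    The algebra behind this kind of minor reformulation can be sketched in generic notation (a reader's sketch, not the paper's exact classification of the three minor types): in the exact, rank-one case the Hermitian matrix variable factors as W = v v^H, every 2-by-2 minor vanishes, and the arguments of the off-diagonal entries telescope around cycles.

    ```latex
    % Reader's sketch, assuming the exact (rank-one) case W = v v^H with W_{ij} = v_i \bar{v}_j.
    % Rank-one matrices are exactly those whose 2-by-2 minors all vanish:
    \[
      \operatorname{rank}(W) \le 1
      \;\Longleftrightarrow\;
      \det\begin{pmatrix} W_{ik} & W_{il} \\ W_{jk} & W_{jl} \end{pmatrix}
      = W_{ik} W_{jl} - W_{il} W_{jk} = 0
      \quad \text{for all rows } i, j \text{ and columns } k, l.
    \]
    % The principal case (rows and columns \{i,j\}) gives W_{ii} W_{jj} = |W_{ij}|^2, whose
    % relaxation |W_{ij}|^2 \le W_{ii} W_{jj} is the standard SOCP constraint. Because
    % W_{ij} = |v_i||v_j| e^{\mathrm{i}(\theta_i - \theta_j)}, the angles of the off-diagonal
    % entries telescope around any cycle, e.g. over a three-cycle (i, j, k):
    \[
      \arg W_{ij} + \arg W_{jk} + \arg W_{ki} = 0,
    \]
    % which is the voltage-angle-difference condition referred to in the abstract.
    ```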

    Nonlinear Integer Programming

    Research efforts of the past fifty years have led to the development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes surprisingly lead even to polynomial-time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely to ever be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research.
    Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G. Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50 Years of Integer Programming 1958–2008: The Early Years and State-of-the-Art Surveys, Springer-Verlag, 2009, ISBN 354068274
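    In generic notation (a reader's sketch, not quoted from the chapter), the simple problem class described in this abstract is a nonlinear objective over the integer points of a polyhedron:

    ```latex
    % Reader's sketch of the problem class: nonlinear objective, linear constraints, integer variables.
    \[
      \max \; f(x) \quad \text{subject to} \quad A x \le b, \quad x \in \mathbb{Z}^n,
    \]
    % where f : \mathbb{R}^n \to \mathbb{R} may be nonconvex; the complexity landscape depends on
    % the form of f (e.g., polynomial, convex, or concave) together with the structure of A and b.
    ```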

    Integrated Inference and Learning of Neural Factors in Structural Support Vector Machines

    Tackling pattern recognition problems in areas such as computer vision, bioinformatics, speech or text recognition is often done best by taking into account task-specific statistical relations between output variables. In structured prediction, this internal structure is used to predict multiple outputs simultaneously, leading to more accurate and coherent predictions. Structural support vector machines (SSVMs) are nonprobabilistic models that optimize a joint input-output function through margin-based learning. Because SSVMs generally disregard the interplay between unary and interaction factors during the training phase, the final parameters are suboptimal. Moreover, their factors are often restricted to linear combinations of input features, limiting their generalization power. To improve prediction accuracy, this paper proposes: (i) joint inference and learning by integration of back-propagation and loss-augmented inference in SSVM subgradient descent; (ii) extending SSVM factors to neural networks that form highly nonlinear functions of input features. Image segmentation benchmark results demonstrate improvements over conventional SSVM training methods in terms of accuracy, highlighting the feasibility of end-to-end SSVM training with neural factors.
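    As an illustration only (not the paper's implementation; the library, model sizes, and the chain-structured toy task below are assumptions), the following sketch combines the two proposed ingredients: loss-augmented inference inside a structured hinge loss, with the unary factor given by a small neural network so that the subgradient step is carried out by back-propagation.

    ```python
    # Reader's sketch (not the paper's code) of structured-hinge training with a neural unary
    # factor: loss-augmented Viterbi inference inside the loss, subgradient step via backprop.
    # The chain-structured toy task, sizes, and all names here are illustrative assumptions.
    import torch

    K, D, T = 3, 8, 6            # number of labels, input feature dimension, chain length
    unary_net = torch.nn.Sequential(
        torch.nn.Linear(D, 16), torch.nn.ReLU(), torch.nn.Linear(16, K))
    pairwise = torch.nn.Parameter(torch.zeros(K, K))   # interaction factor (kept linear for brevity)
    opt = torch.optim.SGD(list(unary_net.parameters()) + [pairwise], lr=0.1)

    def score(unary, y):
        """Joint score of labelling y under the unary and pairwise factors."""
        return unary[torch.arange(len(y)), y].sum() + pairwise[y[:-1], y[1:]].sum()

    def loss_augmented_viterbi(unary, y_true):
        """Exact argmax over the chain of score(y) + Hamming(y, y_true), by dynamic programming."""
        n, k = unary.shape
        aug = unary + (torch.nn.functional.one_hot(y_true, k) == 0).float()  # +1 per wrong label
        dp, back = aug[0], []
        for t in range(1, n):
            cand = dp[:, None] + pairwise + aug[t][None, :]
            dp, idx = cand.max(dim=0)
            back.append(idx)
        y = [dp.argmax().item()]
        for idx in reversed(back):
            y.append(idx[y[-1]].item())
        return torch.tensor(list(reversed(y)))

    # one subgradient step on a random toy example
    x, y_true = torch.randn(T, D), torch.randint(0, K, (T,))
    unary = unary_net(x)
    y_hat = loss_augmented_viterbi(unary.detach(), y_true)
    margin = score(unary, y_hat) + (y_hat != y_true).float().sum() - score(unary, y_true)
    loss = torch.clamp(margin, min=0)    # structured hinge; backprop reaches the neural factor
    opt.zero_grad(); loss.backward(); opt.step()
    ```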

    Using Functional Programming to recognize Named Structure in an Optimization Problem: Application to Pooling

    Branch-and-cut optimization solvers typically apply generic algorithms, e.g., cutting planes or primal heuristics, to expedite performance for many mathematical optimization problems. But solver software receives an input optimization problem as vectors of equations and constraints containing no structural information. This article proposes automatically detecting named special structure using the pattern-matching features of functional programming. Specifically, we deduce the industrially relevant nonconvex nonlinear Pooling Problem within a mixed-integer nonlinear optimization problem and show that we can uncover pooling structure in optimization problems which are not pooling problems. Previous work has shown that preprocessing heuristics can find network structures; we show that we can additionally detect nonlinear pooling patterns. Finding named structures allows us to apply, to generic optimization problems, cutting planes or primal heuristics developed for the named structure. To demonstrate the recognition algorithm, we use the recognized structure to apply primal heuristics to a test set of standard pooling problems.
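    As a toy illustration of the idea only: the article works in a functional language over the whole constraint system, whereas the sketch below merely pattern-matches individual bilinear terms in an expression tree. The node classes and the "collect quality-times-flow products" heuristic are assumptions made for this example.

    ```python
    # Toy sketch of recognising a named structure by pattern matching over an expression tree.
    # Requires Python 3.10+ for structural pattern matching. The node classes (Var, Mul, Sum)
    # and the bilinear-fingerprint heuristic are invented for this illustration; the article's
    # detector works on the full constraint system and may use a different representation.
    from dataclasses import dataclass

    @dataclass
    class Var:
        name: str

    @dataclass
    class Mul:
        left: object
        right: object

    @dataclass
    class Sum:
        terms: list

    def bilinear_pairs(expr):
        """Collect variable pairs appearing in bilinear products (e.g. quality * flow terms)."""
        match expr:
            case Mul(Var(a), Var(b)):        # a product of two plain variables
                return [(a, b)]
            case Mul(left, right):           # otherwise recurse into both factors
                return bilinear_pairs(left) + bilinear_pairs(right)
            case Sum(terms):
                return [p for t in terms for p in bilinear_pairs(t)]
            case _:
                return []

    # a flow-balance-style constraint body: q_1 * y_1 + q_2 * y_2  (quality times flow)
    constraint = Sum([Mul(Var("q_1"), Var("y_1")), Mul(Var("q_2"), Var("y_2"))])
    print(bilinear_pairs(constraint))        # [('q_1', 'y_1'), ('q_2', 'y_2')]
    ```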