
    DOLFIN: Automated Finite Element Computing

    We describe here a library aimed at automating the solution of partial differential equations using the finite element method. By employing novel techniques for automated code generation, the library combines a high level of expressiveness with efficient computation. Finite element variational forms may be expressed in near-mathematical notation, from which low-level code is automatically generated, compiled, and seamlessly integrated with efficient implementations of computational meshes and high-performance linear algebra. Easy-to-use object-oriented interfaces to the library are provided in the form of a C++ library and a Python module. This paper discusses the mathematical abstractions and methods used in the design of the library and its implementation. A number of examples are presented to demonstrate the use of the library in application code.
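    For flavor, here is a minimal sketch of a Poisson solver in the DOLFIN Python interface (legacy FEniCS); the mesh resolution, right-hand side, and boundary condition are illustrative choices, not taken from the paper:

    # Minimal Poisson sketch: solve -div(grad(u)) = f on the unit square,
    # with u = 0 on the boundary, using the legacy DOLFIN Python interface.
    from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction,
                        TestFunction, Function, DirichletBC, Constant,
                        inner, grad, dx, solve)

    mesh = UnitSquareMesh(32, 32)              # computational mesh (illustrative size)
    V = FunctionSpace(mesh, "Lagrange", 1)     # piecewise-linear Lagrange elements

    u = TrialFunction(V)
    v = TestFunction(V)
    f = Constant(1.0)                          # illustrative right-hand side

    # Variational form in near-mathematical notation; low-level assembly
    # code is generated and compiled automatically from these expressions.
    a = inner(grad(u), grad(v)) * dx
    L = f * v * dx

    bc = DirichletBC(V, Constant(0.0), "on_boundary")
    u_h = Function(V)
    solve(a == L, u_h, bc)                     # assemble and solve the linear system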

    Matrix Minor Reformulation and SOCP-based Spatial Branch-and-Cut Method for the AC Optimal Power Flow Problem

    Alternating current optimal power flow (AC OPF) is one of the most fundamental optimization problems in electrical power systems. It can be formulated as a semidefinite program (SDP) with rank constraints. Solving AC OPF, that is, obtaining near-optimal primal solutions as well as high-quality dual bounds for this non-convex program, presents a major computational challenge to today's power industry for the real-time operation of large-scale power grids. In this paper, we propose a new technique for reformulating the rank constraints using both principal and non-principal 2-by-2 minors of the Hermitian matrix variable, and we characterize all such minors into three types. We show the equivalence of these minor constraints to the physical constraints of voltage angle differences summing to zero over three- and four-cycles in the power network. We study second-order conic programming (SOCP) relaxations of this minor reformulation and propose strong cutting planes, convex envelopes, and bound-tightening techniques to strengthen the resulting SOCP relaxations. We then propose an SOCP-based spatial branch-and-cut method to obtain the global optimum of AC OPF. Extensive computational experiments show that the proposed algorithm significantly outperforms the state-of-the-art SDP-based OPF solver and, on a standard personal computer, is able to obtain an average optimality gap of 0.71% in no more than 720 seconds for the most challenging power system instances in the literature.
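    To make the minor reformulation concrete, here is a sketch in generic notation (the paper's symbols may differ): with W the Hermitian matrix standing for products of complex bus voltages, the rank condition is equivalent to the vanishing of all 2-by-2 minors, and the principal minors yield rotated second-order cone constraints.

    \[
      W = v v^{\mathsf{H}} \iff W \succeq 0 \ \text{and} \ W_{ij} W_{kl} - W_{il} W_{kj} = 0 \ \text{for all } i, j, k, l.
    \]
    \[
      \text{Principal minors } (k = i,\ l = j): \quad |W_{ij}|^2 = W_{ii} W_{jj}, \qquad \text{relaxed to } |W_{ij}|^2 \le W_{ii} W_{jj}.
    \]

    Since $W_{ij} = v_i \bar{v}_j$, the phase of $W_{ij}$ equals the angle difference $\theta_i - \theta_j$, so the phases sum to zero around any cycle; this is the cycle condition the abstract enforces over three- and four-cycles.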

    Solving rank-constrained semidefinite programs in exact arithmetic

    We consider the problem of minimizing a linear function over an affine section of the cone of positive semidefinite matrices, with the additional constraint that the feasible matrix has prescribed rank. When the rank constraint is active, this is a non-convex optimization problem; otherwise, it is a semidefinite program. Both find numerous applications, especially in systems control theory and combinatorial optimization, but also in more general contexts such as polynomial optimization and real algebra. While numerical algorithms exist for solving this problem, such as interior-point or Newton-like algorithms, in this paper we propose an approach based on symbolic computation. We design an exact algorithm for solving rank-constrained semidefinite programs whose complexity is essentially quadratic in the natural degree bounds associated with the given optimization problem: for subfamilies of the problem where the size of the feasible matrix is fixed, the complexity is polynomial in the number of variables. The algorithm works under assumptions on the input data, and we prove that these assumptions are generically satisfied. We also implement the algorithm in Maple and discuss practical experiments. (Published at ISSAC 2016; an extended version has been submitted to the Journal of Symbolic Computation.)
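    In generic notation, the problem class is the following (details of the paper's formulation may differ); for $r = n$ the rank constraint is vacuous and the problem is a standard SDP, while for $r < n$ the feasible set is non-convex:

    \[
      \min_{X \in \mathbb{S}^n} \ \langle C, X \rangle \quad \text{s.t.} \quad \langle A_i, X \rangle = b_i, \ i = 1, \dots, m, \qquad X \succeq 0, \qquad \operatorname{rank}(X) \le r.
    \]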

    Block stochastic gradient iteration for convex and nonconvex optimization

    The stochastic gradient (SG) method can minimize an objective function composed of a large number of differentiable functions, or solve a stochastic optimization problem, to moderate accuracy. The block coordinate descent/update (BCD) method, on the other hand, handles problems with multiple blocks of variables by updating them one at a time; when the blocks of variables are easier to update individually than together, BCD has a lower per-iteration cost. This paper introduces a method that combines the features of SG and BCD for problems with many components in the objective and with multiple (blocks of) variables. Specifically, a block stochastic gradient (BSG) method is proposed for solving both convex and nonconvex programs. At each iteration, BSG approximates the gradient of the differentiable part of the objective by randomly sampling a small set of data or sampling a few functions from the sum term in the objective, and then, using those samples, it updates all the blocks of variables in either a deterministic or a randomly shuffled order. Its convergence is established in different senses for both the convex and nonconvex cases. In the convex case, the proposed method has the same order of convergence rate as the SG method. In the nonconvex case, its convergence is established in terms of the expected violation of a first-order optimality condition. The proposed method was numerically tested on problems including stochastic least squares and logistic regression, which are convex, as well as low-rank tensor recovery and bilinear logistic regression, which are nonconvex.
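    A schematic sketch of one BSG iteration as described above; the function and variable names are illustrative (a proximal step would replace the plain gradient step for blocks with non-smooth terms), and this is not the paper's pseudocode:

    import numpy as np

    def bsg_step(blocks, sample_grad, alpha, rng):
        """One block stochastic gradient (BSG) iteration, schematically.

        blocks      -- list of NumPy arrays, the variable blocks x_1, ..., x_s
        sample_grad -- sample_grad(blocks, i): stochastic partial gradient
                       w.r.t. block i, estimated from a freshly sampled
                       minibatch of component functions
        alpha       -- step size for this iteration
        rng         -- numpy.random.Generator used to shuffle the block order
        """
        order = rng.permutation(len(blocks))   # randomly shuffled update order
        for i in order:
            g = sample_grad(blocks, i)         # uses the latest values of the
                                               # already-updated blocks
            blocks[i] = blocks[i] - alpha * g  # plain gradient step on block i
        return blocks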

    Self-Calibration and Biconvex Compressive Sensing

    The design of high-precision sensing devices becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance of advanced high-performance sensors, one must carefully calibrate them, which is often difficult or even impossible to do in practice. In this work, we bring together three seemingly unrelated concepts, namely Self-Calibration, Compressive Sensing, and Biconvex Optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where both x and the diagonal matrix D (which models the calibration error) are unknown. By "lifting" this biconvex inverse problem, we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed, and numerical simulations are presented, confirming and complementing our theoretical analysis.
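    A sketch of the lifting step in generic notation, assuming (as is common in this line of work, though the reader should check the paper's exact model) that the unknown calibration parameters d = diag(D) lie in a known subspace, d = Bh:

    \[
      y_i = d_i \, a_i^{\mathsf{T}} x = (b_i^{\mathsf{T}} h)(a_i^{\mathsf{T}} x) = b_i^{\mathsf{T}} \left( h x^{\mathsf{T}} \right) a_i,
    \]

    which is linear in the lifted matrix $X = h x^{\mathsf{T}}$. With $x$ sparse, $X$ is sparse and rank-one, and one may recover it via the $\ell_1$ relaxation

    \[
      \min_X \ \|X\|_1 \quad \text{s.t.} \quad b_i^{\mathsf{T}} X a_i = y_i, \ i = 1, \dots, L,
    \]

    which can be cast as a linear program, matching the recovery route described in the abstract.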