11 research outputs found

    Duality for mixed-integer convex minimization

    We extend, in two ways, the standard Karush–Kuhn–Tucker optimality conditions to problems with a convex objective, convex functional constraints, and the extra requirement that some of the variables must be integral. While the standard Karush–Kuhn–Tucker conditions involve separating hyperplanes, our extension is based on mixed-integer-free polyhedra. Our optimality conditions allow us to define an exact dual of our original mixed-integer convex problem.
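    For reference, the classical Karush–Kuhn–Tucker conditions that the paper extends can be stated as follows (a textbook formulation for the smooth convex case, not quoted from the paper):

```latex
% min f(x)  s.t.  g_i(x) <= 0, i = 1..m, with f and g_i convex and differentiable
\begin{aligned}
\nabla f(x^\ast) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^\ast) &= 0
  && \text{(stationarity)}\\
g_i(x^\ast) &\le 0 && \text{(primal feasibility)}\\
\lambda_i &\ge 0 && \text{(dual feasibility)}\\
\lambda_i \, g_i(x^\ast) &= 0 && \text{(complementary slackness)}
\end{aligned}
```

    As the abstract indicates, once integrality requirements are present, the separating hyperplane implicit in these conditions is replaced by a mixed-integer-free polyhedron.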

    A bounded degree SOS hierarchy for polynomial optimization

    We consider a new hierarchy of semidefinite relaxations for the general polynomial optimization problem (P): f* = min{ f(x) : x ∈ K } on a compact basic semi-algebraic set K ⊂ R^n. This hierarchy combines some advantages of the standard LP-relaxations associated with Krivine's positivity certificate and some advantages of the standard SOS-hierarchy. In particular, it has the following attractive features: (a) In contrast to the standard SOS-hierarchy, for each relaxation in the hierarchy, the size of the matrix associated with the semidefinite constraint is the same and fixed in advance by the user. (b) In contrast to the LP-hierarchy, finite convergence occurs at the first step of the hierarchy for an important class of convex problems. Finally, (c) some important techniques related to the use of point evaluations for declaring a polynomial to be zero and to the use of rank-one matrices make an efficient implementation possible. Preliminary results on a sample of nonconvex problems are encouraging.
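    The d-th relaxation of the hierarchy sketched above can be written as follows (our reconstruction of the bounded-degree SOS form from the abstract; the normalization 0 ≀ g_j ≀ 1 on K and the exact indexing are assumptions):

```latex
% K = { x : 0 <= g_j(x) <= 1, j = 1..m },  k fixed in advance by the user
q^d_k \;=\; \max_{t,\;\lambda \ge 0}\;
  \Bigl\{\, t \;:\;
  f \;-\; t \;-\; \sum_{\substack{\alpha,\beta \in \mathbb{N}^m \\ |\alpha|+|\beta| \le d}}
    \lambda_{\alpha\beta} \prod_{j=1}^{m} g_j^{\alpha_j}\,(1-g_j)^{\beta_j}
  \;\in\; \Sigma_k \,\Bigr\}
```

    Here Σ_k denotes sums of squares of degree at most 2k, so under this notation the semidefinite block has the same user-fixed size for every d, and dropping the SOS term recovers a Krivine-type LP-hierarchy.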

    A new approximation hierarchy for polynomial conic optimization

    In this paper we consider polynomial conic optimization problems, where the feasible set is defined by constraints requiring given polynomial vectors to belong to given nonempty closed convex cones, and we assume that all the feasible solutions are non-negative. This family of problems captures in particular polynomial optimization problems (POPs), polynomial semi-definite optimization problems (PSDPs) and polynomial second-order cone optimization problems (PSOCPs). We propose a new general hierarchy of linear conic optimization relaxations inspired by an extension of PĂłlya's Positivstellensatz for homogeneous polynomials that are positive over a basic semi-algebraic cone contained in the non-negative orthant, introduced in Dickinson and Povh (J Glob Optim 61(4):615-625, 2015). We prove that, under some classic assumptions, these relaxations converge monotonically to the optimal value of the original problem. Adding a redundant polynomial positive semi-definite constraint to the original problem, which does not change its optimal value, drastically improves the bounds produced by our method. We provide an extensive list of numerical examples that clearly indicate the advantages and disadvantages of our hierarchy. In particular, in comparison to the classic sum-of-squares approach, our new method provides reasonable bounds on the optimal value for POPs, and strong bounds for PSDPs and PSOCPs, even outperforming the sum-of-squares approach in these latter two cases.
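    PĂłlya's Positivstellensatz, which the hierarchy above extends, is easy to check numerically on a binary form: if p is homogeneous and strictly positive on the non-negative orthant minus the origin, then (x + y)^r ¡ p has only non-negative coefficients for r large enough. A minimal sketch (the polynomial below is our own toy example, not taken from the paper):

```python
import numpy as np

# Binary form p(x, y) = x^2 - 1.9*x*y + y^2, stored as its coefficient
# vector on the monomials x^2, x*y, y^2.  On the simplex x + y = 1 we have
# p = 1 - 3.9*x*y >= 1 - 3.9/4 > 0, so p is strictly positive on the
# non-negative orthant away from the origin and Polya's theorem applies.
p = np.array([1.0, -1.9, 1.0])

def polya_exponent(p, r_max=200):
    """Smallest r such that (x + y)^r * p(x, y) has non-negative coefficients.

    Multiplying binary forms corresponds to convolving coefficient vectors.
    """
    q = p.copy()
    for r in range(r_max + 1):
        if np.all(q >= -1e-12):  # tolerance for floating-point round-off
            return r
        q = np.convolve(q, [1.0, 1.0])  # multiply by (x + y)
    raise RuntimeError("no Polya certificate found up to r_max")

r = polya_exponent(p)
print(r)  # some finite exponent certifying positivity
```

    The exponent r grows as p gets closer to having a zero on the orthant, which is one reason hierarchies built on this certificate converge only asymptotically in general.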

    Recent Advances in Randomized Methods for Big Data Optimization

    In this thesis, we discuss and develop randomized algorithms for big data problems. In particular, we study finite-sum optimization with newly emerged variance-reduction optimization methods (Chapter 2), explore the efficiency of second-order information applied to both convex and non-convex finite-sum objectives (Chapter 3), and employ fast first-order methods in power system problems (Chapter 4).
    In Chapter 2, we propose two variance-reduced gradient algorithms: mS2GD and SARAH. mS2GD incorporates a mini-batching scheme to improve the theoretical complexity and practical performance of SVRG/S2GD, aiming to minimize a strongly convex function represented as the sum of an average of a large number of smooth convex functions and a simple non-smooth convex regularizer. SARAH, short for StochAstic Recursive grAdient algoritHm, uses a stochastic recursive gradient and targets minimizing the average of a large number of smooth functions in both the convex and non-convex cases. Both methods fall into the category of variance-reduction optimization and obtain a total complexity of O((n+Îș)log(1/Δ)) to achieve an Δ-accurate solution for strongly convex objectives, while SARAH also maintains sub-linear convergence for non-convex problems. SARAH additionally has a practical variant, SARAH+, motivated by the linear convergence of the expected stochastic gradients in its inner loops.
    In Chapter 3, we show that randomized batches can be combined with second-order information to improve convergence in both theory and practice, using an L-BFGS framework as a novel approach to finite-sum optimization problems. We provide theoretical analyses for both convex and non-convex objectives. We also propose LBFGS-F, a variant in which the Fisher information matrix is used in place of Hessian information, and show that it is applicable in a distributed environment for the popular least-squares and cross-entropy losses.
    In Chapter 4, we develop fast randomized algorithms for solving polynomial optimization problems arising from alternating-current optimal power flow (ACOPF) in power systems. Traditional research on power system problems focuses on second-order solvers; randomized algorithms had not previously been developed for them. First, we propose a coordinate-descent algorithm as an online solver for time-varying optimization problems in power systems. We bound from above the difference between the current approximate optimal cost generated by our algorithm and the optimal cost of a relaxation using the most recent data, by a function of the properties of the instance and its rate of change over time. Second, we focus on a steady-state problem in power systems, and study means of switching from solving a convex relaxation to a Newton method applied to a non-convex (augmented) Lagrangian of the problem.
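    The recursive-gradient idea behind SARAH can be sketched on a least-squares finite sum (a minimal illustration; the step size, loop lengths, and test problem are our own choices, not the tuned values from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite sum: f(w) = (1/n) * sum_i 0.5 * (a_i . w - b_i)^2, a noiseless
# least-squares problem so the optimum has zero residual.
n, d = 200, 5
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true

def full_grad(w):
    return A.T @ (A @ w - b) / n

def grad_i(w, i):
    return (A[i] @ w - b[i]) * A[i]

def sarah(w0, eta=0.02, outer=50, inner=200):
    """SARAH sketch: the inner loop updates a recursive gradient estimate
    v_t instead of re-centering on a stored full gradient as SVRG does."""
    w = w0.copy()
    for _ in range(outer):
        v = full_grad(w)               # one full pass per outer iteration
        w_prev = w.copy()
        w = w - eta * v
        for _ in range(inner - 1):
            i = rng.integers(n)
            # recursive estimate: v_t = g_i(w_t) - g_i(w_{t-1}) + v_{t-1}
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev = w.copy()
            w = w - eta * v
    return w

w = sarah(np.zeros(d))
print(np.linalg.norm(A @ w - b) / np.linalg.norm(b))  # small relative residual
```

    The only difference from SVRG is the update of v: SVRG re-anchors every inner step to the stored full gradient, while SARAH's estimate is built recursively from the previous iterate, which is what yields the complexity cited in the abstract.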

    A Lagrangian relaxation view of linear and semidefinite hierarchies

    We consider the general polynomial optimization problem P: f* = min{ f(x) : x ∈ K }, where K is a compact basic semi-algebraic set. We first show that the standard Lagrangian relaxation yields a lower bound as close as desired to the global optimum f*, provided that it is applied to a problem P̃ equivalent to P in which sufficiently many redundant constraints (products of the initial ones) are added to the initial description of P. Next we show that the standard hierarchy of LP-relaxations of P (in the spirit of Sherali-Adams' RLT) can be interpreted as a brute-force simplification of the above Lagrangian relaxation, in which a nonnegative polynomial (with coefficients to be determined) is replaced with the constant polynomial equal to zero. Inspired by this interpretation, we provide a systematic improvement of the LP-hierarchy by performing a much less brutal simplification, which results in a parametrized hierarchy of semidefinite programs (and not linear programs any more). For each semidefinite program in the parametrized hierarchy, the semidefinite constraint has a fixed size O(n^k), independent of the rank in the hierarchy, in contrast with the standard hierarchy of semidefinite relaxations. The parameter k is to be decided by the user. When applied to a nontrivial class of convex problems, the first relaxation of the parametrized hierarchy is exact, in contrast with the LP-hierarchy, where convergence cannot be finite. When applied to 0/1 programs it is at least as good as the first one in the hierarchy of semidefinite relaxations. However, obstructions to exactness still exist and are briefly analyzed. Finally, the standard semidefinite hierarchy can also be viewed as a simplification of an extended Lagrangian relaxation, but different in spirit, as sums-of-squares (and not scalar) multipliers are allowed.
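    The role of the Lagrangian relaxation as a lower bound can be seen on a one-dimensional toy problem (our own illustration, unrelated to the paper's examples): for min{ x^2 : 1 - x ≀ 0 } we have f* = 1, and the dual function d(λ) = inf_x [x^2 + λ(1 - x)] = λ - λ^2/4 attains its maximum value 1 at λ = 2, so there is no gap here because the problem is convex; for nonconvex K, closing the gap is what requires the added products of constraints.

```python
import numpy as np

# Toy problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (f* = 1 at x = 1).
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x); the unconstrained minimizer over x
# is x = lam / 2, giving the concave dual function d(lam) = lam - lam^2 / 4.

def dual(lam):
    x = lam / 2.0                      # minimizer of L(., lam) over x
    return x**2 + lam * (1.0 - x)

# Maximize the dual over lam >= 0 on a fine grid.
lams = np.linspace(0.0, 4.0, 4001)
best = max(dual(l) for l in lams)
print(best)  # matches f* = 1 (attained at lam = 2): strong duality, no gap
```

    A grid search suffices here because the dual function is always concave, whatever the primal problem; this is also why Lagrangian bounds are cheap to compute relative to the primal.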