
    Variational inequalities characterizing weak minimality in set optimization

    We introduce the notion of weak minimizer in set optimization. Necessary and sufficient conditions in terms of scalarized variational inequalities of Stampacchia and Minty type, respectively, are proved. As an application, we obtain necessary and sufficient optimality conditions for weak efficiency in vector optimization in infinite-dimensional spaces. A Minty variational principle in this framework is proved as a corollary of our main result. Comment: Includes an appendix summarizing results which have been submitted but not yet published.
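
    For orientation, the classical scalar Stampacchia and Minty variational inequalities that the paper's scalarized set-valued versions generalize read as follows; the feasible set K and differentiable objective f are generic illustration symbols, not objects from the paper:

        % Classical Stampacchia VI: \bar{x} solves
        \langle \nabla f(\bar{x}),\, y - \bar{x} \rangle \ge 0 \quad \forall\, y \in K.
        % Classical Minty VI: \bar{x} solves
        \langle \nabla f(y),\, y - \bar{x} \rangle \ge 0 \quad \forall\, y \in K.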

    Design of First-Order Optimization Algorithms via Sum-of-Squares Programming

    In this paper, we propose a framework based on sum-of-squares programming to design iterative first-order optimization algorithms for smooth and strongly convex problems. Our starting point is to develop a polynomial matrix inequality as a sufficient condition for exponential convergence of the algorithm. The entries of this matrix are polynomial functions of the unknown parameters (exponential decay rate, stepsize, momentum coefficient, etc.). We then formulate a polynomial optimization problem in which the objective is to optimize the exponential decay rate over the parameters of the algorithm. Finally, we use sum-of-squares programming as a tractable relaxation of this polynomial optimization problem. We illustrate the utility of the proposed framework by designing a first-order algorithm that shares the same structure as Nesterov's accelerated gradient method.
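
    As a companion to the abstract, a minimal sketch of the parameterized Nesterov-structured iteration family such a framework would tune; the stepsize h, momentum beta, and the quadratic test problem are illustrative assumptions, not values from the paper:

        import numpy as np

        def nesterov_like(grad, x0, h, beta, iters=200):
            """Nesterov-structured method with free parameters (h, beta).

            y_k     = x_k + beta * (x_k - x_{k-1})   # momentum extrapolation
            x_{k+1} = y_k - h * grad(y_k)            # gradient step at y_k
            """
            x_prev, x = x0.copy(), x0.copy()
            for _ in range(iters):
                y = x + beta * (x - x_prev)
                x_prev, x = x, y - h * grad(y)
            return x

        # Toy usage on a strongly convex quadratic f(x) = 0.5 * x^T A x
        A = np.diag([1.0, 10.0])                      # L = 10, m = 1
        grad = lambda x: A @ x
        h = 1.0 / 10.0                                # classical 1/L stepsize
        beta = (10.0**0.5 - 1.0) / (10.0**0.5 + 1.0)  # classical momentum
        x_opt = nesterov_like(grad, np.array([5.0, -3.0]), h, beta)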

    solveME: fast and reliable solution of nonlinear ME models.

    Background: Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), they are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints.
    Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models using a quad-precision NLP solver (Quad MINOS). Our method was up to 45% faster than binary search for six significant digits in growth rate. We also develop a fast, quad-precision flux variability analysis that is accelerated (up to 60× speedup) via solver warm-starts. Finally, we employ the tools developed to investigate growth-coupled succinate overproduction, accounting for proteome constraints.
    Conclusions: Just as genome-scale metabolic reconstructions have become an invaluable tool for computational and systems biologists, we anticipate that these fast and numerically reliable ME solution methods will accelerate the widespread adoption of ME models by researchers in these fields.
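
    For contrast with the quad-precision direct solve, a sketch of the binary-search baseline mentioned above: growth maximization as a feasibility search over the growth rate mu. The is_feasible oracle is a placeholder assumption standing in for one LP/NLP solve of the ME model per call:

        def max_growth_rate(is_feasible, mu_lo=0.0, mu_hi=2.0, digits=6):
            """Bisection for the largest feasible growth rate mu."""
            tol = 10.0 ** (-digits)
            while mu_hi - mu_lo > tol:
                mu = 0.5 * (mu_lo + mu_hi)
                if is_feasible(mu):
                    mu_lo = mu   # feasible: optimum is at least mu
                else:
                    mu_hi = mu   # infeasible: optimum lies below mu
            return mu_lo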

    Optimality and duality for generalized fractional programming involving nonsmooth (F, ρ)-convex functions

    Using a parametric approach, we establish necessary and sufficient optimality conditions and derive duality theorems for a class of nonsmooth generalized minimax fractional programming problems containing (F, ρ)-convex functions.
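
    For context, the classical parametric device behind such approaches (generic symbols, not the paper's exact formulation) converts the ratio objective into a root-finding problem:

        % Generalized fractional program (with g_i(x) > 0 on X):
        \min_{x \in X} \ \max_{1 \le i \le p} \frac{f_i(x)}{g_i(x)}
        % Parametric companion; \lambda^* is optimal iff F(\lambda^*) = 0:
        F(\lambda) = \min_{x \in X} \ \max_{1 \le i \le p} \bigl( f_i(x) - \lambda\, g_i(x) \bigr)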

    Decision analysis: vector optimization theory

    First published in Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales, vol. 93, no. 4 (1999), by the Real Academia de Ciencias Exactas, Físicas y Naturales.

    A new algorithm for generalized fractional programs

    A new dual problem for convex generalized fractional programs with no duality gap is presented, and it is shown how this dual problem can be solved efficiently using a parametric approach. The resulting algorithm can be seen as “dual” to the Dinkelbach-type algorithm for generalized fractional programs, since it approximates the optimal objective value of the dual (primal) problem from below. Convergence results for this algorithm are derived, and an easy condition to achieve superlinear convergence is also established. Moreover, under some additional assumptions the algorithm also recovers, at the same time, an optimal solution of the primal problem. We also consider a variant of this new algorithm, based on scaling the “dual” parametric function. The numerical results, in the case of quadratic-linear ratios and linear constraints, show that the performance of the new algorithm and its scaled version is superior to that of the Dinkelbach-type algorithms. From the computational results it also appears that, contrary to the primal approach, the “dual” approach is less influenced by scaling.
    Keywords: fractional programming; generalized fractional programming; Dinkelbach-type algorithms; quasiconvexity; Karush-Kuhn-Tucker conditions; duality.
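
    For reference, a minimal sketch of the primal Dinkelbach-type iteration the new algorithm is “dual” to; solve_inner is a placeholder oracle for the convex parametric subproblem, an assumption for illustration:

        def dinkelbach(solve_inner, lam0=0.0, tol=1e-8, max_iter=100):
            """Dinkelbach-type iteration for min_x max_i f_i(x)/g_i(x).

            solve_inner(lam) returns (x, value, ratio), where
              value = min_x max_i (f_i(x) - lam * g_i(x))  and
              ratio = max_i f_i(x)/g_i(x) at the returned x.
            """
            lam = lam0
            for _ in range(max_iter):
                x, value, ratio = solve_inner(lam)
                if abs(value) <= tol:   # F(lam) = 0 iff lam is optimal
                    return x, lam
                lam = ratio             # next parameter: current best ratio
            return x, lam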

    Second-order optimality conditions for problems with C1 data

    In this paper we obtain second-order optimality conditions of Karush–Kuhn–Tucker and Fritz John type for a problem with inequality constraints and a set constraint in a nonsmooth setting, using second-order directional derivatives. In the necessary conditions we suppose that the objective function and the active constraints are continuously differentiable, but their gradients are not necessarily locally Lipschitz. In the sufficient conditions for a global minimum x̄ we assume that the objective function is differentiable at x̄ and second-order pseudoconvex at x̄, a notion introduced by the authors [I. Ginchev, V.I. Ivanov, Higher-order pseudoconvex functions, in: I.V. Konnov, D.T. Luc, A.M. Rubinov (Eds.), Generalized Convexity and Related Topics, in: Lecture Notes in Econom. and Math. Systems, vol. 583, Springer, 2007, pp. 247–264], and that the constraints are both differentiable and quasiconvex at x̄. In the sufficient conditions for an isolated local minimum of order two we suppose that the problem belongs to the class C1,1. We show that these conditions do not hold for C1 problems that are not C1,1. Finally, a new notion, the parabolic local minimum, is defined and applied to extend the sufficient conditions for an isolated local minimum from problems with C1,1 data to problems with C1 data.
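
    For reference, the smooth unconstrained model of the “isolated local minimum of order two” notion used above (a classical textbook condition, not the paper's nonsmooth result):

        % If the Hessian is positive definite at a stationary point,
        \nabla f(\bar{x}) = 0, \qquad u^{\top} \nabla^{2} f(\bar{x})\, u > 0 \quad \forall\, u \ne 0,
        % then for some c > 0, near \bar{x}:
        f(x) \;\ge\; f(\bar{x}) + c\,\|x - \bar{x}\|^{2},
        % i.e. \bar{x} is an isolated local minimum of order two.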