
    Using a Factored Dual in Augmented Lagrangian Methods for Semidefinite Programming

    In the context of augmented Lagrangian approaches for solving semidefinite programming problems, we investigate the possibility of eliminating the positive semidefinite constraint on the dual matrix by employing a factorization. We give hints on how to deal with the resulting unconstrained maximization of the augmented Lagrangian. We further use the approximate maximum of the augmented Lagrangian with the aim of improving the convergence rate of alternating direction augmented Lagrangian frameworks. Numerical results are reported, showing the benefits of the approach.
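
    As a rough illustration of the factorization idea, here is a minimal numpy sketch, assuming a standard-form SDP dual with dual slack matrix Z replaced by V Vᵀ so that positive semidefiniteness holds by construction; the function name, argument names, and the exact augmented Lagrangian form below are assumptions for the sketch, not the paper's notation:

```python
import numpy as np

def dual_aug_lagrangian(y, V, C, A_list, b, X, rho):
    """Augmented Lagrangian of the SDP dual  max b^T y  s.t.  C - A*(y) = Z,
    Z >= 0, with the PSD constraint eliminated via the factorization Z = V V^T.
    A_list holds the symmetric data matrices A_i, so A*(y) = sum_i y_i * A_i;
    X is the current primal estimate and rho > 0 the penalty parameter."""
    Z = V @ V.T                                              # PSD by construction
    R = C - sum(yi * Ai for yi, Ai in zip(y, A_list)) - Z    # dual residual
    return b @ y - np.sum(X * R) - 0.5 * rho * np.linalg.norm(R, "fro") ** 2
```

    The resulting function can then be maximized over (y, V) with no conic constraint at all, e.g. by handing its negation to an off-the-shelf quasi-Newton routine.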

    Lagrange optimality system for a class of nonsmooth convex optimization

    In this paper, we revisit the augmented Lagrangian method for a class of nonsmooth convex optimization problems. We present the Lagrange optimality system of the augmented Lagrangian associated with these problems, and establish its connections with the standard optimality condition and the saddle-point condition of the augmented Lagrangian, which provides a powerful tool for developing numerical algorithms. We apply a linear Newton method to the Lagrange optimality system to obtain a novel algorithm applicable to a variety of nonsmooth convex optimization problems arising in practical applications. Under suitable conditions, we prove the nonsingularity of the Newton system and the local convergence of the algorithm.
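
    To make the "Newton method on an optimality system" idea concrete, here is a minimal semismooth-Newton sketch for the composite model min f(x) + g(x) with smooth f and nonsmooth g. The system below, built from the proximal map of g via the identity lam ∈ ∂g(x) ⟺ x = prox_g(x + lam), is an illustrative stand-in for the paper's Lagrange optimality system, not its exact form:

```python
import numpy as np

def semismooth_newton(grad_f, hess_f, prox_g, dprox_g, x0, lam0,
                      tol=1e-10, max_iter=50):
    """Linear (semismooth) Newton iteration on the optimality system
        F(x, lam) = [ grad_f(x) + lam ; x - prox_g(x + lam) ] = 0,
    which encodes  0 in grad_f(x) + dg(x)  for  min f(x) + g(x).
    dprox_g returns an element of the generalized Jacobian of prox_g."""
    n = x0.size
    x, lam = x0.copy(), lam0.copy()
    for _ in range(max_iter):
        F = np.concatenate([grad_f(x) + lam, x - prox_g(x + lam)])
        if np.linalg.norm(F) < tol:
            break
        D = dprox_g(x + lam)              # e.g. diagonal 0/1 matrix for the l1 prox
        J = np.block([[hess_f(x),     np.eye(n)],
                      [np.eye(n) - D, -D       ]])
        step = np.linalg.solve(J, -F)
        x, lam = x + step[:n], lam + step[n:]
    return x, lam
```

    For instance, with g(x) = mu * ||x||_1, prox_g is the soft-thresholding map and dprox_g can return the diagonal 0/1 matrix with ones exactly where |x + lam| > mu.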

    An Augmented Lagrangian Neural Network for the Fixed-Time Solution of Linear Programming

    In this paper, a recurrent neural network based on the augmented Lagrangian method is proposed for solving linear programming problems. The design of this neural network rests on the Karush-Kuhn-Tucker (KKT) optimality conditions and on a function that guarantees fixed-time convergence. To this end, slack variables are used to transform the initial linear programming problem into an equivalent one containing only equality constraints. The activation functions of the neural network are then designed as fixed-time controllers that enforce the KKT optimality conditions. Simulation results on an academic example and an application example show the effectiveness of the neural network.
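
    The following sketch shows how such network dynamics are typically written down. The activation is a standard fixed-time stabilization form with exponents a < 1 < b; the network equations, gains, and names (fixed_time_act, kkt_dynamics) are an illustrative reconstruction under those assumptions, not the paper's exact design, and sign constraints on x are omitted for brevity:

```python
import numpy as np

def fixed_time_act(e, a=0.5, b=1.5, k1=1.0, k2=1.0):
    """Activation used as a fixed-time controller: the pair of exponents
    a < 1 < b yields convergence within a time bounded independently of
    the initial state (gains k1, k2 are illustrative)."""
    return k1 * np.sign(e) * np.abs(e) ** a + k2 * np.sign(e) * np.abs(e) ** b

def kkt_dynamics(z, A, bvec, c, rho=1.0):
    """Right-hand side of an augmented-Lagrangian network for
       min c^T x  s.t.  A x = bvec
    (inequalities already converted to equalities via slack variables).
    The state is z = (x, lam): primal variables and multipliers."""
    n = A.shape[1]
    x, lam = z[:n], z[n:]
    r = A @ x - bvec                       # primal residual
    grad_x = c + A.T @ (lam + rho * r)     # gradient of the augmented Lagrangian
    return np.concatenate([-fixed_time_act(grad_x),   # primal descent
                            fixed_time_act(r)])       # dual ascent
```

    Integrating these dynamics with any ODE solver (or a simple forward-Euler loop) is designed to drive the KKT residuals to zero within a bounded time, regardless of the initialization.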

    Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms with Directed Gossip Communication

    We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x = x*. The objective function of the corresponding optimization problem is the sum of the nodes' private convex objectives (each known only to its own node), and each node imposes a private convex constraint on the allowed values of x. We solve this problem for generic connected network topologies with asymmetric random link failures with a novel distributed, decentralized algorithm. We refer to this algorithm as AL-G (augmented Lagrangian gossiping), and to its variants as AL-MG (augmented Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast gossiping). The AL-G algorithm is based on the augmented Lagrangian dual function. Dual variables are updated by the standard method of multipliers, at a slow time scale. To update the primal variables, we propose a novel, Gauss-Seidel type, randomized algorithm operating at a fast time scale. AL-G uses unidirectional gossip communication, only between immediate neighbors in the network, and is resilient to random link failures. For networks with reliable communication (i.e., no failures), the simplified AL-BG algorithm reduces communication, computation, and data storage costs. We prove convergence for all proposed algorithms and demonstrate by simulations their effectiveness on two applications: l_1-regularized logistic regression for classification and cooperative spectrum sensing for cognitive radio networks.
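
    A compact sketch of the two time scales, assuming per-edge agreement constraints x[i] = x[j] between neighbors; the node-local subproblem solver local_argmin and all names here are assumptions made for the sketch, not the paper's exact formulation:

```python
import numpy as np

def al_g_sketch(nodes, neighbors, rho, n_outer, n_inner):
    """Two-time-scale augmented Lagrangian scheme in the spirit of AL-G.
    Each node i keeps a local copy x[i]; edge multipliers mu[(i, j)]
    enforce the agreement constraint x[i] - x[j] = 0 between neighbors."""
    rng = np.random.default_rng(0)
    x = {i: node.x0.copy() for i, node in nodes.items()}
    mu = {(i, j): np.zeros_like(x[i]) for i in nodes for j in neighbors[i]}
    ids = list(nodes)
    for _ in range(n_outer):
        # Fast time scale: randomized Gauss-Seidel sweep -- one node at a
        # time re-minimizes its local augmented Lagrangian term, using only
        # values gossiped from its immediate neighbors.
        for _ in range(n_inner):
            i = ids[rng.integers(len(ids))]
            x[i] = nodes[i].local_argmin(x, mu, rho)   # node-local subproblem
        # Slow time scale: standard method-of-multipliers dual ascent on
        # the agreement constraints.
        for (i, j), m in mu.items():
            mu[(i, j)] = m + rho * (x[i] - x[j])
    return x
```

    The separation of scales is the key design choice: many cheap randomized primal passes run between successive multiplier updates, so only neighbor-to-neighbor gossip is ever needed.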

    A Primal-Dual Augmented Lagrangian

    Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1-LCL) method.
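
    For reference, a frequently used form of such a primal-dual augmented Lagrangian for min f(x) subject to c(x) = 0, reconstructed from the general literature on this construction (with penalty parameter mu > 0 and dual estimate y_e; the paper's exact function may differ):

```latex
\begin{equation*}
  M(x, y;\, y_e, \mu) \;=\; f(x) \;-\; c(x)^{\top} y_e
    \;+\; \frac{1}{2\mu}\,\lVert c(x)\rVert_2^2
    \;+\; \frac{1}{2\mu}\,\lVert c(x) + \mu\,(y - y_e)\rVert_2^2 .
\end{equation*}
```

    Minimizing M jointly in (x, y) makes the last term act as a proximal penalty: for fixed x, the minimizing y is the first-order multiplier estimate y_e - c(x)/mu, which is what allows the quality of the dual variables to be monitored, and, if desired, explicitly bounded, inside the subproblem.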