
    A smoothing Newton method based on the generalized Fischer-Burmeister function for MCPs

    We present a smooth approximation of the generalized Fischer-Burmeister function, in which the 2-norm in the FB function is relaxed to a general p-norm (p > 1), and establish some favorable properties for it, for example Jacobian consistency. With the smoothing function, we transform the mixed complementarity problem (MCP) into solving a sequence of smooth systems of equations.
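
    For context, the generalized Fischer-Burmeister function is phi_p(a, b) = ||(a, b)||_p - (a + b) with p > 1, and it vanishes exactly when a >= 0, b >= 0, and ab = 0. The sketch below shows one plausible smoothing, folding the parameter mu into the p-norm; this is an illustrative construction and not necessarily the paper's exact choice.

        import numpy as np

        def phi_p(a, b, p=3.0):
            """Generalized Fischer-Burmeister function ||(a,b)||_p - (a + b)."""
            return (np.abs(a)**p + np.abs(b)**p)**(1.0 / p) - (a + b)

        def phi_p_smooth(a, b, mu, p=3.0):
            """Plausible smoothing ||(a,b,mu)||_p - (a + b): differentiable for
            mu > 0 and recovers phi_p as mu -> 0 (hypothetical, for illustration)."""
            return (np.abs(a)**p + np.abs(b)**p + abs(mu)**p)**(1.0 / p) - (a + b)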

    A global algorithm with smoothed Jacobian for nonlinear complementarity problems

    In this paper, we use the smoothing Jacobian strategy to propose a new algorithm for solving nonlinear complementarity problems, based on their reformulation as a nonsmooth system of equations. The algorithm can be seen as a generalization of the one proposed in [18]. We develop its global convergence theory and, under certain assumptions, show that the algorithm converges locally q-superlinearly or q-quadratically to a solution of the problem. Numerical experiments show good performance of the proposed algorithm.
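
    As a rough illustration of the smoothing-Jacobian idea (a generic sketch, not the paper's algorithm): the complementarity conditions x >= 0, F(x) >= 0, x^T F(x) = 0 are rewritten as Phi(x) = 0 with Phi_i(x) = phi(x_i, F_i(x)), and Newton steps are taken on a smoothed Phi while the smoothing parameter is driven to zero.

        import numpy as np

        def smoothing_newton(F, JF, x, mu=1.0, tol=1e-8, max_iter=100):
            """Bare-bones smoothing Newton loop for an NCP, using the smoothed
            Fischer-Burmeister function sqrt(a^2 + b^2 + mu^2) - (a + b).
            Real algorithms add line searches and careful mu updates."""
            for _ in range(max_iter):
                a, b = x, F(x)
                r = np.sqrt(a**2 + b**2 + mu**2)
                Phi = r - (a + b)
                if np.linalg.norm(Phi) < tol and mu < tol:
                    break
                # Chain rule: dPhi = diag(a/r - 1) + diag(b/r - 1) @ JF(x)
                J = np.diag(a / r - 1.0) + np.diag(b / r - 1.0) @ JF(x)
                x = x + np.linalg.solve(J, -Phi)
                mu *= 0.5
            return x

    For instance, F(x) = M x + q with positive definite M and JF(x) = M gives a linear complementarity problem on which a loop of this kind typically converges in a handful of iterations.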

    A trust region-type normal map-based semismooth Newton method for nonsmooth nonconvex composite optimization

    We propose a novel trust region method for solving a class of nonsmooth and nonconvex composite-type optimization problems. The approach embeds inexact semismooth Newton steps for finding zeros of a normal map-based stationarity measure for the problem in a trust region framework. Based on a new merit function and acceptance mechanism, global convergence and transition to fast local q-superlinear convergence are established under standard conditions. In addition, we verify that the proposed trust region globalization is compatible with the Kurdyka-Łojasiewicz (KL) inequality, yielding finer convergence results. We further derive new normal map-based representations of the associated second-order optimality conditions that have direct connections to the local assumptions required for fast convergence. Finally, we study the behavior of our algorithm when the Hessian matrix of the smooth part of the objective function is approximated by BFGS updates. We successfully link the KL theory, properties of the BFGS approximations, and a Dennis-Moré-type condition to show superlinear convergence of the quasi-Newton version of our method. Numerical experiments on sparse logistic regression and image compression illustrate the efficiency of the proposed algorithm.
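
    To make the stationarity measure concrete: for composite problems min f(x) + g(x) with f smooth and g proximable, a normal map is typically defined as F(z) = grad f(prox_{lam*g}(z)) + (z - prox_{lam*g}(z)) / lam, and its zeros correspond to stationary points via x = prox_{lam*g}(z). A minimal sketch for the l1 regularizer (an illustrative special case; the paper treats a general problem class):

        import numpy as np

        def soft_threshold(z, t):
            """Proximal operator of t * ||.||_1."""
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def normal_map_residual(grad_f, z, lam, reg=0.1):
            """Normal-map residual for g = reg * ||.||_1: zeros of this map
            correspond to stationary points x = soft_threshold(z, lam * reg)."""
            x = soft_threshold(z, lam * reg)
            return grad_f(x) + (z - x) / lam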

    Limited Memory BFGS method for Sparse and Large-Scale Nonlinear Optimization

    Optimization-based control systems are used in many areas of application, including aerospace engineering, economics, robotics and automotive engineering. This work was motivated by the demand for a large-scale sparse solver for this problem class. The sparsity of the problem is exploited for computational efficiency, with regard to both performance and memory consumption. This includes efficient storage of the occurring matrices and vectors and an appropriate approximation of the Hessian matrix, which is the main subject of this work. To this end, a limited-memory BFGS method has been developed and implemented in WORHP, a software library for solving nonlinear optimization problems. Its performance has been tested on various optimal control problems and test sets.
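
    The workhorse of any limited-memory BFGS implementation is the two-loop recursion, which applies the inverse Hessian approximation implicitly from the m most recent curvature pairs (s_i, y_i) at O(mn) cost instead of storing an n x n matrix. A standard textbook sketch (not WORHP's actual code):

        import numpy as np

        def lbfgs_direction(grad, s_list, y_list):
            """Two-loop recursion (Nocedal & Wright, Alg. 7.4): returns H @ grad
            for the L-BFGS inverse Hessian approximation H built from the stored
            pairs (oldest first), with the usual gamma * I initial scaling.
            The quasi-Newton step is the negative of the result."""
            q = grad.copy()
            alphas = []
            for s, y in zip(reversed(s_list), reversed(y_list)):  # newest first
                alpha = s.dot(q) / y.dot(s)
                alphas.append(alpha)
                q -= alpha * y
            if s_list:  # initial scaling gamma = s^T y / y^T y from latest pair
                q *= s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
            for (s, y), alpha in zip(zip(s_list, y_list), reversed(alphas)):
                beta = y.dot(q) / y.dot(s)
                q += (alpha - beta) * s
            return q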

    Large-scale Binary Quadratic Optimization Using Semidefinite Relaxation and Applications

    In computer vision, many problems such as image segmentation, pixel labelling, and scene parsing can be formulated as binary quadratic programs (BQPs). For submodular problems, cut-based methods can be employed to efficiently solve large-scale instances. General nonsubmodular problems, however, are significantly more challenging, and finding a solution for problems large enough to be of practical interest typically requires relaxation. Two standard relaxation methods are widely used for solving general BQPs: spectral methods and semidefinite programming (SDP), each with its own advantages and disadvantages. Spectral relaxation is simple and easy to implement, but its bound is loose. Semidefinite relaxation has a tighter bound, but its computational complexity is high, especially for large-scale problems. In this work, we present a new SDP formulation for BQPs with two desirable properties. First, it has a relaxation bound similar to that of conventional SDP formulations. Second, compared with conventional SDP methods, the new formulation leads to a significantly more efficient and scalable dual optimization approach, with the same degree of complexity as spectral methods. We then propose two solvers for the dual problem, a quasi-Newton method and a smoothing Newton method, both significantly more efficient than standard interior-point methods. In practice, the smoothing Newton solver is faster for dense or medium-sized problems, while the quasi-Newton solver is preferable for large sparse or structured problems. Our experiments on several computer vision applications, including clustering, image segmentation, co-segmentation and registration, show the potential of our SDP formulation for solving large-scale BQPs.
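
    For reference, the standard semidefinite relaxation underlying such formulations lifts the binary vector to a matrix variable and drops the rank constraint (this is the textbook relaxation; the paper's new formulation adds further structure on top of it):

        \min_{x \in \{-1,1\}^n} x^\top A x
        \quad \longrightarrow \quad
        \min_{X \in \mathbb{S}^n} \operatorname{tr}(AX)
        \quad \text{s.t.} \quad \operatorname{diag}(X) = \mathbf{1}, \; X \succeq 0,

    where X plays the role of xx^\top; the relaxation is exact precisely when the optimal X has rank one.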

    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the Generalized Minimal RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a κ-conditioned problem in O(√κ) iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in O(κ^{1/4}) iterations, an order-of-magnitude reduction, despite a worst-case bound of O(√κ) iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like O(1/k^2) instead of O(1/k), where k is the iteration index.
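
    The key observation is that, for a quadratic objective, the ADMM iteration is affine, z_{k+1} = M z_k + c, so its fixed point solves the linear system (I - M) z = c, which GMRES can attack directly and matrix-free. A minimal sketch along those lines, where admm_step is a placeholder for the problem-specific update (not the SeDuMi-embedded implementation):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def gmres_accelerated_fixed_point(admm_step, z0):
            """Solve (I - M) z = c for the affine iteration
            admm_step(z) = M @ z + c, without ever forming M."""
            n = z0.size
            c = admm_step(np.zeros(n))  # admm_step(0) = M @ 0 + c = c
            I_minus_M = LinearOperator(
                (n, n), matvec=lambda v: v - (admm_step(v) - c))
            z, info = gmres(I_minus_M, c, x0=z0)  # info == 0 on convergence
            return z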