
    Projection methods in conic optimization

    There exist efficient algorithms to project a point onto the intersection of a convex cone and an affine subspace. These conic projections are in turn the workhorse of a range of algorithms in conic optimization, with a variety of applications in science, finance and engineering. This chapter reviews some of these algorithms, emphasizing the so-called regularization algorithms for linear conic optimization and their applications in polynomial optimization. It presents material from several recent research articles; we aim here to clarify the ideas, present them in a general framework, and point out important techniques.
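
    As a concrete illustration of the basic operation, here is a minimal sketch of Dykstra's alternating-projection algorithm applied to the intersection of the nonnegative orthant (a simple convex cone) and an affine subspace {x : Ax = b}. The choice of cone, the problem data, and the iteration count are illustrative, not the chapter's general setting.

```python
# Dykstra's algorithm for projecting a point z onto the intersection of
# the nonnegative orthant {x >= 0} and the affine subspace {x : Ax = b}.
import numpy as np

def project_cone(x):
    """Projection onto the nonnegative orthant (a simple convex cone)."""
    return np.maximum(x, 0.0)

def project_affine(x, A, b, A_pinv):
    """Projection onto the affine subspace {x : Ax = b}."""
    return x - A_pinv @ (A @ x - b)

def dykstra(z, A, b, iters=500):
    """Converges to the projection of z onto the intersection
    (not merely to some point in the intersection)."""
    A_pinv = np.linalg.pinv(A)
    x = z.copy()
    p = np.zeros_like(z)  # correction carried for the cone step
    q = np.zeros_like(z)  # correction carried for the affine step
    for _ in range(iters):
        y = project_cone(x + p)
        p = x + p - y
        x = project_affine(y + q, A, b, A_pinv)
        q = y + q - x
    return x

# Example: project a random point onto the simplex {x >= 0, sum(x) = 1}.
rng = np.random.default_rng(0)
z = rng.normal(size=5)
A = np.ones((1, 5)); b = np.array([1.0])
x = dykstra(z, A, b)
print(x, A @ x)  # nonnegative entries that sum to 1
```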

    A globally convergent neurodynamics optimization model for mathematical programming with equilibrium constraints

    This paper introduces a neurodynamic optimization model to compute the solution of mathematical programs with equilibrium constraints (MPECs). A smoothing method based on an NCP function is used to obtain a relaxed optimization problem. The optimal solution of the global optimization problem is estimated using a new neurodynamic system, which converges to its equilibrium point in finite time. Compared to existing models, the proposed model has a simple structure with low complexity. The new dynamical system is investigated theoretically, and it is proved that the steady state of the proposed neural network is asymptotically stable and converges globally to the optimal solution of the MPEC. Numerical simulations of several MPEC examples are presented, all of which confirm the agreement between the theoretical and numerical aspects of the problem and show the effectiveness of the proposed model. Moreover, an application to a resource allocation problem shows that the new method is a simple but efficient and practical algorithm for solving real-world MPEC problems.
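
    A minimal sketch of the neurodynamic idea, using a generic projection neural network rather than the paper's NCP-function based model: an ODE whose equilibrium point is the constrained optimizer, integrated here by forward Euler. The objective f, the box feasible set, and all step sizes are illustrative assumptions.

```python
# Generic projection neural network: integrate
#   dx/dt = -x + P(x - alpha * grad f(x)),
# whose equilibrium x* satisfies x* = P(x* - alpha * grad f(x*)),
# i.e. x* is a KKT point of min f(x) over the feasible set.
import numpy as np

C = np.array([2.0, -1.0, 0.5])  # illustrative data for f(x) = 0.5*||x - C||^2

def grad_f(x):
    return x - C

def project_box(x, lo=0.0, hi=1.0):
    # Projection onto the feasible set (here, the box [lo, hi]^n).
    return np.clip(x, lo, hi)

def neurodynamic_solve(x0, alpha=0.5, dt=0.1, steps=2000):
    x = x0.copy()
    for _ in range(steps):
        x += dt * (project_box(x - alpha * grad_f(x)) - x)  # Euler step
    return x

print(neurodynamic_solve(np.zeros(3)))  # ≈ [1.0, 0.0, 0.5]: projection of C onto the box
```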

    An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming

    Powerful commercial solvers based on interior-point methods (IPMs), such as Gurobi and Mosek, have been hugely successful in solving large-scale linear programming (LP) problems. The high efficiency of these solvers depends critically on the sparsity of the problem data and on advanced matrix factorization techniques. For a large-scale LP problem whose data matrix A is dense (possibly structured), or whose normal matrix AA^T has a dense Cholesky factor (even with re-ordering), these solvers may require excessive computational cost and/or extremely heavy memory usage in each interior-point iteration. Unfortunately, the natural remedy, i.e., IPM solvers based on iterative methods, although able to avoid the explicit computation of the coefficient matrix and its factorization, is not practically viable due to the inherent extreme ill-conditioning of the large-scale normal equation arising in each interior-point iteration. To provide a better alternative for solving large-scale LPs with dense data, or whose normal equation requires an expensive factorization, we propose a semismooth Newton based inexact proximal augmented Lagrangian (Snipal) method. Unlike classical IPMs, in each iteration of Snipal, iterative methods can be used efficiently to solve simpler yet better-conditioned semismooth Newton linear systems. Moreover, Snipal not only enjoys fast asymptotic superlinear convergence but is also proven to have a finite termination property. Numerical comparisons with Gurobi demonstrate the encouraging potential of Snipal for handling large-scale LP problems where the constraint matrix A has a dense representation or AA^T has a dense factorization even with an appropriate re-ordering.
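
    The sketch below illustrates the overall scheme on a tiny dense LP (min c'x subject to Ax = b, x >= 0): an augmented Lagrangian outer loop applied to the dual problem, with each subproblem solved by a semismooth Newton iteration whose generalized Jacobian is sigma * A diag(D) A' for a 0/1 diagonal D. It is a greatly simplified stand-in for Snipal: dense linear algebra, a fixed penalty, a small ridge standing in for the proximal term, and no line search; all problem data are illustrative.

```python
# Semismooth Newton augmented Lagrangian sketch for LP, applied to the dual
#   max b'y  s.t.  A'y + s = c, s >= 0,
# with x acting as the multiplier (and primal iterate). Eliminating s gives
# the inner stationarity condition  A*max(x + sigma*(A'y - c), 0) - b = 0.
import numpy as np

def snipal_sketch(A, b, c, sigma=10.0, outer=50, inner=30, tol=1e-9):
    m, n = A.shape
    x = np.zeros(n)                # primal iterate / ALM multiplier
    y = np.zeros(m)                # dual iterate
    ridge = 1.0 / sigma            # keeps the Newton system nonsingular
    for _ in range(outer):
        for _ in range(inner):     # semismooth Newton on the inner problem
            w = x + sigma * (A.T @ y - c)
            grad = A @ np.maximum(w, 0.0) - b
            if np.linalg.norm(grad) < tol:
                break
            D = (w > 0).astype(float)              # gen. Jacobian of max(., 0)
            H = sigma * (A * D) @ A.T + ridge * np.eye(m)
            y += np.linalg.solve(H, -grad)
        x = np.maximum(x + sigma * (A.T @ y - c), 0.0)  # multiplier update
    return x, y

# Example: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0 (solution x = (1, 0)).
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x, y = snipal_sketch(A, b, c)
print(x)  # ≈ [1, 0]
```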

    A squared smoothing Newton method for semidefinite programming

    This paper proposes a squared smoothing Newton method via the Huber smoothing function for solving semidefinite programming problems (SDPs). We first study the fundamental properties of the matrix-valued mapping defined upon the Huber function. Using these results and existing ones in the literature, we then conduct a rigorous convergence analysis and establish convergence properties for the proposed algorithm. In particular, we show that the proposed method is well-defined and admits global convergence. Moreover, under suitable regularity conditions, namely the primal and dual constraint nondegeneracy conditions, the proposed method is shown to have a superlinear convergence rate. To evaluate the practical performance of the algorithm, we conduct extensive numerical experiments for solving various classes of SDPs. Comparison with the state-of-the-art SDP solver SDPNAL+ demonstrates that our method is also efficient for computing accurate solutions of SDPs.
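
    The sketch below shows one plausible reading of that matrix-valued mapping: a scalar Huber smoothing of max(t, 0) applied spectrally to a symmetric matrix, which converges to the projection onto the positive semidefinite cone as the smoothing parameter mu tends to 0. The paper's exact smoothing function and its squared variant may differ in detail.

```python
# Huber-smoothed projection onto the PSD cone, applied via eigendecomposition.
import numpy as np

def huber_plus(t, mu):
    """C^1 approximation of max(t, 0); converges to it as mu -> 0."""
    return np.where(t <= 0, 0.0,
           np.where(t >= mu, t - 0.5 * mu, t**2 / (2.0 * mu)))

def smoothed_psd_projection(X, mu):
    """Apply huber_plus to the eigenvalues of a symmetric matrix X."""
    lam, Q = np.linalg.eigh(X)
    return (Q * huber_plus(lam, mu)) @ Q.T   # Q diag(phi_mu(lam)) Q'

# Compare with the exact PSD projection for a random symmetric matrix.
rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4)); X = (B + B.T) / 2
lam, Q = np.linalg.eigh(X)
exact = (Q * np.maximum(lam, 0.0)) @ Q.T
for mu in (1.0, 0.1, 0.001):
    err = np.linalg.norm(smoothed_psd_projection(X, mu) - exact)
    print(mu, err)  # the error shrinks as mu -> 0
```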

    A Regularized Jacobi Method for Large-Scale Linear Programming

    A parallel algorithm based on Jacobi iterations is proposed to minimize the augmented Lagrangian functions of the multiplier method for large-scale linear programming. Sparsity is efficiently exploited in determining the (column-wise) stepsizes for the Jacobi iterations. Linear convergence is shown, with a convergence ratio that depends on sparsity but not on the penalty parameter or the problem size. Using simulated parallel computation, an experimental code is tested extensively on 68 Netlib problems. Results are compared with the simplex method, an interior-point algorithm, and a Gauss-Seidel approach. We observe that the speedup over the simplex method generally increases with problem size, while the parallel solution times increase slowly, if at all. Our preliminary results compared with the other two methods are highly encouraging as well.
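
    A minimal sketch of the Jacobi-style multiplier method on a tiny LP (min c'x subject to Ax = b, x >= 0): all coordinates are updated simultaneously, with column-wise stepsizes derived from the sparsity bound A'A <= nu * diag(A'A), where nu is the maximum number of nonzeros in any row of A; this is one way a convergence ratio can depend on sparsity rather than on problem size. The paper's actual stepsize rule and regularization details may differ.

```python
# Jacobi iterations minimizing the augmented Lagrangian
#   L(x) = c'x + y'(Ax - b) + (sigma/2)*||Ax - b||^2  over x >= 0,
# inside a standard multiplier-method outer loop.
import numpy as np

def jacobi_alm(A, b, c, sigma=1.0, outer=200, inner=50):
    nu = max(int(np.count_nonzero(row)) for row in A)  # max nonzeros per row
    d = sigma * nu * np.einsum('ij,ij->j', A, A)       # column-wise stepsizes
    x = np.zeros(A.shape[1]); y = np.zeros(A.shape[0])
    for _ in range(outer):
        for _ in range(inner):
            grad = c + A.T @ (y + sigma * (A @ x - b))
            x = np.maximum(x - grad / d, 0.0)          # simultaneous (Jacobi) update
        y += sigma * (A @ x - b)                       # multiplier update
    return x, y

# Example: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0 (solution x = (1, 0)).
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x, y = jacobi_alm(A, b, c)
print(x)  # ≈ [1, 0]
```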