
    Primal-Dual Active-Set Methods for Convex Quadratic Optimization with Applications

    Primal-dual active-set (PDAS) methods are developed for solving quadratic optimization problems (QPs). Such problems arise in their own right in optimal control and statistics, the two applications of interest considered in this dissertation, and as subproblems when solving nonlinear optimization problems. PDAS methods are promising because they possess the same favorable properties as other active-set methods, such as the ability to be warm-started and to obtain highly accurate solutions by explicitly identifying sets of constraints that are active at an optimal solution. However, unlike traditional active-set methods, PDAS methods have convergence guarantees despite making rapid changes in active-set estimates, making them well suited for solving large-scale problems.

    Two PDAS variants are proposed for efficiently solving generally-constrained convex QPs. Both variants ensure global convergence of the iterates by enforcing monotonicity in a measure of progress. In addition to the standard active-set estimate, a novel uncertainty set is introduced into the framework in order to house indices of variables that have been identified as being susceptible to cycling. The introduction of the uncertainty set guarantees convergence of the algorithm, and with techniques proposed to keep the set from expanding quickly, the algorithm is shown to be very efficient in practice. Another PDAS variant is proposed for solving certain convex QPs that commonly arise when discretizing optimal control problems. The proposed framework allows inexactness in the subproblem solutions, which can significantly reduce computational cost in large-scale settings. By controlling the level of inexactness, either by exploiting knowledge of an upper bound on the norm of a matrix inverse or by dynamically estimating such a value, the method achieves convergence guarantees and is shown to outperform a method that employs exact solutions computed by direct factorization techniques.

    Finally, for applications in statistics, PDAS variants are proposed for solving isotonic regression (IR) and trend filtering (TF) problems. It is shown that PDAS can solve an IR problem with n data points using only O(n) arithmetic operations. Moreover, the method is shown to outperform the state-of-the-art method for solving IR problems, especially when warm-starting is considered. Enhancements to the method are proposed for solving general TF problems, and numerical results are presented to show that PDAS methods are viable for a broad class of such problems.
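
    As a rough illustration of the mechanism described above, the following is a minimal sketch of a primal-dual active-set iteration for a strictly convex QP with only nonnegativity bounds; the uncertainty set, general constraints, and inexactness controls from the dissertation are not reproduced, and the function name and random test data are purely illustrative.

    import numpy as np

    def pdas_nonneg_qp(H, g, max_iter=50):
        """Basic primal-dual active-set iteration for
               min  0.5*x'Hx + g'x   s.t.  x >= 0,
        with H symmetric positive definite.  KKT conditions:
               Hx + g - z = 0,  x >= 0,  z >= 0,  x_i*z_i = 0.
        Sketch only: for general H the plain iteration can cycle, which is
        exactly what the dissertation's uncertainty set guards against."""
        n = len(g)
        x = np.zeros(n)
        z = H @ x + g                       # dual estimate from stationarity
        for _ in range(max_iter):
            active = x - z < 0              # variables predicted at the bound x_i = 0
            inactive = ~active
            x_new = np.zeros(n)
            if inactive.any():
                # free variables solve the reduced (unconstrained) system
                Hii = H[np.ix_(inactive, inactive)]
                x_new[inactive] = np.linalg.solve(Hii, -g[inactive])
            z_new = H @ x_new + g           # multipliers from stationarity
            z_new[inactive] = 0.0
            if np.array_equal(active, x_new - z_new < 0):
                return x_new, z_new         # active-set estimate has stabilized
            x, z = x_new, z_new
        return x, z

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        M = rng.standard_normal((6, 6))
        H = M @ M.T + 6.0 * np.eye(6)       # well-conditioned SPD Hessian
        g = rng.standard_normal(6)
        x, z = pdas_nonneg_qp(H, g)
        print("x =", np.round(x, 4))
        print("z =", np.round(z, 4))
        print("complementarity x*z =", np.round(x * z, 8))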

    An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming

    Powerful commercial solvers based on interior-point methods (IPMs), such as Gurobi and Mosek, have been hugely successful in solving large-scale linear programming (LP) problems. The high efficiency of these solvers depends critically on the sparsity of the problem data and on advanced matrix factorization techniques. For a large-scale LP problem whose data matrix $A$ is dense (possibly structured) or whose corresponding normal matrix $AA^T$ has a dense Cholesky factor (even with re-ordering), these solvers may require excessive computational cost and/or extremely heavy memory usage in each interior-point iteration. Unfortunately, the natural remedy, i.e., IPM solvers based on iterative linear-system methods, although able to avoid the explicit computation of the coefficient matrix and its factorization, is not practically viable due to the inherent extreme ill-conditioning of the large-scale normal equation arising in each interior-point iteration. To provide a better alternative for solving large-scale LPs with dense data or requiring expensive factorization of the normal equation, we propose a semismooth Newton based inexact proximal augmented Lagrangian ({\sc Snipal}) method. Different from classical IPMs, in each iteration of {\sc Snipal}, iterative methods can efficiently be used to solve simpler yet better conditioned semismooth Newton linear systems. Moreover, {\sc Snipal} not only enjoys fast asymptotic superlinear convergence but is also proven to enjoy a finite termination property. Numerical comparisons with Gurobi have demonstrated the encouraging potential of {\sc Snipal} for handling large-scale LP problems where the constraint matrix $A$ has a dense representation or $AA^T$ has a dense factorization even with an appropriate re-ordering.
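
    For orientation, the generic augmented-Lagrangian structure for a standard-form LP, around which such a method can be built, is sketched below; this is only a textbook outline, not the exact {\sc Snipal} formulation, which is an inexact proximal variant of this scheme.

    \begin{align*}
      &\text{LP:}\qquad \min_{x \in \mathbb{R}^n}\; c^{\top}x
        \quad \text{s.t.}\quad Ax = b,\; x \ge 0,\\
      &\text{augmented Lagrangian:}\qquad
        L_{\sigma}(x; y) = c^{\top}x - y^{\top}(Ax - b) + \tfrac{\sigma}{2}\|Ax - b\|^{2},\\
      &\text{outer iteration:}\qquad
        x^{k+1} \approx \arg\min_{x \ge 0} L_{\sigma}(x; y^{k}),
        \qquad y^{k+1} = y^{k} - \sigma\,(Ax^{k+1} - b).
    \end{align*}

    The inner minimization involves the projection onto the nonnegative orthant, which is piecewise linear and hence semismooth, so its optimality system can be solved by a semismooth Newton method whose linear systems, as the abstract notes, are typically much better conditioned than the IPM normal equations.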

    Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference

    We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF inference problems. The core of our method is a very efficient bounding procedure, which combines scalable semidefinite programming (SDP) with a cutting-plane method for finding violated constraints. To further speed up the computation, several strategies are exploited, including model reduction, warm starting, and removal of inactive constraints. We analyze the performance of the proposed method under different settings and demonstrate that it either outperforms or performs on par with state-of-the-art approaches. Especially when the connectivities are dense or when the relative magnitudes of the unary costs are low, we achieve the best reported results. Experiments show that the proposed algorithm achieves better approximations than state-of-the-art methods within a variety of time budgets on challenging non-submodular MAP-MRF inference problems.
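
    For a pairwise binary MRF, MAP inference can be written as a quadratic problem over $x \in \{-1,1\}^n$, and the bounding idea can be illustrated by the standard SDP relaxation tightened with violated triangle inequalities. The sketch below uses the generic CVXPY modelling library and a random cost matrix purely for illustration; it is not the scalable SDP solver, model reduction, or warm-starting machinery used in the paper.

    import itertools
    import numpy as np
    import cvxpy as cp

    def sdp_bound_with_triangle_cuts(Q, rounds=3, cuts_per_round=10, tol=1e-3):
        """Upper bound on  max_{x in {-1,1}^n} x'Qx  from the SDP relaxation
               maximize  trace(Q X)   s.t.  X psd,  diag(X) = 1,
        tightened by adding violated triangle inequalities (a generic
        cutting-plane sketch, not the paper's implementation)."""
        n = Q.shape[0]
        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0, cp.diag(X) == 1]
        bound = None
        for _ in range(rounds):
            bound = cp.Problem(cp.Maximize(cp.trace(Q @ X)), constraints).solve()
            Xv = X.value
            # separation: look for violated triangle inequalities
            #   s_i s_j X_ij + s_i s_k X_ik + s_j s_k X_jk >= -1
            violated = []
            for i, j, k in itertools.combinations(range(n), 3):
                for si, sj, sk in [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]:
                    val = si*sj*Xv[i, j] + si*sk*Xv[i, k] + sj*sk*Xv[j, k]
                    if val < -1 - tol:
                        violated.append((val, (i, j, k), (si, sj, sk)))
            if not violated:
                break                        # no cuts found: bound is settled
            violated.sort(key=lambda t: t[0])
            for _, (i, j, k), (si, sj, sk) in violated[:cuts_per_round]:
                constraints.append(
                    si*sj*X[i, j] + si*sk*X[i, k] + sj*sk*X[j, k] >= -1)
        return bound

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.standard_normal((8, 8))
        Q = (A + A.T) / 2                    # random symmetric pairwise costs
        print("SDP bound with triangle cuts:",
              round(sdp_bound_with_triangle_cuts(Q), 4))

    In a branch-and-cut method, bounds of this kind are recomputed at the nodes of the search tree to prune subproblems.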

    Practical Enhancements in Sequential Quadratic Optimization: Infeasibility Detection, Subproblem Solvers, and Penalty Parameter Updates

    The primary focus of this dissertation is the design, analysis, and implementation of numerical methods that enhance sequential quadratic optimization (SQO) methods for solving nonlinear constrained optimization problems. These enhancements address issues that limit the practical performance of SQO methods. The first part of this dissertation presents a penalty SQO algorithm for nonlinear constrained optimization. The method attains all of the strong global and fast local convergence guarantees of classical SQO methods, but has the important additional feature that fast local convergence is guaranteed when the algorithm is employed to solve infeasible instances. A two-phase strategy, carefully constructed parameter updates, and a line search are employed to promote such convergence. The first-phase subproblem determines the reduction that can be obtained in a local model of constraint violation. The second-phase subproblem seeks to minimize a local model of a penalty function. The solutions of the two subproblems are then combined to form the search direction in such a way that it yields a reduction in the local model of constraint violation proportional to the reduction attained in the first phase. The subproblem formulations and parameter updates ensure that, near an optimal solution, the algorithm reduces to a classical SQO method for constrained optimization and, near an infeasible stationary point, it reduces to a (perturbed) SQO method for minimizing constraint violation. Global and local convergence guarantees for the algorithm are proved under reasonable assumptions, and numerical results are presented for a large set of test problems.

    In the second part of this dissertation, two matrix-free methods are presented for approximately solving large-scale exact penalty subproblems. The first approach is a novel iterative re-weighting algorithm (IRWA), which iteratively minimizes quadratic models of relaxed subproblems while simultaneously updating a relaxation vector. The second approach recasts the subproblem as a linearly constrained nonsmooth optimization problem and then applies alternating direction augmented Lagrangian (ADAL) technology to solve it. The main computational cost of each algorithm is the repeated minimization of convex quadratic functions, which can be performed matrix-free. Both algorithms are proved to be globally convergent under loose assumptions, and each requires at most $O(1/\varepsilon^2)$ iterations to reach $\varepsilon$-optimality of the objective function. Numerical experiments exhibit the ability of both algorithms to efficiently find inexact solutions, and in certain cases IRWA is shown to be more reliable than ADAL.

    The final part of this dissertation focuses on the design of the penalty parameter updating strategy in penalty SQO methods for solving large-scale nonlinear optimization problems. As the most computationally demanding aspect of such an approach is the computation of the search direction during each iteration, we consider the use of matrix-free methods for solving the direction-finding subproblems. This allows for the acceptance of inexact subproblem solutions, which can significantly reduce overall computational costs. However, such a method can be plagued by poor behavior of the global convergence mechanism, for which we consider the use of an exact penalty function. To confront this issue, we propose a dynamic penalty parameter updating strategy, employed within the subproblem solver, such that the resulting search direction predicts progress toward both feasibility and optimality. We present our penalty parameter updating strategy and prove that it does not decrease the penalty parameter unnecessarily in the neighborhood of points satisfying certain common assumptions. We also discuss two matrix-free subproblem solvers into which our updating strategy can be readily incorporated.
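
    To convey the flavor of the re-weighting idea, here is a minimal IRLS-style sketch for an exact-penalty direction-finding subproblem with absolute-value penalty terms; the smoothing rule, stopping test, and all names are illustrative assumptions, and the actual IRWA uses a specific relaxation-vector update and convergence safeguards that are not reproduced here.

    import numpy as np

    def reweighted_penalty_subproblem(H, g, A, c, rho=10.0, eps=1e-6, iters=100):
        """IRLS-style sketch for the exact-penalty subproblem
               min_d  g'd + 0.5*d'Hd + rho * sum_i |c_i + a_i'd|.
        Each |r| is replaced by the weighted quadratic r^2 / (2*w) with w ~ |r|,
        so every iteration only requires solving one linear system (which could
        be done matrix-free with an iterative solver)."""
        n = len(g)
        d = np.zeros(n)
        for _ in range(iters):
            r = c + A @ d
            w = np.maximum(np.abs(r), eps)        # smoothing / relaxation weights
            W = A.T * (rho / w)                   # = rho * A' diag(1/w)
            # stationarity of the weighted quadratic model:
            #   (H + rho A' diag(1/w) A) d = -(g + rho A' diag(1/w) c)
            d_new = np.linalg.solve(H + W @ A, -(g + W @ c))
            if np.linalg.norm(d_new - d) <= 1e-10 * (1.0 + np.linalg.norm(d)):
                return d_new
            d = d_new
        return d

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        n, m = 5, 3
        M = rng.standard_normal((n, n))
        H = M @ M.T + n * np.eye(n)               # SPD model Hessian
        g = rng.standard_normal(n)
        A = rng.standard_normal((m, n))           # constraint Jacobian
        c = rng.standard_normal(m)                # constraint values
        d = reweighted_penalty_subproblem(H, g, A, c)
        print("d =", np.round(d, 4))
        print("|c + A d| =", np.round(np.abs(c + A @ d), 4))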

    Efficient Trust Region Methods for Nonconvex Optimization

    For decades, a great deal of nonlinear optimization research has focused on modeling and solving convex problems. This has been due to the fact that convex models typically represent satisfactory estimates of real-world phenomena, and convex objects have very nice mathematical properties that make their analysis relatively straightforward. However, this focus has been changing. In various important applications, such as large-scale data fitting and learning problems, researchers are turning away from simple convex models toward more challenging nonconvex models that better represent real-world behavior and can offer more useful solutions.

    To contribute to this new focus on nonconvex optimization models, we discuss and present new techniques for solving nonconvex optimization problems that possess attractive theoretical and practical properties. First, we propose a trust region algorithm that, in the worst case, is able to drive the norm of the gradient of the objective function below a prescribed threshold $\epsilon \in (0,\infty)$ after at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, function evaluations, and derivative evaluations. This improves upon the $\mathcal{O}(\epsilon^{-2})$ bound known to hold for some other trust region algorithms and matches the $\mathcal{O}(\epsilon^{-3/2})$ bound for the recently proposed Adaptive Regularisation framework using Cubics, also known as the ARC algorithm. Our algorithm, named TRACE, follows a trust region framework, but employs modified step acceptance criteria and a novel trust region update mechanism that allow it to achieve such a worst-case global complexity bound. Importantly, we prove that the algorithm also attains global and fast local convergence guarantees under assumptions similar to those used for other trust region algorithms. We also prove a worst-case upper bound on the number of iterations the algorithm requires to obtain an approximate second-order stationary point.

    The aforementioned algorithm is based on techniques that require an exact subproblem solution in every iteration. This is a reasonable requirement for small- to medium-scale problems, but is intractable for large-scale optimization. To address this issue, the second project of this thesis proposes a general \emph{inexact} framework, which contains a wide range of algorithms with optimal complexity bounds, by defining a novel primal-dual subproblem and a set of loose conditions for an inexact solution of it. The proposed framework enjoys the same worst-case iteration complexity bounds as TRACE for locating approximate first- and second-order stationary points, yet it does not require subproblems to be solved exactly. In addition, the framework allows one to use inexact Newton steps whenever possible, which in turn permits Hessian matrix-free approaches such as the \emph{conjugate gradient} method. This improves the practical performance of the algorithm, as our numerical experiments show.

    We close by proposing a globally convergent trust funnel algorithm for equality constrained optimization. The proposed algorithm, under standard assumptions, is able to find an approximate first-order stationary point after at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, matching the complexity bound of the recently proposed Short-Step ARC algorithm. Our algorithm uses the step decomposition and feasibility control mechanism of a trust funnel algorithm, but incorporates ideas from our TRACE framework in order to achieve good complexity bounds.
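
    For context, the sketch below shows a plain textbook trust-region loop with an eigendecomposition-based exact subproblem solver on a small test problem; it is only the baseline framework on which TRACE builds, so the modified step acceptance criteria and trust region update mechanism that yield the $\mathcal{O}(\epsilon^{-3/2})$ bound are deliberately absent, and all names are illustrative.

    import numpy as np

    def solve_tr_subproblem(g, H, delta, tol=1e-10):
        """Solve  min_s  g's + 0.5*s'Hs  s.t. ||s|| <= delta  for small dense H
        via an eigendecomposition and bisection on the multiplier lambda.
        (The so-called hard case is ignored; this is enough for a sketch.)"""
        eigval, V = np.linalg.eigh(H)
        gt = V.T @ g
        step_norm = lambda lam: np.linalg.norm(gt / (eigval + lam))
        if eigval[0] > 0 and step_norm(0.0) <= delta:
            return V @ (-gt / eigval)            # interior Newton step
        lo = max(0.0, -eigval[0]) + 1e-12
        hi = lo + 1.0
        while step_norm(hi) > delta:             # bracket the multiplier
            hi *= 2.0
        while hi - lo > tol * (1.0 + hi):        # ||s(lam)|| decreases in lam
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if step_norm(mid) > delta else (lo, mid)
        lam = 0.5 * (lo + hi)
        return V @ (-gt / (eigval + lam))

    def trust_region(f, grad, hess, x0, delta=1.0, tol=1e-6, max_iter=200):
        """Plain trust-region loop; TRACE's modified acceptance rules and radius
        updates (the source of its improved complexity bound) are not shown."""
        x, d = np.asarray(x0, dtype=float), delta
        for _ in range(max_iter):
            g, H = grad(x), hess(x)
            if np.linalg.norm(g) <= tol:
                break
            s = solve_tr_subproblem(g, H, d)
            pred = -(g @ s + 0.5 * s @ H @ s)    # model (predicted) decrease
            ared = f(x) - f(x + s)               # actual decrease
            rho = ared / pred
            if rho >= 0.1:                       # accept the trial step
                x = x + s
            d = min(2.0 * d, 1e3) if rho >= 0.75 else (0.5 * d if rho < 0.1 else d)
        return x

    if __name__ == "__main__":
        # Rosenbrock: a small nonconvex test problem
        f = lambda x: (1 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
        grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                                   200*(x[1] - x[0]**2)])
        hess = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                                   [-400*x[0], 200.0]])
        print("approximate minimizer:",
              np.round(trust_region(f, grad, hess, [-1.2, 1.0]), 6))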