Fast interior point solution of quadratic programming problems arising from PDE-constrained optimization
Interior point methods provide an attractive class of approaches for solving linear, quadratic and nonlinear programming problems, due to their excellent efficiency and wide applicability. In this paper, we consider PDE-constrained optimization problems with bound constraints on the state and control variables, and their representation on the discrete level as quadratic programming problems. To tackle complex problems and achieve high accuracy in the solution, one is required to solve matrix systems of huge scale resulting from Newton iteration, and hence fast and robust methods for these systems are required. We present preconditioned iterative techniques for solving a number of these problems using Krylov subspace methods, considering in what circumstances one may predict rapid convergence of the solvers in theory, as well as the performance observed in practical computations.
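As a hedged illustration of the kind of preconditioned Krylov technique this abstract describes (not the authors' actual preconditioner), the sketch below applies MINRES with an "ideal" block-diagonal preconditioner, built from the (1,1) block and the Schur complement, to a small dense saddle-point system. All matrices, sizes, and the preconditioner choice are invented for the example.

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(0)
n, m = 40, 10
G = rng.standard_normal((n, n))
H = G @ G.T + n * np.eye(n)          # SPD (1,1) block
B = rng.standard_normal((m, n))      # constraint block

# Saddle-point (KKT-type) matrix [[H, B^T], [B, 0]]
K = np.block([[H, B.T], [B, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)

# "Ideal" block-diagonal preconditioner diag(H, S), S = B H^{-1} B^T;
# in practice one would use cheap approximations of these blocks.
Hinv = np.linalg.inv(H)
Sinv = np.linalg.inv(B @ Hinv @ B.T)

def apply_prec(v):
    # apply the inverse of the block-diagonal preconditioner
    return np.concatenate([Hinv @ v[:n], Sinv @ v[n:]])

M = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, rhs, M=M)        # info == 0 signals convergence
print(info, np.linalg.norm(K @ x - rhs))
```

With this exact preconditioner the preconditioned matrix has only three eigenvalue clusters, so MINRES converges in a handful of iterations; this is the sort of theoretical convergence prediction the abstract alludes to.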
A new perspective on the complexity of interior point methods for linear programming
In a dynamical systems paradigm, many optimization algorithms are equivalent to applying the forward Euler method to the system of ordinary differential equations defined by the vector field of the search directions. Thus the stiffness of such vector fields plays an essential role in the complexity of these methods. We first exemplify this point with a theoretical result for general linesearch methods for unconstrained optimization, which we then employ to investigate the complexity of a primal short-step path-following interior point method for linear programming. Our analysis involves showing that the Newton vector field associated with the primal logarithmic barrier is nonstiff in a sufficiently small and shrinking neighbourhood of its minimizer. Thus, by confining the iterates to these neighbourhoods of the primal central path, our algorithm has a nonstiff vector field of search directions, and we can give a worst-case bound on its iteration complexity. Furthermore, due to the generality of our vector field setting, we can perform a similar (global) iteration complexity analysis when the Newton direction of the interior point method is computed only approximately, using some direct method for solving linear systems of equations.
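A primal short-step path-following scheme of the kind analysed above can be sketched on a toy LP. The problem data, the barrier-parameter shrink rate, and the fraction-to-boundary safeguard below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Toy LP: min c^T x  s.t.  A x = b, x >= 0, with optimum x* = (1, 0).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.array([0.5, 0.5])               # strictly feasible interior start
mu = 1.0
shrink = 1.0 - 0.1 / np.sqrt(len(x))   # short-step reduction of the barrier parameter

while mu > 1e-9:
    mu *= shrink
    # Newton step for the barrier problem  min c^T x - mu*sum(log x)  s.t.  A x = b
    H = np.diag(mu / x**2)             # barrier Hessian
    g = c - mu / x                     # barrier gradient
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    step = np.linalg.solve(K, np.concatenate([-g, np.zeros(1)]))
    dx = step[:len(x)]
    # fraction-to-boundary safeguard keeps iterates strictly positive
    neg = dx < 0
    alpha = min(1.0, 0.99 * np.min(-x[neg] / dx[neg])) if neg.any() else 1.0
    x = x + alpha * dx                 # A dx = 0, so Ax = b is preserved

print(x)  # close to the optimum (1, 0)
```

Because the Newton direction lies in the null space of A, equality feasibility is maintained exactly while the iterates track the central path as mu shrinks.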
An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming
Powerful interior-point methods (IPM) based commercial solvers, such as
Gurobi and Mosek, have been hugely successful in solving large-scale linear
programming (LP) problems. The high efficiency of these solvers depends
critically on the sparsity of the problem data and advanced matrix
factorization techniques. For a large scale LP problem with data matrix
that is dense (possibly structured) or whose corresponding normal matrix
has a dense Cholesky factor (even with re-ordering), these solvers may require
excessive computational cost and/or extremely heavy memory usage in each
interior-point iteration. Unfortunately, the natural remedy, i.e., the use of
IPM solvers based on iterative methods, although it can avoid the explicit
computation of the coefficient matrix and its factorization, is not practically
viable due to the inherent extreme ill-conditioning of the large-scale normal
equation arising in each interior-point iteration. To provide a better
alternative choice for solving large scale LPs with dense data or requiring
expensive factorization of its normal equation, we propose a semismooth Newton
based inexact proximal augmented Lagrangian ({\sc Snipal}) method. Different
from classical IPMs, in each iteration of {\sc Snipal}, iterative methods can
efficiently be used to solve simpler yet better conditioned semismooth Newton
linear systems. Moreover, {\sc Snipal} not only enjoys a fast asymptotic
superlinear convergence but is also proven to enjoy a finite termination
property. Numerical comparisons with Gurobi have demonstrated encouraging
potential of {\sc Snipal} for handling large-scale LP problems where the
constraint matrix has a dense representation or has a dense
factorization even with an appropriate re-ordering.
Comment: Due to the limitation "The abstract field cannot be longer than 1,920 characters", the abstract appearing here is slightly shorter than that in the PDF file.
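The finite-termination property claimed for semismooth Newton methods on problems of this kind can be illustrated on a small monotone piecewise-affine equation. The system F(x) = Ax + max(x, 0) - b below is a made-up example, not the Snipal subproblem itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
G = rng.standard_normal((n, n))
A = G @ G.T + np.eye(n)          # SPD, so F is strongly monotone
b = rng.standard_normal(n)

def F(x):
    # piecewise-affine, nonsmooth at the coordinate hyperplanes
    return A @ x + np.maximum(x, 0.0) - b

x = np.zeros(n)
for k in range(30):
    if np.linalg.norm(F(x)) < 1e-10:
        break
    # one element of Clarke's generalized Jacobian of F at x
    J = A + np.diag((x > 0).astype(float))
    x = x + np.linalg.solve(J, -F(x))

print(k, np.linalg.norm(F(x)))   # typically terminates after only a few steps
```

Each semismooth Newton step solves the linear system of the currently active "piece"; once the active set stabilises, the residual drops to (numerical) zero in one step, which is the mechanism behind finite termination on piecewise-linear systems.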
Quasi-Newton approaches to Interior Point Methods for quadratic problems
Interior Point Methods (IPM) rely on the Newton method for solving systems of
nonlinear equations. Solving the linear systems which arise from this approach
is the most computationally expensive task of an interior point iteration. If,
due to the problem's inner structure, special techniques exist for efficiently
solving these linear systems, IPMs enjoy fast convergence and are able to solve large
scale optimization problems. It is tempting to try to replace the Newton method
by quasi-Newton methods. Quasi-Newton approaches to IPMs either are built to
approximate the Lagrangian function for nonlinear programming problems or
provide an inexpensive preconditioner. In this work we study the impact of
using quasi-Newton methods applied directly to the nonlinear system of
equations for general quadratic programming problems. The cost of each
iteration can be compared to the cost of computing correctors in a usual
interior point iteration. Numerical experiments show that the new approach is
able to reduce the overall number of matrix factorizations and is suitable for
a matrix-free implementation.
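A minimal sketch of the quasi-Newton idea, using Broyden's "good" rank-one secant update on a toy nonlinear system (not the authors' interior-point setting). The system, starting point, and the choice of seeding the approximation with the true Jacobian are assumptions for illustration.

```python
import numpy as np

def F(x):
    # made-up smooth system with a root at (1, 1)
    return np.array([x[0]**2 - x[1], x[0] + x[1] - 2.0])

def J(x):
    # true Jacobian, used here only to seed the approximation
    return np.array([[2.0 * x[0], -1.0], [1.0, 1.0]])

x = np.array([1.5, 1.5])
B = J(x)                           # initial Jacobian approximation
for _ in range(50):
    s = np.linalg.solve(B, -F(x))  # quasi-Newton step
    x_new = x + s
    y = F(x_new) - F(x)
    # Broyden's 'good' update: least-change correction satisfying the
    # secant condition B_new s = y
    B += np.outer(y - B @ s, s) / (s @ s)
    x = x_new
    if np.linalg.norm(F(x)) < 1e-12:
        break

print(x)  # converges to the root (1, 1)
```

The attraction in an IPM context is that each update is a rank-one modification, so subsequent solves can reuse an existing factorization rather than recomputing it from scratch.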
Adaptive Non-myopic Quantizer Design for Target Tracking in Wireless Sensor Networks
In this paper, we investigate the problem of nonmyopic (multi-step ahead)
quantizer design for target tracking using a wireless sensor network. Adopting
the alternative conditional posterior Cramer-Rao lower bound (A-CPCRLB) as the
optimization metric, we theoretically show that this problem can be temporally
decomposed over a certain time window. Based on sequential Monte-Carlo methods
for tracking, i.e., particle filters, we design the local quantizer adaptively
by solving a particle-based non-linear optimization problem which is well suited
to the use of an interior-point algorithm and is easily embedded in the filtering
process. Simulation results are provided to illustrate the effectiveness of our
proposed approach.
Comment: Submitted to the 2013 Asilomar Conference on Signals, Systems, and Computers.
Custom optimization algorithms for efficient hardware implementation
The focus is on real-time optimal decision making with applications in advanced control
systems. These computationally intensive schemes, which involve the repeated solution of
(convex) optimization problems within a sampling interval, require more efficient
computational methods than are currently available in order to extend their application to
highly dynamical systems and to setups with resource-constrained embedded computing platforms.
A range of techniques are proposed to exploit synergies between digital hardware, numerical
analysis and algorithm design. These techniques build on top of parameterisable
hardware code generation tools that generate VHDL code describing custom computing
architectures for interior-point methods and a range of first-order constrained optimization
methods. Since memory limitations are often important in embedded implementations, we
develop a custom storage scheme for KKT matrices arising in interior-point methods for
control, which reduces memory requirements significantly and prevents I/O bandwidth
limitations from affecting the performance in our implementations. To take advantage of
the trend towards parallel computing architectures and to exploit the special characteristics
of our custom architectures we propose several high-level parallel optimal control
schemes that can reduce computation time. A novel optimization formulation was devised
for reducing the computational effort in solving certain problems, independently of the computing
platform used. In order to be able to solve optimization problems in fixed-point
arithmetic, which is significantly more resource-efficient than floating-point, tailored linear
algebra algorithms were developed for solving the linear systems that form the computational
bottleneck in many optimization methods. These methods come with guarantees
for reliable operation. We also provide a finite-precision error analysis for fixed-point
implementations of first-order methods that can be used to minimize the use of resources while
meeting accuracy specifications. The suggested techniques are demonstrated on several
practical examples, including a hardware-in-the-loop setup for optimization-based control
of a large airliner.
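The fixed-point theme can be illustrated with a toy experiment: a Jacobi iteration in which every stored iterate is rounded to a fixed-point grid, loosely mimicking a fixed-point hardware implementation. The problem data, word length, and choice of iteration are illustrative assumptions, not the thesis's tailored algorithms.

```python
import numpy as np

FRAC_BITS = 12  # assumed fixed-point format: 12 fractional bits

def quantise(a):
    # round to the nearest representable fixed-point value
    scale = 2.0 ** FRAC_BITS
    return np.round(a * scale) / scale

rng = np.random.default_rng(1)
n = 8
A = rng.uniform(-1.0, 1.0, (n, n)) + 8.0 * np.eye(n)  # strictly diagonally dominant
b = rng.uniform(-1.0, 1.0, n)

x_ref = np.linalg.solve(A, b)    # double-precision reference solution

# Jacobi iteration with every stored iterate rounded to the fixed-point grid;
# because the Jacobi iteration matrix is a contraction for diagonally
# dominant A, the rounding error stays bounded instead of accumulating.
D = np.diag(A)
x = np.zeros(n)
for _ in range(200):
    x = quantise((b - (A @ x - D * x)) / D)

print(np.max(np.abs(x - x_ref)))  # error on the order of the quantisation step
```

This is the essence of the guarantee mentioned above: for a contractive iteration, per-step rounding error of size 2^-FRAC_BITS yields a steady-state error of the same order, which lets the word length be chosen from an accuracy specification.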
Adaptive solution of truss layout optimization problems with global stability constraints
Truss layout optimization problems with global stability constraints are nonlinear and nonconvex and hence very challenging to solve, particularly when problems become large. In this paper, a relaxation of the nonlinear problem is modelled as a (linear) semidefinite programming problem for which we describe an efficient primal-dual interior point method capable of solving problems of a scale that would be prohibitively expensive to solve using standard methods. The proposed method exploits the sparse structure and low-rank property of the stiffness matrices involved, greatly reducing the computational effort required to process the associated linear systems. Moreover, an adaptive ‘member adding’ technique is employed which involves solving a sequence of much smaller problems, with the process ultimately converging to the solution of the original problem. Finally, a warm-start strategy is used when successive problems display sufficient similarity, leading to fewer interior point iterations being required. We perform several numerical experiments to show the efficiency of the method and discuss the status of the solutions obtained.