348 research outputs found
Convergence analysis of an Inexact Infeasible Interior Point method for Semidefinite Programming
In this paper we present an extension to SDP of the well-known infeasible interior point method for linear programming of Kojima, Megiddo and Mizuno (A primal-dual infeasible-interior-point algorithm for Linear Programming, Math. Progr., 1993). The extension developed here allows the use of inexact search directions; i.e., the linear systems defining the search directions can be solved with an accuracy that increases as the solution is approached. A convergence analysis is carried out and the global convergence of the method is proved.
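The tightening-accuracy idea behind inexact search directions can be sketched as follows. This is a minimal illustration, not the paper's method: the Jacobi inner solver, the tolerance rule tol = eta * mu, and the toy system are all illustrative assumptions.

```python
import numpy as np

def inexact_solve(A, b, tol):
    """Solve A x = b only approximately by Jacobi iteration,
    stopping once the residual norm drops below tol -- a stand-in
    for the inexact computation of an IPM search direction."""
    D = np.diag(A)                      # diagonal of A (assumed nonzero)
    x = np.zeros_like(b)
    for _ in range(10_000):
        r = b - A @ x                   # current residual
        if np.linalg.norm(r) <= tol:
            break
        x = x + r / D                   # Jacobi update x += D^{-1} r
    return x

# Toy diagonally dominant system standing in for the Newton system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

# The accuracy requirement tightens as the barrier parameter mu
# shrinks, mimicking a rule such as tol_k = eta * mu_k (assumed form).
eta = 0.1
for mu in (1.0, 1e-2, 1e-4):
    x = inexact_solve(A, b, tol=eta * mu)
```

Early iterations thus pay for only a crude solve, while the direction becomes increasingly exact as the iterates approach the solution.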
An infeasible interior-point method for the -matrix linear complementarity problem based on a trigonometric kernel function with full-Newton step
An infeasible interior-point algorithm for solving the -matrix linear complementarity problem based on a kernel function with a trigonometric barrier term is analyzed. Each (main) iteration of the algorithm consists of a feasibility step and several centrality steps, where the feasibility step is induced by a trigonometric kernel function. The complexity result coincides with the best result for infeasible interior-point methods for the -matrix linear complementarity problem.
Optimization and Applications
Proceedings of a workshop devoted to optimization problems, their theory and resolution, and above all their applications. Topics covered include the existence and stability of solutions; the design, analysis, development, and implementation of algorithms; and applications in mechanics, telecommunications, medicine, and operations research.
An Interior Point-Proximal Method of Multipliers for Convex Quadratic Programming
In this paper we combine an infeasible Interior Point Method (IPM) with the
Proximal Method of Multipliers (PMM). The resulting algorithm (IP-PMM) is
interpreted as a primal-dual regularized IPM, suitable for solving linearly
constrained convex quadratic programming problems. We apply a few iterations of
the interior point method to each sub-problem of the proximal method of
multipliers. Once a satisfactory solution of the PMM sub-problem is found, we
update the PMM parameters, form a new IPM neighbourhood and repeat this
process. Given this framework, we prove polynomial complexity of the algorithm,
under standard assumptions. To our knowledge, this is the first polynomial
complexity result for a primal-dual regularized IPM. The algorithm is guided by
the use of a single penalty parameter; that of the logarithmic barrier. In
other words, we show that IP-PMM inherits the polynomial complexity of IPMs, as
well as the strict convexity of the PMM sub-problems. The updates of the
penalty parameter are controlled by the IPM, and hence are well tuned and do not
depend on the problem solved. Furthermore, we study the behavior of the method
when it is applied to an infeasible problem, and identify a necessary condition
for infeasibility. The latter is used to construct an infeasibility detection
mechanism. Subsequently, we provide a robust implementation of the presented
algorithm and test it over a set of small to large scale linear and convex
quadratic programming problems. The numerical results demonstrate the benefits
of using regularization in IPMs, as well as the reliability of the method.
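The outer-proximal / inner-solve structure described above can be sketched on an equality-constrained QP. In this toy version each sub-problem is solved exactly in one linear solve, standing in for the "few iterations of the interior point method"; the parameter values, update rule, and problem are illustrative assumptions, not the IP-PMM algorithm itself.

```python
import numpy as np

def pmm_sketch(Q, c, A, b, rho=1.0, beta=1.0, outer=40):
    """Toy proximal method of multipliers for
        min 0.5 x'Qx + c'x   s.t.   A x = b.
    Each sub-problem minimises the augmented Lagrangian plus a primal
    proximal term (rho/2)||x - x_k||^2; here it is solved exactly,
    standing in for a handful of interior point iterations."""
    n, m = Q.shape[0], A.shape[0]
    x, y = np.zeros(n), np.zeros(m)      # primal iterate, multipliers
    for _ in range(outer):
        # Regularized sub-problem: the Hessian block gains rho*I and
        # beta*A'A -- the primal-dual regularization effect.
        H = Q + beta * A.T @ A + rho * np.eye(n)
        rhs = A.T @ y + beta * A.T @ b + rho * x - c
        x = np.linalg.solve(H, rhs)
        # Once the sub-problem is (approximately) solved, update the
        # PMM multiplier estimate and repeat.
        y = y - beta * (A @ x - b)
    return x, y

# Example: min 0.5||x||^2 subject to x1 + x2 = 2; solution x = (1, 1).
Q = np.eye(2); c = np.zeros(2)
A = np.array([[1.0, 1.0]]); b = np.array([2.0])
x, y = pmm_sketch(Q, c, A, b)
```

The point of the design is that the proximal terms keep every sub-problem strictly convex (H is positive definite even when Q is only positive semidefinite), which is the regularization benefit the abstract refers to.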
Primal-dual interior-point algorithms for linear programs with many inequality constraints
Linear programs (LPs) are one of the most basic and important classes of constrained optimization problems, involving the optimization of linear objective functions over sets defined by linear equality and inequality constraints. LPs have applications to a broad range of problems in engineering and operations research, and often arise as subproblems for algorithms that solve more complex optimization problems.
"Unbalanced" inequality-constrained LPs with many more inequality constraints than variables are an important subclass of LPs. Under a basic non-degeneracy assumption, only a small number of the constraints can be active at the solution; it is only this active set that is critical to the problem description. On the other hand, the additional constraints make the problem harder to solve. While modern "interior-point" algorithms have become recognized as some of the best methods for solving large-scale LPs, they may not be recommended for unbalanced problems, because their per-iteration work does not scale well with the number of constraints.
In this dissertation, we investigate "constraint-reduced" interior-point algorithms designed to efficiently solve unbalanced LPs. At each iteration, these methods construct search directions based only on a small working set of constraints, while ignoring the rest. In this way, they significantly reduce their per-iteration work and, hopefully, their overall running time.
In particular, we focus on constraint-reduction methods for the highly efficient primal-dual interior-point (PDIP) algorithms. We propose and analyze a convergent constraint-reduced variant of Mehrotra's predictor-corrector PDIP algorithm, the algorithm implemented in virtually every interior-point software package for linear (and convex-conic) programming. We prove global and local quadratic convergence of this algorithm under a very general class of constraint selection rules and under minimal assumptions. We also propose and analyze two regularized constraint-reduced PDIP algorithms (with similar convergence properties) designed to deal directly with a type of degeneracy that constraint-reduced interior-point algorithms are often subject to. Prior schemes for dealing with this degeneracy could end up negating the benefit of constraint reduction. Finally, we investigate the performance of our algorithms by applying them to several test and application problems, and show that our algorithms often outperform alternative approaches.
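The working-set idea can be sketched as follows: keep only the rows most likely to be active when assembling the normal-equations matrix for the search direction. The slack-based selection rule, the diagonal scaling, and all names are illustrative assumptions, not the dissertation's specific selection rules.

```python
import numpy as np

def reduced_normal_matrix(A, s, q):
    """Constraint-reduction sketch for an LP with m >> n inequality
    rows A x <= b.  Keep only the q rows with the smallest slacks s_i
    (the constraints most nearly active) when forming the n-by-n
    normal-equations matrix used for the search direction, so the
    assembly costs O(q n^2) instead of O(m n^2)."""
    W = np.argsort(s)[:q]              # working set: smallest slacks
    Aw = A[W]
    D = np.diag(1.0 / s[W])            # toy diagonal IPM scaling (assumed)
    return W, Aw.T @ D @ Aw            # reduced normal-equations matrix

# Unbalanced example: 2000 constraints, only 5 variables.
rng = np.random.default_rng(0)
m, n, q = 2000, 5, 25
A = rng.standard_normal((m, n))
s = rng.uniform(0.01, 10.0, size=m)    # positive slacks at the iterate
W, M = reduced_normal_matrix(A, s, q)
```

Since only the q selected rows enter the factorized matrix, the per-iteration work no longer grows with the full constraint count m, which is the efficiency argument made above.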
Dynamic Non-Diagonal Regularization in Interior Point Methods for Linear and Convex Quadratic Programming
In this paper, we present a dynamic non-diagonal regularization for interior
point methods. The non-diagonal aspect of this regularization is implicit, since all off-diagonal elements of the regularization matrices are cancelled out by elements of the Newton system that do not contribute important information to the computation of the Newton direction.
Such a regularization has multiple goals. The obvious one is to improve the spectral properties of the Newton system solved at each iteration of the interior point method. In addition, the regularization matrices introduce sparsity into the aforementioned linear system, allowing for more efficient factorizations. We also propose a rule for tuning the regularization dynamically, based on the properties of the problem, such that sufficiently large eigenvalues of the non-regularized system are perturbed only insignificantly. This alleviates the need to find suitable regularization values through experimentation, which is the most common approach in the literature. We provide
perturbation bounds for the eigenvalues of the non-regularized system matrix
and then discuss the spectral properties of the regularized matrix. Finally, we
demonstrate the efficiency of the method applied to solve standard small and
medium-scale linear and convex quadratic programming test problems.
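The dynamic-tuning idea above can be sketched as: pick a diagonal shift delta*I that lifts near-zero eigenvalues above a drop tolerance while perturbing the large eigenvalues only by a tiny relative amount. The specific rule below (the maximum of an absolute floor and a small fraction of the largest eigenvalue) is an illustrative assumption, not the paper's rule.

```python
import numpy as np

def tuned_delta(H, drop_tol=1e-8, rel=1e-10):
    """Sketch of dynamic regularization tuning: return H + delta*I
    with delta chosen so that eigenvalues below drop_tol are lifted
    above it, while an eigenvalue of size lam_max is perturbed by a
    relative amount of only about rel."""
    lam_max = np.linalg.eigvalsh(H).max()
    delta = max(drop_tol, rel * lam_max)   # assumed tuning rule
    return H + delta * np.eye(H.shape[0]), delta

# Rank-deficient example: the zero eigenvalue is lifted to delta,
# while the dominant eigenvalue 1e4 barely moves.
H = np.diag([1e4, 1.0, 0.0])
H_reg, delta = tuned_delta(H)
```

Tying delta to the spectrum of the current system, rather than fixing it by experimentation, is the behaviour the abstract describes: large eigenvalues are perturbed insignificantly while the problematic small ones are regularized.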
- …