
    Advances in Interior Point Methods for Large-Scale Linear Programming

    This research studies two computational techniques that improve the practical performance of existing implementations of interior point methods for linear programming. Both are based on the concept of the symmetric neighbourhood as the driving tool for the analysis of the good performance of some practical algorithms. The symmetric neighbourhood adds explicit upper bounds on the complementarity pairs, besides the lower bound already present in the common N_{-∞} neighbourhood. This allows the algorithm to keep the spread among complementarity pairs under control and to reduce it with the barrier parameter μ. We show that a long-step feasible algorithm based on this neighbourhood is globally convergent and converges in O(nL) iterations.

    The use of the symmetric neighbourhood and the recent theoretical understanding of the behaviour of Mehrotra's corrector direction motivate the introduction of a weighting mechanism that can be applied to any corrector direction, whether originating from Mehrotra's predictor-corrector algorithm or as part of the multiple centrality correctors technique. This modification of the way a correction is applied aims to ensure that any computed search direction can contribute positively to a successful iteration by increasing the overall stepsize, thus avoiding the case in which a corrector is rejected. The usefulness of the weighting strategy is documented through extensive numerical experiments on various sets of publicly available test problems. The implementation within the hopdm interior point code shows remarkable time savings for large-scale linear programming problems.

    The second technique develops an efficient way of constructing a starting point for structured large-scale stochastic linear programs. We generate a computationally viable warm-start point by solving, to low accuracy, a stochastic problem of much smaller dimension. The reduced problem is the deterministic equivalent program corresponding to an event tree composed of a restricted number of scenarios. The solution to the reduced problem is then expanded to the size of the problem instance and used to initialise the interior point algorithm. We present theoretical conditions that the warm-start iterate has to satisfy in order to be successful. We implemented this technique in both the hopdm and the oops frameworks, and its performance is verified through a series of tests on problem instances coming from various stochastic programming sources.
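    In symbols, the symmetric neighbourhood adds an upper bound to the wide neighbourhood commonly used in long-step methods (a sketch in standard primal-dual notation, with F^0 the strictly feasible set and γ ∈ (0,1) a centrality parameter):

    ```latex
    % Wide neighbourhood: lower bound only on the complementarity pairs
    \mathcal{N}_{-\infty}(\gamma) = \left\{ (x,y,s) \in \mathcal{F}^0 \;:\; x_i s_i \ge \gamma\mu,\ i = 1,\dots,n \right\}
    % Symmetric neighbourhood: an explicit upper bound is added as well
    \mathcal{N}_s(\gamma) = \left\{ (x,y,s) \in \mathcal{F}^0 \;:\; \gamma\mu \le x_i s_i \le \tfrac{1}{\gamma}\,\mu,\ i = 1,\dots,n \right\}
    % with the barrier parameter \mu = x^{\mathsf{T}} s / n
    ```

    Confining iterates to N_s thus forces every product x_i s_i to stay within a factor 1/γ² of every other, which is the control on the spread of complementarity pairs described above.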

    Support vector machine classifiers for large data sets.


    Active-set prediction for interior point methods

    This research studies how to efficiently predict the optimal active constraints of an inequality-constrained optimization problem in the context of Interior Point Methods (IPMs). We propose a framework based on shifting/perturbing the inequality constraints of the problem. Despite being a class of powerful tools for solving Linear Programming (LP) problems, IPMs are well known to encounter difficulties with active-set prediction, due essentially to their construction. When applied to an inequality-constrained optimization problem, IPMs generate iterates that belong to the interior of the set determined by the constraints, thus avoiding/ignoring the combinatorial aspect of the solution. This comes at the cost of difficulty in predicting the optimal active constraints that would enable termination, as well as increasing ill-conditioning of the solution process. Existing techniques for active-set prediction, however, suffer from difficulties in making an accurate prediction at the early stage of the iterative process of IPMs; when these techniques are ready to yield an accurate prediction towards the end of a run, as the iterates approach the solution set, the IPMs have to solve increasingly ill-conditioned, and hence difficult, subproblems.

    To address this challenge, we propose the use of controlled perturbations. Namely, in the context of LP problems, we consider perturbing the inequality constraints (by a small amount) so as to enlarge the feasible set. We show that if the perturbations are chosen judiciously, the solution of the original problem lies on or close to the central path of the perturbed problem. We solve the resulting perturbed problem(s) using a path-following IPM while predicting, along the way, the active set of the original LP problem; we find that our approach is able to accurately predict the optimal active set of the original problem before the duality gap for the perturbed problem gets too small. Furthermore, depending on problem conditioning, this prediction can happen sooner than predicting the active set for the perturbed problem, or for the original one if no perturbations are used. Proof-of-concept algorithms are presented, and encouraging preliminary numerical experience is reported when comparing activity prediction for the perturbed and unperturbed problem formulations.

    We also extend the idea of using controlled perturbations to enhance the capabilities of optimal active-set prediction in IPMs for convex Quadratic Programming (QP) problems. QP problems share many properties with LP problems, but some of the corresponding results require more care; encouraging preliminary numerical experience is also presented for the QP case.
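    The role of the perturbations can be illustrated with a toy activity test (a hypothetical sketch, not the algorithm of this work: `predict_active_set`, `lam`, and the simple indicator s_i > x_i are illustrative choices; for an LP in standard form, index i is predicted active, i.e. x_i = 0 at the optimum, when the dual slack dominates the primal variable):

    ```python
    import numpy as np

    def predict_active_set(x, s, lam=None):
        """Toy activity indicator for min c'x s.t. Ax = b, x >= 0.
        Index i is predicted active (x_i = 0 at the optimum) when the dual
        slack s_i exceeds the primal variable x_i.  If a perturbation vector
        lam is given, x is shifted as in the perturbed problem x >= -lam."""
        x = np.asarray(x, dtype=float)
        s = np.asarray(s, dtype=float)
        if lam is not None:
            x = x + lam  # compare against the enlarged (perturbed) bound
        return {int(i) for i in np.flatnonzero(s > x)}

    # A made-up iterate: x_1 is heading to zero while s_1 stays away from it.
    x = np.array([1e-6, 0.5, 2.0])
    s = np.array([0.7, 1e-6, 1e-7])
    print(predict_active_set(x, s))  # {0}
    ```

    With controlled perturbations lam > 0, the same comparison is made against the shifted variable x_i + lam_i, mirroring the enlarged feasible set x >= -lam of the perturbed problem.
    
    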

    Second order strategies for complementarity problems

    Advisors: Sandra Augusta Santos, Roberto Andreani. Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica.

    Abstract: In this work we reformulate the generalized nonlinear complementarity problem (GNCP) in polyhedral cones as a nonlinear system with nonnegativity constraints on some of the variables, and we propose the resolution of this reformulation through interior point methods. In particular, we define two algorithms and prove their local convergence under standard assumptions. The first algorithm is based on Newton's method and the second on Chebyshev's tensor method. The algorithm based on Chebyshev's method can be viewed as a predictor-corrector method: when applied to problems for which the functions involved are affine, and with suitable choices of the parameters, it becomes the well-known Mehrotra predictor-corrector algorithm. We also present numerical results that illustrate the competitiveness of both proposals.

    Doctorate in Applied Mathematics (area: Optimization)
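    The Mehrotra predictor-corrector scheme mentioned above can be sketched for the linear case (a simplified, hypothetical NumPy illustration of the standard algorithm, not the thesis code; the unreduced 3x3 Newton system is solved directly for clarity, whereas practical codes reduce it to augmented or normal-equations form):

    ```python
    import numpy as np

    def step_length(v, dv):
        """Largest alpha in (0, 1] keeping v + alpha * dv > 0."""
        neg = dv < 0
        return min(1.0, float((-v[neg] / dv[neg]).min())) if neg.any() else 1.0

    def mehrotra_step(A, b, c, x, y, s):
        """One simplified Mehrotra predictor-corrector step for the LP
        min c'x s.t. Ax = b, x >= 0, from a strictly positive iterate (x, y, s)."""
        m, n = A.shape
        mu = x @ s / n
        K = np.block([
            [np.zeros((n, n)), A.T,              np.eye(n)],         # dual feasibility
            [A,                np.zeros((m, m)), np.zeros((m, n))],  # primal feasibility
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],        # complementarity
        ])
        rd = c - A.T @ y - s
        rp = b - A @ x
        # Predictor: pure affine-scaling direction, aiming at x_i * s_i -> 0
        d_aff = np.linalg.solve(K, np.concatenate([rd, rp, -x * s]))
        dx_a, ds_a = d_aff[:n], d_aff[n + m:]
        a_p, a_d = step_length(x, dx_a), step_length(s, ds_a)
        # Adaptive centering parameter sigma = (mu_aff / mu)^3
        mu_aff = (x + a_p * dx_a) @ (s + a_d * ds_a) / n
        sigma = (mu_aff / mu) ** 3
        # Corrector: recentre and compensate the second-order term dx_a * ds_a
        d = np.linalg.solve(K, np.concatenate([rd, rp, sigma * mu - x * s - dx_a * ds_a]))
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        a_p, a_d = 0.995 * step_length(x, dx), 0.995 * step_length(s, ds)
        return x + a_p * dx, y + a_d * dy, s + a_d * ds
    ```

    For the LP min x1 + 2*x2 subject to x1 + x2 = 1, x >= 0, a handful of such steps from the feasible point x = (0.5, 0.5), y = 0, s = (1, 2) drives the duality gap x's/n to zero, with the iterates converging to the vertex x = (1, 0).
    
    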

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a summer school and a conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program, both in overview form and in full detail, together with information on the social program, the venue, special meetings, and more.

    Une synthèse sur les méthodes du point intérieur

    In 1984, Karmarkar published an interior point algorithm for linear programming. He claimed that, in addition to having polynomial complexity, it is more efficient than the simplex method, especially for large-scale problems. This triggered a wave of research into interior point methods, which produced a wide variety of algorithms of this type that can be classified into four categories: projective methods, affine methods, potential-reduction methods, and central-path methods. In this work, we present a survey of these methods, including the latest developments in the field.

    Regularized interior point methods for convex programming

    Interior point methods (IPMs) constitute one of the most important classes of optimization methods, due to their unparalleled robustness as well as their generality. It is well known that a very large class of convex optimization problems can be solved by means of IPMs in a polynomial number of iterations. As a result, IPMs are being used to solve problems arising in a plethora of fields, ranging from physics, engineering, and mathematics to the social sciences, to name just a few. Nevertheless, there remain certain numerical issues that have not yet been addressed. More specifically, the main drawback of IPMs is that the linear algebra task involved is inherently ill-conditioned. At every iteration of the method, one has to solve a (possibly large-scale) linear system of equations (also known as the Newton system), the conditioning of which deteriorates as the IPM converges to an optimal solution. If these linear systems are of very large dimension, prohibiting the use of direct factorization, then iterative schemes may have to be employed. Such schemes are significantly affected by the inherent ill-conditioning within IPMs. One common approach to mitigating these numerical issues is to employ regularized IPM variants. Such methods tend to be more robust and numerically stable in practice. Over the last two decades, the theory behind regularization has been significantly advanced. In particular, it is well known that regularized IPM variants can be interpreted as hybrid approaches combining IPMs with the proximal point method. However, it remained unknown whether regularized IPMs retain the polynomial complexity of their non-regularized counterparts. Furthermore, the very important issue of tuning the regularization parameters appropriately, which is also crucial in augmented Lagrangian methods, was not addressed.
    In this thesis, we focus on addressing the previous open questions, as well as on creating robust implementations that solve various convex optimization problems. We discuss in detail the effect of regularization, and derive two different regularization strategies: one based on the proximal method of multipliers, and another based on a Bregman proximal point method. The latter tends to be more efficient, while the former is more robust and has better convergence guarantees. In addition, we discuss the use of iterative linear algebra within the presented algorithms, by proposing some general-purpose preconditioning strategies (used to accelerate the iterative schemes) that take advantage of the regularized nature of the systems being solved.

    In Chapter 2 we present a dynamic non-diagonal regularization for IPMs. The non-diagonal aspect of this regularization is implicit, since all the off-diagonal elements of the regularization matrices are cancelled out by those elements present in the Newton system which do not contribute important information to the computation of the Newton direction. Such a regularization, which can be interpreted as the application of a Bregman proximal point method, has multiple goals. The obvious one is to improve the spectral properties of the Newton system solved at each IPM iteration. On the other hand, the regularization matrices introduce sparsity to the aforementioned linear system, allowing for more efficient factorizations. We propose a rule for tuning the regularization dynamically based on the properties of the problem, such that sufficiently large eigenvalues of the non-regularized system are perturbed insignificantly. This alleviates the need to find specific regularization values through experimentation, which is the most common approach in the literature. We provide perturbation bounds for the eigenvalues of the non-regularized system matrix, and then discuss the spectral properties of the regularized matrix.
    Finally, we demonstrate the efficiency of the method applied to solve standard small- and medium-scale linear and convex quadratic programming test problems.

    In Chapter 3 we combine an IPM with the proximal method of multipliers (PMM). The resulting algorithm (IP-PMM) is interpreted as a primal-dual regularized IPM, suitable for solving linearly constrained convex quadratic programming problems. We apply a few iterations of the interior point method to each sub-problem of the proximal method of multipliers. Once a satisfactory solution of the PMM sub-problem is found, we update the PMM parameters, form a new IPM neighbourhood, and repeat this process. Given this framework, we prove polynomial complexity of the algorithm under standard assumptions. To our knowledge, this is the first polynomial complexity result for a primal-dual regularized IPM. The algorithm is guided by the use of a single penalty parameter: that of the logarithmic barrier. In other words, we show that IP-PMM inherits the polynomial complexity of IPMs, as well as the strong convexity of the PMM sub-problems. The updates of the penalty parameter are controlled by the IPM, and hence are well tuned and do not depend on the problem being solved. Furthermore, we study the behaviour of the method when it is applied to an infeasible problem, and identify a necessary condition for infeasibility. The latter is used to construct an infeasibility detection mechanism. Subsequently, we provide a robust implementation of the presented algorithm and test it over a set of small- to large-scale linear and convex quadratic programming problems, demonstrating the benefits of using regularization in IPMs as well as the reliability of the approach.

    In Chapter 4 we extend IP-PMM to the case of linear semi-definite programming (SDP) problems. In particular, we prove polynomial complexity of the algorithm under mild assumptions, and without requiring exact computations for the Newton directions.
    We furthermore provide a necessary condition for lack of strong duality, which can be used as a basis for constructing detection mechanisms for identifying pathological cases within IP-PMM.

    In Chapter 5 we present general-purpose preconditioners for regularized Newton systems arising within regularized interior point methods. We discuss positive definite preconditioners, suitable for iterative schemes like the conjugate gradient (CG) or the minimal residual (MINRES) method. We study the spectral properties of the preconditioned systems, and discuss the use of each presented approach depending on the properties of the problem under consideration. All preconditioning strategies are numerically tested on various medium- to large-scale problems coming from standard test sets, as well as problems arising from partial differential equation (PDE) optimization.

    In Chapter 6 we apply specialized regularized IPM variants to problems arising from portfolio optimization, machine learning, image processing, and statistics. Such problems are usually solved by specialized first-order approaches. The efficiency of the proposed regularized IPM variants is confirmed by comparing them against problem-specific state-of-the-art first-order alternatives given in the literature.

    Finally, in Chapter 7 we present some conclusions, open questions, and possible future research directions.
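    The numerical effect of regularization described above can be sketched with a small experiment (a hypothetical NumPy illustration, not the thesis implementation; the values of theta, rho, and delta are made up to mimic a late IPM iteration, where the normal-equations matrix A*Theta*A' becomes nearly singular):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    m, n = 5, 10
    A = rng.standard_normal((m, n))

    # Near convergence, Theta = X S^{-1} splits into a few huge entries and
    # many tiny ones, which makes A Theta A' severely ill-conditioned.
    theta = np.array([1e8, 1e8] + [1e-8] * (n - 2))

    rho, delta = 1e-6, 1e-6  # primal and dual regularization parameters
    M = A @ np.diag(theta) @ A.T
    # Primal regularization caps Theta at 1/rho; dual regularization adds delta*I,
    # bounding the smallest eigenvalue of the regularized matrix away from zero.
    theta_reg = 1.0 / (1.0 / theta + rho)
    M_reg = A @ np.diag(theta_reg) @ A.T + delta * np.eye(m)

    print(f"cond(M)     = {np.linalg.cond(M):.2e}")
    print(f"cond(M_reg) = {np.linalg.cond(M_reg):.2e}")
    ```

    The regularized matrix trades a small, controlled perturbation of the large eigenvalues for a much better-conditioned system, which is exactly what makes iterative schemes such as CG or MINRES viable on these problems.
    
    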