Dual versus Primal-Dual Interior-Point Methods for Linear and Conic Programming
Primal-dual variable neighborhood search for the simple plant-location problem
Copyright © 2007 INFORMS. The variable neighborhood search metaheuristic is applied to the primal simple plant-location problem and to a reduced dual obtained by exploiting the complementary slackness conditions. This leads to (i) heuristic resolution of (metric) instances with uniform fixed costs, up to n = 15,000 users and m = n potential locations for facilities, with an error not exceeding 0.04%; (ii) exact solution of such instances with up to m = n = 7,000; and (iii) exact solution of instances with variable fixed costs and up to m = n = 15,000. This work is supported by NSERC Grant 105574-02, NSERC Grant OGP205041, and partly by the Serbian Ministry of Science, Project 1583.
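As a rough illustration of the primal side described above, here is a minimal variable neighborhood search sketch for a toy simple plant-location (uncapacitated facility location) instance. The instance data (`f`, `c`), the flip neighborhoods, and the parameters are hypothetical stand-ins, not taken from the paper:

```python
import random

# Hypothetical toy instance: fixed opening costs f[i] for each plant i and
# service costs c[i][j] for serving user j from plant i.
f = [4.0, 3.0, 5.0]
c = [[1.0, 6.0, 5.0],
     [4.0, 1.0, 6.0],
     [5.0, 5.0, 1.0]]

def cost(open_set):
    """Total cost: fixed costs of open plants plus cheapest assignment per user."""
    if not open_set:
        return float("inf")
    return (sum(f[i] for i in open_set)
            + sum(min(c[i][j] for i in open_set) for j in range(len(c[0]))))

def vns(max_k=2, iters=200, seed=0):
    """Basic VNS: shake in neighborhood k (flip k plants open/closed), then
    local search by single flips; return to k = 1 whenever we improve."""
    rng = random.Random(seed)
    m = len(f)
    best = set(range(m))                    # start with all plants open
    k = 1
    for _ in range(iters):
        trial = set(best)
        for i in rng.sample(range(m), k):   # shake: flip k random plants
            trial.symmetric_difference_update({i})
        improved = True
        while improved:                     # local search over single flips
            improved = False
            for i in range(m):
                cand = set(trial)
                cand.symmetric_difference_update({i})
                if cost(cand) < cost(trial):
                    trial, improved = cand, True
        if cost(trial) < cost(best):
            best, k = trial, 1              # improvement: restart at k = 1
        else:
            k = 1 + (k % max_k)             # otherwise try the next neighborhood
    return best, cost(best)
```

On this tiny instance the search quickly finds an optimal open set; the paper's contribution is applying the same scheme to the reduced dual as well, which this sketch does not attempt.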
Continuous Multiclass Labeling Approaches and Algorithms
We study convex relaxations of the image labeling problem on a continuous
domain with regularizers based on metric interaction potentials. The generic
framework ensures existence of minimizers and covers a wide range of
relaxations of the originally combinatorial problem. We focus on two specific
relaxations that differ in flexibility and simplicity: one can be used to
tightly relax any metric interaction potential, while the other covers only
Euclidean metrics but requires less computational effort. For solving the
nonsmooth discretized problem, we propose a globally convergent
Douglas-Rachford scheme, and show that a sequence of dual iterates can be
recovered in order to provide a posteriori optimality bounds. In a quantitative
comparison to two other first-order methods, the approach shows competitive
performance on synthetic and real-world images. By combining the method with
an improved binarization technique for nonstandard potentials, we were able to
routinely recover discrete solutions within 1%--5% of the global optimum for
the combinatorial image labeling problem.
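The Douglas-Rachford scheme mentioned above can be illustrated on a much simpler nonsmooth model problem. This sketch applies standard Douglas-Rachford splitting to min ||x||_1 + (1/2)||x - b||^2, whose proximal operators are closed-form; it is an assumption-laden toy, not the paper's discretized labeling solver:

```python
import numpy as np

def prox_l1(z, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_quad(z, t, b):
    """Proximal operator of t * (1/2)||x - b||^2."""
    return (z + t * b) / (1.0 + t)

def douglas_rachford(b, t=1.0, iters=200):
    """Douglas-Rachford splitting for min ||x||_1 + (1/2)||x - b||^2.
    The auxiliary sequence z converges; x = prox_l1(z, t) solves the problem."""
    z = np.zeros_like(b)
    for _ in range(iters):
        x = prox_l1(z, t)
        y = prox_quad(2 * x - z, t, b)
        z = z + y - x
    return prox_l1(z, t)

x = douglas_rachford(np.array([3.0, 0.5, -2.0]))
```

The solution of this toy problem is the soft threshold of b at 1, which the iteration recovers; the paper's contribution is the globally convergent application of such a scheme to the nonsmooth discretized labeling problem, together with recoverable dual iterates for a posteriori bounds.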
Conic Optimization Theory: Convexification Techniques and Numerical Algorithms
Optimization is at the core of control theory and appears in several areas of
this field, such as optimal control, distributed control, system
identification, robust control, state estimation, model predictive control and
dynamic programming. The recent advances in various topics of modern
optimization have also been revamping the area of machine learning. Motivated
by the crucial role of optimization theory in the design, analysis, control and
operation of real-world systems, this tutorial paper offers a detailed overview
of some major advances in this area, namely conic optimization and its emerging
applications. First, we discuss the importance of conic optimization in
different areas. Then, we explain seminal results on the design of hierarchies
of convex relaxations for a wide range of nonconvex problems. Finally, we study
different numerical algorithms for large-scale conic optimization problems.
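A small concrete piece of the conic machinery surveyed above is projection onto the second-order (Lorentz) cone, a standard cone in conic optimization and a building block of many first-order conic solvers. The function below is an illustrative sketch, not taken from the tutorial:

```python
import numpy as np

def project_soc(x, s):
    """Euclidean projection of the point (x, s) onto the second-order cone
    K = {(x, s) : ||x||_2 <= s}, using the well-known closed form."""
    nx = np.linalg.norm(x)
    if nx <= s:                      # already inside the cone
        return x.copy(), s
    if nx <= -s:                     # inside the polar cone: project to origin
        return np.zeros_like(x), 0.0
    alpha = (nx + s) / 2.0           # otherwise: scale onto the cone boundary
    return alpha * x / nx, alpha
```

For a boundary projection such as `project_soc(np.array([3.0, 4.0]), 0.0)`, the result satisfies ||x|| = s exactly, i.e. it lands on the cone's surface.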
Primal-dual interior-point algorithms for linear programs with many inequality constraints
Linear programs (LPs) are one of the most basic and important classes of constrained optimization problems, involving the optimization of linear objective functions over sets defined by linear equality and inequality constraints. LPs have applications to a broad range of problems in engineering and operations research, and often arise as subproblems for algorithms that solve more complex optimization problems.
"Unbalanced" inequality-constrained LPs with many more inequality constraints than variables are an important subclass of LPs. Under a basic non-degeneracy assumption, only a small number of the constraints can be active at the solution; it is only this active set that is critical to the problem description. On the other hand, the additional constraints make the problem harder to solve. While modern "interior-point" algorithms have become recognized as some of the best methods for solving large-scale LPs, they may not be recommended for unbalanced problems, because their per-iteration work does not scale well with the number of constraints.
In this dissertation, we investigate "constraint-reduced" interior-point algorithms designed to efficiently solve unbalanced LPs. At each iteration, these methods construct search directions based only on a small working set of constraints, while ignoring the rest. In this way, they significantly reduce their per-iteration work and, hopefully, their overall running time.
In particular, we focus on constraint-reduction methods for the highly efficient primal-dual interior-point (PDIP) algorithms. We propose and analyze a convergent constraint-reduced variant of Mehrotra's predictor-corrector PDIP algorithm, the algorithm implemented in virtually every interior-point software package for linear (and convex-conic) programming. We prove global and local quadratic convergence of this algorithm under a very general class of constraint selection rules and under minimal assumptions. We also propose and analyze two regularized constraint-reduced PDIP algorithms (with similar convergence properties) designed to deal directly with a type of degeneracy that constraint-reduced interior-point algorithms are often subject to. Prior schemes for dealing with this degeneracy could end up negating the benefit of constraint-reduction. Finally, we investigate the performance of our algorithms by applying them to several test and application problems, and show that our algorithms often outperform alternative approaches.
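A minimal sketch of the per-iteration saving that motivates constraint reduction: a PDIP iteration assembles an n x n normal-equations matrix from all m constraints at O(m n^2) cost, while a constraint-reduced iteration assembles the same-sized matrix from a working set of only q constraints at O(q n^2) cost. The sizes and the working-set selection rule below are hypothetical stand-ins, not the dissertation's rules:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, q = 10000, 20, 60          # hypothetical sizes: m >> n, small working set q

A = rng.standard_normal((m, n))  # inequality-constraint matrix
d = rng.uniform(0.1, 1.0, m)     # positive scalings from the current iterate

# Full normal-equations matrix A^T D A: O(m n^2) work per iteration.
M_full = A.T @ (d[:, None] * A)

# Constraint-reduced version: keep only q constraints (here chosen by largest
# scaling, a toy stand-in for a real selection rule) and assemble the same
# n x n matrix from that working set in O(q n^2) work.
Q = np.argsort(d)[-q:]
M_red = A[Q].T @ (d[Q, None] * A[Q])
```

Both matrices are n x n and symmetric; the point is that the reduced assembly touches 60 rows of A instead of 10,000, which is where the per-iteration saving comes from.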
Variational optimization of second order density matrices for electronic structure calculation
The exponential growth of the dimension of the exact wavefunction with the size of a chemical system makes it impossible to compute chemical properties of large chemical systems exactly. A myriad of ab initio methods that use simpler mathematical objects to describe the system has thrived on this realization, among which is the variational second order density matrix method. The aim of my thesis has been to evaluate the use of this method for chemistry and to identify the major theoretical and computational challenges that need to be overcome to make it successful for chemical applications.
The major theoretical challenges originate from the need for the second order density matrix to be N-representable: it must be derivable from an ensemble of N-electron states. Our calculations have pointed out major drawbacks of commonly used necessary N-representability conditions, such as incorrect dissociation into fractionally charged products and size-inconsistency. We have derived subspace energy constraints that fix these problems, albeit in an ad hoc manner. Additionally, we have found that standard constraints on spin properties cause serious problems, such as false multiplet splitting and size-inconsistency. The subspace constraints relieve these problems as well, though only in the dissociation limit.
The major computational challenges originate from the method’s formulation as a vast semidefinite optimization problem. We have implemented and compared several algorithms that exploit the specific structure of the problem. Even so, their slow speed remains prohibitive. Both the second order methods and the zeroth order boundary point method that we tried performed quite similarly, which suggests that the underlying problem responsible for their slow convergence, ill-conditioning due to the singularity of the optimal matrix, manifests itself in all these algorithms even though it is most explicit in the barrier method.
Significant progress in these theoretical and computational aspects is needed to make the variational second order density matrix method competitive with comparable wavefunction based methods.
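The semidefinite structure discussed above can be hinted at with a standard building block: Euclidean projection onto the positive semidefinite cone, done by clipping negative eigenvalues in the spectral decomposition. This is an illustrative sketch of the basic cone operation, not one of the thesis's algorithms; note that an optimal matrix sitting on the cone's boundary (with zero eigenvalues) is exactly the singular case the abstract blames for ill-conditioning:

```python
import numpy as np

def project_psd(S):
    """Euclidean projection of a symmetric matrix S onto the positive
    semidefinite cone: diagonalize, clip negative eigenvalues to zero,
    and reassemble."""
    w, V = np.linalg.eigh(S)
    return (V * np.maximum(w, 0.0)) @ V.T
```

For example, the symmetric matrix [[1, 2], [2, 1]] has eigenvalues 3 and -1; its projection keeps only the eigenvalue-3 component and lands on the boundary of the cone.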