Computing Optimal Experimental Designs via Interior Point Method
In this paper, we study optimal experimental design problems with a broad
class of smooth convex optimality criteria, including the classical A-, D-,
and $p$th-mean criteria. In particular, we propose an interior point (IP)
method for them and establish its global convergence. Furthermore, by
exploiting the structure of the Hessian matrix of the aforementioned
optimality criteria, we derive an explicit formula for computing its rank.
Using this result, we then show that the Newton direction arising in the IP
method can be computed efficiently via the Sherman-Morrison-Woodbury formula
when the size of the moment matrix is small relative to the sample size.
Finally, we compare our IP method with the widely used multiplicative
algorithm introduced by Silvey et al. [29]. The computational results show
that the IP method generally outperforms the multiplicative algorithm in both
speed and solution quality.
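The diagonal-plus-low-rank Newton solve this abstract alludes to can be sketched generically. The splitting below (Hessian = positive diagonal plus $U C U^T$ with small $m$) is an illustrative assumption standing in for the paper's actual Hessian structure, not its notation:

```python
import numpy as np

def smw_solve(d, U, C, g):
    """Solve (diag(d) + U C U^T) x = g via the Sherman-Morrison-Woodbury
    formula: O(n m^2 + m^3) work instead of O(n^3) for a dense solve."""
    Dinv_g = g / d
    Dinv_U = U / d[:, None]
    capacitance = np.linalg.inv(C) + U.T @ Dinv_U     # small m-by-m system
    return Dinv_g - Dinv_U @ np.linalg.solve(capacitance, U.T @ Dinv_g)

rng = np.random.default_rng(0)
n, m = 2000, 5                    # sample size n >> moment-matrix size m
d = rng.uniform(1.0, 2.0, n)      # positive diagonal part of the Hessian
U = rng.standard_normal((n, m))
C = np.eye(m)                     # SPD core of the low-rank term
g = rng.standard_normal(n)        # right-hand side (negative gradient)

x = smw_solve(d, U, C, g)
# matrix-free check of (diag(d) + U C U^T) x = g
residual = np.linalg.norm(d * x + U @ (C @ (U.T @ x)) - g)
```

Only an $m \times m$ system is ever factored, which is why the approach pays off exactly when the moment matrix is small relative to the sample size.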
Distributed Interior-point Method for Loosely Coupled Problems
In this paper, we put forth distributed algorithms for solving loosely
coupled unconstrained and constrained optimization problems. Such problems
are usually solved using algorithms based on a combination of decomposition
and first-order methods, which are commonly very slow and require many
iterations to converge. In order to alleviate this issue, we propose
algorithms that combine Newton and interior-point methods with proximal
splitting methods. In particular, the algorithm for solving unconstrained
loosely coupled problems is based on Newton's method and utilizes proximal
splitting to distribute the computation of the Newton step at each iteration.
A combination of this algorithm and the interior-point method is then used to
obtain a distributed algorithm for solving constrained loosely coupled
problems. We also provide guidelines on how to implement the proposed methods
efficiently and briefly discuss the properties of the resulting solutions.
Comment: Submitted to the 19th IFAC World Congress, 2014
A Quantum Interior Point Method for LPs and SDPs
We present a quantum interior point method with worst case running time
$\widetilde{O}(\frac{n^{2.5}}{\xi^{2}} \mu \kappa^{3} \log(1/\epsilon))$ for
SDPs and $\widetilde{O}(\frac{n^{1.5}}{\xi^{2}} \mu \kappa^{3}
\log(1/\epsilon))$ for LPs, where the output of our algorithm is a pair of
matrices that are $\epsilon$-optimal $\xi$-approximate SDP solutions. The
factor $\mu$ is at most $\sqrt{2}n$ for SDPs and $\sqrt{2n}$ for LPs, and
$\kappa$ is an upper bound on the condition number of the intermediate
solution matrices. For the case where the intermediate matrices for the
interior point method are well conditioned, our method provides a polynomial
speedup over the best known classical SDP solvers and interior point based LP
solvers, which have worst case running times of $O(n^{6})$ and $O(n^{3.5})$
respectively. Our results build upon recently developed techniques for
quantum linear algebra and pave the way for the development of quantum
algorithms for a variety of applications in optimization and machine
learning.
Comment: 32 pages
An Interior-Point Method for MPECs Based on Strictly Feasible Relaxations
An interior-point method for solving mathematical programs with equilibrium constraints (MPECs) is proposed. At each iteration of the algorithm, a single primal-dual step is computed from each subproblem of a sequence. Each subproblem is defined as a relaxation of the MPEC with a nonempty strictly feasible region. In contrast to previous approaches, the proposed relaxation scheme preserves the nonempty strict feasibility of each subproblem even in the limit. Local and superlinear convergence of the algorithm is proved even with a less restrictive strict complementarity condition than the standard one. Moreover, mechanisms for inducing global convergence in practice are proposed. Numerical results on the MacMPEC test problem set demonstrate the fast local convergence properties of the algorithm.
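The strictly feasible relaxation idea can be illustrated on a textbook two-variable MPEC. A generic NLP solver (SciPy's SLSQP) stands in for the authors' primal-dual interior-point steps, and the toy problem and relaxation schedule below are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy MPEC: min (x1-1)^2 + (x2-1)^2  s.t.  x1, x2 >= 0 and x1*x2 = 0
# (complementarity).  Its feasible set has an empty strict interior.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

x = np.array([0.6, 0.4])                 # asymmetric start breaks the tie
for theta in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6]:
    # Relaxed subproblem: x1*x2 <= theta has a nonempty strict interior.
    cons = [{"type": "ineq", "fun": lambda x, t=theta: t - x[0] * x[1]}]
    res = minimize(f, x, method="SLSQP",
                   bounds=[(0, None), (0, None)], constraints=cons)
    x = res.x                            # warm start the next subproblem

# As theta -> 0, the iterates approach the MPEC solution (1, 0), objective 1.
```

Each relaxed subproblem remains strictly feasible, which is what lets an interior-type method operate even though the original MPEC violates standard constraint qualifications.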
Modified Interior-Point Method for Large-and-Sparse Low-Rank Semidefinite Programs
Semidefinite programs (SDPs) are powerful theoretical tools that have been
studied for over two decades, but their practical use remains limited due to
computational difficulties in solving large-scale, realistic-sized problems.
In this paper, we describe a modified interior-point method for the efficient
solution of large-and-sparse low-rank SDPs, which find applications in graph
theory, approximation theory, control theory, sum-of-squares, etc. Given that
the problem data are large-and-sparse, conjugate gradients (CG) can be used
to avoid forming, storing, and factoring the large and fully-dense
interior-point Hessian matrix, but the resulting convergence rate is usually
slow due to ill-conditioning. Our central insight is that, for a rank-$k$,
size-$n$ SDP, the Hessian matrix is ill-conditioned only due to a rank-$nk$
perturbation, which can be explicitly computed using a size-$n$
eigendecomposition. We construct a preconditioner to "correct" the low-rank
perturbation, thereby allowing preconditioned CG to solve the Hessian
equation in a few tens of iterations. This modification is incorporated
within SeDuMi, and used to reduce the solution time and memory requirements
of large-scale matrix-completion problems by several orders of magnitude.
Comment: 8 pages, 2 figures
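The preconditioning idea — correct only the low-rank part of an otherwise well-conditioned operator — can be sketched in a few lines. The operator below is synthetic (a well-conditioned diagonal plus a large low-rank term), and a hand-rolled matrix-free PCG replaces SeDuMi's machinery:

```python
import numpy as np

def pcg(A_mv, b, M_inv_mv, tol=1e-10, maxiter=2000):
    """Preconditioned conjugate gradients; returns (solution, iterations)."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv_mv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, maxiter + 1):
        Ap = A_mv(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = M_inv_mv(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x, maxiter

rng = np.random.default_rng(0)
n, k = 500, 50
d = rng.uniform(1.0, 2.0, n)              # well-conditioned base operator
V = 10.0 * rng.standard_normal((n, k))    # large low-rank perturbation V V^T
A_mv = lambda v: d * v + V @ (V.T @ v)    # never form the dense matrix
b = rng.standard_normal(n)

# Preconditioner M = tau*I + V V^T, applied via Sherman-Morrison-Woodbury.
tau = d.mean()
K = np.linalg.inv(np.eye(k) + V.T @ V / tau)      # small k-by-k, built once
M_inv_mv = lambda r: r / tau - V @ (K @ (V.T @ r)) / tau**2

x_plain, its_plain = pcg(A_mv, b, lambda r: r)    # no preconditioning
x_pre, its_pre = pcg(A_mv, b, M_inv_mv)           # low-rank-corrected PCG
```

With the low-rank term "corrected", the preconditioned spectrum clusters near one and CG needs far fewer iterations than the unpreconditioned run.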
A distributed primal-dual interior-point method for loosely coupled problems using ADMM
In this paper we propose an efficient distributed algorithm for solving
loosely coupled convex optimization problems. The algorithm is based on a
primal-dual interior-point method in which we use the alternating direction
method of multipliers (ADMM) to compute the primal-dual directions at each
iteration. This enables us to combine the exceptional convergence properties
of primal-dual interior-point methods with the remarkable parallelizability
of ADMM. The resulting algorithm has superior computational properties
compared with ADMM applied directly to the original problem: the amount of
computation each agent must perform is far smaller, and in particular the
updates for all variables can be expressed in closed form, irrespective of
the type of optimization problem. The most expensive computations of the
algorithm occur in the updates of the primal variables and can be precomputed
at each iteration of the interior-point method. We verify our method and
compare it to ADMM in numerical experiments.
Comment: extended version, 50 pages, 9 figures
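The closed-form flavor of ADMM updates mentioned above can be illustrated on a standard problem. This is the textbook scaled-form ADMM for the lasso, not the paper's interior-point/ADMM hybrid, and all problem data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam, rho = 60, 100, 0.1, 1.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Scaled-form ADMM for  min 0.5*||Ax-b||^2 + lam*||z||_1  s.t.  x = z.
# Every update is in closed form: a cached linear solve and a soft-threshold.
x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)
AtA_rho = A.T @ A + rho * np.eye(n)   # factor once, reuse every iteration
Atb = A.T @ b
for _ in range(1000):
    x = np.linalg.solve(AtA_rho, Atb + rho * (z - u))  # x-update (quadratic)
    w = x + u
    z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # l1 prox
    u = u + x - z                                      # dual (consensus) update

grad = A.T @ (A @ z - b)   # for the l1 optimality check |grad_i| <= lam
```

The expensive object, `AtA_rho`, is fixed across iterations and can be prepared in advance — the same precomputation pattern the abstract highlights for its primal-variable updates.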
A Warm-start Interior-point Method for Predictive Control
In predictive control, a quadratic program (QP) needs to be solved at each sampling instant. We present a new warm-start strategy for solving, with an interior-point method, a QP whose data is slightly perturbed from the previous QP. In this strategy, an initial guess of the unknown variables in the perturbed problem is determined from the computed solution of the previous problem. We demonstrate the effectiveness of our warm-start strategy on a number of online benchmark problems. Numerical results indicate that the effectiveness of the proposed technique depends on the size of the perturbation, and that it leads to a reduction of 30-74% in floating-point operations compared to a cold-start interior-point method.
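The warm-start effect can be reproduced with a minimal log-barrier method on a bound-constrained QP. The solver, barrier schedule, and perturbation size below are all illustrative assumptions, and the warm start is the naive one (reuse the previous solution), not necessarily the paper's strategy:

```python
import numpy as np

def barrier_newton(Q, c, x0, mu, tol=1e-8, maxiter=100):
    """Damped Newton on  min 0.5 x'Qx + c'x - mu*sum(log x)  over x > 0."""
    f = lambda x: 0.5 * x @ Q @ x + c @ x - mu * np.sum(np.log(x))
    x = x0.copy()
    for k in range(maxiter):
        g = Q @ x + c - mu / x
        if np.linalg.norm(g) < tol:
            return x, k
        dx = -np.linalg.solve(Q + np.diag(mu / x**2), g)
        t, neg = 1.0, dx < 0
        if neg.any():                    # fraction-to-boundary: keep x > 0
            t = min(1.0, 0.99 * np.min(-x[neg] / dx[neg]))
        while f(x + t * dx) > f(x) + 1e-4 * t * (g @ dx):  # Armijo backtrack
            t *= 0.5
        x = x + t * dx
    return x, maxiter

def solve_qp(Q, c, x0, mus):
    """Barrier continuation; returns solution and total Newton iterations."""
    x, total = x0, 0
    for mu in mus:
        x, k = barrier_newton(Q, c, x, mu)
        total += k
    return x, total

rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
Q = M @ M.T / n + np.eye(n)              # well-conditioned SPD Hessian
c = rng.standard_normal(n)

mus = [10.0 ** (-i) for i in range(7)]   # 1, 0.1, ..., 1e-6
x_cold, cold_iters = solve_qp(Q, c, np.ones(n), mus)

c2 = c + 1e-3 * rng.standard_normal(n)   # slightly perturbed QP data
x_warm, warm_iters = solve_qp(Q, c2, x_cold, [1e-6])  # warm start, final mu
```

For a small perturbation the warm-started run skips the continuation phase entirely and needs only a handful of Newton steps, mirroring the reduction in floating-point operations the abstract reports.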
