Error bound and convergence analysis of matrix splitting algorithms for the affine variational inequality problem
Cover title. Includes bibliographical references (p. 13-15). Supported by the National Science Foundation (NSF-DDM-8903385), by the U.S. Army Research Office, Center for Intelligent Control Systems (DAAL03-86-K-0171), and by a grant from the Science and Engineering Research Board of McMaster University. By Paul Tseng and Zhi-Quan Luo.
Two new weak constraint qualifications and applications
We present two new constraint qualifications (CQs) that are weaker than the recently introduced relaxed constant positive linear dependence (RCPLD) CQ. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new CQ, which we call the constant rank of the subspace component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, such as local stability and the validity of an error bound. We also introduce an even weaker CQ, called the constant positive generator (CPG), which can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results of algorithms belonging to all the main classes of nonlinear optimization methods: sequential quadratic programming, augmented Lagrangians, interior point algorithms, and inexact restoration. Facultad de Ciencias Exactas, Departamento de Matemática.
Global Optimization for Value Function Approximation
Existing value function approximation methods have been successfully used in
many applications, but they often lack useful a priori error bounds. We propose
a new approximate bilinear programming formulation of value function
approximation, which employs global optimization. The formulation provides
strong a priori guarantees on both robust and expected policy loss by
minimizing specific norms of the Bellman residual. Solving a bilinear program
optimally is NP-hard, but this is unavoidable because the Bellman-residual
minimization itself is NP-hard. We describe and analyze both optimal and
approximate algorithms for solving bilinear programs. The analysis shows that
these algorithms offer a convergent generalization of approximate policy
iteration. We also briefly analyze the behavior of bilinear programming
algorithms under incomplete samples. Finally, we demonstrate that the proposed
approach can consistently minimize the Bellman residual on simple benchmark
problems.
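The quantity the formulation bounds can be illustrated concretely. Below is a minimal sketch of the L-infinity Bellman residual of a candidate value function, evaluated on a hypothetical two-state, two-action MDP (the numbers are toy values chosen for illustration, not taken from the paper):

```python
# Minimal sketch: L-infinity norm of the Bellman residual for a candidate
# value function v on a toy two-state, two-action MDP.
# All MDP numbers below are hypothetical, purely for illustration.
gamma = 0.9                              # discount factor
P = [[[0.8, 0.2], [0.1, 0.9]],           # P[s][a][s']: transition probabilities
     [[0.5, 0.5], [0.3, 0.7]]]
R = [[1.0, 0.0], [0.0, 2.0]]             # R[s][a]: immediate rewards

def bellman_residual(v):
    """max_s |(T v)(s) - v(s)|, where T is the Bellman optimality operator."""
    res = []
    for s in range(2):
        backup = max(R[s][a] + gamma * sum(P[s][a][t] * v[t] for t in range(2))
                     for a in range(2))
        res.append(abs(backup - v[s]))
    return max(res)
```

Approximate policy iteration controls this quantity only indirectly; the bilinear-programming formulation described above minimizes norms of it directly over the representable value functions, which is where the a priori loss guarantees come from.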
Large-scale Binary Quadratic Optimization Using Semidefinite Relaxation and Applications
In computer vision, many problems such as image segmentation, pixel
labelling, and scene parsing can be formulated as binary quadratic programs
(BQPs). For submodular problems, cut-based methods can be employed to
efficiently solve large-scale problems. However, general nonsubmodular problems
are significantly more challenging to solve. Finding a solution when the
problem is large enough to be of practical interest, however, typically
requires relaxation. Two standard relaxation methods are widely used for
solving general BQPs--spectral methods and semidefinite programming (SDP), each
with their own advantages and disadvantages. Spectral relaxation is simple and
easy to implement, but its bound is loose. Semidefinite relaxation has a
tighter bound, but its computational complexity is high, especially for
large-scale problems. In this work, we present a new SDP formulation for BQPs, with
two desirable properties. First, it has a similar relaxation bound to
conventional SDP formulations. Second, compared with conventional SDP methods,
the new SDP formulation leads to a significantly more efficient and scalable
dual optimization approach, which has the same degree of complexity as spectral
methods. We then propose two solvers, namely, quasi-Newton and smoothing Newton
methods, for the dual problem. Both of them are significantly more efficient
than standard interior-point methods. In practice, the smoothing Newton solver
is faster than the quasi-Newton solver for dense or medium-sized problems,
while the quasi-Newton solver is preferable for large sparse/structured
problems. Our experiments on a few computer vision applications including
clustering, image segmentation, co-segmentation and registration show the
potential of our SDP formulation for solving large-scale BQPs. Comment: Fixed
some typos. 18 pages. Accepted to IEEE Transactions on Pattern Analysis and
Machine Intelligence.
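The gap between the two standard relaxations is easy to see on a tiny instance. The following is a hedged sketch (the 3x3 matrix is hypothetical, chosen only to expose the gap, and is not from the paper) comparing the exact BQP optimum with the spectral-relaxation bound n * lambda_max(A), estimated here by power iteration:

```python
import itertools

# Toy instance: a "frustrated triangle" where no sign assignment can
# satisfy all pairwise terms, so the spectral bound is strictly loose.
A = [[0, -1, -1],
     [-1, 0, -1],
     [-1, -1, 0]]
n = len(A)

def quad(x):
    """x^T A x for a sign vector x."""
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Exact BQP optimum max over x in {-1,1}^n by enumeration (tiny n only).
exact = max(quad(x) for x in itertools.product((-1, 1), repeat=n))

# Spectral relaxation: replacing x in {-1,1}^n by the sphere ||x||^2 = n
# gives the bound n * lambda_max(A). Power iteration is run on the
# Gershgorin-shifted matrix A + c*I so that the largest eigenvalue of A
# is the dominant one of the iterated (positive semidefinite) matrix.
c = max(sum(abs(a) for a in row) for row in A)
v = [1.0, 0.0, 0.0]
for _ in range(200):
    w = [sum((A[i][j] + c * (i == j)) * v[j] for j in range(n))
         for i in range(n)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
lam_max = sum(v[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

spectral_bound = n * lam_max   # looser than the exact optimum on this instance
```

On this instance the exact optimum is 2 while the spectral bound is 3, illustrating the looseness the abstract mentions; an SDP relaxation would tighten the bound at higher cost, and the paper's dual formulation aims to get that tightness at spectral-method-like complexity.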
A Newton-bracketing method for a simple conic optimization problem
For the Lagrangian-DNN relaxation of quadratic optimization problems (QOPs),
we propose a Newton-bracketing method to improve the performance of the
bisection-projection method implemented in BBCPOP [to appear in ACM Trans.
Math. Softw., 2019]. The relaxation problem is converted into the problem of
finding the largest zero $y^*$ of a continuously differentiable (except at
$y^*$) convex function $g : \mathbb{R} \to \mathbb{R}$ such that $g(y) = 0$ if
$y \leq y^*$ and $g(y) > 0$ otherwise. In theory, the method generates lower
and upper bounds of $y^*$, both converging to $y^*$. Their convergence is
quadratic if the right derivative of $g$ at $y^*$ is positive. Accurate
computation of $g'(y)$ is necessary for the robustness of the method, but it
is difficult to achieve in practice. As an alternative, we present a
secant-bracketing method. We demonstrate that the method improves the quality
of the lower bounds obtained by BBCPOP and SDPNAL+ for binary QOP instances
from BIQMAC. Moreover, new lower bounds for the unknown optimal values of
large-scale QAP instances from QAPLIB are reported. Comment: 19 pages, 2
figures.
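The Newton-from-the-right idea behind such a method can be sketched on a toy function of the stated shape. The sketch below assumes g(y) = max(0, e^(y-3) - 1), whose largest zero is y* = 3 with positive right derivative; it illustrates only the upper-bound iteration, not the BBCPOP implementation:

```python
import math

def largest_zero(g, dg, y0, tol=1e-12, max_iter=200):
    """Newton iteration from the right for a convex g that vanishes on
    (-inf, y*] and is positive beyond: a tangent line underestimates a
    convex function, so each tangent-line zero is still an upper bound
    on y*, and the iterates decrease monotonically toward y*."""
    y = y0
    for _ in range(max_iter):
        gy = g(y)
        if gy <= tol:          # numerically inside the flat region: y ~ y*
            return y
        y = y - gy / dg(y)     # Newton step; dg(y) > 0 wherever g(y) > 0
    return y

# Toy instance: g(y) = max(0, e^(y-3) - 1), largest zero y* = 3, with a
# positive right derivative at y*, so convergence near y* is fast.
g = lambda y: max(0.0, math.exp(y - 3.0) - 1.0)
dg = lambda y: math.exp(y - 3.0)
```

The actual bracketing method pairs such decreasing upper bounds with increasing lower bounds on $y^*$, and the secant variant replaces the derivative with a finite-difference slope when $g'(y)$ cannot be computed accurately.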