Convex optimization over intersection of simple sets: improved convergence rate guarantees via an exact penalty approach
We consider the problem of minimizing a convex function over the intersection
of finitely many simple sets which are easy to project onto. This is an
important problem arising in various domains such as machine learning. The main
difficulty lies in computing the projection of a point onto the intersection of
many sets. Existing approaches yield an infeasible point with an
iteration-complexity of $O(1/\varepsilon^2)$ for nonsmooth problems, with no
guarantees on the infeasibility. By reformulating the problem through exact
penalty functions, we derive first-order algorithms that not only guarantee
that the distance to the intersection is small but also improve the complexity
to $O(1/\varepsilon)$ for nonsmooth and $O(1/\sqrt{\varepsilon})$ for smooth functions. For
composite and smooth problems, this is achieved through a saddle-point
reformulation where the proximal operators required by the primal-dual
algorithms can be computed in closed form. We illustrate the benefits of our
approach on a graph transduction problem and on graph matching.
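The exact-penalty idea can be illustrated with a minimal sketch (this is not the paper's primal-dual algorithm, and all names and the toy problem below are illustrative): replace the constraint $x \in \cap_i C_i$ by the penalty $\rho \sum_i \mathrm{dist}(x, C_i)$ and run a plain subgradient method, using that a subgradient of $\mathrm{dist}(\cdot, C_i)$ at an infeasible point is $(x - P_{C_i}(x))/\|x - P_{C_i}(x)\|$.

```python
import numpy as np

def penalized_subgradient(grad_f, projs, x0, rho=10.0, steps=500):
    """Minimize f(x) + rho * sum_i dist(x, C_i) by the subgradient method.

    grad_f : callable returning a (sub)gradient of the convex objective f
    projs  : list of Euclidean projectors onto the simple sets C_i
    rho    : penalty weight; exactness of the penalty needs rho large enough
    """
    x = x0.astype(float)
    for k in range(1, steps + 1):
        g = grad_f(x)
        for P in projs:                      # subgradient of rho * dist(., C_i)
            r = x - P(x)
            nr = np.linalg.norm(r)
            if nr > 1e-12:                   # inside C_i the subgradient is zero
                g = g + rho * r / nr
        x = x - g / np.sqrt(k)               # diminishing step size
    return x

# Toy problem: minimize ||x - c||^2 over {x >= 0} intersected with {sum(x) <= 1}.
c = np.array([0.9, 0.8, -0.3])
projs = [lambda x: np.maximum(x, 0.0),
         lambda x: x if x.sum() <= 1.0 else x - (x.sum() - 1.0) / x.size]
x_star = penalized_subgradient(lambda x: 2.0 * (x - c), projs, np.zeros(3))
```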
An Infeasible-Point Subgradient Method Using Adaptive Approximate Projections
We propose a new subgradient method for the minimization of nonsmooth convex
functions over a convex set. To speed up computations, we use adaptive
approximate projections that only require the iterates to lie within a certain
distance of the exact projections (a distance which decreases in the course of
the algorithm). In
particular, the iterates in our method can be infeasible throughout the whole
procedure. Nevertheless, we provide conditions which ensure convergence to an
optimal feasible point under suitable assumptions. One convergence result deals
with step size sequences that are fixed a priori. Two other results handle
dynamic Polyak-type step sizes depending on a lower or upper estimate of the
optimal objective function value, respectively. Additionally, we briefly sketch
two applications: Optimization with convex chance constraints, and finding the
minimum l1-norm solution to an underdetermined linear system, an important
problem in Compressed Sensing. (Comment: 36 pages, 3 figures)
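A minimal sketch of the infeasible-iterates idea, under assumed names and a simplified accuracy schedule (the authors' admissible step sizes and projection accuracies are more general): take a subgradient step, then move only to within a shrinking tolerance $\varepsilon_k$ of the exact projection, so iterates may remain infeasible throughout. Here the approximate projection is built from an exact projector purely for illustration; in practice it would be computed more cheaply.

```python
import numpy as np

def approx_projected_subgradient(subgrad, proj, x0, steps=200):
    """Subgradient method with adaptive *approximate* projections.

    subgrad : callable returning a subgradient of the objective at x
    proj    : exact Euclidean projector onto the feasible set (used here
              only to construct an eps_k-approximate projection)
    """
    x = x0.astype(float)
    for k in range(1, steps + 1):
        alpha_k = 1.0 / k                  # step sizes fixed a priori
        eps_k = 1.0 / k**2                 # projection accuracy, tightened over time
        y = x - alpha_k * subgrad(x)
        p = proj(y)                        # any point within eps_k of p is accepted
        d, nd = y - p, np.linalg.norm(y - p)
        # stop eps_k short of the exact projection: the iterate stays infeasible
        x = y if nd <= eps_k else p + (eps_k / nd) * d
    return x
```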
Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods
The convex feasibility problem (CFP) is at the core of the modeling of many
problems in various areas of science. Subgradient projection methods are
important tools for solving the CFP because they enable the use of subgradient
calculations instead of orthogonal projections onto the individual sets of the
problem. Working in a real Hilbert space, we show that the sequential
subgradient projection method is perturbation resilient. By this we mean that
under appropriate conditions the sequence generated by the method converges
weakly, and sometimes also strongly, to a point in the intersection of the
given subsets of the feasibility problem, despite certain perturbations which
are allowed in each iterative step. Unlike previous works on solving the convex
feasibility problem, the involved functions, which induce the feasibility
problem's subsets, need not be convex. Instead, we allow them to belong to a
wider and richer class of functions satisfying a weaker condition that we call
"zero-convexity". This class, which is introduced and discussed here, holds
promise for solving optimization problems in various areas, especially in
non-smooth and non-convex optimization. The relevance of this study to
approximate minimization and to the recent superiorization methodology for
constrained optimization is explained. (Comment: Mathematical Programming Series A, accepted for publication)
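For orientation, the following is a minimal sketch of the classical sequential (cyclic) subgradient projection step for sets of the form $C_i = \{x : g_i(x) \le 0\}$; the paper's contributions, zero-convex $g_i$ and resilience to perturbations, are omitted here, and all names are illustrative.

```python
import numpy as np

def cyclic_subgradient_projections(gs, subgrads, x0, sweeps=100, lam=1.0):
    """Cyclic subgradient projections for the feasibility problem g_i(x) <= 0.

    gs       : list of constraint functions g_i
    subgrads : list of callables returning a subgradient of g_i at x
    lam      : relaxation parameter, typically in (0, 2)
    """
    x = x0.astype(float)
    for _ in range(sweeps):
        for g, sg in zip(gs, subgrads):
            v = g(x)
            if v > 0.0:                           # constraint violated at x
                t = sg(x)
                x = x - lam * (v / (t @ t)) * t   # subgradient projection step
    return x
```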
Rate analysis of inexact dual first order methods: Application to distributed MPC for network systems
In this paper we propose and analyze two dual methods based on inexact
gradient information and averaging that generate approximate primal solutions
for smooth convex optimization problems. The complicating constraints are moved
into the cost using Lagrange multipliers. The dual problem is solved by
inexact first-order methods based on approximate gradients, and we prove
sublinear rates of convergence for these methods. In particular, we provide, for
the first time, estimates on the primal feasibility violation and the primal and
dual suboptimality of the generated approximate primal and dual solutions.
Moreover, we solve the inner problems approximately with a parallel coordinate
descent algorithm and show that it has a linear convergence rate. In our
analysis we rely on the Lipschitz property of the dual function and inexact
dual gradients. Further, we apply these methods to distributed model predictive
control for network systems. By tightening the complicating constraints we are
also able to ensure the primal feasibility of the approximate solutions
generated by the proposed algorithms. We obtain a distributed control strategy
that has the following features: state and input constraints are satisfied,
stability of the plant is guaranteed, and the number of iterations needed to
reach a suboptimal solution can be precisely determined. (Comment: 26 pages, 2
figures)
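A minimal sketch of dual decomposition with primal averaging, the mechanism underlying such schemes (illustrative, not the paper's inexact analysis or its MPC constraint tightening): for $\min f(x)$ subject to $Ax \le b$, ascend the dual with the gradient $Ax(\lambda) - b$ and average the primal iterates; `solve_inner` is an assumed oracle for the inner problem.

```python
import numpy as np

def dual_gradient_with_averaging(solve_inner, A, b, steps=300, alpha=0.01):
    """Projected dual gradient ascent for min f(x) subject to A x <= b.

    solve_inner : callable lam -> argmin_x f(x) + lam @ (A x - b)
                  (in the paper the inner problem is solved only
                  approximately, e.g. by parallel coordinate descent)
    Returns the running average of the primal iterates, for which
    feasibility violation and suboptimality admit sublinear-rate bounds.
    """
    lam = np.zeros(A.shape[0])
    x_avg = None
    for k in range(1, steps + 1):
        x = solve_inner(lam)
        lam = np.maximum(lam + alpha * (A @ x - b), 0.0)  # keep multipliers >= 0
        x_avg = x if x_avg is None else x_avg + (x - x_avg) / k
    return x_avg, lam
```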
Getting Feasible Variable Estimates From Infeasible Ones: MRF Local Polytope Study
This paper proposes a method for construction of approximate feasible primal
solutions from dual ones for large-scale optimization problems possessing
certain separability properties. Whereas infeasible primal estimates can
typically be produced from (sub-)gradients of the dual function, it is often
not easy to project them to the primal feasible set, since the projection
itself has a complexity comparable to the complexity of the initial problem. We
propose an alternative, efficient method for obtaining feasibility and show that
the properties governing its convergence to the optimum are similar to those of
the Euclidean projection. We apply our method to the local
polytope relaxation of inference problems for Markov Random Fields and
demonstrate its superiority over existing methods. (Comment: 20 pages, 4 figures)
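As a point of contrast only (this is not the paper's construction): the cheapest classical way to obtain a feasible point of the MRF local polytope from dual variables is to decode a reparametrization into an integral labeling, which is automatically a vertex of the polytope; the method above improves on such naive feasibility repairs. The array shapes below are assumptions.

```python
import numpy as np

def decode_reparametrization(unary, messages):
    """Naive feasible primal estimate for the MRF local polytope.

    unary    : (n_vars, n_labels) array of unary potentials
    messages : (n_vars, n_labels) array of aggregated incoming dual messages
    Any integral labeling corresponds to a vertex of the local polytope,
    so the decoded labeling is feasible by construction.
    """
    return np.argmin(unary + messages, axis=1)  # locally best label per variable
```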
Accelerating two projection methods via perturbations with application to Intensity-Modulated Radiation Therapy
Constrained convex optimization problems arise naturally in many real-world
applications. One strategy for solving them approximately is to translate
them into a sequence of convex feasibility problems via the recently developed
level set scheme and then solve each feasibility problem using projection
methods. However, if the problem is ill-conditioned, projection methods often
show zigzagging behavior and therefore converge slowly.
To address this issue, we exploit the bounded perturbation resilience of the
projection methods and introduce two new perturbations which avoid zigzagging
behavior. The first perturbation is in the spirit of $k$-step methods and uses
gradient information from previous iterates. The second uses the approach of
surrogate constraint methods combined with relaxed, averaged projections.
We apply two different projection methods, in both their unperturbed and their
two perturbed versions, to linear feasibility problems as well as to nonlinear
optimization problems arising from intensity-modulated radiation
therapy (IMRT) treatment planning. We demonstrate that for all the considered
problems the perturbations can significantly accelerate the convergence of the
projection methods and hence the overall procedure of the level set scheme. For
the IMRT optimization problems the perturbed projection methods found an
approximate solution up to 4 times faster than the unperturbed methods while at
the same time achieving objective function values which were 0.5 to 5.1% lower. (Comment: Accepted for publication in Applied Mathematics & Optimization)
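A minimal sketch of a bounded-perturbation-resilient projection scheme in the spirit described above (illustrative; the paper's two perturbations are more refined): cyclic halfspace projections for a linear feasibility problem $Ax \le b$, plus a summable, momentum-style perturbation term. All parameter names are assumptions.

```python
import numpy as np

def perturbed_cyclic_projections(A, b, x0, sweeps=200, beta=0.5):
    """Cyclic halfspace projections for A x <= b with a bounded perturbation.

    Before each sweep, a momentum-like term with summable weights beta/k^2
    is added; bounded perturbation resilience permits such terms without
    destroying convergence, and they can damp zigzagging.
    """
    x = x0.astype(float)
    x_prev = x.copy()
    for k in range(1, sweeps + 1):
        x_old = x.copy()
        x = x + (beta / k**2) * (x - x_prev)   # summable perturbation
        for a, bi in zip(A, b):                # one sweep of halfspace projections
            viol = a @ x - bi
            if viol > 0.0:
                x = x - (viol / (a @ a)) * a
        x_prev = x_old
    return x
```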