Bounded perturbation resilience of projected scaled gradient methods
We investigate projected scaled gradient (PSG) methods for convex
minimization problems. These methods perform a descent step along a diagonally
scaled gradient direction followed by a feasibility regaining step via
orthogonal projection onto the constraint set. This constitutes a generalized
algorithmic structure that encompasses as special cases the gradient projection
method, the projected Newton method, the projected Landweber-type methods and
the generalized Expectation-Maximization (EM)-type methods. We prove the
convergence of the PSG methods in the presence of bounded perturbations. This
resilience to bounded perturbations makes it possible to apply the recently
developed superiorization methodology to PSG methods, in particular to the EM
algorithm.
Comment: Computational Optimization and Applications, accepted for publication.
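The iteration just described admits a compact sketch. Below is a minimal Python
illustration of one perturbed PSG step, assuming a least-squares objective, box
constraints (so the orthogonal projection is a coordinatewise clip), a fixed
diagonal scaling, and summable perturbation weights beta_k = 0.5^k; all of these
concrete choices are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Sketch of a projected scaled gradient (PSG) iteration with a bounded
# perturbation, under assumed choices: f(x) = 0.5*||Ax - b||^2, box
# constraints [lo, hi], and a fixed diagonal scaling D.
def psg_step(x, A, b, D, step, lo, hi, beta=0.0, v=None):
    """x_{k+1} = P_C(x_k + beta_k * v_k - step * D * grad f(x_k))."""
    grad = A.T @ (A @ x - b)           # gradient of the least-squares objective
    y = x - step * D * grad            # descent along the diagonally scaled direction
    if v is not None:
        y = y + beta * v               # bounded perturbation (superiorization term)
    return np.clip(y, lo, hi)          # feasibility-regaining orthogonal projection

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x = np.zeros(5)
for k in range(100):
    v = -np.sign(x)                    # e.g., a direction reducing a secondary criterion
    x = psg_step(x, A, b, D=np.ones(5), step=1e-2,
                 lo=-1.0, hi=1.0, beta=0.5 ** k, v=v)  # summable beta_k
```

The summable weights beta_k keep the perturbations bounded in the sense needed
for resilience, while the clip realizes the orthogonal projection onto the box.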
Stochastic Frank-Wolfe for Composite Convex Minimization
A broad class of convex optimization problems can be formulated as a
semidefinite program (SDP): the minimization of a convex function over the
positive-semidefinite cone subject to affine constraints. The majority of
classical SDP solvers are designed for the deterministic setting where problem
data is readily available. In this setting, generalized conditional gradient
methods (aka Frank-Wolfe-type methods) provide scalable solutions by leveraging
the so-called linear minimization oracle instead of the projection onto the
semidefinite cone. Most problems in machine learning and modern engineering
applications, however, contain some degree of stochasticity. In this work, we
propose the first conditional-gradient-type method for solving stochastic
optimization problems under affine constraints. Our method guarantees
a convergence rate in expectation on both the objective residual and the
feasibility gap.
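To make the role of the linear minimization oracle concrete, here is a minimal
Python sketch of conditional gradient (Frank-Wolfe) iterations over the
trace-bounded positive-semidefinite cone; the linear cost, the symmetric-noise
stand-in for stochastic gradients, and the classical 2/(k+2) step size are
assumptions for illustration, not the estimator proposed in the paper.

```python
import numpy as np

# The linear minimization oracle over {X psd, tr(X) <= tau} needs only the
# bottom eigenpair of the gradient, which is what lets Frank-Wolfe-type
# methods scale: no projection onto the semidefinite cone is ever computed.
def lmo_psd(G, tau):
    w, V = np.linalg.eigh(G)           # eigenvalues in ascending order
    if w[0] >= 0:                      # gradient is PSD: the zero matrix is optimal
        return np.zeros_like(G)
    u = V[:, :1]                       # unit eigenvector of the smallest eigenvalue
    return tau * (u @ u.T)             # rank-one extreme point of the feasible set

def fw_step(X, grad_estimate, tau, k):
    S = lmo_psd(grad_estimate, tau)    # LMO call replaces the projection step
    eta = 2.0 / (k + 2.0)              # classical Frank-Wolfe step size (assumed)
    return (1 - eta) * X + eta * S     # convex combination preserves feasibility

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
C = (B + B.T) / 2                      # symmetric cost: f(X) = <C, X>
X = np.zeros((5, 5))
for k in range(50):
    N = rng.standard_normal((5, 5))
    g = C + 0.1 * (N + N.T) / 2        # noisy gradient, a stand-in for stochasticity
    X = fw_step(X, g, tau=1.0, k=k)
```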
The Convergence Guarantees of a Non-convex Approach for Sparse Recovery
In the area of sparse recovery, numerous studies suggest that non-convex
penalties may induce better sparsity than convex ones, yet the corresponding
non-convex algorithms have so far lacked convergence guarantees from the
initial solution to the global optimum. This paper provides performance
guarantees of a non-convex approach for sparse recovery. Specifically, the
concept of weak convexity is incorporated into a class of sparsity-inducing
penalties to characterize the non-convexity. Borrowing the idea of the
projected subgradient method, an algorithm is proposed to solve the non-convex
optimization problem. In addition, a uniform approximate projection is adopted
in the projection step to make the algorithm computationally tractable for
large-scale problems. The convergence analysis is carried out in the noisy
scenario: if the non-convexity of the penalty is below a threshold (which is
inversely proportional to the distance between the initial solution and the
sparse signal), the recovery error of the solution is linear in both the step
size and the noise term. Numerical simulations are
implemented to test the performance of the proposed approach and verify the
theoretical analysis.
Comment: 33 pages, 7 figures.
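As a rough illustration of the projected subgradient template in this abstract,
the sketch below takes the minimax concave penalty (MCP) as one example of a
weakly convex sparsity-inducing penalty and uses an exact pseudoinverse
projection onto {x : Ax = y} as a simple stand-in for the paper's uniform
approximate projection; none of these concrete choices is claimed to match the
paper's construction.

```python
import numpy as np

# Projected subgradient sketch for a weakly convex sparsity penalty. The
# minimax concave penalty (MCP) is weakly convex with modulus 1/gamma; its
# use here is an illustrative assumption, not the paper's penalty class.
def mcp_subgrad(x, lam=0.1, gamma=5.0):
    g = lam * np.sign(x) - x / gamma   # derivative of MCP where |x| <= lam*gamma
    g[np.abs(x) > lam * gamma] = 0.0   # MCP is flat beyond lam*gamma
    return g

def project_affine(x, A, y, A_pinv):
    """Exact projection onto {x : Ax = y}; a stand-in for the paper's
    uniform approximate projection."""
    return x - A_pinv @ (A @ x - y)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
y = A @ x_true                         # noiseless measurements for simplicity
A_pinv = np.linalg.pinv(A)
x = A_pinv @ y                         # feasible (least-norm) initialization
for k in range(200):
    x = project_affine(x - 0.01 * mcp_subgrad(x), A, y, A_pinv)
```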