A Unified Successive Pseudo-Convex Approximation Framework
In this paper, we propose a successive pseudo-convex approximation algorithm
to efficiently compute stationary points for a large class of possibly
nonconvex optimization problems. The stationary points are obtained by solving
a sequence of successively refined approximate problems, each of which is much
easier to solve than the original problem. To achieve convergence, the
approximate problem only needs to exhibit a weak form of convexity, namely,
pseudo-convexity. We show that the proposed framework not only includes as
special cases a number of existing methods, for example, the gradient method
and the Jacobi algorithm, but also leads to new algorithms which enjoy easier
implementation and faster convergence. We also propose a novel line search
method for nondifferentiable optimization problems; the search is carried out
over a properly constructed differentiable function and is therefore simpler
to implement than state-of-the-art line search techniques that operate
directly on the original nondifferentiable objective
function. The advantages of the proposed algorithm are shown, both
theoretically and numerically, by several example applications, namely, MIMO
broadcast channel capacity computation, energy efficiency maximization in
massive MIMO systems, and LASSO in sparse signal recovery.
Comment: submitted to IEEE Transactions on Signal Processing; original title:
A Novel Iterative Convex Approximation Method
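To make the successive-approximation template concrete, the following is a
minimal Python sketch applied to LASSO, one of the example applications named
in the abstract. It assumes a Jacobi-type separable quadratic approximation
solved in closed form by soft-thresholding and uses plain backtracking on the
nonsmooth objective in place of the paper's simplified line search; the names
sca_lasso and soft_threshold and all parameter choices are illustrative, not
taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sca_lasso(A, b, mu, max_iter=200, tol=1e-8):
    """Sketch of a successive-approximation loop for
       minimize 0.5*||A x - b||^2 + mu*||x||_1.
    Each iteration solves a separable (Jacobi-type) convex approximation in
    closed form, then backtracks along the best-response direction."""
    x = np.zeros(A.shape[1])
    curv = np.sum(A * A, axis=0) + 1e-12      # per-coordinate curvature ||a_k||^2

    def obj(z):
        r = A @ z - b
        return 0.5 * r @ r + mu * np.sum(np.abs(z))

    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)              # gradient of the smooth part
        # best response of the separable approximate problem (one soft threshold
        # per coordinate, computable fully in parallel)
        Bx = soft_threshold(x - grad / curv, mu / curv)
        d = Bx - x                            # descent direction
        # plain backtracking on the true objective; a stand-in for the paper's
        # simplified line search over a differentiable surrogate
        gamma, f_x = 1.0, obj(x)
        while obj(x + gamma * d) > f_x and gamma > 1e-10:
            gamma *= 0.5
        x = x + gamma * d
        if np.linalg.norm(gamma * d) <= tol:
            break
    return x
```

Because the approximate problem is separable, the best response is computed
coordinate-wise in parallel, which is the practical appeal of Jacobi-type
updates that the abstract alludes to.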
Constrained Global Optimization by Smoothing
This paper proposes a novel technique called "successive stochastic
smoothing" that optimizes nonsmooth and discontinuous functions subject to
constraints. Our methodology supports both local and global optimization,
making it applicable to a wide range of problems. First, a
constrained problem is reduced to an unconstrained one by the exact nonsmooth
penalty function method, which does not assume the existence of the objective
function outside the feasible area and does not require the selection of the
penalty coefficient. This reduction is exact in the case of minimization of a
lower semicontinuous function under convex constraints. Then the resulting
objective function is sequentially smoothed by the kernel method starting from
relatively strong smoothing and with a gradually vanishing degree of smoothing.
Each smoothed function is minimized locally by finite-difference stochastic
gradient descent with trajectory averaging; the stochastic gradients of the
smoothed functions are estimated by finite differences over stochastic
directions sampled from the kernel. We investigate the convergence rate of
this stochastic finite-difference method on convex optimization problems. The
"successive smoothing" algorithm uses the result of each optimization run to
select the starting point for optimizing the next, less smoothed function.
Smoothing provides the "successive smoothing" method with some global
properties. We illustrate the performance of the "successive stochastic
smoothing" method on test-constrained optimization problems from the
literature.Comment: 17 pages, 1 tabl
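The pipeline described in the abstract can be sketched as follows, under
several assumptions not stated by the authors: a Gaussian smoothing kernel, a
fixed step size per stage, a hand-picked schedule of smoothing levels, and the
exact nonsmooth penalty for the constraints already folded into the objective
f. All names and parameters below are illustrative.

```python
import numpy as np

def smoothed_fd_grad(f, x, sigma, num_dirs=8, rng=None):
    """Finite-difference stochastic gradient estimate of a kernel-smoothed f.
    Directions are sampled from a Gaussian smoothing kernel (an assumed choice)."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x, dtype=float)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)      # direction sampled from the kernel
        g += (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma) * u
    return g / num_dirs

def successive_stochastic_smoothing(f, x0, sigmas=(1.0, 0.3, 0.1, 0.03),
                                    steps=200, lr=0.05, seed=0):
    """Run SGD with trajectory averaging on a sequence of progressively less
    smoothed objectives, warm-starting each stage at the previous stage's
    averaged iterate. Constraint handling via the exact nonsmooth penalty is
    assumed to be folded into f already."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for sigma in sigmas:                      # gradually vanishing smoothing
        avg = np.array(x)
        for t in range(1, steps + 1):
            g = smoothed_fd_grad(f, x, sigma, rng=rng)
            x = x - lr * g                    # stochastic gradient step
            avg += (x - avg) / (t + 1)        # running (trajectory) average
        x = avg                               # warm start for the next stage
    return x
```

Each stage warm-starts from the trajectory average of the previous, more
strongly smoothed stage, which is what the abstract credits for the method's
global properties.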