Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as a
solution of an optimization problem. In this light, the non-linear,
non-convex, and large-scale nature of many of these inversions alone gives
rise to some very challenging optimization problems. The inverse problem
community has long been developing techniques for solving such optimization
tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which might have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and the machine learning
communities, we hope that this survey can serve as a bridge in bringing
together these two communities and encourage cross-fertilization of ideas.
Comment: 13 pages
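As a concrete illustration of the survey's starting point, that inversion is often cast as an optimization problem, here is a minimal sketch (not taken from the survey): a noisy linear inverse problem posed as Tikhonov-regularized least squares and solved by plain gradient descent. The problem sizes, noise level, and regularization weight are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not from the survey): recover x from b = A @ x_true + noise
# by casting inversion as the optimization problem
#   min_x 0.5*||A x - b||^2 + 0.5*alpha*||x||^2,
# solved with gradient descent. A, b, and alpha are illustrative.

rng = np.random.default_rng(0)
m, n = 50, 100                          # under-determined: fewer data than unknowns
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

alpha = 0.1                              # regularization weight (assumed)
L = np.linalg.norm(A, 2) ** 2 + alpha    # Lipschitz constant of the gradient

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b) + alpha * x  # gradient of the objective
    x -= grad / L                          # step size 1/L guarantees descent

print("residual norm:", np.linalg.norm(A @ x - b))
```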
Successive Convex Approximation Algorithms for Sparse Signal Estimation with Nonconvex Regularizations
In this paper, we propose a successive convex approximation framework for
sparse optimization in which the nonsmooth regularization function in the
objective is nonconvex and can be written as the difference of two convex
functions. The proposed framework is based on a nontrivial combination of the
majorization-minimization framework and the successive convex approximation
framework proposed in the literature for convex regularization functions. The
proposed framework has several attractive features, namely: i) flexibility,
as different choices of the approximate function lead to different types of
algorithms; ii) fast convergence, as the problem structure can be better
exploited by a proper choice of the approximate function and the stepsize is
calculated by exact line search; iii) low complexity, as the approximate
function is convex and the line search is carried out over a differentiable
function; iv) guaranteed convergence to a stationary point. We demonstrate
these features with two example applications in subspace learning, namely the
network anomaly detection problem and the sparse subspace clustering problem.
Customizing the proposed framework with a best-response-type approximation,
we obtain soft-thresholding algorithms with exact line search in which all
elements of the unknown parameter are updated in parallel according to
closed-form expressions. The attractive features of the proposed algorithms
are illustrated numerically.
Comment: submitted to IEEE Journal of Selected Topics in Signal Processing,
special issue on Robust Subspace Learning
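To make the parallel best-response update concrete, here is a minimal sketch of its convex special case, an l1-regularized least-squares problem: all coordinates are updated in parallel by closed-form soft-thresholding, and the stepsize comes from an exact line search with a closed-form minimizer. The paper's nonconvex difference-of-convex regularizers yield updates of the same form after majorization; this simplified instance, the function names, and the test data are assumptions, not the authors' code.

```python
import numpy as np

# Sketch of a parallel soft-thresholding update with exact line search
# for min_x 0.5*||A x - b||^2 + lam*||x||_1 (convex special case).

def soft(v, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sca_l1(A, b, lam, iters=100):
    n = A.shape[1]
    d = np.sum(A * A, axis=0)             # per-coordinate curvature ||a_i||^2
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        # Best response: every coordinate updated in parallel, closed form.
        bx = soft(x - grad / d, lam / d)
        dx = bx - x                        # update direction
        # Exact line search over gamma in [0, 1]: the smooth part is
        # quadratic in gamma and the l1 part is bounded by a linear term,
        # so the minimizing stepsize has a closed form.
        Adx = A @ dx
        num = grad @ dx + lam * (np.abs(bx).sum() - np.abs(x).sum())
        den = Adx @ Adx
        gamma = np.clip(-num / den, 0.0, 1.0) if den > 0 else 1.0
        x = x + gamma * dx
    return x

# Tiny usage example with synthetic sparse data (assumed values).
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 120))
x_true = np.zeros(120)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = sca_l1(A, b, lam=0.1)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```

The exact line search is cheap here because the smooth term is quadratic along the update direction, so no iterative backtracking is needed; this is the "line search over a differentiable function" feature the abstract highlights.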