DOLPHIn - Dictionary Learning for Phase Retrieval
We propose a new algorithm to learn a dictionary for reconstructing and
sparsely encoding signals from measurements without phase. Specifically, we
consider the task of estimating a two-dimensional image from squared-magnitude
measurements of a complex-valued linear transformation of the original image.
Several recent phase retrieval algorithms exploit underlying sparsity of the
unknown signal in order to improve recovery performance. In this work, we
consider such a sparse signal prior in the context of phase retrieval, when the
sparsifying dictionary is not known in advance. Our algorithm jointly
reconstructs the unknown signal - possibly corrupted by noise - and learns a
dictionary such that each patch of the estimated image can be sparsely
represented. Numerical experiments demonstrate that our approach can obtain
significantly better reconstructions for phase retrieval problems with noise
than methods that cannot exploit such "hidden" sparsity. Moreover, on the
theoretical side, we provide a convergence result for our method.
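For intuition, here is a minimal numpy sketch of the phaseless measurement
model and a plain gradient step on the intensity misfit. The 1-D signal, the
random complex matrix A, the step size, and the omission of the patch-wise
sparse coding and dictionary updates are all simplifications for illustration,
not the paper's actual algorithm.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes; a real instance would use a 2-D image and its patches.
    n, m = 64, 256
    x_true = rng.standard_normal(n)
    x_true /= np.linalg.norm(x_true)
    A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    y = np.abs(A @ x_true) ** 2          # squared-magnitude (phaseless) measurements

    def grad(x):
        # Gradient of the intensity misfit 0.25 * sum((|Ax|^2 - y)^2) for real x.
        z = A @ x
        return np.real(A.conj().T @ ((np.abs(z) ** 2 - y) * z))

    x = rng.standard_normal(n) / np.sqrt(n)   # random start; DOLPHIn would also
    for _ in range(500):                      # interleave sparse coding of patches
        x -= (0.2 / m) * grad(x)              # and dictionary updates here

From a random start such a plain gradient scheme need not find the true
signal; the sketch only shows the data-fit term that the joint
reconstruction/dictionary-learning alternation acts on.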
Inexact Block Coordinate Descent Algorithms for Nonsmooth Nonconvex Optimization
In this paper, we propose an inexact block coordinate descent algorithm for
large-scale nonsmooth nonconvex optimization problems. At each iteration, a
particular block variable is selected and updated by inexactly solving the
original optimization problem with respect to that block variable. More
precisely, a local approximation of the original optimization problem is
solved. The proposed algorithm has several attractive features, namely, i) high
flexibility, as the approximation function only needs to be strictly convex and
it does not have to be a global upper bound of the original function; ii) fast
convergence, as the approximation function can be designed to exploit the
problem structure at hand and the stepsize is calculated by a line search;
iii) low complexity, as the approximation subproblems are much easier to solve
and the line search scheme is carried out over a properly constructed
differentiable function; iv) guaranteed convergence of a subsequence to a
stationary point, even when the objective function does not have a Lipschitz
continuous gradient. Interestingly, when the approximation subproblem is solved
by a descent algorithm, convergence of a subsequence to a stationary point is
still guaranteed even if the approximation subproblem is solved inexactly by
terminating the descent algorithm after a finite number of iterations. These
features make the proposed algorithm suitable for large-scale problems where
the dimension exceeds the memory and/or the processing capability of the
existing hardware. These features are also illustrated by several applications
in signal processing and machine learning, for instance, network anomaly
detection and phase retrieval.
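As a concrete toy instance of this scheme, the numpy sketch below runs block
coordinate descent on a nonsmooth nonconvex matrix-factorization objective and
solves each convex block subproblem inexactly by stopping a proximal-gradient
descent method after a few iterations. The problem, sizes, and fixed 1/L inner
step are my own illustration, not from the paper, whose more general strictly
convex approximations and line-search stepsize rule are omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy nonsmooth nonconvex problem (illustrative, not from the paper):
    # minimize 0.5*||M - U V^T||_F^2 + lam*(||U||_1 + ||V||_1) over blocks U, V.
    m, n, r, lam = 30, 40, 5, 0.1
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))

    def soft(X, t):
        # Proximal operator of t*||.||_1 (soft-thresholding).
        return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

    def inexact_block_update(X, W, D, inner_iters=5):
        # For fixed W, minimizing 0.5*||D - X W^T||_F^2 + lam*||X||_1 in X is
        # convex. Solve it inexactly: run proximal-gradient descent and stop
        # after a finite number of iterations, as feature iv) above allows.
        L = np.linalg.norm(W.T @ W, 2) + 1e-12   # Lipschitz const. of the smooth part
        for _ in range(inner_iters):
            G = (X @ W.T - D) @ W                # gradient of the smooth part in X
            X = soft(X - G / L, lam / L)
        return X

    for _ in range(100):                         # outer sweeps over the two blocks
        U = inexact_block_update(U, V, M)
        V = inexact_block_update(V, U, M.T)

Each block subproblem here is an l1-regularized least-squares problem, so the
inner proximal-gradient iterations are guaranteed descent steps even though
the overall objective is nonconvex in (U, V) jointly.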
Non-convex Optimization for Machine Learning
A vast majority of machine learning algorithms train their models and perform
inference by solving optimization problems. In order to capture the learning
and prediction problems accurately, structural constraints such as sparsity or
low rank are frequently imposed or else the objective itself is designed to be
a non-convex function. This is especially true of algorithms that operate in
high-dimensional spaces or that train non-linear models such as tensor models
and deep networks.
The freedom to express the learning problem as a non-convex optimization
problem gives immense modeling power to the algorithm designer, but often such
problems are NP-hard to solve. A popular workaround to this has been to relax
non-convex problems to convex ones and use traditional methods to solve the
(convex) relaxed optimization problems. However, this relaxation may be lossy,
and even the relaxed convex problems remain challenging to solve at large scale.
On the other hand, direct approaches to non-convex optimization have met with
resounding success in several domains and remain the methods of choice for the
practitioner, as they frequently outperform relaxation-based techniques -
popular heuristics include projected gradient descent and alternating
minimization. However, these are often poorly understood in terms of their
convergence and other properties.
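To make one of these heuristics concrete, here is a minimal numpy sketch of
projected gradient descent for sparse linear regression, where the projection
onto the non-convex set of s-sparse vectors is a simple hard-thresholding
step. The sizes and step size are my own illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Sparse linear regression with a non-convex sparsity constraint:
    # minimize ||y - A x||^2 subject to ||x||_0 <= s.
    n, d, s = 100, 256, 10
    A = rng.standard_normal((n, d)) / np.sqrt(n)
    x_true = np.zeros(d)
    x_true[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
    y = A @ x_true

    def project_sparse(x, s):
        # Non-convex projection: keep the s largest-magnitude entries.
        z = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-s:]
        z[idx] = x[idx]
        return z

    x = np.zeros(d)
    eta = 1.0                                   # step size; tuned for this scaling
    for _ in range(200):
        x = project_sparse(x + eta * A.T @ (y - A @ x), s)

    print(np.linalg.norm(x - x_true))           # typically small at these sizes,
                                                # though no guarantee is claimed here

Analyses of exactly this kind of iteration, and of alternating minimization,
are what the monograph surveys.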
This monograph presents a selection of recent advances that bridge a
long-standing gap in our understanding of these heuristics. The monograph will
lead the reader through several widely used non-convex optimization techniques,
as well as applications thereof. The goal of this monograph is both to
introduce the rich literature in this area and to equip the reader with the
tools and techniques needed to analyze these simple procedures for non-convex
problems.
Comment: The official publication is available from now publishers via
http://dx.doi.org/10.1561/220000005