Projected gradient descent for non-convex sparse spike estimation
We propose a new algorithm for sparse spike estimation from Fourier
measurements. Based on theoretical results on non-convex optimization
techniques for off-the-grid sparse spike estimation, we present a projected
gradient descent algorithm coupled with a spectral initialization procedure.
Our algorithm makes it possible to estimate the positions of large numbers of
Diracs in 2-D from random Fourier measurements. Alongside the algorithm, we
present qualitative theoretical insights that explain its success. This opens
a new direction for practical off-the-grid spike estimation with theoretical
guarantees in imaging applications.
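The abstract describes the method only at a high level, so here is a minimal 1-D sketch of the idea (the paper works in 2-D): gradient descent on spike amplitudes and positions for a least-squares Fourier data fit, with positions clipped back onto an assumed domain [0, 1], and a crude grid-correlation peak-picking step standing in for the paper's spectral initialization. The frequencies, step sizes, and domain are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, S = 128, 3                           # measurements, number of spikes (assumed)
f = rng.uniform(-20.0, 20.0, size=K)    # random Fourier frequencies (assumed)
t_true = np.array([0.21, 0.48, 0.77])   # ground-truth spike positions in [0, 1]
a_true = np.array([1.0, 0.8, 1.2])      # ground-truth amplitudes

A = lambda t: np.exp(-2j * np.pi * np.outer(f, t))   # Fourier measurement matrix
y = A(t_true) @ a_true                                # noiseless measurements

def grad(a, t):
    """Gradient of 0.5 * ||A(t) a - y||^2 w.r.t. amplitudes a and positions t."""
    M = A(t)
    r = M @ a - y
    g_a = (M.conj().T @ r).real
    g_t = a * ((2j * np.pi * f)[:, None] * M.conj() * r[:, None]).sum(axis=0).real
    return g_a, g_t

# Crude stand-in for the paper's spectral initialization: correlate the data
# with atoms on a fine grid and keep the S strongest local peaks.
grid = np.linspace(0.0, 1.0, 512)
corr = np.abs(A(grid).conj().T @ y)
peaks = np.where((corr[1:-1] > corr[:-2]) & (corr[1:-1] > corr[2:]))[0] + 1
t = grid[peaks[np.argsort(corr[peaks])[-S:]]]
a = np.linalg.lstsq(A(t), y, rcond=None)[0].real

# Projected gradient descent; heuristic step sizes, positions projected
# (clipped) back onto the assumed domain [0, 1] after every step.
for _ in range(2000):
    g_a, g_t = grad(a, t)
    a -= 2e-3 * g_a
    t = np.clip(t - 1e-6 * g_t, 0.0, 1.0)

print("estimated positions:", np.sort(t), "vs true:", t_true)
```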
A Nonconvex Projection Method for Robust PCA
Robust principal component analysis (RPCA) is a well-studied problem with the
goal of decomposing a matrix into the sum of low-rank and sparse components. In
this paper, we propose a nonconvex feasibility reformulation of the RPCA
problem and apply an alternating projection method to solve it. To the best of
our knowledge, ours is the first method that solves the RPCA problem without
recourse to an objective function, a convex relaxation, or surrogate convex
constraints. Through extensive numerical experiments on a variety of
applications, including shadow removal, background estimation, face detection,
and galaxy evolution, we demonstrate that our approach matches, and often
significantly outperforms, the current state of the art.
Comment: In the proceedings of the Thirty-Third AAAI Conference on Artificial
Intelligence (AAAI-19).
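As a rough illustration of the feasibility viewpoint, the sketch below alternates projections between the set of matrices of rank at most r (via truncated SVD) and the set of matrices with at most s nonzeros (via hard thresholding), so that the two components sum to the input. This is a generic alternating-projection scheme in the spirit of the abstract, not necessarily the authors' exact formulation; the rank and cardinality bounds are assumed known.

```python
import numpy as np

def rpca_altproj(M, rank, card, iters=100):
    """Alternately project M - S onto the rank-<=`rank` matrices and M - L
    onto the matrices with roughly `card` nonzeros, seeking L + S = M."""
    S = np.zeros_like(M)
    for _ in range(iters):
        # projection onto the rank constraint: truncated SVD of M - S
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # projection onto the sparsity constraint: keep the `card` largest
        # entries of M - L in magnitude, zero out the rest
        R = M - L
        cut = np.partition(np.abs(R).ravel(), R.size - card)[R.size - card]
        S = np.where(np.abs(R) >= cut, R, 0.0)
    return L, S

# Toy check: recover a rank-5 matrix corrupted by 100 large sparse outliers.
rng = np.random.default_rng(1)
L0 = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
S0 = np.zeros(2500)
S0[rng.choice(2500, size=100, replace=False)] = 10.0 * rng.standard_normal(100)
S0 = S0.reshape(50, 50)
L, S = rpca_altproj(L0 + S0, rank=5, card=100)
print("relative low-rank error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```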
An Infeasible-Point Subgradient Method Using Adaptive Approximate Projections
We propose a new subgradient method for the minimization of nonsmooth convex
functions over a convex set. To speed up computations, we use adaptive
approximate projections that only require the iterates to move within a
certain distance of the exact projections (a distance that decreases over the
course of the algorithm). In particular, the iterates in our method may be
infeasible throughout the whole procedure. Nevertheless, we provide conditions
which ensure convergence to an optimal feasible point under suitable
assumptions. One convergence result deals with step size sequences that are
fixed a priori. Two further results handle dynamic Polyak-type step sizes
depending on a lower or an upper estimate of the optimal objective function
value, respectively. Additionally, we briefly sketch two applications:
optimization with convex chance constraints, and finding the minimum l1-norm
solution to an underdetermined linear system, an important problem in
compressed sensing.
Comment: 36 pages, 3 figures.
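To make the idea concrete, here is a hedged sketch for the second application mentioned in the abstract, finding minimum l1-norm solutions of Ax = b: a subgradient step on ||x||_1 with an a-priori step size, followed by an approximate projection implemented as a few Landweber steps toward the affine set, with a feasibility tolerance eps_k that tightens over the iterations, so the iterates may remain infeasible throughout. The Landweber inner solver, tolerances, and step sizes are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def isa_l1(A, b, iters=1000):
    """Infeasible-point subgradient sketch for  min ||x||_1  s.t.  Ax = b."""
    L = np.linalg.norm(A, 2) ** 2                 # step bound for Landweber
    x = np.zeros(A.shape[1])
    for k in range(1, iters + 1):
        x = x - (1.0 / np.sqrt(k)) * np.sign(x)   # a-priori step; sign(x) is a subgradient
        eps = 1.0 / k                             # adaptive projection accuracy
        for _ in range(200):                      # approximate projection (capped)
            r = A @ x - b
            if np.linalg.norm(r) <= eps:
                break                             # close enough; may stay slightly infeasible
            x = x - A.T @ r / L                   # Landweber step toward {x : Ax = b}
    return x

# Toy compressed-sensing check: recover a 5-sparse vector from 25 random equations.
rng = np.random.default_rng(2)
A = rng.standard_normal((25, 60))
x0 = np.zeros(60)
x0[rng.choice(60, size=5, replace=False)] = rng.standard_normal(5)
x = isa_l1(A, A @ x0)
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```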