Variational image regularization with Euler's elastica using a discrete gradient scheme
This paper concerns an optimization algorithm for unconstrained non-convex
problems where the objective function has sparse connections between the
unknowns. The algorithm is based on applying a dissipation-preserving numerical
integrator, the Itoh–Abe discrete gradient scheme, to the gradient flow of an
objective function, guaranteeing energy decrease regardless of step size. We
introduce the algorithm, prove a convergence rate estimate for non-convex
problems with Lipschitz continuous gradients, and show an improved convergence
rate if the objective function has sparse connections between unknowns. The
algorithm is presented in serial and parallel versions. Numerical tests
demonstrate its use on Euler's elastica regularized imaging problems, verify
the convergence rate, and compare the method's execution time with that of the
iPiano, gradient descent, and Heavy-ball algorithms.
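As a concrete illustration of the scheme this abstract describes, below is a minimal serial sketch of one Itoh–Abe coordinate sweep in Python. The test objective, step size `tau`, and the fixed-point inner solve are illustrative assumptions, not the paper's implementation (the paper also gives a parallel version, not shown here).

```python
import numpy as np

def itoh_abe_step(f, x, tau, inner_iters=20, seed_step=1e-8):
    """One coordinate sweep of the Itoh--Abe discrete gradient scheme (sketch).

    For each coordinate i, the update d = y_i - x_i solves the implicit scalar
    equation  d / tau = -(f(y + d * e_i) - f(y)) / d : the step is taken
    against a difference quotient (the discrete gradient) rather than the
    derivative, so f(y_new) - f(y) = -d**2 / tau <= 0 for ANY tau > 0.
    Here the scalar equation is solved by a simple fixed-point iteration;
    `inner_iters` and `seed_step` are illustrative choices.
    """
    y = np.array(x, dtype=float)
    for i in range(y.size):
        f_old = f(y)
        d = seed_step  # small nonzero seed so the difference quotient is defined
        for _ in range(inner_iters):
            y_trial = y.copy()
            y_trial[i] += d
            dg = (f(y_trial) - f_old) / d  # discrete gradient in coordinate i
            d_new = -tau * dg              # fixed-point map d -> -tau * DG(d)
            if abs(d_new - d) < 1e-14 or abs(d_new) < 1e-14:
                d = d_new
                break
            d = d_new
        y[i] += d
    return y

# Usage sketch on a smooth least-squares problem (illustrative, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
x = np.zeros(10)
for _ in range(100):
    x = itoh_abe_step(f, x, tau=0.01)
```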
On the matrix square root via geometric optimization
This paper is triggered by the preprint "Computing Matrix Squareroot via
Non Convex Local Search" by Jain et al. (arXiv:1507.05854), which analyzes
gradient-descent for computing the square root of a positive definite matrix.
Contrary to the claims of Jain et al. (2015), our experiments reveal that
Newton-like methods compute
matrix square roots rapidly and reliably, even for highly ill-conditioned
matrices and without requiring commutativity. We observe that gradient-descent
converges very slowly primarily due to tiny step-sizes and ill-conditioning. We
derive an alternative first-order method based on geodesic convexity: our
method admits a transparent convergence analysis (< 1 page), attains linear
rate, and displays reliable convergence even for rank deficient problems.
Though superior to gradient-descent, ultimately our method is also outperformed
by a well-known scaled Newton method. Nevertheless, the primary value of our
work is its conceptual value: it shows that for deriving gradient based methods
for the matrix square root, the manifold geometric view of positive
definite matrices can be much more advantageous than the Euclidean view.

Comment: 8 pages, 12 plots; this version contains several more references and
more words about the rank-deficient case.
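For reference, here is a minimal sketch of the (unscaled) Denman–Beavers iteration, a well-known stable Newton-type method for the matrix square root. This is not the paper's geodesically convex first-order method, and it omits the determinant-based rescaling of the scaled Newton method the abstract ultimately favors; the tolerance and test matrix are illustrative assumptions.

```python
import numpy as np

def sqrtm_denman_beavers(A, iters=60, tol=1e-12):
    """Denman--Beavers iteration for the principal square root of a
    positive definite matrix (a sketch; the scaled Newton variant adds
    a rescaling of the iterates that helps on ill-conditioned inputs).

    The coupled iterates satisfy Y_k -> A^{1/2} and Z_k -> A^{-1/2}.
    """
    Y = np.array(A, dtype=float)
    Z = np.eye(Y.shape[0])
    for _ in range(iters):
        Y_next = 0.5 * (Y + np.linalg.inv(Z))
        Z_next = 0.5 * (Z + np.linalg.inv(Y))
        if np.linalg.norm(Y_next - Y, "fro") <= tol * np.linalg.norm(Y, "fro"):
            return Y_next
        Y, Z = Y_next, Z_next
    return Y

# Usage sketch on a random SPD matrix (illustrative, not from the paper).
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)        # shift to make it safely positive definite
X = sqrtm_denman_beavers(A)
print(np.linalg.norm(X @ X - A))   # residual should be near machine precision
```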