This paper studies the problem of minimizing a loss
f(x) subject to constraints of the form
Dx ∈ S, where S is a closed set, convex or not,
and D is a fusion matrix. Fusion constraints can capture
smoothness, sparsity, or more general constraint patterns.
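For concreteness, one instance of this template (with an illustrative choice of S on our part, not the paper's only option) is total variation denoising of a signal y, with D a first-difference matrix:

\[
  \min_x \; \tfrac{1}{2}\lVert x - y \rVert^2
  \quad \text{subject to} \quad Dx \in S,
  \qquad (Dx)_i = x_{i+1} - x_i,
\]

where S could, for example, be the set of vectors with at most k nonzero entries, so that x is forced to be piecewise constant with few jumps.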
To tackle this generic class of problems, we combine the Beltrami-Courant penalty method of
optimization with the proximal distance principle. The latter is driven by
minimization of penalized objectives
f(x) + (ρ/2) dist(Dx, S)²
involving large tuning constants ρ and the squared Euclidean distance of
Dx from S. The next iterate
x_{n+1} of the corresponding proximal distance algorithm is
constructed from the current iterate x_n by minimizing the
majorizing surrogate function
f(x) + (ρ/2) ‖Dx - P_S(Dx_n)‖², where P_S denotes projection onto S.
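To make the update concrete, the following Julia sketch assumes a toy setup of our own choosing: f(x) = (1/2)‖x - y‖² and S the nonnegative orthant, so that P_S is an elementwise maximum. It is an illustration, not the implementation in our package.

using LinearAlgebra

project_S(z) = max.(z, 0)                  # P_S for the nonnegative orthant

function proximal_distance(y, D; rho = 1.0, iters = 100)
    x = copy(y)
    A = I + rho * (D' * D)                 # Hessian of the quadratic surrogate
    for _ in 1:iters
        p = project_S(D * x)               # anchor point P_S(D x_n)
        x = A \ (y + rho * (D' * p))       # exact minimizer of the surrogate
    end
    return x
end

For this quadratic f, each iteration projects Dx_n onto S and then solves one linear system in the fixed matrix I + ρDᵀD.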
For fixed ρ and convex f(x) and S, we prove convergence,
provide convergence rates, and demonstrate linear convergence under stronger
assumptions. We also construct a steepest descent (SD) variant to avoid costly
linear system solves.
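A matching sketch of the SD variant for the same toy objective replaces the linear solve with one exact-line-search gradient step per projection (it reuses project_S and LinearAlgebra from the sketch above, and is again an illustration rather than the package code):

function proximal_distance_sd(y, D; rho = 1.0, iters = 100)
    x = copy(y)
    for _ in 1:iters
        p = project_S(D * x)                    # anchor point P_S(D x_n)
        g = (x - y) + rho * (D' * (D * x - p))  # gradient of the surrogate
        iszero(g) && break
        t = dot(g, g) / (dot(g, g) + rho * sum(abs2, D * g))  # exact line search
        x -= t * g
    end
    return x
end

The step size involves the surrogate Hessian I + ρDᵀD only through the matrix-vector product Dg, so no linear system is factored or solved.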
To benchmark our algorithms, we adapt the alternating direction method of
multipliers (ADMM) and compare all methods on extensive numerical tests
including problems in metric projection, convex regression, convex clustering,
total variation image denoising, and projection of a matrix onto the set of
matrices with a good condition number. Our experiments demonstrate the superior speed and
acceptable accuracy of the steepest descent variant on high-dimensional problems. Julia
code to replicate all of our experiments can be found at
https://github.com/alanderos91/ProximalDistanceAlgorithms.jl.

Comment: 35 pages (22 main text, 10 appendices, 3 references), 9 tables, 1 figure