In this paper we present the nonconvex exterior-point optimization solver
(NExOS) -- a novel first-order algorithm tailored to constrained nonconvex
learning problems. We consider the problem of minimizing a convex function over
nonconvex constraints, where the projection onto the constraint set is
single-valued around local minima. Many nonconvex learning problems have this
structure, including sparse and low-rank optimization problems. By exploiting
the underlying geometry of the constraint
set, NExOS finds a locally optimal point by solving a sequence of penalized
problems with strictly decreasing penalty parameters. NExOS solves each
penalized problem by applying a first-order algorithm, which converges linearly
to a local minimum of the corresponding penalized formulation under regularity
conditions. Furthermore, the local minima of the penalized problems converge to
a local minimum of the original problem as the penalty parameter goes to zero.
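To make the scheme concrete, the following schematic shows the two formulations; the notation, the regularization term with $\beta > 0$, and the squared-distance form of the penalty are our reading of the exterior-point setup, not verbatim from the paper:
\[
\begin{aligned}
&\text{(original)} &\quad &\underset{x}{\text{minimize}}\ \ f(x) + \tfrac{\beta}{2}\|x\|^2 \quad \text{subject to}\ \ x \in \mathcal{X},\\
&\text{(penalized)} &\quad &\underset{x}{\text{minimize}}\ \ f(x) + \tfrac{\beta}{2}\|x\|^2 + \tfrac{1}{2\mu}\,\mathrm{dist}(x,\mathcal{X})^2,
\end{aligned}
\]
where $f$ is convex, $\mathcal{X}$ is the nonconvex constraint set, and the squared-distance penalty pulls iterates toward $\mathcal{X}$ from the exterior as the penalty parameter $\mu > 0$ decreases to zero.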
We implement NExOS in the open-source Julia package NExOS.jl, which has been
extensively tested on many instances from a wide variety of learning problems.
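To illustrate the scheme end to end, here is a minimal self-contained Julia sketch of an exterior-point loop for sparse regression under the penalized formulation shown above. It is our own illustration, not the NExOS.jl API; every name in it (project_sparse, solve_penalized, exterior_point) is ours, and the step-size rule is a standard gradient-descent choice, not necessarily the paper's inner algorithm.

# Exterior-point sketch for sparse regression:
#   minimize 0.5||Ax - b||^2 + (β/2)||x||^2  subject to  ||x||_0 <= k.
using LinearAlgebra

# Projection onto {x : ||x||_0 <= k}: keep the k largest-magnitude entries.
function project_sparse(x::Vector{Float64}, k::Int)
    p = zeros(length(x))
    idx = partialsortperm(abs.(x), 1:k, rev = true)
    p[idx] = x[idx]
    return p
end

# Gradient descent on the penalized objective
#   f(x) + (β/2)||x||^2 + (1/(2μ)) dist(x, X)^2;
# where the projection is single-valued, the penalty term has
# gradient (x - Π_X(x)) / μ.
function solve_penalized(A, b, β, μ, k, x; iters = 500)
    L = opnorm(A)^2 + β + 1 / μ      # crude Lipschitz bound for the step size
    for _ in 1:iters
        g = A' * (A * x - b) + β * x + (x - project_sparse(x, k)) / μ
        x -= g / L
    end
    return x
end

# Outer loop: shrink μ geometrically, warm-starting each subproblem.
function exterior_point(A, b; β = 1e-8, k = 5, μ = 1.0, ρ = 0.5, μ_min = 1e-8)
    x = zeros(size(A, 2))
    while μ > μ_min
        x = solve_penalized(A, b, β, μ, k, x)
        μ *= ρ
    end
    return project_sparse(x, k)      # land the final point on the constraint set
end

A = randn(50, 100); b = randn(50)
x̂ = exterior_point(A, b)
println("nonzeros: ", count(!iszero, x̂))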
We demonstrate that our algorithm, despite being general-purpose,
outperforms specialized methods on several examples of well-known nonconvex
learning problems involving sparse and low-rank optimization. For sparse
regression problems, NExOS finds locally optimal solutions that dominate
glmnet in terms of support recovery while achieving a training loss that is an
order of magnitude smaller. For low-rank optimization with real-world data,
NExOS recovers solutions with a threefold reduction in training loss and a
proportion of explained variance twice as high as that of the nuclear norm
heuristic.