Accelerated projected gradient algorithms for sparsity constrained optimization problems
We consider the projected gradient algorithm for the nonconvex best subset
selection problem that minimizes a given empirical loss function under an
ℓ₀-norm constraint. Through decomposing the feasible set of the given
sparsity constraint as a finite union of linear subspaces, we present two
acceleration schemes with global convergence guarantees, one by same-space
extrapolation and the other by subspace identification. The former fully
utilizes the problem structure to greatly accelerate the optimization speed
with only negligible additional cost. The latter leads to a two-stage
meta-algorithm that first uses classical projected gradient iterations to
identify the correct subspace containing an optimal solution, and then switches
to a highly-efficient smooth optimization method in the identified subspace to
attain superlinear convergence. Experiments demonstrate that the proposed
accelerated algorithms are orders of magnitude faster than their non-accelerated
counterparts as well as the state of the art.
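To make the setting concrete, the sketch below shows projected gradient descent for a least-squares loss under an ℓ₀-norm constraint (iterative hard thresholding), with a simple extrapolation step applied only when consecutive iterates share a support, i.e. lie in the same linear subspace of the constraint's decomposition. All function names, the step size, and the extrapolation coefficient are illustrative assumptions, not the paper's exact acceleration schemes.

```python
import numpy as np

def project_l0(x, s):
    """Project onto the sparsity constraint: keep the s largest-magnitude entries."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    z[keep] = x[keep]
    return z

def accelerated_iht(A, b, s, iters=500):
    """Projected gradient for  min_x 0.5*||Ax - b||^2  s.t.  ||x||_0 <= s."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L for the quadratic loss
    x = np.zeros(A.shape[1])
    x_prev = x.copy()
    for _ in range(iters):
        # Extrapolate only when the support is unchanged ("same-space" idea);
        # otherwise take a plain projected gradient step.
        same_support = np.array_equal(np.flatnonzero(x), np.flatnonzero(x_prev))
        y = x + 0.5 * (x - x_prev) if same_support else x
        grad = A.T @ (A @ y - b)
        x_prev, x = x, project_l0(y - step * grad, s)
    return x

# Usage: recover a 5-sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 300))
x_true = np.zeros(300)
x_true[rng.choice(300, 5, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = accelerated_iht(A, b, s=5)
```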
Regularized Adaptive Momentum Dual Averaging with an Efficient Inexact Subproblem Solver for Training Structured Neural Network
We propose a Regularized Adaptive Momentum Dual Averaging (RAMDA) algorithm
for training structured neural networks. Similar to existing regularized
adaptive methods, the subproblem for computing the update direction of RAMDA
involves a nonsmooth regularizer and a diagonal preconditioner, and therefore
does not possess a closed-form solution in general. We thus also carefully
devise an implementable inexactness condition that retains convergence
guarantees similar to the exact versions, and propose a companion efficient
solver for the subproblems of both RAMDA and existing methods to make them
practically feasible. We leverage the theory of manifold identification in
variational analysis to show that, even in the presence of such inexactness,
the iterates of RAMDA attain the ideal structure induced by the regularizer at
the stationary point of asymptotic convergence. This structure is locally
optimal near the point of convergence, so RAMDA is guaranteed to obtain the
best structure possible among all methods converging to the same point, making
it the first regularized adaptive method outputting models that possess
outstanding predictive performance while being (locally) optimally structured.
Extensive numerical experiments in large-scale modern computer vision, language
modeling, and speech tasks show that the proposed RAMDA is efficient and
consistently outperforms the state of the art for training structured neural
networks. An implementation of our algorithm is available at
http://www.github.com/ismoptgroup/RAMDA/
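To illustrate why such updates generally lack a closed form, the sketch below solves one subproblem of the kind described above: a linear model term plus a diagonally preconditioned proximal term and a group-sparsity regularizer, minimized approximately by an inner proximal-gradient loop. The variable names, the group-lasso choice of regularizer, and the stopping rule are assumptions for illustration, not the paper's RAMDA update or its inexactness condition.

```python
import numpy as np

def prox_group_l2(w, groups, t):
    """Proximal operator of t * sum_g ||w_g||_2 (group soft-thresholding)."""
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= t else (1.0 - t / norm) * w[g]
    return out

def solve_subproblem(v, w_ref, D, eta, lmbd, groups, inner_iters=100, tol=1e-8):
    """Approximately minimize
        <v, w> + (1/(2*eta)) * (w - w_ref)^T diag(D) (w - w_ref) + lmbd * sum_g ||w_g||_2
    by proximal gradient; D is the diagonal preconditioner (a vector here)."""
    w = w_ref.copy()
    L = D.max() / eta                        # Lipschitz constant of the smooth part
    for _ in range(inner_iters):
        grad = v + (D / eta) * (w - w_ref)   # gradient of the smooth model term
        w_new = prox_group_l2(w - grad / L, groups, lmbd / L)
        if np.linalg.norm(w_new - w) <= tol * max(1.0, np.linalg.norm(w)):
            return w_new                     # illustrative inexactness check
        w = w_new
    return w

# Usage: six parameters split into two groups of three.
v = np.array([0.3, -0.2, 0.1, 0.05, -0.02, 0.01])
w_ref = np.zeros(6)
D = np.full(6, 2.0)                          # diagonal adaptive preconditioner
groups = [np.arange(0, 3), np.arange(3, 6)]
w_next = solve_subproblem(v, w_ref, D, eta=0.1, lmbd=0.05, groups=groups)
```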