Stochastic Frank-Wolfe Methods for Nonconvex Optimization
We study Frank-Wolfe methods for nonconvex stochastic and finite-sum
optimization problems. Frank-Wolfe methods (in the convex case) have gained
tremendous recent interest in machine learning and optimization communities due
to their projection-free property and their ability to exploit structured
constraints. However, our understanding of these algorithms in the nonconvex
setting is fairly limited. In this paper, we propose nonconvex stochastic
Frank-Wolfe methods and analyze their convergence properties. For objective
functions that decompose into a finite-sum, we leverage ideas from variance
reduction techniques for convex optimization to obtain new variance reduced
nonconvex Frank-Wolfe methods that have provably faster convergence than the
classical Frank-Wolfe method. Finally, we show that the faster convergence
rates of our variance reduced methods also translate into improved convergence
rates for the stochastic setting.
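To make the projection-free property concrete, the following is a minimal Python sketch of a stochastic Frank-Wolfe loop, not the paper's specific variance-reduced algorithm; the names grad_fn, lmo, the l1-ball oracle, and all step sizes are illustrative assumptions.

import numpy as np

def stochastic_frank_wolfe(grad_fn, lmo, x0, n_iters=100, batch_size=64, gamma=0.05):
    """Minimal stochastic Frank-Wolfe loop (illustrative sketch).

    grad_fn(x, batch_size) -> stochastic gradient estimate at x
    lmo(g) -> argmin over the constraint set of <g, s>  (linear minimization oracle)
    """
    x = x0
    for t in range(n_iters):
        g = grad_fn(x, batch_size)     # minibatch gradient estimate
        s = lmo(g)                     # projection-free step via the LMO
        x = x + gamma * (s - x)        # convex combination keeps x feasible
    return x

def l1_ball_lmo(g, r=1.0):
    """Example LMO for the l1 ball of radius r: a signed scaled basis vector
    at the coordinate with the largest gradient magnitude."""
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -r * np.sign(g[i])
    return s

Because the iterate is always a convex combination of feasible points, no projection is ever needed, which is the structural advantage the abstract refers to.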
Escaping Saddle Points with Adaptive Gradient Methods
Adaptive methods such as Adam and RMSProp are widely used in deep learning
but are not well understood. In this paper, we seek a crisp, clean and precise
characterization of their behavior in nonconvex settings. To this end, we first
provide a novel view of adaptive methods as preconditioned SGD, where the
preconditioner is estimated in an online manner. By studying the preconditioner
on its own, we elucidate its purpose: it rescales the stochastic gradient noise
to be isotropic near stationary points, which helps escape saddle points.
Furthermore, we show that adaptive methods can efficiently estimate the
aforementioned preconditioner. By gluing together these two components, we
provide the first (to our knowledge) second-order convergence result for any
adaptive method. The key insight from our analysis is that, compared to SGD,
adaptive methods escape saddle points faster, and can converge faster overall
to second-order stationary points.
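The "preconditioned SGD" view can be illustrated with an RMSProp-style update written out explicitly; this is a generic sketch of that viewpoint, not the paper's analysis, and the learning rate, decay, and epsilon values are illustrative assumptions.

import numpy as np

def rmsprop_as_preconditioned_sgd(grad_fn, x0, lr=1e-3, beta=0.999, eps=1e-8, n_iters=1000):
    """RMSProp-style update written explicitly as preconditioned SGD.

    The running second-moment estimate v acts as an online diagonal
    preconditioner that rescales the stochastic gradient noise.
    """
    x, v = x0.copy(), np.zeros_like(x0)
    for t in range(n_iters):
        g = grad_fn(x)                      # stochastic gradient
        v = beta * v + (1 - beta) * g**2    # online estimate of E[g^2]
        precond = 1.0 / (np.sqrt(v) + eps)  # diagonal preconditioner
        x -= lr * precond * g               # preconditioned SGD step
    return x

Near a stationary point the preconditioner roughly whitens the per-coordinate noise scale, which is the isotropy property the abstract credits with helping escape saddle points.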
A Generic Approach for Escaping Saddle Points
A central challenge to using first-order methods for optimizing nonconvex
problems is the presence of saddle points. First-order methods often get stuck
at saddle points, greatly deteriorating their performance. Typically, to escape
from saddles one has to use second-order methods. However, most works on
second-order methods rely extensively on expensive Hessian-based computations,
making them impractical in large-scale settings. To tackle this challenge, we
introduce a generic framework that minimizes Hessian-based computations while
at the same time provably converging to second-order critical points. Our
framework carefully alternates between a first-order and a second-order
subroutine, using the latter only close to saddle points, and yields
convergence results competitive with the state of the art. Empirical results
suggest that our strategy also enjoys good practical performance.
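A minimal sketch of the alternation idea is given below, assuming a Hessian-vector-product oracle hvp_fn; the switching threshold, the shift constant c, and the power-iteration count are illustrative assumptions, not the paper's subroutines.

import numpy as np

def escape_saddle_framework(grad_fn, hvp_fn, x0, eps_g=1e-4, lr=0.1, step=0.1, c=10.0, n_iters=1000):
    """Alternate a cheap first-order step with a Hessian-vector-product-based
    escape step, invoking the latter only when the gradient is small."""
    x = x0.copy()
    for t in range(n_iters):
        g = grad_fn(x)
        if np.linalg.norm(g) > eps_g:
            x -= lr * g                          # first-order subroutine
        else:
            # second-order subroutine: approximate the most negative curvature
            # direction by power iteration on the shifted operator c*I - H,
            # where c is an assumed bound on the largest Hessian eigenvalue
            v = np.random.randn(*x.shape)
            v /= np.linalg.norm(v)
            for _ in range(20):
                w = c * v - hvp_fn(x, v)
                v = w / (np.linalg.norm(w) + 1e-12)
            curv = v @ hvp_fn(x, v)
            if curv < 0:                         # negative curvature found
                x += step * (v if g @ v <= 0 else -v)
            else:
                break                            # approximately second-order stationary
    return x

Since Hessian-vector products cost roughly as much as a gradient, reserving them for the rare near-saddle iterations keeps the per-iteration cost close to that of a first-order method.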
Statistically Motivated Second Order Pooling
Second-order pooling, a.k.a. bilinear pooling, has proven effective for deep
learning based visual recognition. However, the resulting second-order networks
yield a final representation that is orders of magnitude larger than that of
standard, first-order ones, making them memory-intensive and cumbersome to
deploy. Here, we introduce a general, parametric compression strategy that can
produce more compact representations than existing compression techniques, yet
outperform both compressed and uncompressed second-order models. Our approach
is motivated by a statistical analysis of the network's activations, relying on
operations that lead to a Gaussian-distributed final representation, as
inherently used by first-order deep networks. As evidenced by our experiments,
this lets us outperform the state-of-the-art first-order and second-order
models on several benchmark recognition datasets.
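The size gap the abstract describes is easy to see from plain (uncompressed) bilinear pooling; the sketch below is the generic operation, not the paper's compression strategy, and the shape convention (H, W, C) is an assumption.

import numpy as np

def bilinear_pooling(feature_map):
    """Plain second-order pooling: average outer product of local descriptors.

    feature_map: array of shape (H, W, C) from a convolutional backbone.
    Returns a C*C-dimensional representation, which is why uncompressed
    second-order models are so much larger than first-order ones.
    """
    h, w, c = feature_map.shape
    x = feature_map.reshape(-1, c)        # N = H*W local descriptors
    second_order = x.T @ x / x.shape[0]   # C x C covariance-like matrix
    return second_order.reshape(-1)       # flatten to a C^2 vector

First-order (average) pooling of the same map yields only a C-dimensional vector, which is the orders-of-magnitude difference the proposed compression targets.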
HIPAD - A Hybrid Interior-Point Alternating Direction algorithm for knowledge-based SVM and feature selection
We consider classification tasks in the regime of scarce labeled training
data in high dimensional feature space, where specific expert knowledge is also
available. We propose a new hybrid optimization algorithm that solves the
elastic-net support vector machine (SVM) through an alternating direction
method of multipliers in the first phase, followed by an interior-point method
for the classical SVM in the second phase. Both SVM formulations are adapted to
knowledge incorporation. Our proposed algorithm addresses the challenges of
automatic feature selection, high optimization accuracy, and algorithmic
flexibility for taking advantage of prior knowledge. We demonstrate the
effectiveness and efficiency of our algorithm and compare it with existing
methods on a collection of synthetic and real-world data.
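For reference, the first-phase problem is an elastic-net SVM; the sketch below only evaluates a generic elastic-net hinge-loss objective in Python and does not reproduce HIPAD's ADMM or interior-point phases, and the penalty weights lam1 and lam2 are illustrative assumptions.

import numpy as np

def elastic_net_svm_objective(w, b, X, y, lam1=0.1, lam2=0.1):
    """Elastic-net SVM objective (sketch): hinge loss plus an l1 term, which
    drives automatic feature selection, and an l2 term, which keeps the
    problem strongly convex."""
    margins = 1.0 - y * (X @ w + b)
    hinge = np.maximum(0.0, margins).sum()
    return hinge + lam1 * np.abs(w).sum() + 0.5 * lam2 * (w @ w)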