Non-convex Optimization for Machine Learning
A vast majority of machine learning algorithms train their models and perform
inference by solving optimization problems. In order to capture the learning
and prediction problems accurately, structural constraints such as sparsity or
low rank are frequently imposed or else the objective itself is designed to be
a non-convex function. This is especially true of algorithms that operate in
high-dimensional spaces or that train non-linear models such as tensor models
and deep networks.
The freedom to express the learning problem as a non-convex optimization
problem gives immense modeling power to the algorithm designer, but often such
problems are NP-hard to solve. A popular workaround to this has been to relax
non-convex problems to convex ones and use traditional methods to solve the
(convex) relaxed optimization problems. However, this approach may be lossy, and
it nevertheless presents significant challenges for large-scale optimization.
On the other hand, direct approaches to non-convex optimization have met with
resounding success in several domains and remain the methods of choice for the
practitioner, as they frequently outperform relaxation-based techniques;
popular heuristics include projected gradient descent and alternating
minimization. However, these heuristics are often poorly understood in terms of
their convergence and other properties.
This monograph presents a selection of recent advances that bridge a
long-standing gap in our understanding of these heuristics. The monograph will
lead the reader through several widely used non-convex optimization techniques,
as well as applications thereof. The goal of this monograph is both to
introduce the rich literature in this area and to equip the reader with the
tools and techniques needed to analyze these simple procedures for non-convex
problems.
Comment: The official publication is available from now publishers via
http://dx.doi.org/10.1561/220000005
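To make the flavor of these heuristics concrete, here is a minimal sketch of projected gradient descent (iterative hard thresholding) for a sparsity-constrained least-squares problem; the problem sizes, step size, and hard-thresholding projection are illustrative assumptions, not prescriptions from the monograph.

```python
import numpy as np

def project_sparse(x, s):
    """Project x onto the non-convex set of s-sparse vectors
    by keeping only its s largest-magnitude entries."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def projected_gradient_descent(A, b, s, step, iters=500):
    """Minimize ||Ax - b||^2 subject to x having at most s nonzeros."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth loss
        x = project_sparse(x - step * grad, s)   # non-convex projection step
    return x

# Toy usage: recover a 5-sparse vector from 100 random linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400)) / 10.0
x_true = np.zeros(400)
x_true[rng.choice(400, size=5, replace=False)] = 1.0
x_hat = projected_gradient_descent(A, A @ x_true, s=5, step=0.1)
```

The projection step is exactly where the non-convexity lives: the set of s-sparse vectors is not convex, yet the overall procedure is cheap and, as the monograph discusses, admits convergence guarantees under suitable conditions.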
Online Embedding Compression for Text Classification using Low Rank Matrix Factorization
Deep learning models have become the state of the art for natural language
processing (NLP) tasks; however, deploying these models in production systems
poses significant memory constraints. Existing compression methods are either
lossy or introduce significant latency. We propose a compression method that
leverages low-rank matrix factorization during training to compress the word
embedding layer, which represents the size bottleneck for most NLP models. Our
models are trained, compressed, and then further re-trained on the downstream
task to recover accuracy while maintaining the reduced size. Empirically, we
show that the proposed method can achieve 90% compression with minimal impact
on accuracy for sentence classification tasks, and outperforms alternative
methods like fixed-point quantization or offline word embedding compression. We
also analyze the inference time and storage space for our method through FLOP
calculations, showing that we can compress DNN models by a configurable ratio
and recover the lost accuracy without introducing additional latency compared
to fixed-point quantization. Finally, we introduce a novel learning rate
schedule, the Cyclically Annealed Learning Rate (CALR), which we empirically
demonstrate to outperform other popular adaptive learning rate algorithms on a
sentence classification benchmark.
Comment: Accepted at the Thirty-Third AAAI Conference on Artificial
Intelligence (AAAI 2019)
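As a sketch of the core compression idea (not the paper's full train-compress-retrain pipeline), the embedding matrix can be replaced by the product of two thin factors obtained from a truncated SVD; the vocabulary size, embedding dimension, and rank below are assumed for illustration:

```python
import numpy as np

def factorize_embedding(E, rank):
    """Factor a (vocab x dim) embedding matrix E into A (vocab x rank)
    and B (rank x dim) so that E is approximated by A @ B."""
    U, S, Vt = np.linalg.svd(E, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # fold singular values into the left factor
    B = Vt[:rank, :]
    return A, B

vocab, dim, rank = 50_000, 300, 30
E = np.random.randn(vocab, dim).astype(np.float32)
A, B = factorize_embedding(E, rank)

saved = 1 - (A.size + B.size) / E.size
print(f"parameter reduction: {saved:.1%}")  # about 89.9% at this rank
```

At lookup time a word's embedding becomes A[i] @ B, so the factorization trades a small matrix-vector product for a roughly tenfold reduction in stored parameters; in the paper the factors are then fine-tuned on the downstream task to recover accuracy.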
Matrix Completion on Graphs
The problem of finding the missing values of a matrix given a few of its
entries, called matrix completion, has attracted a lot of attention in recent
years. Although the problem under the standard low-rank assumption is NP-hard,
Candès and Recht showed that its convex relaxation solves it exactly when the
number of observed entries is sufficiently large. In this work, we introduce a
novel matrix completion model that makes use of proximity information about
rows and columns by assuming they form communities. This assumption makes sense
in several real-world problems, such as recommender systems, where there are
communities of people sharing preferences, while products form clusters that
receive similar ratings. Our main goal is thus to find a low-rank solution that
is structured by the proximities of rows and columns encoded by graphs. We
borrow ideas from manifold learning to constrain our solution to be smooth on
these graphs, in order to implicitly force row and column proximities. Our
matrix recovery model is formulated as a convex non-smooth optimization
problem, for which a well-posed iterative scheme is provided. We study and
evaluate the proposed matrix completion on synthetic and real data, showing
that the proposed structured low-rank recovery model outperforms the standard
matrix completion model in many situations.
Comment: Version presented at the NIPS 2014 workshop "Out of the Box:
Robustness in High Dimension"
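One concrete instance of such a scheme (a hedged sketch, with the proximal-gradient splitting, parameter names, and step size assumed here rather than taken from the paper) alternates a gradient step on the smooth data-fit and graph-smoothness terms with singular value thresholding for the nuclear norm:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(S - tau, 0.0)) @ Vt

def graph_matrix_completion(M, mask, Lr, Lc, gamma=1.0, alpha=0.1,
                            tau=1.0, step=0.05, iters=300):
    """Proximal gradient sketch for
        min_X  tau*||X||_*  +  (gamma/2)*||mask*(X - M)||_F^2
               + (alpha/2)*( tr(X' Lr X) + tr(X Lc X') ),
    where Lr, Lc are graph Laplacians over rows and columns."""
    X = np.zeros_like(M)
    for _ in range(iters):
        # Gradient of the smooth data-fit and graph-smoothness terms.
        grad = gamma * mask * (X - M) + alpha * (Lr @ X + X @ Lc)
        # Prox step enforces low rank via singular value thresholding.
        X = svt(X - step * grad, step * tau)
    return X
```

The two Laplacian terms penalize differences between rows (and columns) that the graphs mark as neighbors, which is how the community structure is pushed into the low-rank solution.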
GMRES-Accelerated ADMM for Quadratic Objectives
We consider the sequence acceleration problem for the alternating direction
method-of-multipliers (ADMM) applied to a class of equality-constrained
problems with strongly convex quadratic objectives, which frequently arise as
the Newton subproblem of interior-point methods. Within this context, the ADMM
update equations are linear, the iterates are confined within a Krylov
subspace, and the Generalized Minimal RESidual (GMRES) algorithm is optimal in
its ability to accelerate convergence. The basic ADMM method solves a
$\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give
theoretical justification and numerical evidence that the GMRES-accelerated
variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations,
for an order-of-magnitude reduction in iterations, despite a worst-case bound
of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against
standard preconditioned Krylov subspace methods for saddle-point problems. The
method is embedded within SeDuMi, a popular open-source solver for conic
optimization written in MATLAB, and used to solve many large-scale semidefinite
programs with error that decreases like $O(1/k^2)$, instead of $O(1/k)$,
where $k$ is the iteration index.
Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on
Optimization (SIOPT)
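The acceleration hinges on the fact that, for quadratic objectives, the ADMM update is an affine map z -> Mz + c, so its fixed point solves the linear system (I - M)z = c, which GMRES can attack directly. Below is a minimal sketch of that reduction using SciPy; the function names and the matrix-free wrapping are illustrative assumptions, not the authors' SeDuMi implementation:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def gmres_accelerate(admm_step, z0):
    """Given a *linear* ADMM iteration z -> admm_step(z) = M z + c,
    find its fixed point by solving (I - M) z = c with GMRES
    instead of running the iteration directly."""
    n = z0.size
    c = admm_step(np.zeros(n))                   # the affine map at zero gives c
    matvec = lambda v: v - (admm_step(v) - c)    # action of (I - M) on v
    op = LinearOperator((n, n), matvec=matvec)
    z, info = gmres(op, c, x0=z0)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return z

# Toy affine iteration with a known fixed point, standing in for a real ADMM map.
M = np.array([[0.5, 0.2], [0.1, 0.4]])
c_true = np.array([1.0, 2.0])
z_star = gmres_accelerate(lambda z: M @ z + c_true, np.zeros(2))
# z_star satisfies z = M z + c_true, i.e. it is the iteration's fixed point.
```

Because every plain ADMM iterate already lies in the Krylov subspace generated by M and c, GMRES returns the optimal combination of those iterates, which is what yields the observed drop from $O(\sqrt{\kappa})$ to roughly $O(\kappa^{1/4})$ iterations.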