A Theoretical Analysis of Contrastive Unsupervised Representation Learning
Recent empirical works have successfully used unlabeled data to learn feature
representations that are broadly useful in downstream classification tasks.
Several of these methods are reminiscent of the well-known word2vec embedding
algorithm: leveraging the availability of pairs of semantically "similar" data
points and "negative samples," the learner forces the inner product of
representations of similar pairs with each other to be higher on average than
with negative samples. The current paper uses the term contrastive learning for
such algorithms and presents a theoretical framework for analyzing them by
introducing latent classes and hypothesizing that semantically similar points
are sampled from the same latent class. This framework allows us to show
provable guarantees on the performance of the learned representations on the
average classification task that consists of a subset of the same set of
latent classes. Our generalization bound also shows that learned
representations can reduce (labeled) sample complexity on downstream tasks. We
conduct controlled experiments in both the text and image domains to support
the theory.
Comment: 19 pages, 5 figures
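One common instantiation of the objective described above is a logistic contrastive loss. The sketch below is illustrative only, assuming the standard logistic form rather than the authors' code; all function and variable names are hypothetical.

```python
import numpy as np

def contrastive_logistic_loss(f_x, f_pos, f_neg):
    """Logistic contrastive loss over a batch of (anchor, positive, negative)
    representation triples, each array of shape (batch, dim).

    The loss is small when the anchor's inner product with its semantically
    similar point exceeds its inner product with the negative sample,
    matching the objective described in the abstract.
    """
    pos_sim = np.sum(f_x * f_pos, axis=1)  # <f(x), f(x+)> per example
    neg_sim = np.sum(f_x * f_neg, axis=1)  # <f(x), f(x-)> per example
    # log(1 + exp(neg_sim - pos_sim)), computed stably
    return np.mean(np.logaddexp(0.0, neg_sim - pos_sim))

# Toy usage with random 8-dimensional representations
rng = np.random.default_rng(0)
f_x, f_pos, f_neg = (rng.normal(size=(4, 8)) for _ in range(3))
print(contrastive_logistic_loss(f_x, f_pos, f_neg))
```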
A Nonstochastic Control Approach to Optimization
Selecting the best hyperparameters, such as the learning rate and momentum,
for a particular optimization instance is an important but nonconvex problem.
As a result, iterative optimization methods such as hypergradient descent lack
global optimality guarantees in general.
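For concreteness, here is a minimal sketch of the hypergradient-descent baseline mentioned above, written from its standard description rather than the paper's code; the function name and default constants are assumptions.

```python
import numpy as np

def hypergradient_descent(grad, x0, eta0=0.1, beta=1e-3, steps=100):
    """Hypergradient descent: adapt the learning rate eta online.

    After the update x_t = x_{t-1} - eta * g_{t-1}, the derivative of the
    loss with respect to eta is -<g_t, g_{t-1}>, so eta is itself updated
    by gradient descent with meta-step-size beta. As noted above, this
    heuristic lacks global optimality guarantees in general.
    """
    x = np.asarray(x0, dtype=float)
    eta = eta0
    g_prev = grad(x)
    for _ in range(steps):
        x = x - eta * g_prev
        g = grad(x)
        eta += beta * (g @ g_prev)  # descend on the hypergradient
        g_prev = g
    return x, eta

# Toy usage: minimize 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x, eta = hypergradient_descent(lambda x: A.T @ (A @ x - b), x0=np.zeros(2))
print(x, eta)
```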
We propose an online nonstochastic control methodology for mathematical
optimization. First, we formalize the setting of meta-optimization, an online
learning formulation of learning the best optimization algorithm from a class
of methods. The meta-optimization problem over gradient-based methods can be
framed as a feedback control problem over the choice of hyperparameters,
including the learning rate, momentum, and the preconditioner.
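A rough sketch of one step from such a parametrized class of gradient-based methods, where the hyperparameters play the role of control inputs chosen online; this is an assumed parametrization for illustration, not the paper's exact formulation.

```python
import numpy as np

def momentum_step(x, m, g, eta, gamma, P):
    """One update from a class of gradient-based methods parametrized by a
    learning rate eta, momentum coefficient gamma, and preconditioner P.
    In the control view, these hyperparameters are the inputs a controller
    selects at each round."""
    m = gamma * m + g              # accumulate momentum
    return x - eta * (P @ m), m    # preconditioned, momentum-based step

# Toy usage on f(x) = 0.5 * ||x||^2, whose gradient is x
x, m = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(5):
    x, m = momentum_step(x, m, g=x, eta=0.1, gamma=0.9, P=np.eye(2))
print(x)
```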
Although the original optimal control problem is nonconvex, we show how
recent methods from online nonstochastic control using convex relaxations can
be used to circumvent the nonconvexity, and obtain regret guarantees vs. the
best offline solution. Consequently, in meta-optimization, given a
sequence of optimization problems, we can learn a method that attains
convergence comparable to that of the best optimization method in hindsight
from a class of methods.
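The paper's method relies on convex relaxations from nonstochastic control; as a much simpler stand-in that only illustrates the "best method in hindsight" benchmark, the sketch below runs a standard multiplicative-weights (Hedge) scheme over a finite grid of learning rates. This is a deliberately elementary substitute technique, not the paper's algorithm, and all names are hypothetical.

```python
import numpy as np

def hedge_over_learning_rates(loss, grad, x0, etas, T=200, lr=0.5):
    """Multiplicative weights (Hedge) over a finite grid of learning rates.

    Each "expert" runs plain gradient descent with its own fixed eta; the
    meta-learner exponentially down-weights experts by their per-round
    loss. For bounded losses, Hedge guarantees low regret vs. the best
    fixed expert in hindsight -- a toy version of the benchmark above,
    not the paper's control-based algorithm.
    """
    xs = [np.asarray(x0, dtype=float) for _ in etas]  # one iterate per expert
    w = np.ones(len(etas)) / len(etas)
    for _ in range(T):
        losses = np.array([loss(x) for x in xs])
        w *= np.exp(-lr * losses)                     # Hedge update
        w /= w.sum()
        xs = [x - eta * grad(x) for x, eta in zip(xs, etas)]
    best = int(np.argmax(w))
    return etas[best], xs[best]

# Toy usage: which learning rate suits this quadratic best?
A = np.diag([10.0, 1.0])
b = np.array([1.0, 1.0])
loss = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
print(hedge_over_learning_rates(loss, grad, np.zeros(2),
                                etas=[0.001, 0.005, 0.01, 0.015]))
```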