Recursive Aggregation of Estimators by Mirror Descent Algorithm with Averaging
We consider a recursive algorithm to construct an aggregated estimator from a
finite number of base decision rules in the classification problem. The
estimator approximately minimizes a convex risk functional under the
$\ell_1$-constraint. It is defined by a stochastic version of the mirror descent
algorithm (i.e., of the method which performs gradient descent in the dual
space) with an additional averaging. The main result of the paper is an upper
bound for the expected accuracy of the proposed estimator. This bound is of the
order $\sqrt{(\log M)/t}$ with an explicit and small constant factor, where $M$
is the dimension of the problem and $t$ stands for the sample size. A similar
bound is proved for a more general setting that covers, in particular, the
regression model with squared loss.
Comment: 29 pages
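To make the recursion concrete, the following is a minimal Python sketch of entropic stochastic mirror descent with averaging over a simplex of base rules, in the spirit of the algorithm described above. The function names (aggregate_mirror_descent, grad_fn), the stabilized softmax, and the step-size schedule are illustrative assumptions, not the paper's exact specification.

import numpy as np

def aggregate_mirror_descent(grad_fn, M, T, rng=None):
    """Entropic stochastic mirror descent with averaging (a sketch).

    Weights over M base rules are updated by exponentiated-gradient steps
    (mirror descent with the entropy potential, matching the l1-constrained
    setting), and the primal iterates are averaged.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.zeros(M)                  # dual variable: accumulated gradients
    avg = np.zeros(M)                    # running average of primal iterates
    for t in range(1, T + 1):
        eta = np.sqrt(np.log(M) / t)     # illustrative decaying step size
        z = -eta * theta
        w = np.exp(z - z.max())          # stabilized mirror map back to the simplex
        w /= w.sum()
        avg += (w - avg) / t
        theta += grad_fn(w, rng)         # stochastic gradient of the convex risk
    return avg

# Toy use: aggregate 5 constant predictors under squared loss with noisy targets.
base = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
def grad(w, rng):
    y = 1.2 + 0.1 * rng.standard_normal()
    return 2.0 * (w @ base - y) * base   # gradient of (w . base - y)^2 in w
weights = aggregate_mirror_descent(grad, M=5, T=2000)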
A Stochastic Interpretation of Stochastic Mirror Descent: Risk-Sensitive Optimality
Stochastic mirror descent (SMD) is a fairly new family of algorithms that has
recently found a wide range of applications in optimization, machine learning,
and control. It can be considered a generalization of the classical stochastic
gradient algorithm (SGD), where instead of updating the weight vector along the
negative direction of the stochastic gradient, the update is performed in a
"mirror domain" defined by the gradient of a (strictly convex) potential
function. This potential function, and the mirror domain it yields, provides
considerable flexibility in the algorithm compared to SGD. While many
properties of SMD have already been obtained in the literature, in this paper
we exhibit a new interpretation of SMD, namely that it is a risk-sensitive
optimal estimator when the unknown weight vector and additive noise are
non-Gaussian and belong to the exponential family of distributions. The
analysis also suggests a modified version of SMD, which we refer to as
symmetric SMD (SSMD). The proofs rely on some simple properties of Bregman
divergence, which allow us to extend results from quadratics and Gaussians to
certain convex functions and exponential families in a rather seamless way.
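For readers unfamiliar with the update rule, here is a minimal Python sketch of a single SMD step: the move happens in the mirror (dual) domain defined by the gradient of the potential. The helper names (smd_step, mirror, mirror_inv) are assumptions for illustration; this shows the generic update, not the paper's risk-sensitive estimator.

import numpy as np

def smd_step(w, grad, lr, mirror, mirror_inv):
    # Map to the mirror domain, take a gradient step there, map back.
    return mirror_inv(mirror(w) - lr * grad)

# Quadratic potential psi(w) = ||w||^2 / 2: both maps are the identity,
# and the step reduces to plain SGD.
sgd_next = smd_step(np.array([0.2, 0.3, 0.5]), np.array([1.0, -1.0, 0.0]), 0.1,
                    lambda z: z, lambda z: z)

# Entropy potential psi(w) = sum_i (w_i log w_i - w_i): the mirror map is log,
# its inverse is exp, yielding multiplicative (exponentiated-gradient) updates.
eg_next = smd_step(np.array([0.2, 0.3, 0.5]), np.array([1.0, -1.0, 0.0]), 0.1,
                   np.log, np.exp)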
Training Deep Networks without Learning Rates Through Coin Betting
Deep learning methods achieve state-of-the-art performance in many application scenarios. Yet, these methods require a significant amount of hyperparameter tuning in order to achieve the best results. In particular, tuning the learning rates in the stochastic optimization process is still one of the main bottlenecks. In this paper, we propose a new stochastic gradient descent procedure for deep networks that does not require any learning rate setting. Contrary to previous methods, we do not adapt the learning rates, nor do we make use of the assumed curvature of the objective function. Instead, we reduce the optimization process to a game of betting on a coin and propose a learning-rate-free optimal algorithm for this scenario. Theoretical convergence is proven for convex and quasi-convex functions, and empirical evidence shows the advantage of our algorithm over popular stochastic gradient algorithms.
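The coin-betting reduction is easy to state in code. Below is a minimal Python sketch of a Krichevsky-Trofimov bettor used as a parameter-free optimizer: the coin outcome is the negative (clipped) stochastic gradient, the bet is a fixed fraction of current wealth, and the iterate is the bet itself. This illustrates the reduction described above, not the paper's exact deep-network algorithm; the names (coin_betting_optimize, grad_fn) and the gradient clipping are assumptions.

import numpy as np

def coin_betting_optimize(grad_fn, dim, T, endowment=1.0):
    """Parameter-free optimization via coin betting (KT bettor), a sketch."""
    wealth = np.full(dim, endowment)
    past = np.zeros(dim)                    # running sum of past coin outcomes
    avg = np.zeros(dim)                     # averaged iterate (what theory bounds)
    for t in range(1, T + 1):
        x = (past / t) * wealth             # KT bet: fraction of wealth per coordinate
        avg += (x - avg) / t
        g = np.clip(grad_fn(x), -1.0, 1.0)  # outcomes must lie in [-1, 1]
        wealth += -g * x                    # win or lose in proportion to the bet
        past += -g
    return avg

# Toy use: minimize ||x - 3||^2 with noisy gradients and no learning rate.
rng = np.random.default_rng(0)
sol = coin_betting_optimize(
    lambda x: 2.0 * (x - 3.0) + 0.1 * rng.standard_normal(3), dim=3, T=5000)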
Margin-based Ranking and an Equivalence between AdaBoost and RankBoost
We study boosting algorithms for learning to rank. We give a general margin-based bound for
ranking based on covering numbers for the hypothesis space. Our bound suggests that algorithms
that maximize the ranking margin will generalize well. We then describe a new algorithm, smooth
margin ranking, that precisely converges to a maximum ranking-margin solution. The algorithm
is a modification of RankBoost, analogous to “approximate coordinate ascent boosting.” Finally,
we prove that AdaBoost and RankBoost are equally good for the problems of bipartite ranking and
classification in terms of their asymptotic behavior on the training set. Under natural conditions,
AdaBoost achieves an area under the ROC curve that is as good as RankBoost’s; furthermore,
RankBoost, when given a specific intercept, achieves a misclassification error that is as good
as AdaBoost’s. This may help to explain the empirical observations made by Cortes and Mohri, and
Caruana and Niculescu-Mizil, about the excellent performance of AdaBoost as a bipartite ranking
algorithm, as measured by the area under the ROC curve.
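To ground the classification-versus-ranking comparison, here is a plain Python sketch of AdaBoost with threshold stumps together with the pairwise definition of AUC: thresholding the learned score F gives the classifier, while ranking by F gives the bipartite ranking whose AUC the abstract compares with RankBoost's. This is standard AdaBoost for illustration, not RankBoost or the paper's smooth margin ranking algorithm; labels y are assumed to be in {-1, +1}.

import numpy as np

def adaboost_stumps(X, y, rounds=50):
    """AdaBoost over threshold stumps; returns a real-valued score F(x)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []                                   # (feature, threshold, sign, alpha)
    for _ in range(rounds):
        best = None
        for j in range(d):                          # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, s)
        err, j, thr, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.where(X[:, j] >= thr, 1, -1)
        w = w * np.exp(-alpha * y * pred)           # reweight mistakes upward
        w /= w.sum()
        ensemble.append((j, thr, s, alpha))
    def score(Xq):
        return sum(a * s * np.where(Xq[:, j] >= thr, 1, -1)
                   for j, thr, s, a in ensemble)
    return score

def auc(scores, y):
    """AUC as the fraction of correctly ordered positive/negative pairs."""
    pos, neg = scores[y == 1], scores[y == -1]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()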
A Modern Introduction to Online Learning
In this monograph, I introduce the basic concepts of Online Learning through
a modern view of Online Convex Optimization. Here, online learning refers to
the framework of regret minimization under worst-case assumptions. I present
first-order and second-order algorithms for online learning with convex losses,
in Euclidean and non-Euclidean settings. All the algorithms are clearly
presented as instantiations of Online Mirror Descent or
Follow-The-Regularized-Leader and their variants. Particular attention is given
to the issue of tuning the parameters of the algorithms and learning in
unbounded domains, through adaptive and parameter-free online learning
algorithms. Non-convex losses are dealt with through convex surrogate losses and
through randomization. The bandit setting is also briefly discussed, touching
on the problem of adversarial and stochastic multi-armed bandits. These notes
do not require prior knowledge of convex analysis and all the required
mathematical tools are rigorously explained. Moreover, all the proofs have been
carefully chosen to be as simple and as short as possible.
Comment: Fixed more typos, added more history bits, added local norms bounds
for OMD and FTRL
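As a taste of the monograph's unifying template, here is a minimal Python sketch of Follow-The-Regularized-Leader with a quadratic regularizer on a Euclidean ball: the leader has a closed form, namely the scaled negative gradient sum projected onto the ball. The function name (ftrl_ball) and the 1/sqrt(t) tuning are illustrative assumptions, not a transcription of any specific chapter.

import numpy as np

def ftrl_ball(loss_grads, dim, radius=1.0):
    """FTRL on {x : ||x|| <= radius} with regularizer ||x||^2 / (2 eta_t)."""
    g_sum = np.zeros(dim)
    xs = []
    for t in range(1, len(loss_grads) + 1):
        eta = radius / np.sqrt(t)          # standard 1/sqrt(t) tuning
        x = -eta * g_sum                   # unconstrained leader
        n = np.linalg.norm(x)
        if n > radius:
            x *= radius / n                # project onto the ball
        xs.append(x)
        g_sum += loss_grads[t - 1](x)      # gradient of the t-th loss at x_t
    return xs

# Toy use: linear losses <g_t, x> presented one per round.
gs = [np.array([1.0, -0.5]), np.array([-0.2, 0.3]), np.array([0.4, 0.4])]
plays = ftrl_ball([lambda x, g=g: g for g in gs], dim=2)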
Federated Hypergradient Descent
In this work, we explore combining automatic hyperparameter tuning and
optimization for federated learning (FL) in an online, one-shot procedure. We
apply a principled approach to adapting the client learning rate, the number
of local steps, and the batch size. In our federated learning applications,
our primary motivations are minimizing communication budget as well as local
computational resources in the training pipeline. Conventionally,
hyperparameter tuning methods involve at least some degree of trial-and-error,
which is known to be sample inefficient. In order to address our motivations,
we propose FATHOM (Federated AuTomatic Hyperparameter OptiMization) as a
one-shot online procedure. We investigate the challenges and solutions of
deriving analytical gradients with respect to the hyperparameters of interest.
Our approach is inspired by the fact that, with the exception of local data, we
have full knowledge of all components involved in our training process, and our
algorithm exploits this fact to great effect. We show that FATHOM is
more communication efficient than Federated Averaging (FedAvg) with optimized,
static valued hyperparameters, and is also more computationally efficient
overall. As a communication efficient, one-shot online procedure, FATHOM solves
the bottleneck of costly communication and limited local computation, by
eliminating a potentially wasteful tuning process, and by optimizing the
hyperparameters adaptively throughout the training procedure without
trial-and-error. We show our numerical results through extensive empirical
experiments with the Federated EMNIST-62 (FEMNIST) and Federated Stack Overflow
(FSO) datasets, using FedJAX as our baseline framework.
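The core idea of differentiating through the training step with respect to a hyperparameter can be shown in a few lines. Below is a minimal single-node Python sketch of hypergradient descent on the learning rate: by the chain rule through x_t = x_{t-1} - alpha * g_{t-1}, the derivative of the loss with respect to alpha is -g_t . g_{t-1}, so alpha is nudged by the inner product of consecutive gradients. This illustrates the general hypergradient principle, not FATHOM itself, which applies analytical hypergradients to federated hyperparameters (client learning rate, local steps, batch size) across communication rounds; the names and the meta step size beta are assumptions.

import numpy as np

def sgd_with_hypergradient(grad_fn, x0, alpha0=0.01, beta=1e-4, T=1000):
    """SGD whose learning rate is itself adapted online by a hypergradient."""
    x = np.asarray(x0, dtype=float)
    alpha = alpha0
    g_prev = np.zeros_like(x)
    for _ in range(T):
        g = grad_fn(x)
        alpha += beta * float(g @ g_prev)   # hypergradient step on the step size
        x -= alpha * g
        g_prev = g
    return x, alpha

# Toy use: quadratic objective with noisy gradients.
rng = np.random.default_rng(0)
x_star, lr = sgd_with_hypergradient(
    lambda x: 2.0 * (x - 1.0) + 0.05 * rng.standard_normal(4), np.zeros(4))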