Passive Learning with Target Risk
In this paper we consider learning in the passive setting, but with a slight
modification: we assume that the target expected loss, also referred to as the
target risk, is provided to the learner in advance as prior knowledge. Unlike
most studies in learning theory, which only incorporate prior knowledge into
the generalization bounds, we are able to explicitly utilize the target risk in
the learning process. Our analysis reveals a surprising result on the sample
complexity of learning: by exploiting the target risk in the learning
algorithm, we show that when the loss function is both strongly convex and
smooth, the sample complexity reduces to $\mathcal{O}(\log(1/\epsilon))$, an
exponential improvement over the $\mathcal{O}(1/\epsilon)$ sample complexity
for learning with strongly convex loss functions. Furthermore, our proof is
constructive and is based on a computationally efficient stochastic
optimization algorithm for this setting, which demonstrates that the proposed
algorithm is practically useful.
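The key idea is that the known target risk is fed into the optimization procedure itself rather than only into the analysis. As a rough, hedged illustration of that idea (not the paper's actual algorithm), the sketch below runs stochastic gradient descent on an assumed strongly convex, smooth ridge-regression objective and stops as soon as a held-out estimate of the risk falls below the target; the data model, step sizes, and stopping rule are all illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): SGD on a strongly convex, smooth
# ridge-regression objective, stopping once a held-out estimate of the risk
# drops below a target risk that is assumed to be known in advance. This is
# only an illustration of using the target risk inside the learner, not the
# algorithm analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)

d = 10
lam = 0.1                                   # ridge parameter -> strong convexity
w_true = rng.normal(size=d)                 # assumed data-generating model

# Held-out sample used only to estimate the risk of the current iterate.
X_val = rng.normal(size=(2000, d))
y_val = X_val @ w_true + 0.1 * rng.normal(size=2000)

def est_risk(w):
    """Empirical estimate of the regularized expected squared loss."""
    r = X_val @ w - y_val
    return 0.5 * np.mean(r ** 2) + 0.5 * lam * np.dot(w, w)

target_risk = 1.05 * est_risk(w_true)       # the known target risk (assumed)

w = np.zeros(d)
for t in range(1, 100_001):
    # Draw one fresh i.i.d. example per step (the passive-learning stream).
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()
    grad = (x @ w - y) * x + lam * w        # stochastic gradient of the loss
    w -= grad / (lam * (t + 100))           # damped 1/(lambda*t) step size
    if t % 500 == 0 and est_risk(w) <= target_risk:
        print(f"target risk reached after {t} samples")
        break
```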
Exploiting Smoothness in Statistical Learning, Sequential Prediction, and Stochastic Optimization
In the last several years, the intimate connection between convex
optimization and learning problems, in both the statistical and the sequential
frameworks, has shifted the focus of algorithmic machine learning toward
examining this interplay. On one hand, this intertwinement raises new
challenges in reassessing the performance of learning algorithms, including
generalization and regret bounds, under the analytical assumptions that
convexity imposes on loss functions (e.g., Lipschitzness, strong convexity,
and smoothness). On the other hand, the emergence of datasets of unprecedented
size demands the development of novel and more efficient optimization
algorithms to tackle large-scale learning problems.
The overarching goal of this thesis is to reassess the role of smoothness of
loss functions in statistical learning, sequential prediction/online learning,
and stochastic optimization, and to explicate its consequences. In particular,
we examine how smoothness of the loss function can be beneficial or
detrimental in these settings in terms of sample complexity, statistical
consistency, regret analysis, and convergence rate, and we investigate how
smoothness can be leveraged to devise more efficient learning algorithms.
Comment: Ph.D. Thesis
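As one concrete instance of the contrast the thesis examines (a toy illustration under assumed objectives, not an example taken from the thesis), the sketch below compares gradient descent on a smooth, strongly convex quadratic, which enjoys a linear convergence rate with a constant 1/L step size, against subgradient descent on a nonsmooth L1 objective, which only achieves an O(1/sqrt(t)) rate with decaying steps.

```python
# Toy illustration (assumed objectives, not from the thesis): smoothness plus
# strong convexity gives gradient descent a linear rate, while a comparable
# nonsmooth objective only admits an O(1/sqrt(t)) subgradient-descent rate.
import numpy as np

A = np.diag([1.0, 0.1])               # smooth f(x) = 0.5 x^T A x  (L = 1, mu = 0.1)
x_smooth = np.array([5.0, 5.0])
x_nonsmooth = np.array([5.0, 5.0])    # nonsmooth g(x) = ||x||_1

for t in range(1, 1001):
    # Smooth case: constant step 1/L, geometric shrinkage of the error.
    x_smooth = x_smooth - A @ x_smooth
    # Nonsmooth case: decaying 1/sqrt(t) steps, only O(1/sqrt(t)) accuracy.
    x_nonsmooth = x_nonsmooth - (1.0 / np.sqrt(t)) * np.sign(x_nonsmooth)

print("smooth optimality gap:   ", 0.5 * x_smooth @ A @ x_smooth)
print("nonsmooth optimality gap:", np.abs(x_nonsmooth).sum())
```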