We consider the problem of supervised learning with convex loss functions and
propose a new form of iterative regularization based on the subgradient method.
Unlike other regularization approaches, iterative regularization imposes no
explicit constraint or penalty; instead, generalization is achieved by
(early) stopping an empirical iteration. We consider a nonparametric setting,
in the framework of reproducing kernel Hilbert spaces, and prove finite-sample
bounds on the excess risk under general regularity conditions. Our study
provides a new class of efficient regularized learning algorithms and gives
insights into the interplay between statistics and optimization in machine
learning.
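
To make the idea concrete, the following is a minimal sketch, not the paper's exact algorithm: it runs subgradient descent on the unpenalized empirical hinge loss in an RKHS, so the iteration count plays the role usually played by a penalty parameter. The Gaussian kernel, the decaying step size, and the validation-based stopping rule are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X1 and X2.
    The kernel choice is an assumption made for illustration."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def iterative_regularization(X, y, X_val, y_val, T=500, eta=1.0, gamma=1.0):
    """Kernel subgradient descent on the (unpenalized) empirical hinge
    loss. No constraint or penalty is used: the number of iterations is
    the regularization parameter, chosen here by early stopping on a
    held-out validation set (a hypothetical stopping rule)."""
    n = len(y)
    K = gaussian_kernel(X, X, gamma)          # training Gram matrix
    K_val = gaussian_kernel(X_val, X, gamma)  # validation-vs-train kernel
    alpha = np.zeros(n)                       # f_t = sum_i alpha_i K(x_i, .)
    best_err, best_alpha = np.inf, alpha.copy()
    for t in range(1, T + 1):
        margins = y * (K @ alpha)
        # hinge loss is convex but nondifferentiable; a subgradient of the
        # empirical risk puts weight -y_i on examples with margin < 1
        g = np.where(margins < 1, -y, 0.0) / n
        alpha -= (eta / np.sqrt(t)) * g       # decaying step size (assumed)
        # track the best early-stopped iterate on the validation set
        val_err = np.mean(np.sign(K_val @ alpha) != y_val)
        if val_err < best_err:
            best_err, best_alpha = val_err, alpha.copy()
    return best_alpha, best_err
```

The returned coefficients parameterize the early-stopped estimator f(x) = sum_i alpha_i K(x_i, x); running the iteration longer would keep driving down the training loss but eventually overfit, which is why stopping time trades off fit and stability here.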