We investigate the learning rate of multiple kernel learning (MKL) with
$\ell_1$ and elastic-net regularizations. The elastic-net regularization is a
composition of an $\ell_1$-regularizer for inducing sparsity and an
$\ell_2$-regularizer for controlling smoothness. We focus on a sparse
setting where the total number of kernels is large, but the number of nonzero
components of the ground truth is relatively small, and show sharper
convergence rates than have previously been shown for both $\ell_1$ and
elastic-net regularizations. Our analysis reveals relations between the
choice of regularization function and the resulting performance. If the ground truth is
smooth, we show a faster convergence rate for the elastic-net regularization
under fewer conditions than for the $\ell_1$-regularization; otherwise, a faster
convergence rate is shown for the $\ell_1$-regularization.
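
For concreteness, a minimal sketch of the regularized estimator such a setting typically refers to (the notation $\mathcal{H}_m$, $\lambda_1$, $\lambda_2$ is assumed here, not fixed by the abstract): given candidate RKHSs $\mathcal{H}_1, \dots, \mathcal{H}_M$, one estimates $\hat{f} = \sum_{m=1}^{M} \hat{f}_m$ via
\[
\min_{f_m \in \mathcal{H}_m} \; \frac{1}{n} \sum_{i=1}^{n} \Bigl( y_i - \sum_{m=1}^{M} f_m(x_i) \Bigr)^2
+ \lambda_1 \sum_{m=1}^{M} \lVert f_m \rVert_{\mathcal{H}_m}
+ \lambda_2 \sum_{m=1}^{M} \lVert f_m \rVert_{\mathcal{H}_m}^2 ,
\]
where $\lambda_2 = 0$ recovers the pure (block-)$\ell_1$ penalty and $\lambda_2 > 0$ adds the smoothness-controlling $\ell_2$ term.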
Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/)
by the Institute of Mathematical Statistics (http://www.imstat.org);
DOI: http://dx.doi.org/10.1214/13-AOS1095.