
    Algorithmic stability and hypothesis complexity

    We introduce a notion of algorithmic stability of learning algorithms, which we term argument stability, that captures the stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. The main result of the paper bounds the generalization error of any learning algorithm in terms of its argument stability. The bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to bound the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent.
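
    For intuition, here is a sketch of what such a notion can look like; the exact definition and constants are the paper's, so treat this as a plausible reconstruction rather than a quotation. An algorithm A mapping n-point samples S to hypotheses in a normed function space (H, ‖·‖) has uniform argument stability β_n if

    \[
      \sup_{S \simeq S'} \bigl\| A(S) - A(S') \bigr\|_{\mathcal{H}} \;\le\; \beta_n,
    \]

    where \(S \simeq S'\) ranges over pairs of samples differing in a single example. Because the hypotheses live in a Banach space, Azuma-type martingale inequalities in that space can convert a decay rate for β_n into a high-probability generalization bound.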

    The complexity of algorithmic hypothesis class

    University of Technology Sydney, Faculty of Engineering and Information Technology.

    Statistical learning theory provides the mathematical foundations for statistical learning algorithms and inspires the development of more efficient methods. It is often observed that a learning algorithm never outputs certain hypotheses in its predefined hypothesis class. This thesis therefore studies how to measure the complexity of the algorithmic hypothesis class: the subset of the predefined hypothesis class that a learning algorithm will (or is likely to) output. By designing complexity measures for the algorithmic hypothesis class, we provide new generalization bounds for k-dimensional coding schemes and multi-task learning, and we propose two frameworks that yield tighter generalization bounds than the current state of the art.

    We take k-dimensional coding schemes (a family of unsupervised learning algorithms) and multi-task learning (a family of supervised learning algorithms) as examples to demonstrate that the outputs of a learning algorithm may have special properties and are therefore confined to a subset of the predefined hypothesis class. Analyzing these subsets, the algorithmic hypothesis classes, sheds new light on the learning problems and yields tighter generalization bounds. Specifically, for k-dimensional coding schemes, we show that the induced algorithmic loss function classes consist of Lipschitz-continuous hypotheses and that a dimensionality-dependent complexity measure helps to derive small Lipschitz constants and thus improves the generalization bounds. For multi-task learning, we prove that the tasks can act as regularizers and that feature structures can yield a small algorithmic hypothesis class, which likewise improves the generalization bounds.

    To exploit the complexity of the algorithmic hypothesis class more precisely, taking the properties of the hypotheses and of the feature structure into account, we extend algorithmic robustness and algorithmic stability to complexity measures for the hypothesis class. Inspired by algorithmic robustness, we propose the complexity measure of uniform robustness. Compared to the Rademacher complexity, our measure captures the geometric information of the data more finely: for example, when the sample space is covered by a small number of widely separated balls of small radius, the uniform robustness can be very small while the Rademacher complexity can be very large. Based on the definition of uniform robustness, we also provide a framework for deriving generalization bounds for a very general class of learning algorithms.

    Finally, we exploit the algorithmic hypothesis class of stable algorithms by studying the definition of algorithmic stability. A stable learning algorithm has the property that its output does not change much when one training example is changed; iterating over single-example changes, its outputs cannot drift far apart even when the training sample is replaced entirely. Stable learning algorithms therefore often have small algorithmic hypothesis classes. Since it was previously unknown how to measure the complexity of this small class, we design a novel complexity measure, the algorithmic Rademacher complexity, for the algorithmic hypothesis class of stable learning algorithms, and we obtain sharper error bounds than the current state of the art.
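
    The quantity underlying these bounds is standard; in the usual notation (not quoted from the thesis), the empirical Rademacher complexity of a function class F over a sample S = (z_1, …, z_n) is

    \[
      \hat{\mathfrak{R}}_S(\mathcal{F}) \;=\; \mathbb{E}_{\sigma} \Bigl[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(z_i) \Bigr],
    \]

    where the σ_i are independent uniform ±1 signs. The thesis's guiding observation is that restricting the supremum from the predefined class F to the algorithmic hypothesis class F_A = { A(S) : S a training sample } ⊆ F can only decrease it, so complexity measures evaluated on F_A give bounds that are at least as tight.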
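
    Likewise, the robustness notion being refined appears to be the Xu–Mannor one (the thesis's uniform variant strengthens it): an algorithm A is (K, ε(·))-robust if the sample space can be partitioned into K cells C_1, …, C_K such that, for every training sample S, every z ∈ S, and every test point z' falling in the same cell as z,

    \[
      \bigl| \ell(A(S), z) - \ell(A(S), z') \bigr| \;\le\; \varepsilon(S).
    \]

    This is the source of the geometric picture in the abstract: when the data are covered by a few small, widely separated balls, the partition can be coarse (small K) yet accurate within each cell (small ε).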

    M-Power Regularized Least Squares Regression

    Regularization is used to find a solution that both fits the data and is sufficiently smooth, and is thereby very effective for designing and refining learning algorithms. But the influence of the regularization exponent remains poorly understood. In particular, it is unclear how the exponent of the reproducing kernel Hilbert space (RKHS) regularization term affects the accuracy and efficiency of kernel-based learning algorithms. Here we consider regularized least squares regression (RLSR) with an RKHS regularization term raised to the power of m, where m is a variable real exponent. We design an efficient algorithm for solving the associated minimization problem, provide a theoretical analysis of its stability, and compare it, with respect to computational complexity, speed of convergence, and prediction accuracy, to the classical kernel ridge regression algorithm, in which the regularization exponent m is fixed at 2. Our results show that the m-power RLSR problem can be solved efficiently, and they support the suggestion that one can use a regularization term that grows significantly more slowly than the standard quadratic growth in the RKHS norm.
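
    The problem described is presumably the penalized empirical risk (the precise normalization is the paper's; this is the standard form)

    \[
      \min_{f \in \mathcal{H}_K} \; \frac{1}{n} \sum_{i=1}^{n} \bigl( f(x_i) - y_i \bigr)^2 + \lambda \, \| f \|_{\mathcal{H}_K}^{m}.
    \]

    A representer-style substitution f = Σ_i a_i K(x_i, ·), with ‖f‖²_{H_K} = aᵀ K a, makes the objective finite-dimensional. The plain gradient-descent solver below is only an illustrative sketch under these assumptions, not the paper's algorithm; rbf_kernel, m_power_rlsr, and the step-size settings are invented for the example.

        import numpy as np

        def rbf_kernel(X, Y, gamma=1.0):
            # Gaussian RBF kernel matrix between the rows of X and Y.
            sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * sq)

        def m_power_rlsr(K, y, lam=1e-2, m=1.5, lr=1e-3, n_iter=10000):
            # Minimize J(a) = (1/n) ||K a - y||^2 + lam * (a' K a)^(m/2),
            # the m-power objective after substituting f = sum_i a_i k(x_i, .).
            # Illustrative sketch only: fixed step size, no line search.
            n = len(y)
            a = np.zeros(n)
            for _ in range(n_iter):
                r = K @ a - y                # residuals K a - y
                h2 = max(a @ K @ a, 1e-12)   # ||f||_H^2, kept positive
                grad = (2.0 / n) * (K @ r) + lam * m * h2 ** (m / 2 - 1) * (K @ a)
                a -= lr * grad
            return a

    Predictions at new points Xnew are rbf_kernel(Xnew, X) @ a; setting m = 2 gives a sanity check against the kernel ridge closed form a = np.linalg.solve(K + n * lam * np.eye(n), y).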