44 research outputs found

    Semi-supervised Vector-valued Learning: From Theory to Algorithm

    Vector-valued learning, where the output space admits a vector-valued structure, is an important problem that covers a broad family of domains, e.g. multi-label learning and multi-class classification. Using local Rademacher complexity and unlabeled data, we derive novel data-dependent excess risk bounds for learning vector-valued functions in both the kernel space and the linear space. The derived bounds are much sharper than existing ones: convergence rates improve from $\mathcal{O}(1/\sqrt{n})$ to $\mathcal{O}(1/\sqrt{n+u})$, and to $\mathcal{O}(1/n)$ in special cases. Motivated by our theoretical analysis, we propose a unified framework for learning vector-valued functions, incorporating both local Rademacher complexity and Laplacian regularization. Empirical results on a wide range of benchmark datasets show that the proposed algorithm significantly outperforms baseline methods, which coincides with our theoretical findings.
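
    A minimal sketch of the Laplacian-regularization component in the linear, multi-class case is given below. The function name, the RBF neighborhood graph, and the closed-form ridge-style solve are assumptions made for illustration; the paper's algorithm additionally builds on local Rademacher complexity, which is not reproduced here.

    import numpy as np

    def laplacian_ls_vector_valued(X_lab, Y_lab, X_unlab, lam=1e-2, gamma=1e-2, sigma=1.0):
        """Minimize ||X_lab W - Y_lab||^2 + lam ||W||^2 + gamma tr(W^T X^T L X W),
        where L is a graph Laplacian over labeled + unlabeled points (hypothetical sketch)."""
        X_all = np.vstack([X_lab, X_unlab])
        sq = np.sum(X_all**2, axis=1)                       # RBF affinity graph over all points
        A = np.exp(-(sq[:, None] + sq[None, :] - 2 * X_all @ X_all.T) / (2 * sigma**2))
        np.fill_diagonal(A, 0.0)
        L = np.diag(A.sum(axis=1)) - A                      # unnormalized graph Laplacian
        d = X_all.shape[1]
        H = X_lab.T @ X_lab + lam * np.eye(d) + gamma * X_all.T @ L @ X_all
        return np.linalg.solve(H, X_lab.T @ Y_lab)          # d x n_classes weight matrix

    # toy usage: 3 classes, one-hot labels, plus unlabeled points
    rng = np.random.default_rng(0)
    X_lab, X_unlab = rng.normal(size=(30, 5)), rng.normal(size=(100, 5))
    Y_lab = np.eye(3)[rng.integers(0, 3, size=30)]
    W = laplacian_ls_vector_valued(X_lab, Y_lab, X_unlab)
    predictions = np.argmax(X_lab @ W, axis=1)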

    On sparse representations and new meta-learning paradigms for representation learning

    Given the "right" representation, learning is easy. This thesis studies representation learning and meta-learning, with a special focus on sparse representations. Meta-learning is fundamental to machine learning: it amounts to learning to learn itself. The presentation unfolds in two parts. In the first part, we establish learning-theoretic results for learning sparse representations. The second part introduces new multi-task and meta-learning paradigms for representation learning. On the sparse representations front, our main pursuit is generalization error bounds to support a supervised dictionary learning model for Lasso-style sparse coding. Such predictive sparse coding algorithms have been applied with much success in the literature; even more common have been applications of unsupervised sparse coding followed by supervised linear hypothesis learning. We present two generalization error bounds for predictive sparse coding, handling the overcomplete setting (more learned features than original dimensions) and the infinite-dimensional setting. Our analysis led to a fundamental stability result for the Lasso, showing that the solution vector is stable under perturbations of the design matrix. We also introduce and analyze new multi-task models for (unsupervised) sparse coding and predictive sparse coding, allowing for one dictionary per task but with sharing between the tasks' dictionaries. The second part introduces new meta-learning paradigms that realize previously unavailable types of learning guarantees. Specifically sought are guarantees on a meta-learner's performance on new tasks encountered in an environment of tasks. Nearly all previous work produced bounds on the expected risk, whereas we produce tail bounds on the risk, thereby providing performance guarantees for a single new task drawn from the environment. The new paradigms include minimax multi-task learning (minimax MTL) and sample variance penalized meta-learning (SVP-ML). For minimax MTL, we provide a high-probability learning guarantee on its performance on individual tasks encountered in the future, the first of its kind. We also present two continua of meta-learning formulations, each interpolating between classical multi-task learning and minimax multi-task learning. The idea of SVP-ML is to minimize the average of the training tasks' empirical risks plus a penalty on their sample variance. Controlling this sample variance can yield a faster rate of decrease for upper bounds on the expected risk of new tasks, while also yielding high-probability guarantees on the meta-learner's average performance over a draw of new test tasks. An algorithm is presented for SVP-ML with feature selection representations, as well as a quite natural convex relaxation of the SVP-ML objective.
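
    For context, the two-stage pipeline the abstract contrasts with (unsupervised Lasso-style sparse coding followed by a supervised linear hypothesis) can be sketched as follows; the dictionary size, penalty, and classifier choices are arbitrary illustrations, and this is not the thesis's jointly trained predictive sparse coding model.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

    # Stage 1: unsupervised dictionary learning; 40 atoms > 20 dims gives an overcomplete dictionary
    dico = DictionaryLearning(n_components=40, alpha=0.5,
                              transform_algorithm="lasso_lars", random_state=0)
    codes = dico.fit_transform(X)        # sparse codes z(x) = argmin_z ||x - D z||^2 + alpha ||z||_1

    # Stage 2: supervised linear hypothesis learned on the fixed sparse codes
    clf = LogisticRegression(max_iter=1000).fit(codes, y)
    print("training accuracy:", clf.score(codes, y))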

    Hyperbolic Support Vector Machines and Kernel Engineering

    Statistical learning theory is a field of inferential statistics whose foundations were laid by Vapnik at the end of the 1960s; it is considered a subdomain of artificial intelligence. In machine learning, support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data for classification and regression. In this thesis, our aim is to propose two new statistical learning contributions: one on the design and evaluation of a multi-class SVM extension, and another on the design of a new kernel for support vector machines. First, we introduce a new kernel machine for multi-class pattern recognition: the hyperbolic support vector machine. Geometrically, it is characterized by the fact that its decision boundaries in the feature space are defined by hyperbolic functions. We then establish its main statistical properties; among them, we show that the classes of component functions are uniform Glivenko-Cantelli classes, by establishing an upper bound on their Rademacher complexity. Finally, we establish a guaranteed risk for our classifier. Second, we construct a new kernel based on the Fourier transform of a Gaussian mixture model. We proceed as follows: first, each class is fragmented into a number of relevant subclasses; then we consider the directions given by the vectors obtained by taking all pairs of subclass centers of the same class, excluding those that connect subclasses of two different classes. This can also be seen as a search for translation invariance within each class. We applied the kernel successfully on several datasets in the context of multi-class support vector machine learning.
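
    A rough sketch of the kernel construction described above is shown below: each class is fragmented with k-means, the difference vectors between subclass centers of the same class are collected (cross-class pairs are excluded by construction), and a translation-invariant kernel is formed as the real part of the Fourier transform of a Gaussian mixture whose means are those directions. The shared Gaussian envelope and uniform mixture weights are simplifying assumptions, not necessarily the thesis's exact formulation.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def within_class_directions(X, y, n_sub=3, seed=0):
        """Fragment each class into subclasses and collect the directions between
        subclass centers of the same class (cross-class pairs never appear)."""
        dirs = []
        for c in np.unique(y):
            centers = KMeans(n_clusters=n_sub, n_init=10, random_state=seed).fit(X[y == c]).cluster_centers_
            dirs += [centers[i] - centers[j] for i in range(n_sub) for j in range(i + 1, n_sub)]
        return np.array(dirs)

    def gmm_fourier_kernel(X1, X2, dirs, sigma=1.0):
        """Translation-invariant kernel: Gaussian envelope times the average of
        cos(mu^T (x - x')) over the within-class directions mu (a product of PSD kernels)."""
        diff = X1[:, None, :] - X2[None, :, :]
        envelope = np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma**2))
        return envelope * np.mean(np.cos(diff @ dirs.T), axis=-1)

    # usage with a precomputed-kernel multi-class SVM on toy data
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 4)); y = rng.integers(0, 3, size=120)
    dirs = within_class_directions(X, y)
    clf = SVC(kernel="precomputed").fit(gmm_fourier_kernel(X, X, dirs), y)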

    Studies on Kernel Learning and Independent Component Analysis

    A crucial step in kernel-based learning is the selection of a proper kernel function or kernel matrix. Multiple kernel learning (MKL), in which a set of kernels is assessed during training, was recently proposed to solve the kernel selection problem. The goal is to estimate a suitable kernel matrix by adjusting a linear combination of the given kernels so that the empirical risk is minimized. MKL is usually a memory-demanding optimization problem, which becomes a barrier for large sample sizes. This study proposes an efficient method for kernel learning by exploiting the low-rank property of large kernel matrices that is often observed in applications. The proposed method involves selecting a few eigenvectors of kernel bases and taking a sparse combination of them by minimizing the empirical risk. Empirical results show that the computational demands decrease significantly without compromising classification accuracy, when compared with previous MKL methods. Computing an upper bound on the complexity of the hypothesis set generated by such a learned kernel is challenging. Here, a novel bound is presented which shows that the Gaussian complexity of such a hypothesis set is controlled by the logarithm of the number of involved eigenvectors and their maximum distance, i.e. the geometry of the basis set. This geometric bound sheds more light on the selection of kernel bases, which could not be obtained from previous results. The rest of this study is a step toward utilizing statistical learning theory to analyze independent component analysis estimators such as FastICA. This thesis provides a sample convergence analysis for the FastICA estimator and shows that the estimates converge in distribution as the number of samples increases. Additionally, similar results are established for the bootstrap FastICA. A direct application of these results is the design of a hypothesis test to study the convergence of the estimates.
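
    The eigenvector-based idea (a few eigenvectors per kernel basis, combined sparsely by empirical risk minimization) can be sketched roughly as follows; the choice of base kernels, ranks, and the L1-penalized logistic loss are assumptions for illustration rather than the study's exact formulation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

    def top_eigenfeatures(K, r):
        """Rank-r spectral features of a kernel matrix: columns sqrt(lambda_i) * v_i."""
        w, V = np.linalg.eigh(K)
        idx = np.argsort(w)[::-1][:r]
        return V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 10))
    y = (np.sin(X[:, 0]) + X[:, 1] > 0).astype(int)

    # a small dictionary of kernel bases, each reduced to a few eigenvectors
    bases = [rbf_kernel(X, gamma=g) for g in (0.1, 1.0)] + [polynomial_kernel(X, degree=2)]
    Z = np.hstack([top_eigenfeatures(K, r=10) for K in bases])

    # sparse combination of the eigen-features via L1-penalized empirical risk minimization
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(Z, y)
    print("selected eigen-features:", np.count_nonzero(clf.coef_))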

    Escaping the Curse of Dimensionality in Similarity Learning: Efficient Frank-Wolfe Algorithm and Generalization Bounds

    Similarity and metric learning provides a principled approach to constructing a task-specific similarity from weakly supervised data. However, these methods are subject to the curse of dimensionality: as the number of features grows large, poor generalization is to be expected and training becomes intractable due to high computational and memory costs. In this paper, we propose a similarity learning method that can efficiently deal with high-dimensional sparse data. This is achieved through a parameterization of similarity functions by convex combinations of sparse rank-one matrices, together with the use of a greedy approximate Frank-Wolfe algorithm which provides an efficient way to control the number of active features. We show that the convergence rate of the algorithm, as well as its time and memory complexity, are independent of the data dimension. We further provide a theoretical justification of our modeling choices through an analysis of the generalization error, which depends logarithmically on the sparsity of the solution rather than on the number of features. Our experiments on datasets with up to one million features demonstrate the ability of our approach to generalize well despite the high dimensionality, as well as its superiority compared to several competing methods.
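
    A simplified sketch of the Frank-Wolfe idea is given below: the similarity matrix is kept in the convex hull of sparse signed atoms, and each iteration's linear minimization oracle adds at most one new nonzero entry, so sparsity grows with the iteration count. The single-entry atoms, logistic triplet loss, and standard 2/(t+2) step size are illustrative assumptions and simpler than the paper's exact basis set.

    import numpy as np

    def frank_wolfe_similarity(X, X_pos, X_neg, lam=10.0, n_iter=100):
        """Learn a bilinear similarity S_M(x, x') = x^T M x' from triplets by Frank-Wolfe
        over the convex hull of signed single-entry atoms {+/- lam * e_i e_j^T}."""
        D = X_pos - X_neg
        n, d = X.shape
        M = np.zeros((d, d))
        for t in range(n_iter):
            margins = np.einsum("ij,jk,ik->i", X, M, D)           # x^T M (x_pos - x_neg)
            p = 1.0 / (1.0 + np.exp(np.clip(margins, -50, 50)))   # logistic loss: -d(loss)/d(margin)
            G = -(X * p[:, None]).T @ D / n                       # gradient of the mean loss w.r.t. M
            i, j = np.unravel_index(np.argmax(np.abs(G)), G.shape)
            S = np.zeros((d, d))                                  # LMO: best signed single-entry atom
            S[i, j] = -lam * np.sign(G[i, j])
            gamma = 2.0 / (t + 2.0)
            M = (1.0 - gamma) * M + gamma * S                     # convex update keeps M in the hull
        return M

    # toy triplets (x, x_pos, x_neg): positives share the sign of the first feature
    rng = np.random.default_rng(0)
    X, X_pos, X_neg = (rng.normal(size=(500, 50)) for _ in range(3))
    X_pos[:, 0] = np.abs(X_pos[:, 0]) * np.sign(X[:, 0])
    X_neg[:, 0] = -np.abs(X_neg[:, 0]) * np.sign(X[:, 0])
    M = frank_wolfe_similarity(X, X_pos, X_neg)
    print("nonzero entries in M:", np.count_nonzero(M))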
