21 research outputs found

    Efficient Non-Parametric Function Induction in Semi-Supervised Learning

    There has been an increase of interest in semi-supervised learning recently, because of the many datasets with large amounts of unlabeled examples and only a few labeled ones. This paper follows up on proposed non-parametric algorithms which provide an estimated continuous label for the given unlabeled examples. It extends them to function induction algorithms that correspond to the minimization of a regularization criterion applied to an out-of-sample example, and that happen to have the form of a Parzen windows regressor. The advantage of the extension is that it allows predicting the label of a new example without having to solve again a linear system of dimension 'n' (the number of unlabeled and labeled training examples), which can cost O(n^3). Experiments show that the extension works well, in the sense of predicting a label close to the one that would have been obtained if the test example had been included in the unlabeled set. This relatively efficient function induction procedure can also be used when 'n' is large to approximate the solution by writing it only in terms of a kernel expansion with 'm'
    Keywords: non-parametric models, classification, regression, semi-supervised learning
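    The out-of-sample induction step described above amounts to a Parzen-windows-style weighted average of the labels estimated on the n training points, which costs O(n) per test example instead of re-solving the O(n^3) linear system. The sketch below only illustrates that form, assuming a Gaussian kernel and illustrative names; it is not the paper's exact estimator.

```python
import numpy as np

def parzen_weights(x, X, sigma=1.0):
    """Similarity weights between a query point x and the n training points X."""
    sq_dists = np.sum((X - x) ** 2, axis=1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def induce_label(x_new, X_train, y_hat, sigma=1.0):
    """Out-of-sample prediction as a weighted average of the estimated labels
    y_hat of the labeled + unlabeled training points (Parzen-windows form),
    avoiding a fresh solve of the n x n linear system for each new example."""
    w = parzen_weights(x_new, X_train, sigma)
    return np.dot(w, y_hat) / (np.sum(w) + 1e-12)
```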

    On Consistency of Graph-based Semi-supervised Learning

    Graph-based semi-supervised learning is one of the most popular methods in machine learning. Some of its theoretical properties, such as bounds on the generalization error and the convergence of the graph Laplacian regularizer, have been studied in the computer science and statistics literature. However, a fundamental statistical property, the consistency of the estimator obtained from this method, has not been proved. In this article, we study the consistency problem under a non-parametric framework. We prove the consistency of graph-based learning in the case where the estimated scores are constrained to equal the observed responses for the labeled data. The sample sizes of both the labeled and unlabeled data are allowed to grow in this result. When the estimated scores are not required to equal the observed responses, a tuning parameter is used to balance the loss function and the graph Laplacian regularizer. We give a counterexample demonstrating that the estimator in this case can be inconsistent. The theoretical findings are supported by numerical studies. Comment: this paper was accepted by the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS).
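    A minimal sketch of the two estimators discussed above, assuming a precomputed symmetric weight matrix W over the labeled and unlabeled points (the variable names and the unnormalized Laplacian are illustrative choices, not necessarily those of the paper): the hard-constrained version forces the scores on labeled points to equal the observed responses and solves the harmonic system on the unlabeled block, while the soft version balances the loss and the graph Laplacian regularizer through a tuning parameter.

```python
import numpy as np

def graph_ssl(W, y_labeled, labeled_idx, lam=None):
    """Graph-based semi-supervised scores from a symmetric weight matrix W.

    lam=None  -> hard constraints: scores equal the observed responses on the
                 labeled points, harmonic solution on the unlabeled points.
    lam > 0   -> soft version: minimize ||f_l - y_l||^2 + lam * f^T L f.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W            # unnormalized graph Laplacian
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)
    if lam is None:
        # Hard-constrained (interpolating) estimator
        f = np.zeros(n)
        f[labeled_idx] = y_labeled
        L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
        L_ul = L[np.ix_(unlabeled_idx, labeled_idx)]
        f[unlabeled_idx] = np.linalg.solve(L_uu, -L_ul @ y_labeled)
        return f
    # Soft version with tuning parameter lam
    J = np.zeros((n, n))
    J[labeled_idx, labeled_idx] = 1.0          # selects the labeled coordinates
    b = np.zeros(n)
    b[labeled_idx] = y_labeled
    return np.linalg.solve(J + lam * L, b)
```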

    A random matrix analysis and improvement of semi-supervised learning for large dimensional data

    This article provides an original understanding of the behavior of a class of graph-oriented semi-supervised learning algorithms in the limit of large and numerous data. It is demonstrated that the intuition at the root of these methods collapses in this limit and that, as a result, most of them become inconsistent. Corrective measures and a new data-driven parametrization scheme are proposed, along with a theoretical analysis of the asymptotic performance of the resulting approach. A surprisingly close agreement between the theoretical performance on Gaussian mixture models and the behavior observed on real datasets is also illustrated throughout the article, suggesting that the proposed analysis is relevant for practical data. As a result, significant performance gains are observed on practical data classification using the proposed parametrization.

    Boosting Neural Networks


    Quantity makes quality: learning with partial views

    In many real-world applications, the number of examples to learn from is plentiful, but we can only obtain limited information on each individual example. We study the possibilities of efficient, provably correct, large-scale learning in such settings. The main theme we would like to establish is that large numbers of examples can compensate for the lack of full information on each individual example. The type of partial information we consider can be due to inherent noise or to constraints on the type of interaction with the data source. In particular, we describe and analyze algorithms for budgeted learning, in which the learner can only view a few attributes of each training example (Cesa-Bianchi, Shalev-Shwartz, and Shamir 2010a; 2010c), and algorithms for learning kernel-based predictors when individual examples are corrupted by random noise (Cesa-Bianchi, Shalev-Shwartz, and Shamir 2010b).
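    One way to picture the budgeted setting above, in which only a few attributes of each training example may be viewed, is importance-weighted attribute sampling: a handful of randomly chosen, rescaled coordinates gives an unbiased estimate of the full example, and two independent samples per example keep a stochastic-gradient step unbiased as well. The sketch below only illustrates that idea; it is not the exact estimator or algorithm of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_attributes(x, k):
    """Observe only k of the d attributes of x, uniformly without replacement,
    rescaled by d/k so that the sparse vector is an unbiased estimate of x."""
    d = x.shape[0]
    idx = rng.choice(d, size=k, replace=False)
    x_hat = np.zeros(d)
    x_hat[idx] = x[idx] * (d / k)
    return x_hat

def budgeted_sgd_step(w, x, y, k, lr=0.1):
    """One squared-loss SGD step that reads at most 2k attributes of x.
    Two independent attribute samples keep the gradient estimate unbiased."""
    x1 = sample_attributes(x, k)   # estimates the residual <w, x> - y
    x2 = sample_attributes(x, k)   # stands in for x in the gradient
    return w - lr * (np.dot(w, x1) - y) * x2
```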

    Inductive hashing on manifolds

    Learning-based hashing methods have attracted considerable attention due to their ability to greatly increase the scale at which existing algorithms may operate. Most of these methods are designed to generate binary codes that preserve the Euclidean distance in the original space. Manifold learning techniques, in contrast, are better able to model the intrinsic structure embedded in the original high-dimensional data. The complexity of these models, and the problems with out-of-sample data, have previously rendered them unsuitable for application to large-scale embedding, however. In this work, we consider how to learn compact binary embeddings on their intrinsic manifolds. In order to address the above-mentioned difficulties, we describe an efficient, inductive solution to the out-of-sample data problem, and a process by which non-parametric manifold learning may be used as the basis of a hashing method. Our proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. In particular, we show that hashing on the basis of t-SNE [29] outperforms state-of-the-art hashing methods on large-scale benchmark datasets, and is very effective for image classification with very short code lengths.
    Fumin Shen, Chunhua Shen, Qinfeng Shi, Anton van den Hengel, Zhenmin Tang
    http://www.pamitc.org/cvpr13
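    The inductive out-of-sample step can be pictured as embedding a new point through a locally weighted average of the already-learned manifold embeddings of nearby base points, followed by binarization. The sketch below assumes a Gaussian similarity, a nearest-neighbour restriction, and a per-dimension mean threshold, all of which are illustrative choices rather than the exact formulation of the paper.

```python
import numpy as np

def inductive_hash(x_new, X_base, Y_base, sigma=1.0, n_neighbors=25):
    """Binary code for an out-of-sample point x_new.

    X_base : (m, d) base points whose low-dimensional embedding Y_base (m, c)
             was already learned by a non-parametric manifold method (e.g. t-SNE).
    The new point's embedding is a weighted average of the embeddings of its
    nearest base points, then thresholded per dimension into bits.
    """
    sq_dists = np.sum((X_base - x_new) ** 2, axis=1)
    nn = np.argsort(sq_dists)[:n_neighbors]              # nearest base points
    w = np.exp(-sq_dists[nn] / (2.0 * sigma ** 2))       # similarity weights
    y_new = w @ Y_base[nn] / (np.sum(w) + 1e-12)         # embedded coordinates
    return (y_new > Y_base.mean(axis=0)).astype(np.uint8)
```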