38 research outputs found

    Leveraging Ensemble Diversity for Robust Self-Training in the Presence of Sample Selection Bias

    Full text link
    Self-training is a well-known approach to semi-supervised learning. It consists of iteratively assigning pseudo-labels to unlabeled data on which the model is confident and treating them as labeled examples. For neural networks, softmax prediction probabilities are often used as a confidence measure, despite the fact that they are known to be overconfident, even for wrong predictions. This phenomenon is particularly intensified in the presence of sample selection bias, i.e., when data labeling is subject to some constraint. To address this issue, we propose a novel confidence measure, called T-similarity, built upon the prediction diversity of an ensemble of linear classifiers. We provide a theoretical analysis of our approach by studying stationary points and describing the relationship between the diversity of the individual members and their performance. We empirically demonstrate the benefit of our confidence measure for three different pseudo-labeling policies on classification datasets of various data modalities.
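    The iterative pseudo-labeling loop described above can be sketched in a few lines. This is a hypothetical minimal illustration of the generic self-training baseline using the standard softmax/max-probability confidence that the paper argues is overconfident (its T-similarity measure replaces that confidence); the dataset, classifier, and 0.9 threshold are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative semi-supervised setup: hide ~80% of the labels.
rng = np.random.RandomState(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
labeled = rng.rand(len(y)) < 0.2
X_lab, y_lab = X[labeled], y[labeled]
X_unl = X[~labeled]

clf = LogisticRegression().fit(X_lab, y_lab)
for _ in range(5):                            # iterative pseudo-labeling
    if len(X_unl) == 0:
        break
    proba = clf.predict_proba(X_unl)
    conf = proba.max(axis=1)                  # confidence = max softmax probability
    keep = conf > 0.9                         # pseudo-label only confident points
    if not keep.any():
        break
    pseudo = clf.classes_[proba[keep].argmax(axis=1)]
    X_lab = np.vstack([X_lab, X_unl[keep]])   # treat them as labeled examples
    y_lab = np.concatenate([y_lab, pseudo])
    X_unl = X_unl[~keep]
    clf = LogisticRegression().fit(X_lab, y_lab)

print(clf.score(X, y))
```

    Under sample selection bias, the confident-but-wrong pseudo-labels admitted by the `conf > 0.9` test are exactly what a diversity-based confidence measure aims to filter out.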

    A survey on domain adaptation theory: learning bounds and theoretical guarantees

    Full text link
    Most well-known machine learning algorithms, both supervised and semi-supervised, work well only under a common assumption: the training and test data follow the same distribution. When the distribution changes, most statistical models must be reconstructed from newly collected data, which for some applications can be costly or impossible to obtain. It has therefore become necessary to develop approaches that reduce the need and the effort to obtain new labeled samples, by exploiting data available in related areas and using them further across similar fields. This has given rise to a new machine learning framework known as transfer learning: a learning setting inspired by the capability of a human being to extrapolate knowledge across tasks in order to learn more efficiently. Despite the large number of different transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning called domain adaptation. In this sub-field, the data distribution is assumed to change between the training and the test data, while the learning task remains the same. We provide a first up-to-date description of existing results on the domain adaptation problem, covering learning bounds based on different statistical learning frameworks.

    Towards Better Understanding Meta-learning Methods through Multi-task Representation Learning Theory

    Full text link
    In this paper, we consider the framework of multi-task representation (MTR) learning, where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task. We start by reviewing recent advances in MTR theory and show that they can provide novel insights for popular meta-learning algorithms when analyzed within this framework. In particular, we highlight a fundamental difference between gradient-based and metric-based algorithms and put forward a theoretical analysis to explain it. Finally, we use the derived insights to improve the generalization capacity of meta-learning methods via a new spectral-based regularization term and confirm its efficiency through experimental studies on classic few-shot classification and continual learning benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of MTR theory into the practice of training popular meta-learning methods. Comment: 21 pages, 7 figures, 7 tables.

    Non-negative matrix factorization for transfer learning

    No full text
    Transfer learning consists in using a set of tasks to influence learning and improve performance on another task. However, this learning paradigm can actually hurt performance if the source and target tasks are too dissimilar. A challenge for transfer learning is thus to develop approaches that detect and avoid negative transfer of knowledge while using very little information about the target task. A particular case of this type of learning is domain adaptation, a situation where the source and target tasks are identical but lie in different domains. In this thesis, we propose adaptive approaches based on non-negative matrix factorization that find a suitable representation of the data for this type of learning. Indeed, a useful representation typically makes the latent structure of the data explicit and often reduces its dimensionality so that other computational methods can be applied. Our contributions in this thesis are organized along two complementary dimensions: theoretical and practical.
    First, we present two different methods to solve the problem of unsupervised transfer learning based on non-negative matrix factorization techniques. The first one uses an iterative optimization procedure that aims at aligning the kernel matrices computed on the data from the two tasks. The second one is a linear approach that seeks an embedding for the two tasks that decreases the distance between the corresponding probability distributions while preserving the non-negativity property.
    We also introduce a theoretical framework based on Hilbert-Schmidt embeddings that improves the current state-of-the-art theoretical results on transfer learning by introducing a natural and intuitive distance measure with strong computational guarantees for its estimation. The proposed results combine the tightness of data-dependent bounds derived from Rademacher learning theory while ensuring the efficient estimation of their key factors. Both the theoretical contributions and the proposed methods were evaluated on a benchmark computer vision data set with promising results. Finally, we believe that the research direction chosen in this thesis may have fruitful implications in the near future.

    On Fair Cost Sharing Games in Machine Learning

    No full text
    Machine learning and game theory are known to exhibit a very strong link, as they mutually provide each other with solutions and models for studying and analyzing the optimal behaviour of a set of agents. In this paper, we take a closer look at a special class of games, known as fair cost sharing games, from a machine learning perspective. We show that this particular kind of game, where agents can choose between selfish behaviour and cooperation with shared costs, has a natural link to several machine learning scenarios, including collaborative learning with homogeneous and heterogeneous sources of data. We further demonstrate how the game-theoretical results bounding the ratio between the best Nash equilibrium (or its approximate counterpart) and the optimal solution of a given game can be used to upper-bound the gain achievable by collaborative learning, expressed as the expected risk and the sample complexity for the homogeneous and heterogeneous cases, respectively. We believe that the established link can spur many possible future implications for other learning scenarios as well, with privacy-aware learning being among the most noticeable examples.
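    For intuition about the ratio being bounded, the classical result for fair cost sharing games bounds the cost of the best Nash equilibrium by H(n) = 1 + 1/2 + ... + 1/n times the optimal cost, where n is the number of agents. The sketch below computes that harmonic bound on a toy instance of our own devising (it is not an example from the paper).

```python
# Classical bound for fair cost sharing games:
# cost(best Nash equilibrium) <= H(n) * cost(optimum).
def harmonic(n: int) -> float:
    return sum(1.0 / i for i in range(1, n + 1))

# Toy instance (illustrative assumption): n agents each either use a private
# resource of cost 1, or share a common resource of total cost c < n split
# equally among its users. All-share is the optimum (and, since c/n < 1,
# also a Nash equilibrium); all-private is a worse selfish equilibrium
# because a lone deviator to the shared resource would pay the full c > 1.
n, c = 10, 4.0
optimum = min(n * 1.0, c)      # everyone shares: total cost c = 4
selfish = n * 1.0              # everyone goes alone: total cost n = 10
print(selfish / optimum, harmonic(n))
```

    Here the best equilibrium coincides with the optimum, so the H(n) bound is loose; it is this kind of equilibrium-versus-optimum ratio that the paper converts into bounds on the gain of collaborative learning.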

    Revisiting (ε, γ, τ)-similarity learning for domain adaptation

    No full text
    Similarity learning is an active research area in machine learning that tackles the problem of finding a similarity function tailored to an observable data sample in order to achieve efficient classification. This learning scenario has generally been formalized by means of an (ε, γ, τ)-good similarity learning framework in the context of supervised classification and has been shown to have important theoretical guarantees. In this paper, we propose to extend the theoretical analysis of similarity learning to the domain adaptation setting, a particular situation occurring when the similarity is learned and then deployed on samples following different probability distributions. We give a new definition of an (ε, γ)-good similarity for domain adaptation and prove several results quantifying the performance of a similarity function on a target domain after it has been trained on a source domain. In particular, we show that if the source domain support contains that of the target, then a notable improvement of the adaptation is achievable.
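    The goodness condition underlying this framework can be checked empirically. The sketch below assumes the Balcan-Blum-style definition (a similarity K is (ε, γ)-good if at least a 1 − ε fraction of points x satisfy E_{x'}[y(x) y(x') K(x, x')] ≥ γ); the labels, the similarity matrix, and the function name are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Empirical (epsilon, gamma)-goodness check: returns the fraction of points
# whose average signed similarity margin falls below gamma.
def empirical_epsilon(K: np.ndarray, y: np.ndarray, gamma: float) -> float:
    margins = y * (K @ y) / len(y)   # per-point estimate of E_{x'}[y y' K(x, x')]
    return float(np.mean(margins < gamma))

# Illustrative labels and a "perfect" similarity K(x, x') = y(x) * y(x'),
# for which every margin equals 1, so epsilon = 0 for any gamma <= 1.
y = np.array([1.0, -1.0, 1.0, -1.0])
K = np.outer(y, y)
print(empirical_epsilon(K, y, gamma=0.5))   # prints 0.0
```

    The domain adaptation question studied in the paper is how such a margin, measured on source samples, transfers when the expectation is taken over a different target distribution.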

    Theoretical analysis of learning with similarity functions for domain adaptation

    No full text
    Similarity learning is an active research area in machine learning that tackles the problem of finding a similarity function tailored to an observable data sample in order to achieve efficient classification. This learning scenario has generally been formalized by means of an (ε, γ, τ)-good similarity learning framework in the context of supervised classification and has been shown to have important theoretical guarantees. In this paper, we propose to extend the theoretical analysis of similarity learning to the domain adaptation setting, a particular situation occurring when the similarity is learned and then deployed on samples following different probability distributions. We give a new definition of an (ε, γ)-good similarity for domain adaptation and prove several results quantifying the performance of a similarity function on a target domain after it has been trained on a source domain. In particular, we show that if the source domain support contains that of the target, then a notable improvement of the adaptation is achievable.

    Margin-aware Adversarial Domain Adaptation with Optimal Transport

    No full text
    In this paper, we propose a new theoretical analysis of unsupervised domain adaptation (DA) that relates the notions of large margin separation, adversarial learning and optimal transport. This analysis generalizes previous work on the subject by providing a bound on the target margin violation rate, thus reflecting a better control of the quality of separation between classes in the target domain than bounding the misclassification rate. The bound also highlights the benefit of a large margin separation on the source domain for adaptation, and introduces an optimal transport (OT) based distance between domains that has the virtue of being task-dependent, contrary to other approaches. From the obtained theoretical results, we derive an algorithmic solution for domain adaptation introducing a novel shallow OT-based adversarial approach that outperforms other OT-based DA baselines on several simulated and real-world classification tasks.
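    For intuition about OT-based distances between domains, here is a minimal sketch of the plain one-dimensional Wasserstein-1 distance between equal-size source and target samples, where the optimal transport plan simply matches order statistics. This is my own illustration of the generic (task-independent) quantity; the paper's distance is task-dependent and adversarial, which this sketch does not capture.

```python
import numpy as np

# 1-D Wasserstein-1 distance between two equal-size empirical samples:
# after sorting, optimal transport matches the i-th smallest source point
# to the i-th smallest target point.
def wasserstein_1d(source: np.ndarray, target: np.ndarray) -> float:
    assert len(source) == len(target), "equal-size samples assumed"
    return float(np.mean(np.abs(np.sort(source) - np.sort(target))))

src = np.array([0.0, 1.0, 3.0])
tgt = src + 2.0                     # target domain = source shifted by 2
print(wasserstein_1d(src, tgt))    # the distance recovers the shift: 2.0
```

    A covariate shift between domains shows up directly in this distance, whereas a task-dependent version would additionally weight the transport by how much it affects the classifier's margins.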