
    A Principled Approach for Learning Task Similarity in Multitask Learning

    Multitask learning aims at solving a set of related tasks simultaneously, exploiting shared knowledge to improve performance on the individual tasks. Hence, an important aspect of multitask learning is understanding the similarities within a set of tasks. Previous works have incorporated this similarity information explicitly (e.g., a weighted loss for each task) or implicitly (e.g., an adversarial loss for feature adaptation) to achieve good empirical performance. However, the theoretical motivations for adding task similarity knowledge are often missing or incomplete. In this paper, we give a different, theoretical perspective on this practice. We first provide an upper bound on the generalization error of multitask learning, showing the benefit of explicit and implicit task similarity knowledge. We systematically derive the bounds based on two distinct task similarity metrics: H-divergence and Wasserstein distance. Building on these theoretical results, we revisit the Adversarial Multi-task Neural Network and propose a new training algorithm that learns the task relation coefficients and the neural network parameters iteratively. We assess the new algorithm empirically on several benchmarks, showing not only that it finds interesting and robust task relations, but also that it outperforms the baselines, reaffirming the benefits of theoretical insight in algorithm design.
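To make the Wasserstein-based similarity metric mentioned in the abstract concrete, the sketch below computes the empirical 1-D Wasserstein distance between tasks' feature distributions and converts the distances into relation coefficients. The softmax weighting and the toy task data are illustrative assumptions, not the paper's actual training rule.

```python
import numpy as np

def wasserstein_1d(u, v):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples.

    For equal-size samples, W1 equals the mean absolute difference of the
    sorted values (the quantile-coupling formula).
    """
    u, v = np.sort(np.asarray(u, float)), np.sort(np.asarray(v, float))
    assert u.shape == v.shape, "sketch assumes equal sample sizes"
    return float(np.mean(np.abs(u - v)))

def task_relation_coefficients(task_samples, target):
    """Softmax of negative distances: closer tasks get larger weights.

    (An illustrative weighting scheme, not the algorithm from the paper.)
    """
    d = np.array([wasserstein_1d(target, s) for s in task_samples])
    w = np.exp(-d)
    return w / w.sum()

rng = np.random.default_rng(0)
t0 = rng.normal(0.0, 1.0, 500)   # hypothetical target-task features
t1 = rng.normal(0.1, 1.0, 500)   # a similar task
t2 = rng.normal(3.0, 1.0, 500)   # a dissimilar task
coeffs = task_relation_coefficients([t1, t2], t0)
print(coeffs)  # the similar task receives the larger coefficient
```

A distance of zero between identical samples and coefficients summing to one are easy sanity checks on this construction.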

    Dissimilarity Measure Machines

    This paper presents a dissimilarity-based discriminative framework for learning from data coming in the form of probability distributions. Departing from positive kernel-based methods, we build upon embeddings based on dissimilarities tailored for distributions. We enable this by extending Balcan et al.'s (2008) theory of learning with similarity functions to the case of distribution-shaped data. We then show that several learning guarantees of the dissimilarity still hold when it is estimated from empirical distributions. Algorithmically, the proposed approach consists of building features from pairwise dissimilarities and learning a linear decision function in this new feature space. Our experimental results show that this dissimilarity-based approach works better than the so-called support measure machines or the sliced Wasserstein kernel, and that among several dissimilarities, including the Kullback-Leibler divergence and Maximum Mean Discrepancy, the entropy-regularized Wasserstein distance provides the best compromise between computational efficiency and accuracy.
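The algorithmic recipe described above, pairwise dissimilarities as features followed by a linear decision function, can be sketched as follows. The RBF-kernel MMD dissimilarity, the landmark choice, and the plain perceptron are stand-ins for illustration (the abstract reports that the entropy-regularized Wasserstein distance is the best-performing dissimilarity in practice).

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased empirical MMD^2 with an RBF kernel between sample sets X, Y."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def dissimilarity_features(dists, landmarks, gamma=1.0):
    """Map each distribution to its vector of dissimilarities to landmarks."""
    return np.array([[mmd2_rbf(D, L, gamma) for L in landmarks] for D in dists])

# Toy data: each "distribution" is a small 2-D sample cloud; two classes.
rng = np.random.default_rng(1)
def cloud(mu):
    return rng.normal(mu, 0.3, size=(30, 2))

dists = [cloud([0, 0]) for _ in range(10)] + [cloud([2, 2]) for _ in range(10)]
y = np.array([-1] * 10 + [+1] * 10)
landmarks = [dists[0], dists[10]]   # one landmark per class (illustrative)

X = dissimilarity_features(dists, landmarks)

# Linear decision function learned by a simple perceptron in the new space.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:
            w += yi * xi
            b += yi
pred = np.sign(X @ w + b)
print((pred == y).mean())
```

On this toy problem the two classes are well separated in the dissimilarity feature space, so the linear rule classifies the training distributions correctly; in the paper's setting the dissimilarity and the linear learner would of course be chosen more carefully.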