
    Distribution-Based Categorization of Classifier Transfer Learning

    Transfer Learning (TL) aims to transfer knowledge acquired in one problem, the source problem, onto another problem, the target problem, dispensing with the bottom-up construction of the target model. Due to its relevance, TL has gained significant interest in the Machine Learning community, since it paves the way to intelligent learning models that can easily be tailored to many different applications. As is natural in a fast-evolving area, a wide variety of TL methods, settings and nomenclature have been proposed so far, and many works report different names for the same concepts. This mixture of concepts and terminology obscures the TL field and hinders its proper consideration. In this paper we present a review of the literature on the majority of classification TL methods, together with a distribution-based categorization of TL under a common nomenclature suitable to classification problems. Under this perspective, three main TL categories are presented, discussed and illustrated with examples.
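    To make the setting concrete, here is a minimal, hypothetical sketch (PyTorch, synthetic data and made-up layer sizes; not the paper's categorization) of classifier transfer: a feature extractor trained on a source problem is frozen and reused for a related target problem, instead of building the target model bottom-up from scratch.

```python
# Minimal sketch of classifier transfer learning (illustrative assumptions only).
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n, shift):
    # Same labelling rule in both problems; only the inputs are shifted.
    x = torch.randn(n, 20)
    y = (x[:, :10].sum(dim=1) > 0).long()
    return x + shift, y

x_src, y_src = make_data(2000, shift=0.0)   # plentiful source data
x_tgt, y_tgt = make_data(200, shift=0.5)    # scarce labelled target data

features = nn.Sequential(nn.Linear(20, 32), nn.ReLU())
head = nn.Linear(32, 2)

def train(params, x, y, epochs=50):
    opt = torch.optim.Adam(params, lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(features(x)), y)
        loss.backward()
        opt.step()

# 1) Bottom-up training on the source problem.
train(list(features.parameters()) + list(head.parameters()), x_src, y_src)

# 2) Transfer: freeze the source-trained feature extractor and refit only a
#    fresh classification head on the small target sample.
for p in features.parameters():
    p.requires_grad_(False)
head = nn.Linear(32, 2)
train(head.parameters(), x_tgt, y_tgt)
```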

    'We nicked stuff from all over the place': policy transfer or muddling through?

    This article explores current thinking about policy learning and transfer, using recent work on the 'Americanisation' of UK active labour market policies as a focus of discussion. While it is clear that the UK has learned from the US in certain respects, academic debates about the US-UK policy relationship are marked by accounts of learning and transfer that depend on a highly rational interpretation of these processes. The article reviews current debates in the policy transfer literature and applies a critical view of policy learning and transfer to key accounts of labour market activation policies, before moving on to consider how useful the concept of policy transfer really is in an increasingly complex, plural and 'de-institutionalising' world.

    Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer

    Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate. This setting neglects the more practical scenario where training data are collected from multiple sources, which motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains, termed Multi-source Active Domain Adaptation (MADA). Not surprisingly, we find that most traditional ADA methods cannot work directly in such a setting, mainly because the excessive domain gap introduced by the multiple source domains easily miscalibrates their uncertainty-aware sample selection under multi-domain shifts. Considering this, we propose a Dynamic integrated uncertainty valuation framework (Detective) that comprehensively considers the domain shift between the multi-source domains and the target domain to detect informative target samples. Specifically, Detective leverages a dynamic Domain Adaptation (DA) model that learns how to adapt the model's parameters to fit the union of the multi-source domains, which enables an approximate single-source-domain modeling by the dynamic model. We then comprehensively measure both domain uncertainty and predictive uncertainty in the target domain to detect informative target samples using evidential deep learning, thereby mitigating uncertainty miscalibration. Furthermore, we introduce a contextual diversity-aware calculator to enhance the diversity of the selected samples. Experiments demonstrate that our solution outperforms existing methods by a considerable margin on three domain adaptation benchmarks. Comment: arXiv admin note: text overlap with arXiv:2302.13824 by other authors.
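    As a purely illustrative sketch (not the Detective framework itself; the function names, the entropy-based uncertainty score and the farthest-point diversity heuristic are assumptions made for the example), the following Python code shows the general idea of uncertainty- and diversity-aware selection of unlabeled target samples to annotate.

```python
# Sketch of uncertainty-plus-diversity active sample selection (hypothetical names).
import numpy as np

rng = np.random.default_rng(0)

def predictive_entropy(probs):
    """Entropy of each class-probability vector; higher means more uncertain."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def select_informative(probs, feats, budget):
    """Pick `budget` target samples that are both uncertain and diverse."""
    # Keep the 2*budget most uncertain candidates ...
    order = np.argsort(-predictive_entropy(probs))
    candidates = order[: 2 * budget]
    # ... then greedily pick a diverse subset (farthest-point heuristic).
    chosen = [candidates[0]]
    for _ in range(budget - 1):
        dists = np.min(
            np.linalg.norm(
                feats[candidates][:, None, :] - feats[chosen][None, :, :], axis=-1
            ),
            axis=1,
        )
        chosen.append(candidates[int(np.argmax(dists))])
    return np.array(chosen)

# Toy example: softmax outputs and features for 1000 unlabeled target samples.
probs = rng.dirichlet(np.ones(5), size=1000)
feats = rng.normal(size=(1000, 64))
to_annotate = select_informative(probs, feats, budget=20)
print(to_annotate)
```

    In this toy version the diversity step simply spreads the chosen points out in feature space; the paper's contextual diversity-aware calculator and evidential uncertainty estimates are more involved than this heuristic.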