
    Hybrid heterogeneous transfer learning through deep learning

    Copyright © 2014, Association for the Advancement of Artificial Intelligence. Most previous heterogeneous transfer learning methods learn a cross-domain feature mapping between heterogeneous feature spaces based on a few cross-domain instance correspondences, and these corresponding instances are assumed to be representative in the source and target domains, respectively. However, in many real-world scenarios, this assumption may not hold. As a result, the constructed feature mapping may not be precise, owing to bias in the correspondences in the target and/or source domain(s). In this case, a classifier trained on the labeled, transformed source-domain data may not be useful for the target domain. In this paper, we present a new transfer learning framework called Hybrid Heterogeneous Transfer Learning (HHTL), which allows the corresponding instances across domains to be biased in either the source or the target domain. Specifically, we propose a deep learning approach that learns a feature mapping between cross-domain heterogeneous features as well as a better feature representation for the mapped data, reducing the bias caused by the cross-domain correspondences. Extensive experiments on several multilingual sentiment classification tasks verify the effectiveness of our proposed approach compared with several baseline methods.
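
    As a rough illustration of the HHTL idea (a sketch of the general technique, not the authors' implementation), the code below learns a one-layer marginalized denoising autoencoder representation per domain and then fits a linear cross-domain mapping from a handful of possibly biased correspondences; all names, shapes, and data are made up.

        import numpy as np

        def mda_layer(X, noise=0.3):
            # One marginalized denoising autoencoder layer in closed form:
            # reconstruction weights under expected feature dropout, a common
            # building block for learning robust higher-level representations.
            d = X.shape[1]
            S = X.T @ X
            q = np.full(d, 1.0 - noise)           # feature keep-probabilities
            Q = S * np.outer(q, q)                # E[corrupted Gram matrix]
            np.fill_diagonal(Q, q * np.diag(S))
            P = q[:, None] * S                    # E[corrupted-vs-clean moments]
            W = np.linalg.solve(Q + 1e-5 * np.eye(d), P)
            return np.tanh(X @ W)                 # higher-level features

        rng = np.random.default_rng(0)
        Xs = rng.normal(size=(200, 50))   # source-domain features (e.g., English text)
        Xt = rng.normal(size=(200, 40))   # heterogeneous target-domain features
        Hs, Ht = mda_layer(Xs), mda_layer(Xt)

        # Fit the cross-domain mapping G on the first k rows, treated as the
        # instance correspondences (which may be biased, as the paper notes).
        k = 100
        G, *_ = np.linalg.lstsq(Hs[:k], Ht[:k], rcond=None)

        # A classifier trained on (Hs @ G, source labels) can then be applied
        # to target-domain representations Ht.
        Hs_mapped = Hs @ G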

    Transfer Learning for Speech and Language Processing

    Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example, in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual) and has traditionally been studied under the name 'model adaptation'. Recent advances in deep learning show that transfer learning becomes much easier and more effective with high-level abstract features learned by deep models, and that the 'transfer' can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research in this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field. Comment: 13 pages, APSIPA 201
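
    A minimal sketch of the feature-level transfer the review describes (synthetic data and illustrative shapes, not a method from the paper): the hidden layer of a network trained on a resource-rich source task is frozen and reused as a feature extractor, and only a small output layer is trained on the scarce target-task data.

        import numpy as np

        rng = np.random.default_rng(0)
        W1 = rng.normal(scale=0.1, size=(40, 64))  # pretrained hidden layer, kept frozen
        Xt = rng.normal(size=(100, 40))            # small target-task dataset
        yt = rng.integers(0, 3, size=100)          # 3 target classes

        H = np.tanh(Xt @ W1)                       # transferred high-level features
        W2 = np.zeros((64, 3))                     # new task-specific output layer

        for _ in range(200):                       # softmax regression on top
            logits = H @ W2
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            p[np.arange(len(yt)), yt] -= 1.0       # softmax gradient w.r.t. logits
            W2 -= 0.1 * H.T @ p / len(yt)          # gradient step on W2 only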

    Optimal Projection Guided Transfer Hashing for Image Retrieval

    Recently, learning to hash has been widely studied for image retrieval thanks to the computation and storage efficiency of binary codes. Most existing learning-to-hash methods require sufficient training images to learn precise hashing codes. However, in some real-world applications, sufficient training images are not always available in the domain of interest. In addition, some existing supervised approaches need a large amount of labeled data, which is expensive in terms of time, labeling, and human expertise. To handle such problems, inspired by transfer learning, we propose a simple yet effective unsupervised hashing method named Optimal Projection Guided Transfer Hashing (GTH), in which we borrow images from a different but related domain, i.e., the source domain, to help learn precise hashing codes for the domain of interest, i.e., the target domain. In addition, owing to the domain gap, we propose to seek the maximum likelihood estimation (MLE) solution of the hashing functions of the target and source domains. Furthermore, an alternating optimization method is adopted to obtain the two projections of the target and source domains such that the domain hashing disparity is reduced gradually. Extensive experiments on various benchmark databases verify that our method outperforms many state-of-the-art learning-to-hash methods. The implementation details are available at https://github.com/liuji93/GTH.
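
    The alternating optimization in the abstract can be caricatured in a few lines (a simplification for intuition only, not the released GTH code; eta and all shapes are invented): each domain's projection is refit to its own binary codes while being pulled toward the other domain's projection, so the disparity between the two projections shrinks over the iterations.

        import numpy as np

        rng = np.random.default_rng(0)
        Xs = rng.normal(size=(500, 32))  # plentiful source-domain image features
        Xt = rng.normal(size=(60, 32))   # scarce target-domain features
        bits, eta = 16, 0.5              # code length, domain-alignment weight

        Ws = rng.normal(size=(32, bits))
        Wt = rng.normal(size=(32, bits))

        def refit(X, B, W_other):
            # argmin_W ||B - X W||^2 + eta * ||W - W_other||^2 in closed form.
            d = X.shape[1]
            return np.linalg.solve(X.T @ X + eta * np.eye(d),
                                   X.T @ B + eta * W_other)

        for _ in range(10):                              # alternating updates
            Bs, Bt = np.sign(Xs @ Ws), np.sign(Xt @ Wt)  # current binary codes
            Ws = refit(Xs, Bs, Wt)                       # pull source toward target
            Wt = refit(Xt, Bt, Ws)                       # guide target with source

        codes_t = (np.sign(Xt @ Wt) > 0).astype(np.uint8)  # target hash codes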

    A Novel Fuzzy Neural Network for Unsupervised Domain Adaptation in Heterogeneous Scenarios

    © 2019 IEEE. How to leverage knowledge from a labeled (source) domain to help classify an unlabeled (target) domain is a key problem in the machine learning field. Unsupervised domain adaptation (UDA) provides a solution to this problem and has been well developed for two homogeneous domains. However, when the target domain is unlabeled and heterogeneous with respect to the source domain, current UDA models cannot accurately transfer knowledge from the source domain to the target domain. Benefiting from the development of neural networks, this paper presents a new neural network, the shared fuzzy equivalence relations neural network (SFER-NN), to address the heterogeneous UDA (HeUDA) problem. SFER-NN transfers knowledge across two domains according to shared fuzzy equivalence relations that can simultaneously cluster the features of both domains into several categories. Based on the clustered categories, SFER-NN is constructed to minimize the discrepancy between the two domains. Compared with previous works, SFER-NN is more capable of minimizing this discrepancy and, as a result, delivers better performance than previous studies on two public datasets.
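
    The shared-fuzzy-equivalence-relation idea can be illustrated very roughly as follows (my own toy reduction, not the SFER-NN architecture; it assumes equal feature counts in the two domains for brevity): build a fuzzy similarity relation over the features, take its max-min transitive closure to obtain a fuzzy equivalence relation, and use a lambda-cut to group features into shared categories whose statistics can be compared across domains.

        import numpy as np

        def transitive_closure(R):
            # Max-min transitive closure: iterate R := max(R, R o R) until
            # stable, turning a fuzzy similarity into an equivalence relation.
            while True:
                R2 = np.maximum(R, np.max(np.minimum(R[:, :, None],
                                                     R[None, :, :]), axis=1))
                if np.allclose(R2, R):
                    return R2
                R = R2

        rng = np.random.default_rng(0)
        Xs = rng.normal(size=(300, 6))   # source-domain features
        Xt = rng.normal(size=(300, 6))   # heterogeneous target-domain features

        # Fuzzy similarity between features, pooled over the two domains.
        sim = 0.5 * (np.abs(np.corrcoef(Xs.T)) + np.abs(np.corrcoef(Xt.T)))
        R = transitive_closure(sim)

        # lambda-cut: features i and j share a category iff R[i, j] >= lam.
        # Transitivity of R makes each row of the cut a full equivalence class.
        lam, n = 0.6, R.shape[0]
        cut = R >= lam
        labels = -np.ones(n, dtype=int)
        for i in range(n):
            if labels[i] < 0:
                labels[cut[i]] = i

        # Per-category mean gap as a crude category-level domain discrepancy.
        for c in np.unique(labels):
            gap = abs(Xs[:, labels == c].mean() - Xt[:, labels == c].mean())
            print(f"category {c}: discrepancy {gap:.3f}")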