Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises cross-dataset recognition
into seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. This comprehensive problem-oriented
review of the advances in transfer learning has revealed not only the
challenges in transfer learning for visual recognition, but also the problems
(eight of the seventeen) that have scarcely been studied. This survey thus
presents an up-to-date technical review for researchers, as well as a
systematic approach and a reference for machine learning practitioners to
categorise a real problem and look up a possible solution accordingly.
Unsupervised Domain Adaptation on Reading Comprehension
Reading comprehension (RC) has been studied on a variety of datasets, with
performance boosted by deep neural networks. However, the generalization
capability of these models across different domains remains unclear. To
address this issue, we investigate unsupervised domain adaptation for RC,
wherein a model is trained on a labeled source domain and applied to a target
domain with only unlabeled samples. We first show that even with powerful BERT
contextual representations, performance is still unsatisfactory when a model
trained on one dataset is directly applied to another target dataset. To solve
this, we propose a novel conditional adversarial self-training method (CASe).
Specifically, our approach leverages a BERT model fine-tuned on the source
dataset, along with confidence filtering, to generate reliable pseudo-labeled
samples in the target domain for self-training. In addition, it reduces the
domain distribution discrepancy through conditional adversarial learning
across domains. Extensive experiments show our approach achieves accuracy
comparable to supervised models on multiple large-scale benchmark datasets.
Comment: 8 pages, 6 figures, 5 tables, Accepted by AAAI 202
Transfer learning for time series classification
Transfer learning for deep neural networks is the process of first training a
base network on a source dataset, and then transferring the learned features
(the network's weights) to a second network to be trained on a target dataset.
This idea has been shown to improve deep neural networks' generalization
capabilities in many computer vision tasks such as image recognition and object
localization. Apart from these applications, deep Convolutional Neural Networks
(CNNs) have also recently gained popularity in the Time Series Classification
(TSC) community. However, unlike for image recognition problems, transfer
learning techniques have not yet been investigated thoroughly for the TSC task.
This is surprising as the accuracy of deep learning models for TSC could
potentially be improved if the model is fine-tuned from a pre-trained neural
network instead of training it from scratch. In this paper, we fill this gap by
investigating how to transfer deep CNNs for the TSC task. To evaluate the
potential of transfer learning, we performed extensive experiments using the
UCR archive which is the largest publicly available TSC benchmark containing 85
datasets. For each dataset in the archive, we pre-trained a model and then
fine-tuned it on the other datasets resulting in 7140 different deep neural
networks. These experiments revealed that transfer learning can improve or
degrade the model's predictions depending on the dataset used for transfer.
Therefore, in an effort to predict the best source dataset for a given target
dataset, we propose a new method relying on Dynamic Time Warping (DTW) to
measure inter-dataset similarity. We describe how our method can guide the
choice of source dataset, leading to an improvement in accuracy on 71 of the
85 datasets.
Comment: Accepted at IEEE International Conference on Big Data 201
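A minimal sketch of the DTW distance that could underlie such an inter-dataset
similarity measure is shown below. This is the textbook dynamic-programming
formulation for two univariate series; how the paper aggregates per-series
distances into a dataset-level similarity is not reproduced here.

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-programming Dynamic Time
    Warping distance between two univariate time series."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = DTW cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

A source dataset whose series sit at a small DTW distance from the target's
series would be ranked as a promising candidate for pre-training.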