Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises cross-dataset recognition
into seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. This comprehensive problem-oriented
review of the advances in transfer learning has revealed not only the
challenges in transfer learning for visual recognition, but also the problems
(eight of the seventeen) that have been scarcely studied. This survey not only
presents an up-to-date technical review for researchers, but also offers a
systematic approach and a reference for machine learning practitioners to
categorise a real problem and look up a possible solution accordingly.
Transfer Learning for Speech and Language Processing
Transfer learning is a vital technique that generalizes models trained for
one setting or task to other settings or tasks. For example, in speech
recognition, an acoustic model trained for one language can be used to
recognize speech in another language, with little or no re-training data.
Transfer learning is closely related to multi-task learning (cross-lingual vs.
multilingual), and has traditionally been studied under the name of `model
adaptation'. Recent advances in deep learning show that transfer learning
becomes much easier and more effective with high-level abstract features
learned by deep models, and the `transfer' can be conducted not only between
data distributions and data types, but also between model structures (e.g.,
shallow nets and deep nets) or even model types (e.g., Bayesian models and
neural models). This review paper summarizes some recent prominent research in
this direction, particularly for speech and language processing. We also
report some results from our group and highlight the potential of this very
interesting research field.
Comment: 13 pages, APSIPA 201
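The idea described above, reusing high-level abstract features learned by a deep model and re-training only a small task-specific part on limited target data, can be illustrated with a minimal sketch. This is not the paper's method; all names and shapes below are hypothetical, with a fixed random projection standing in for pre-trained source-task layers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" feature extractor: a frozen projection
# standing in for layers trained on the source language/task.
W_feat = rng.normal(size=(20, 8))

def features(x):
    """High-level abstract features from the frozen pre-trained layers."""
    return np.tanh(x @ W_feat)

# Target task with little re-training data, as in cross-lingual adaptation.
X_target = rng.normal(size=(30, 20))
y_target = rng.normal(size=(30, 2))

# Transfer: keep W_feat frozen, fit only a new output head on target data
# via least squares.
H = features(X_target)
W_head, *_ = np.linalg.lstsq(H, y_target, rcond=None)

preds = features(X_target) @ W_head
print(preds.shape)  # (30, 2)
```

Only the small head matrix is estimated from target data, which is why adaptation can work with far fewer examples than training the whole model from scratch.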
Structure propagation for zero-shot learning
The key to zero-shot learning (ZSL) is finding an information transfer model
that bridges the gap between images and semantic information (texts or
attributes). Existing ZSL methods usually construct the compatibility function
between images and class labels while considering the relevance of the
semantic classes (the manifold structure of semantic classes). However, the
relationship between image classes (the manifold structure of image classes)
is also very important for constructing the compatibility model. Because the
relationship among image classes is difficult to capture when classes are
unseen, the manifold structure of image classes is often ignored in ZSL. To
let the manifold structure of image classes and that of semantic classes
complement each other, we propose structure propagation (SP) to improve the
classification performance of ZSL. SP can jointly consider the manifold
structure of image classes and that of semantic classes to approximate the
intrinsic structure of object classes. Moreover, SP can describe the
constraint condition between the compatibility function and these manifold
structures, balancing the influence of the structure propagation iteration.
The SP solution provides not only unseen class labels but also the
relationship of the two manifold structures, which encodes the positive
transfer in structure propagation. Experimental results demonstrate that SP
attains promising results on the AwA, CUB, Dogs and SUN databases.
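The compatibility function between images and class labels mentioned above is commonly realised as a bilinear form F(x, c) = xᵀ W a_c, where a_c is the attribute vector of class c; unseen classes are then recognised by scoring an image against their attribute vectors. The sketch below shows that common baseline, not the paper's structure propagation, and all dimensions and the ridge-regression fit for W are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 seen classes, 2 unseen classes, 5-dim attributes.
attrs_seen = rng.normal(size=(4, 5))
attrs_unseen = rng.normal(size=(2, 5))

# Image features (10-dim) and labels for seen-class training data.
X = rng.normal(size=(40, 10))
y = rng.integers(0, 4, size=40)

# Bilinear compatibility F(x, c) = x^T W a_c.  Fit W by ridge regression
# mapping each image's features onto its class attribute vector.
A = attrs_seen[y]                     # (40, 5) per-sample target attributes
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ A)  # (10, 5)

# Zero-shot inference: score a test image against *unseen* class attributes
# and pick the most compatible unseen class.
x_test = rng.normal(size=(10,))
scores = attrs_unseen @ (W.T @ x_test)
pred = int(np.argmax(scores))
print(pred)
```

Structure propagation, as the abstract describes, goes further by also exploiting the manifold structure among image classes rather than relying on the attribute mapping alone.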