Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning
Deep neural networks require a large amount of labeled training data during
supervised learning. However, collecting and labeling so much data might be
infeasible in many cases. In this paper, we introduce a source-target selective
joint fine-tuning scheme for improving the performance of deep learning tasks
with insufficient training data. In this scheme, a target learning task with
insufficient training data is carried out simultaneously with another source
learning task with abundant training data. However, the source learning task
does not use all existing training data. Our core idea is to identify and use a
subset of training images from the original source learning task whose
low-level characteristics are similar to those from the target learning task,
and jointly fine-tune shared convolutional layers for both tasks. Specifically,
we compute descriptors from linear or nonlinear filter bank responses on
training images from both tasks, and use such descriptors to search for a
desired subset of training samples for the source learning task.
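A minimal sketch of this selection step, assuming Gabor filters as the filter bank, 32-bin response histograms as the descriptors, and a plain nearest-neighbor search (all illustrative choices, not necessarily the paper's exact pipeline):

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.neighbors import NearestNeighbors

def gabor_bank(n_orient=8, size=15):
    """Build a small Gabor-like filter bank (parameters are assumptions)."""
    filters = []
    for i in range(n_orient):
        theta = i * np.pi / n_orient
        y, x = np.mgrid[-(size // 2):size // 2 + 1,
                        -(size // 2):size // 2 + 1]
        rot = x * np.cos(theta) + y * np.sin(theta)
        # Gaussian envelope times an oriented cosine carrier
        g = np.exp(-(x**2 + y**2) / (2 * 4.0**2)) * np.cos(2 * np.pi * rot / 8.0)
        filters.append(g)
    return filters

def descriptor(img, filters):
    """Histogram of filter-bank responses as a low-level image descriptor.

    `img` is assumed to be a 2D grayscale array with roughly normalized
    values, so the fixed histogram range below is an assumption.
    """
    feats = []
    for f in filters:
        resp = convolve(img, f, mode="reflect")
        hist, _ = np.histogram(resp, bins=32, range=(-1, 1), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def select_source_subset(target_imgs, source_imgs, k=50):
    """For each target image, keep its k nearest source images by descriptor."""
    filters = gabor_bank()
    t_desc = np.stack([descriptor(im, filters) for im in target_imgs])
    s_desc = np.stack([descriptor(im, filters) for im in source_imgs])
    nn = NearestNeighbors(n_neighbors=k).fit(s_desc)
    _, idx = nn.kneighbors(t_desc)
    return np.unique(idx.ravel())  # indices of the selected source images
```

The returned indices would then define the source subset fed into the joint fine-tuning of the shared convolutional layers.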
Experiments demonstrate that our selective joint fine-tuning scheme achieves
state-of-the-art performance on multiple visual classification tasks with
insufficient training data for deep learning. Such tasks include Caltech 256,
MIT Indoor 67, Oxford Flowers 102 and Stanford Dogs 120. In comparison to
fine-tuning without a source domain, the proposed method can improve the
classification accuracy by 2% - 10% using a single model.

Comment: To appear in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017).
Many Task Learning with Task Routing
Typical multi-task learning (MTL) methods rely on architectural adjustments
and a large trainable parameter set to jointly optimize over several tasks.
However, as the number of tasks increases, so do the complexity of the
architectural adjustments and the resource requirements. In this paper, we
introduce a method which applies a conditional feature-wise transformation over
the convolutional activations that enables a model to successfully perform a
large number of tasks. To distinguish from regular MTL, we introduce Many Task
Learning (MaTL) as a special case of MTL where more than 20 tasks are performed
by a single model. Our method, dubbed Task Routing (TR), is encapsulated in a
layer we call the Task Routing Layer (TRL), which, when applied in a MaTL
scenario, successfully fits hundreds of classification tasks in one model. We evaluate
our method on 5 datasets against strong baselines and state-of-the-art
approaches.

Comment: 8 pages, 5 figures, 2 tables.
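One plausible instantiation of such a task-conditional feature-wise transformation is a fixed per-task binary channel mask; the sketch below (in PyTorch, with the random-mask scheme and keep ratio as assumptions of this sketch, not a confirmed reproduction of the paper's layer) gates convolutional activations per task:

```python
import torch
import torch.nn as nn

class TaskRoutingLayer(nn.Module):
    """Sketch of a task-conditional channel mask over conv activations.

    Each task gets a fixed binary mask that gates a subset of channels,
    so all tasks share weights but route through different sub-networks.
    """
    def __init__(self, num_channels, num_tasks, keep_ratio=0.5):
        super().__init__()
        masks = (torch.rand(num_tasks, num_channels) < keep_ratio).float()
        self.register_buffer("masks", masks)  # fixed, not trained

    def forward(self, x, task_id):
        # x: (batch, channels, H, W); zero out channels unused by this task
        return x * self.masks[task_id].view(1, -1, 1, 1)

# Usage: insert after a conv block and pass the active task's id.
# trl = TaskRoutingLayer(num_channels=64, num_tasks=100)
# y = trl(conv_out, task_id=7)
```

Because the masks are buffers rather than parameters, the per-task routing adds essentially no trainable parameters, which is what makes scaling to many tasks cheap.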
Listening to the World Improves Speech Command Recognition
We study transfer learning in convolutional network architectures applied to
the task of recognizing audio, such as environmental sound events and speech
commands. Our key finding is that not only is it possible to transfer
representations from an unrelated task like environmental sound classification
to a voice-focused task like speech command recognition, but also that doing so
improves accuracies significantly. We also investigate the effect of increased
model capacity in transfer learning for audio by first validating, on two audio
datasets (UrbanSound8k and the newly released Google Speech Commands dataset),
the known computer-vision result that increasingly deeper networks achieve
better accuracies. Then we propose a simple multiscale
input representation using dilated convolutions and show that it is able to
aggregate larger contexts and increase classification performance. Further, the
models trained using a combination of transfer learning and multiscale input
representations need only 40% of the training data to achieve accuracies
similar to those of a freshly trained model using 100% of the training data. Finally,
we demonstrate a positive interaction effect for the multiscale input and
transfer learning, making a case for the joint application of the two
techniques.

Comment: 8 pages.
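A minimal sketch of such a multiscale front end, assuming parallel 3x3 dilated convolutions over a log-mel spectrogram (the branch count, dilation rates, and input format are assumptions of this sketch):

```python
import torch
import torch.nn as nn

class MultiscaleInput(nn.Module):
    """Parallel dilated convolutions over an input spectrogram.

    Each branch uses a different dilation rate, so the concatenated
    output mixes features computed over increasingly large contexts.
    """
    def __init__(self, in_ch=1, out_ch_per_branch=16, dilations=(1, 2, 4, 8)):
        super().__init__()
        # padding = dilation keeps the spatial size unchanged for 3x3 kernels
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        ])

    def forward(self, x):
        # x: (batch, 1, freq, time), e.g. a log-mel spectrogram
        return torch.cat([b(x) for b in self.branches], dim=1)

# Usage: feed the concatenated multiscale features to a standard CNN.
# ms = MultiscaleInput()
# feats = ms(torch.randn(8, 1, 64, 101))  # -> (8, 64, 64, 101)
```

Since all branches preserve the input resolution, the output is a drop-in replacement for a single-scale first convolution while exposing larger temporal contexts to the downstream network.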