17 research outputs found
Distribution-Based Categorization of Classifier Transfer Learning
Transfer Learning (TL) aims to transfer knowledge acquired in one problem,
the source problem, onto another problem, the target problem, dispensing with
the bottom-up construction of the target model. Due to its relevance, TL has
gained significant interest in the Machine Learning community since it paves
the way to devise intelligent learning models that can easily be tailored to
many different applications. As is natural in a fast-evolving area, a wide
variety of TL methods, settings and nomenclature have been proposed so far.
However, many works report different names for the same concepts, and this
mixture of concepts and terminology obscures the TL field and hinders its
proper assessment. In this paper we present a review of the literature on the
majority of classification TL methods, together with a distribution-based
categorization of TL under a common nomenclature suited to classification
problems. Under this perspective, three main TL categories are presented,
discussed and illustrated with examples.
Transfer Learning using Computational Intelligence: A Survey
Transfer learning aims to provide a framework for utilizing previously acquired knowledge to solve new but similar problems much more quickly and effectively. In contrast to classical machine learning methods, transfer learning methods exploit the knowledge accumulated from data in auxiliary domains to facilitate predictive modeling over the different data patterns of the current domain. To improve the performance of existing transfer learning methods and handle the knowledge transfer process in real-world systems, …
Stacked Denoising Autoencoders and Transfer Learning for Immunogold Particles Detection and Recognition
In this paper we present a system for the detection of immunogold particles
and a Transfer Learning (TL) framework for the recognition of these immunogold
particles. Immunogold particles are part of a high-magnification method for the
selective localization of biological molecules at the subcellular level, only
visible through Electron Microscopy. The number of immunogold particles in the
cell walls allows the assessment of differences in their composition, providing
a tool to analyse the quality of different plants. Their quantification
requires a laborious manual labeling (or annotation) of images containing
hundreds of particles. The system proposed in this paper can significantly
alleviate the burden of this manual task.
For particle detection we use a Laplacian of Gaussian (LoG) filter coupled
with a Stacked Denoising Autoencoder (SDA). To improve recognition, we also
study the applicability of TL settings to immunogold recognition. TL reuses the
learning model of a source problem on other datasets (target problems)
containing particles of different sizes. The proposed system was developed to
solve a particular problem on maize cells, namely to determine the composition
of cell wall ingrowths in endosperm transfer cells. This novel dataset, as well
as the code for reproducing our experiments, is made publicly available.
We determined that the LoG detector alone attained an F-measure above 84%.
Building the immunogold recognizer with TL also provided superior performance
compared with the baseline models, improving accuracy rates by 10%.
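As a rough illustration of the detection step, the sketch below runs a multi-scale LoG blob detector over a micrograph using scikit-image's blob_log. The file name, sigma range and threshold are illustrative assumptions, not values from the paper.

```python
# Minimal LoG blob-detection sketch (illustrative parameters, not the paper's).
import numpy as np
from skimage import io, color
from skimage.feature import blob_log

# "em_image.png" is a hypothetical input micrograph, not a file from the paper.
image = io.imread("em_image.png")
if image.ndim == 3:                         # collapse RGB to grayscale if needed
    image = color.rgb2gray(image)
image = image.astype(float) / image.max()   # normalize intensities to [0, 1]

# Immunogold particles appear as dark blobs; invert so blobs are bright.
blobs = blob_log(1.0 - image, min_sigma=2, max_sigma=10,
                 num_sigma=9, threshold=0.1)

# Each row is (row, col, sigma); the blob radius is roughly sigma * sqrt(2).
for r, c, sigma in blobs:
    print(f"candidate particle at ({r:.0f}, {c:.0f}), "
          f"radius ~ {sigma * np.sqrt(2):.1f} px")
```

In the paper's pipeline, such candidate detections would then be passed to the SDA-based recognizer; here they are simply printed.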
Transfer learning in hierarchical feature spaces
Transfer learning provides an approach to solve target tasks more quickly and effectively by using previously acquired knowledge learned from source tasks. As one category of transfer learning approaches, feature-based transfer learning approaches aim to find a latent feature space shared between the source and target domains. The issue is that a single feature space cannot fully exploit the relationship between the source and target domains. To deal with this issue, this paper proposes a transfer learning method that uses deep learning to extract hierarchical feature spaces, so that knowledge of the source domain can be exploited and transferred in multiple feature spaces with different levels of abstraction. In the experiments, the effectiveness of transfer learning in the multiple feature spaces is compared, which helps identify the optimal feature space for transfer learning.
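One way to picture "multiple feature spaces with different levels of abstraction" is to read out a pretrained network at several depths and treat each depth as a candidate transfer space. The sketch below does this with a torchvision ResNet-18; the backbone choice and cut points are our assumptions for illustration, not the paper's architecture.

```python
# Sketch: expose several hierarchical feature spaces of a pretrained network,
# so a target-task probe can be trained on each level and compared.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.eval()

# Hierarchical feature spaces: outputs of successive residual stages.
stages = nn.ModuleDict({
    "level1": nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                            backbone.maxpool, backbone.layer1),
    "level2": backbone.layer2,
    "level3": backbone.layer3,
    "level4": backbone.layer4,
})

def hierarchical_features(x):
    """Return a dict of pooled feature vectors, one per abstraction level."""
    feats, h = {}, x
    for name, stage in stages.items():
        h = stage(h)
        feats[name] = torch.flatten(F.adaptive_avg_pool2d(h, 1), 1)
    return feats

x = torch.randn(4, 3, 224, 224)            # a dummy batch of target images
with torch.no_grad():
    for name, f in hierarchical_features(x).items():
        print(name, tuple(f.shape))         # e.g. fit one linear probe per level
```

Comparing a simple classifier trained on each level's features is one concrete way to "find the optimal feature space" for a given target task.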
Neurogenesis Deep Learning
Neural machine learning methods, such as deep neural networks (DNNs), have
achieved remarkable success in a number of complex data processing tasks. These
methods have arguably had their strongest impact on tasks such as image and
audio processing - data processing domains in which humans have long held clear
advantages over conventional algorithms. In contrast to biological neural
systems, which are capable of learning continuously, deep artificial networks
have a limited ability to incorporate new information into an already trained
network. As a result, methods for continuous learning are potentially highly
impactful in enabling the application of deep networks to dynamic data sets.
Here, inspired by the process of adult neurogenesis in the hippocampus, we
explore the potential for adding new neurons to deep layers of artificial
neural networks in order to facilitate their acquisition of novel information
while preserving previously trained data representations. Our results on the
MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes
lower- and upper-case letters and digits, demonstrate that neurogenesis is well
suited for addressing the stability-plasticity dilemma that has long challenged
adaptive machine learning algorithms.
Comment: 8 pages, 8 figures. Accepted to the 2017 International Joint Conference on Neural Networks (IJCNN 2017).
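The mechanical core of adding neurons while preserving trained representations can be sketched in a few lines: grow a hidden layer, copy the old weights into the enlarged matrices, and initialize the new units near zero so they barely perturb existing outputs. The growth rule below is an assumed simplification for illustration, not the paper's autoencoder-based procedure.

```python
# Sketch: "neurogenesis" for a fully connected hidden layer -- add new units
# between two Linear layers while keeping all previously trained weights.
import torch
import torch.nn as nn

def grow_hidden_layer(fc_in: nn.Linear, fc_out: nn.Linear, n_new: int):
    """Add n_new neurons between fc_in and fc_out, preserving old weights."""
    new_in = nn.Linear(fc_in.in_features, fc_in.out_features + n_new)
    new_out = nn.Linear(fc_out.in_features + n_new, fc_out.out_features)
    with torch.no_grad():
        # Copy previously trained weights exactly.
        new_in.weight[: fc_in.out_features] = fc_in.weight
        new_in.bias[: fc_in.out_features] = fc_in.bias
        new_out.weight[:, : fc_out.in_features] = fc_out.weight
        new_out.bias.copy_(fc_out.bias)
        # New neurons start small so they barely perturb old outputs.
        new_in.weight[fc_in.out_features:].normal_(0.0, 0.01)
        new_in.bias[fc_in.out_features:].zero_()
        new_out.weight[:, fc_out.in_features:].normal_(0.0, 0.01)
    return new_in, new_out

fc1, fc2 = nn.Linear(784, 128), nn.Linear(128, 10)   # an MNIST-sized MLP
fc1, fc2 = grow_hidden_layer(fc1, fc2, n_new=16)     # 128 -> 144 hidden units
print(fc1.weight.shape, fc2.weight.shape)
```

Because the copied weights are untouched and the new ones are near zero, the grown network initially computes (almost) the same function, and the fresh capacity is then free to absorb novel inputs.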
Towards Making Deep Transfer Learning Never Hurt
Transfer learning has frequently been used to improve deep neural network
training by incorporating the weights of pre-trained networks, both as the
starting point of optimization and for regularization. While deep transfer
learning can usually boost performance with better accuracy and faster
convergence, transferring weights from inappropriate networks hurts the
training procedure and may lead to even lower accuracy. In this paper, we view
deep transfer learning as minimizing a linear combination of the empirical loss
and a regularizer based on the pre-trained weights, where the regularizer can
restrict the training procedure from lowering the empirical loss when the two
terms have conflicting descent directions (i.e., gradients). Following this
view, we propose a novel strategy for making regularization-based Deep Transfer
learning Never Hurt (DTNH) that, at each training iteration, computes the
derivatives of the two terms separately, then re-estimates a new descent
direction that does not hurt the empirical loss minimization while preserving
the regularization effects of the pre-trained weights. Extensive experiments
have been conducted using common transfer learning regularizers, such as L2-SP
and knowledge distillation, on a wide range of deep transfer learning
benchmarks including Caltech, MIT Indoor 67, CIFAR-10 and ImageNet. The
empirical results show that the proposed descent direction estimation strategy
DTNH consistently improves the performance of deep transfer learning with all
of the above regularizers, even when transferring pre-trained weights from
inappropriate networks. All in all, DTNH improves on state-of-the-art
regularizers in all cases, with 0.1% to 7% higher accuracy in all experiments.
Comment: 10 pages.
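The core idea lends itself to a short sketch: compute the gradients of the empirical loss and of the regularizer separately, and when they conflict, strip the component of the regularizer gradient that opposes the empirical-loss descent direction. The projection rule below is our reading of the strategy, not the authors' reference implementation.

```python
# Sketch of a DTNH-style update: keep the regularizer gradient from fighting
# the empirical-loss gradient (projection rule is our interpretation).
import torch

def dtnh_direction(g_emp: torch.Tensor, g_reg: torch.Tensor) -> torch.Tensor:
    """Combine gradients; drop the part of g_reg that conflicts with g_emp."""
    dot = torch.dot(g_emp, g_reg)
    if dot < 0:  # conflicting descent directions
        g_reg = g_reg - dot / g_emp.dot(g_emp).clamp_min(1e-12) * g_emp
    return g_emp + g_reg

# Toy usage with flattened parameter gradients.
g_emp = torch.tensor([1.0, 0.0])           # gradient of the task loss
g_reg = torch.tensor([-0.5, 1.0])          # partially opposes g_emp
step = dtnh_direction(g_emp, g_reg)
print(step)                                 # tensor([1., 1.])
```

Here g_emp and g_reg would be the flattened gradients of the task loss and of a regularizer such as L2-SP (a penalty toward the pre-trained weights); the combined direction never increases the empirical loss's first-order decrease while retaining the non-conflicting part of the regularization.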