Instance-based Deep Transfer Learning
Deep transfer learning has recently attracted significant research interest.
It makes use of pre-trained models learned in a source domain and applies
them to tasks in a target domain. Model-based deep
transfer learning is probably the most frequently used method. However, very
little research work has been devoted to enhancing deep transfer learning by
focusing on the influence of data. In this paper, we propose an instance-based
approach to improve deep transfer learning in a target domain. Specifically, we
choose a pre-trained model from a source domain and apply this model to
estimate the influence of training samples in a target domain. Then we optimize
the training data of the target domain by removing the training samples that
will lower the performance of the pre-trained model. We then either fine-tune
the pre-trained model with the optimized training data, or build a new model
that is partially initialized from the pre-trained model and fine-tune it with
the optimized training data in the target domain.
Using this approach, transfer learning helps deep learning models capture
more useful features. Extensive experiments demonstrate the effectiveness of
our approach in boosting the quality of deep learning models for common
computer vision tasks, such as image classification.
Comment: Accepted to WACV 2019. This is a preprint version.
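As an illustration of the data-filtering idea in this abstract, here is a minimal sketch (not the paper's implementation): each target-domain sample is scored by the loss a frozen source-pretrained model assigns to it, the highest-loss samples are dropped, and the model is fine-tuned on the remainder. The loss-based influence proxy, the drop_fraction parameter, and all function names are assumptions made for illustration.

```python
# Hedged sketch: loss under a frozen pre-trained model as a stand-in for
# per-sample influence; drop the worst samples, then fine-tune on the rest.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, Subset

def score_samples(model, dataset, device="cpu"):
    """Per-sample loss under the frozen pre-trained model (influence proxy)."""
    model.eval().to(device)
    loss_fn = nn.CrossEntropyLoss(reduction="none")
    scores = []
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=64):
            scores.append(loss_fn(model(x.to(device)), y.to(device)).cpu())
    return torch.cat(scores)

def filter_training_set(dataset, scores, drop_fraction=0.1):
    """Keep the lowest-loss samples; discard those presumed most harmful."""
    keep = scores.argsort()[: int(len(scores) * (1 - drop_fraction))]
    return Subset(dataset, keep.tolist())

def fine_tune(model, dataset, epochs=3, lr=1e-3, device="cpu"):
    model.train().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in DataLoader(dataset, batch_size=64, shuffle=True):
            opt.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            opt.step()
    return model

if __name__ == "__main__":
    # Toy target-domain data and a toy "pre-trained" classifier.
    x = torch.randn(500, 20)
    y = (x[:, 0] > 0).long()
    target_data = TensorDataset(x, y)
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    scores = score_samples(model, target_data)
    cleaned = filter_training_set(target_data, scores, drop_fraction=0.1)
    fine_tune(model, cleaned)
```

The influence estimate could be replaced by any per-sample score computed with the pre-trained model; the filtering and fine-tuning steps would stay the same.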
Dissimilarity-based Ensembles for Multiple Instance Learning
In multiple instance learning, objects are sets (bags) of feature vectors
(instances) rather than individual feature vectors. In this paper we address
the problem of how these bags can best be represented. Two standard approaches
are to use (dis)similarities between bags and prototype bags, or between bags
and prototype instances. The first approach results in a relatively
low-dimensional representation determined by the number of training bags, while
the second approach results in a relatively high-dimensional representation,
determined by the total number of instances in the training set. In this paper
a third, intermediate approach is proposed, which links the two approaches and
combines their strengths. Our classifier is inspired by a random subspace
ensemble, and considers subspaces of the dissimilarity space, defined by
subsets of instances, as prototypes. We provide guidelines for using such an
ensemble, and show state-of-the-art performance on a range of multiple
instance learning problems.
Comment: Submitted to IEEE Transactions on Neural Networks and Learning Systems, Special Issue on Learning in Non-(geo)metric Space
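A rough sketch of this intermediate representation, under simplifying assumptions: each base learner in the ensemble sees the dissimilarities from every bag to one random subset of training instances, and the ensemble averages the base learners' outputs. The minimum-distance bag dissimilarity, the logistic-regression base learner, and all names are illustrative choices, not the authors' exact design.

```python
# Hedged sketch: random subspaces of a bag-to-instance dissimilarity space.
import numpy as np
from sklearn.linear_model import LogisticRegression

def bag_to_instances_dissim(bag, prototypes):
    """For each prototype instance, the distance to the closest instance in the bag."""
    d = np.linalg.norm(bag[:, None, :] - prototypes[None, :, :], axis=2)
    return d.min(axis=0)

def fit_ensemble(bags, labels, n_learners=10, subspace_size=20, seed=0):
    rng = np.random.default_rng(seed)
    all_instances = np.vstack(bags)
    ensemble = []
    for _ in range(n_learners):
        # One random subset of training instances defines one subspace.
        idx = rng.choice(len(all_instances),
                         size=min(subspace_size, len(all_instances)), replace=False)
        protos = all_instances[idx]
        X = np.array([bag_to_instances_dissim(b, protos) for b in bags])
        ensemble.append((protos, LogisticRegression(max_iter=1000).fit(X, labels)))
    return ensemble

def predict_proba(ensemble, bags):
    votes = [clf.predict_proba(
                 np.array([bag_to_instances_dissim(b, protos) for b in bags]))[:, 1]
             for protos, clf in ensemble]
    return np.mean(votes, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ys = rng.integers(0, 2, 40)
    bags = [rng.normal(loc=2.0 * y, size=(rng.integers(3, 8), 5)) for y in ys]
    ens = fit_ensemble(bags, ys)
    print(predict_proba(ens, bags[:5]).round(2))
```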
Incremental learning of independent, overlapping, and graded concept descriptions with an instance-based process framework
Supervised learning algorithms make several simplifying assumptions concerning the characteristics of the concept descriptions to be learned. For example, concepts are often assumed to be (1) defined with respect to the same set of relevant attributes, (2) disjoint in instance space, and (3) characterized by uniform instance distributions. While these assumptions constrain the learning task, they unfortunately limit an algorithm's applicability. We believe that supervised learning algorithms should instead learn attribute relevancies independently for each concept, allow instances to be members of any subset of concepts, and represent graded concept descriptions. This paper introduces a process framework for instance-based learning algorithms that exploit only specific instance and performance feedback information to guide their concept learning processes. We also introduce Bloom, a specific instantiation of this framework. Bloom is a supervised, incremental, instance-based learning algorithm that learns relative attribute relevancies independently for each concept, allows instances to be members of any subset of concepts, and represents graded concept memberships. We present empirical evidence to support our claims that Bloom can learn independent, overlapping, and graded concept descriptions.
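A loose sketch of the kind of learner the abstract describes, under heavy assumptions: one instance store and one attribute-weight vector per concept, weights nudged by performance feedback, and graded membership read off a weighted nearest-neighbour similarity. The update rule, similarity function, and class name are invented for illustration and are not Bloom itself.

```python
# Hedged sketch: per-concept attribute relevancies, overlapping concepts,
# and graded memberships in an incremental instance-based learner.
import numpy as np

class PerConceptIBL:
    def __init__(self, n_attrs, lr=0.05):
        self.lr = lr
        self.n_attrs = n_attrs
        self.stores = {}   # concept -> list of stored instances
        self.weights = {}  # concept -> attribute-relevance vector

    def _similarity(self, concept, x):
        store = self.stores.get(concept, [])
        if not store:
            return 0.0
        w = self.weights[concept]
        dists = [np.sqrt(np.sum(w * (x - s) ** 2)) for s in store]
        return 1.0 / (1.0 + min(dists))  # graded membership in (0, 1]

    def predict(self, x):
        """Graded membership for every concept; an instance may belong to several."""
        x = np.asarray(x, dtype=float)
        return {c: self._similarity(c, x) for c in self.stores}

    def train(self, x, true_concepts, threshold=0.5):
        x = np.asarray(x, dtype=float)
        for c in set(true_concepts) | set(self.stores):
            self.stores.setdefault(c, [])
            self.weights.setdefault(c, np.ones(self.n_attrs))
            member = c in true_concepts
            predicted = self._similarity(c, x) >= threshold
            if member:
                self.stores[c].append(x)
            if predicted != member and self.stores[c]:
                # Feedback-driven relevance update: attributes agreeing with the
                # nearest stored instance become more (or less) relevant.
                nearest = min(self.stores[c], key=lambda s: np.linalg.norm(s - x))
                agree = np.exp(-np.abs(nearest - x))
                self.weights[c] += self.lr * (agree if member else -agree)
                self.weights[c] = np.clip(self.weights[c], 0.01, None)

if __name__ == "__main__":
    model = PerConceptIBL(n_attrs=3)
    model.train([1.0, 0.0, 0.2], {"round", "red"})  # one instance, two concepts
    model.train([0.9, 1.0, 0.8], {"round"})
    print(model.predict([1.0, 0.1, 0.3]))
```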
Do not forget: Full memory in memory-based learning of word pronunciation
Memory-based learning, keeping full memory of learning material, appears to be
a viable approach to learning NLP tasks, and is often superior in generalisation
accuracy to eager learning approaches that abstract from learning material.
Here we investigate three partial memory-based learning approaches which remove
from memory specific task instance types estimated to be exceptional. The three
approaches each implement one heuristic function for estimating exceptionality
of instance types: (i) typicality, (ii) class prediction strength, and (iii)
friendly-neighbourhood size. Experiments are performed with the memory-based
learning algorithm IB1-IG trained on English word pronunciation. We find that
removing instance types with low prediction strength (ii) is the only tested
method which does not seriously harm generalisation accuracy. We conclude that
keeping full memory of types rather than tokens, and excluding minority
ambiguities appear to be the only performance-preserving optimisations of
memory-based learning.
Comment: uses conll98, epsf, and ipamacs (WSU IPA
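A small sketch of the class-prediction-strength criterion (ii) under simplifying assumptions: each stored item's CPS is taken as the fraction of correct predictions it makes for the training items whose nearest neighbour it is, and low-CPS items are pruned before plain 1-NN classification. The exact CPS definition, the threshold, and the unweighted distance are assumptions; IB1-IG additionally weights features by information gain.

```python
# Hedged sketch: prune memory items with low class prediction strength (CPS).
import numpy as np

def class_prediction_strength(X, y):
    """CPS per training item, using plain 1-NN over the other items."""
    n = len(X)
    correct = np.zeros(n)
    covered = np.zeros(n)
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf
        nn = int(np.argmin(dists))           # nearest neighbour of item i
        covered[nn] += 1
        correct[nn] += float(y[nn] == y[i])  # did item nn predict i correctly?
    # Items never used as a nearest neighbour default to CPS 1.0 (kept).
    return np.where(covered > 0, correct / np.maximum(covered, 1), 1.0)

def prune_memory(X, y, min_cps=0.5):
    keep = class_prediction_strength(X, y) >= min_cps
    return X[keep], y[keep]

def knn_predict(X_mem, y_mem, queries):
    return np.array([y_mem[int(np.argmin(np.linalg.norm(X_mem - q, axis=1)))]
                     for q in queries])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)  # noisy labels
    Xp, yp = prune_memory(X, y, min_cps=0.5)
    print(len(X), "->", len(Xp), "stored items after CPS pruning")
    print(knn_predict(Xp, yp, X[:5]))
```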
Comparing instance-averaging with instance-saving learning algorithms
The goal of our research is to understand the power and appropriateness of instance-based representations and their associated acquisition methods. This paper concerns two methods for reducing storage requirements for instance-based learning algorithms. The first method, termed instance-saving, represents concept descriptions by selecting and storing a representative subset of the given training instances. We provide an analysis for instance-saving techniques and specify one general class of concepts that instance-saving algorithms are capable of learning. The second method, termed instance-averaging, represents concept descriptions by averaging together some training instances while simply saving others. We describe why analyses for instance-averaging algorithms are difficult to produce. Our empirical results indicate that storage requirements for these two methods are roughly equivalent. We outline the assumptions of instance-averaging algorithms and describe how their violation might degrade performance. To mitigate the effects of non-convex concepts, a dynamic thresholding technique is introduced and applied in both the averaging and non-averaging learning algorithms. Thresholding increases the storage requirements but also increases the quality of the resulting concept descriptions.
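To make the contrast concrete, here is a toy sketch (not the paper's algorithms): the instance-saving variant stores an incoming instance only when the nearest stored point misclassifies it, while the instance-averaging variant additionally folds a correctly classified instance into the nearest stored point, when that point carries the same label, as a running average. All names and the streaming protocol are illustrative assumptions.

```python
# Hedged sketch: instance-saving vs. instance-averaging storage reduction.
import numpy as np

def nearest(memory, x):
    dists = [np.linalg.norm(m["center"] - x) for m in memory]
    return memory[int(np.argmin(dists))]

def train(stream, averaging=False):
    memory = []
    for x, y in stream:
        x = np.asarray(x, dtype=float)
        if not memory:
            memory.append({"center": x, "label": y, "count": 1})
            continue
        m = nearest(memory, x)
        if m["label"] == y and averaging:
            # Instance-averaging: fold the instance into the nearest stored
            # point of the same class as a running mean.
            m["count"] += 1
            m["center"] += (x - m["center"]) / m["count"]
        elif m["label"] != y:
            # Instance-saving: keep only instances the current memory gets wrong.
            memory.append({"center": x, "label": y, "count": 1})
    return memory

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 2))
    y = (X[:, 0] > 0).astype(int)
    X[y == 1] += 1.5  # separate the classes a little
    stream = list(zip(X, y))
    print("saving:   ", len(train(stream, averaging=False)), "stored points")
    print("averaging:", len(train(stream, averaging=True)), "stored points")
```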
An Active Instance-based Machine Learning method for Stellar Population Studies
We have developed a method for fast and accurate determination of stellar
population parameters, in order to apply it to high-resolution galaxy
spectra. The method is based on an optimization technique that combines active
learning with an instance-based machine learning algorithm. We tested the
method with the retrieval of the star-formation history and dust content in
"synthetic" galaxies with a wide range of S/N ratios. The "synthetic" galaxies
were constructed using two different grids of high-resolution theoretical
population synthesis models. The results of our controlled experiment show
that our method can estimate with good speed and accuracy the parameters of the
stellar populations that make up the galaxy even for very low S/N input. For a
spectrum with S/N=5, the typical average deviation between the input and fitted
spectrum is less than 10^{-5}. Additional improvements are achieved using
prior knowledge.
Comment: 14 pages, 25 figures, accepted by Monthly Notice
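As a rough illustration of coupling active learning with an instance-based estimator (not the authors' method): a k-nearest-neighbour regressor is fit on a small labelled set of synthetic spectra and repeatedly queries the pool spectrum whose labelled neighbours disagree most about the target parameter. The disagreement criterion, the single toy parameter, and all names are assumptions.

```python
# Hedged sketch: active learning driven by neighbour disagreement, on top of
# an instance-based (k-NN) regressor over synthetic spectra.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor, NearestNeighbors

def neighbour_spread(X_train, y_train, X_pool, k=5):
    """Std. dev. of the k nearest labelled targets for each pool point."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(X_pool)
    return y_train[idx].std(axis=1)

def active_knn_fit(X_pool, y_pool, n_init=10, n_queries=30, k=5, seed=0):
    rng = np.random.default_rng(seed)
    labelled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    for _ in range(n_queries):
        unlabelled = np.setdiff1d(np.arange(len(X_pool)), labelled)
        spread = neighbour_spread(X_pool[labelled], y_pool[labelled],
                                  X_pool[unlabelled], k=k)
        labelled.append(int(unlabelled[np.argmax(spread)]))  # most ambiguous point
    model = KNeighborsRegressor(n_neighbors=k).fit(X_pool[labelled], y_pool[labelled])
    return model, labelled

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ages = rng.uniform(0, 10, 400)  # toy stellar-population parameter
    spectra = np.outer(ages, np.linspace(1, 2, 50)) + rng.normal(scale=0.2, size=(400, 50))
    model, used = active_knn_fit(spectra, ages)
    print("labelled spectra used:", len(used))
    print("typical error:", np.mean(np.abs(model.predict(spectra) - ages)).round(3))
```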
