Cats or CAT scans: transfer learning from natural or medical image source datasets?
Transfer learning is a widely used strategy in medical image analysis.
Instead of only training a network with a limited amount of data from the
target task of interest, we can first train the network with other, potentially
larger source datasets, creating a more robust model. The source datasets do
not have to be related to the target task. For a classification task in lung CT
images, we could use either head CT images or images of cats as the source.
While head CT images appear more similar to lung CT images, the number and
diversity of cat images might lead to a better model overall. In this survey we
review a number of papers that have performed similar comparisons. Although the
answer to which strategy is best seems to be "it depends", we discuss a number
of research directions we need to take as a community, to gain more
understanding of this topic.
Comment: Accepted to Current Opinion in Biomedical Engineering
On Classification with Bags, Groups and Sets
Many classification problems can be difficult to formulate directly in terms
of the traditional supervised setting, where both training and test samples are
individual feature vectors. There are cases in which samples are better
described by sets of feature vectors, in which labels are available only for
sets rather than for individual samples, or in which individual labels are
available but not independent. To better deal with such problems, several
extensions of supervised learning have been proposed, where either training
and/or test objects are sets of feature vectors. However, having been proposed
rather independently of each other, their mutual similarities and differences
have hitherto not been mapped out. In this work, we provide an overview of such
learning scenarios, propose a taxonomy to illustrate the relationships between
them, and discuss directions for further research in these areas.
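A minimal sketch of the distinction the abstract draws, with hypothetical data: in the traditional supervised setting each sample is one feature vector with its own label, while in set-based extensions a label attaches to a whole set (bag) of feature vectors.

```python
from dataclasses import dataclass
from typing import List

# A "bag" groups several feature vectors under a single label, as in
# multiple instance learning; the names here are illustrative only.
@dataclass
class Bag:
    instances: List[List[float]]  # each instance is a feature vector
    label: int                    # label attached to the set, not the instances

# Traditional setting: every sample is one feature vector with its own label.
supervised = [([0.1, 2.0], 0), ([1.3, 0.4], 1)]

# Set-based setting: the label is only available at the bag level.
bag_based = [Bag(instances=[[0.1, 2.0], [1.3, 0.4]], label=1)]

print(len(bag_based[0].instances))  # two instances share one bag label
```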
Dissimilarity-based Ensembles for Multiple Instance Learning
In multiple instance learning, objects are sets (bags) of feature vectors
(instances) rather than individual feature vectors. In this paper we address
the problem of how these bags can best be represented. Two standard approaches
are to use (dis)similarities between bags and prototype bags, or between bags
and prototype instances. The first approach results in a relatively
low-dimensional representation determined by the number of training bags, while
the second approach results in a relatively high-dimensional representation,
determined by the total number of instances in the training set. In this paper
a third, intermediate approach is proposed, which links the two approaches and
combines their strengths. Our classifier is inspired by a random subspace
ensemble, and considers subspaces of the dissimilarity space, defined by
subsets of instances, as prototypes. We provide guidelines for using such an
ensemble, and show state-of-the-art performances on a range of multiple
instance learning problems.
Comment: Submitted to IEEE Transactions on Neural Networks and Learning
Systems, Special Issue on Learning in Non-(geo)metric Spaces
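As a rough, dependency-free illustration of the dissimilarity representation and the subspace idea (toy data; the minimum instance-to-prototype distance used here is one common choice, not necessarily the paper's exact definition):

```python
import math
import random

def bag_to_prototypes(bag, prototypes):
    # Dissimilarity of a bag to each prototype instance: the minimum
    # instance-to-prototype Euclidean distance (a common, hypothetical choice).
    return [min(math.dist(inst, p) for inst in bag) for p in prototypes]

# Prototype instances pooled from training bags (toy data).
prototypes = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [0.0, 2.0]]
bag = [[0.1, 0.1], [1.9, 0.2]]

# Instance-based representation: one dimension per prototype instance.
full = bag_to_prototypes(bag, prototypes)

# Random subspace ensemble: each member sees a random subset of prototype
# instances, i.e. a subspace of the full dissimilarity space.
random.seed(0)
subspaces = [random.sample(range(len(prototypes)), 2) for _ in range(3)]
ensemble_views = [[full[j] for j in idx] for idx in subspaces]
print(len(full), [len(v) for v in ensemble_views])
```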
Multiple Instance Learning: A Survey of Problem Characteristics and Applications
Multiple instance learning (MIL) is a form of weakly supervised learning
where training instances are arranged in sets, called bags, and a label is
provided for the entire bag. This formulation is gaining interest because it
naturally fits various problems and makes it possible to leverage weakly labeled data.
Consequently, it has been used in diverse application fields such as computer
vision and document classification. However, learning from bags raises
important challenges that are unique to MIL. This paper provides a
comprehensive survey of the characteristics which define and differentiate the
types of MIL problems. Until now, these problem characteristics have not been
formally identified and described. As a result, the variations in performance
of MIL algorithms from one data set to another are difficult to explain. In
this paper, MIL problem characteristics are grouped into four broad categories:
the composition of the bags, the types of data distribution, the ambiguity of
instance labels, and the task to be performed. Methods specialized to address
each category are reviewed. Then, the extent to which these characteristics
manifest themselves in key MIL application areas is described. Finally,
experiments are conducted to compare the performance of 16 state-of-the-art MIL
methods on selected problem characteristics. This paper provides insight into
how the problem characteristics affect MIL algorithms, recommendations for
future benchmarking, and promising avenues for research.
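Under the standard MIL assumption, one of several instance-label ambiguities the survey distinguishes, a bag is positive iff at least one of its (unobserved) instance labels is positive; a minimal sketch:

```python
# Standard MIL assumption: the bag label is the disjunction of the
# instance labels, which are themselves not observed at training time.
def bag_label(instance_labels):
    return int(any(instance_labels))

assert bag_label([0, 0, 1]) == 1  # one positive instance suffices
assert bag_label([0, 0, 0]) == 0  # an all-negative bag is negative
```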
Predicting Scores of Medical Imaging Segmentation Methods with Meta-Learning
Deep learning has led to state-of-the-art results for many medical imaging
tasks, such as segmentation of different anatomical structures. With the
increased numbers of deep learning publications and openly available code, the
approach to choosing a model for a new task becomes more complicated, while
time and (computational) resources are limited. A possible solution to choosing
a model efficiently is meta-learning, a learning method in which prior
performance of a model is used to predict the performance for new tasks. We
investigate meta-learning for segmentation across ten datasets of different
organs and modalities. We propose four ways to represent each dataset by
meta-features: one based on statistical features of the images and three based
on deep learning features. We use support vector regression and deep
neural networks to learn the relationship between the meta-features and prior
model performance. On three external test datasets these methods give Dice
scores within 0.10 of the true performance. These results demonstrate the
potential of meta-learning in medical imaging.
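The prediction step can be sketched as follows, with hypothetical meta-features and Dice scores; the paper uses support vector regression, replaced here by 1-nearest-neighbour lookup in meta-feature space to keep the sketch dependency-free:

```python
import math

# Hypothetical meta-features per training dataset and the observed Dice
# score of one segmentation model on that dataset.
train = {
    "liver_ct":  ([0.8, 0.1, 0.3], 0.91),
    "brain_mri": ([0.2, 0.7, 0.5], 0.84),
    "lung_xray": ([0.6, 0.4, 0.9], 0.78),
}

def predict_dice(meta_features):
    # Predict the score observed on the nearest dataset in meta-feature space.
    nearest = min(train.values(), key=lambda fv: math.dist(fv[0], meta_features))
    return nearest[1]

# A new dataset whose meta-features lie close to "liver_ct".
print(predict_dice([0.75, 0.15, 0.35]))  # 0.91
```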
Exploring the similarity of medical imaging classification problems
Supervised learning is ubiquitous in medical image analysis. In this paper we
consider the problem of meta-learning -- predicting which methods will perform
well in an unseen classification problem, given previous experience with other
classification problems. We investigate the first step of such an approach: how
to quantify the similarity of different classification problems. We
characterize datasets sampled from six classification problems by performance
ranks of simple classifiers, and define the similarity by the inverse of
Euclidean distance in this meta-feature space. We visualize the similarities in
a 2D space, where meaningful clusters start to emerge, and show that the
proposed representation can be used to classify datasets according to their
origin with 89.3% accuracy. These findings, together with the observations of
recent trends in machine learning, suggest that meta-learning could be a
valuable tool for the medical imaging community.
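The rank-based similarity can be sketched as follows (toy classifier ranks; the inverse distance is smoothed with +1 here to avoid division by zero, a detail not taken from the paper):

```python
import math

# Each dataset is characterized by the performance ranks of a fixed set of
# simple classifiers on it (hypothetical ranks for three toy datasets).
ranks = {
    "mito_a": [1, 2, 3, 4],
    "mito_b": [1, 3, 2, 4],
    "chest":  [4, 3, 2, 1],
}

def similarity(a, b):
    # Inverse of Euclidean distance in the rank meta-feature space,
    # smoothed so that identical rankings give similarity 1.0.
    return 1.0 / (1.0 + math.dist(ranks[a], ranks[b]))

# Datasets from the same problem should rank classifiers more alike.
print(similarity("mito_a", "mito_b") > similarity("mito_a", "chest"))  # True
```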
Revisiting Hidden Representations in Transfer Learning for Medical Imaging
While a key component to the success of deep learning is the availability of
massive amounts of training data, medical image datasets are often limited in
diversity and size. Transfer learning has the potential to bridge the gap
between related yet different domains. For medical applications, however, it
remains unclear whether it is more beneficial to pre-train on natural or
medical images. We aim to shed light on this problem by comparing
initialization on ImageNet and RadImageNet on seven medical classification
tasks. We investigate their learned representations with Canonical Correlation
Analysis (CCA) and compare the predictions of the different models. We find
that overall the models pre-trained on ImageNet outperform those pre-trained
on RadImageNet. Our results show that, contrary to intuition, ImageNet and
RadImageNet converge to distinct intermediate representations, and that these
representations are even more dissimilar after fine-tuning. Despite these
distinct representations, the predictions of the models remain similar. Our
findings challenge the notion that transfer learning is effective due to the
reuse of general features in the early layers of a convolutional neural network
and show that weight similarity before and after fine-tuning is negatively
related to performance gains.
Comment: Submitted to the CHIL 2023 Track 2: Applications and Practice
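A sketch of how canonical correlations between two layers' activation matrices can be computed, via the QR-then-SVD route; the data here are toy random activations, not the paper's networks:

```python
import numpy as np

def canonical_correlations(X, Y):
    # Mean-center activations (rows: samples, columns: units/features).
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Orthonormal bases of the two column spaces; the singular values of
    # Qx^T Qy are the canonical correlations between the representations.
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))                                # layer activations
B = A @ rng.normal(size=(5, 5)) + 0.001 * rng.normal(size=(100, 5))
corrs = canonical_correlations(A, B)  # near 1: B is almost a linear map of A
print(np.round(corrs, 2))
```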