Few-Shot Transfer Learning to improve Chest X-Ray pathology detection using limited triplets
Deep learning approaches applied to medical imaging have reached near-human
or better-than-human performance on many diagnostic tasks. For instance, the
CheXpert competition on detecting pathologies in chest x-rays has shown
excellent multi-class classification performance. However, training and
validating deep learning models require extensive collections of images, and
even validated models still produce false inferences, as identified by a
human-in-the-loop. In this
paper, we introduce a practical approach to improve the predictions of a
pre-trained model through Few-Shot Learning (FSL). After training and
validating a model, a small number of false inference images are collected to
retrain the model using \textbf{\textit{Image Triplets}}: a false positive or
false negative, a true positive, and a true negative. The retrained FSL model
produces considerable gains in performance with only a few epochs and few
images. In addition, FSL opens rapid retraining opportunities for
human-in-the-loop systems, where a radiologist can relabel false inferences,
and the model can be quickly retrained. We compare our retrained model's
performance with existing FSL approaches in medical imaging that train and
evaluate models at once.
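The abstract does not specify the retraining objective for the Image Triplets; one plausible reading is a standard margin-based triplet loss over embeddings of the three triplet members, with the false inference serving as the anchor. A minimal sketch, with illustrative (not the authors') names:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss on embedding vectors.

    anchor:   embedding of the false inference (FP or FN) image
    positive: embedding of a correctly classified image of the target class
    negative: embedding of a correctly classified image of the other class
    """
    d_pos = np.linalg.norm(anchor - positive)  # distance to pull down
    d_neg = np.linalg.norm(anchor - negative)  # distance to push up
    return max(0.0, d_pos - d_neg + margin)

# the loss vanishes once the negative is at least `margin` farther
# from the anchor than the positive is
a = np.array([0.0, 0.0])
p = np.array([1.0, 0.0])
print(triplet_loss(a, p, np.array([3.0, 0.0])))  # 0.0 (constraint satisfied)
print(triplet_loss(a, p, np.array([1.5, 0.0])))  # 0.5 (negative too close)
```

Summing this loss over the few collected triplets and backpropagating for a few epochs would match the "few epochs, few images" retraining regime described above.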
Simple but Effective Unsupervised Classification for Specified Domain Images: A Case Study on Fungi Images
High-quality labeled datasets are essential for deep learning. Traditional
manual annotation methods are not only costly and inefficient but also pose
challenges in specialized domains where expert knowledge is needed.
Self-supervised methods, despite leveraging unlabeled data for feature
extraction, still require hundreds or thousands of labeled instances to guide
the model for effective specialized image classification. Current unsupervised
learning methods offer automatic classification without prior annotation but
often compromise on accuracy. As a result, efficiently procuring high-quality
labeled datasets remains a pressing challenge for specialized domain images
devoid of annotated data. Addressing this, an unsupervised classification
method with three key ideas is introduced: 1) dual-step feature dimensionality
reduction using a pre-trained model and manifold learning, 2) a voting
mechanism from multiple clustering algorithms, and 3) post-hoc instead of prior
manual annotation. This approach outperforms supervised methods in
classification accuracy, as demonstrated with fungal image data, achieving
94.1% and 96.7% on public and private datasets respectively. The proposed
unsupervised classification method reduces dependency on pre-annotated
datasets, enabling a closed loop for data classification. Its simplicity and
ease of use will also make it easier for researchers in various fields to
build datasets, promoting AI applications for images in specialized domains.
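The three key ideas can be sketched end-to-end with scikit-learn. The concrete choices below are illustrative assumptions, since the abstract does not name the exact algorithms: PCA as the first reduction step and t-SNE as the manifold step, with KMeans, agglomerative, and Gaussian-mixture clusterers voting after their arbitrary cluster ids are aligned (deep features from a pre-trained model are assumed already extracted).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GaussianMixture

def align(labels, reference, k):
    """Relabel cluster ids so they best match a reference clustering."""
    mapped = np.empty_like(labels)
    for i in range(k):
        overlap = [np.sum((labels == i) & (reference == j)) for j in range(k)]
        mapped[labels == i] = int(np.argmax(overlap))
    return mapped

def cluster_with_voting(deep_features, k=2, seed=0):
    # idea 1: dual-step reduction -- linear (PCA), then manifold (t-SNE)
    x = PCA(n_components=10, random_state=seed).fit_transform(deep_features)
    x = TSNE(n_components=2, perplexity=10, random_state=seed).fit_transform(x)
    # idea 2: several clustering algorithms each propose labels
    proposals = [
        KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(x),
        AgglomerativeClustering(n_clusters=k).fit_predict(x),
        GaussianMixture(n_components=k, random_state=seed).fit_predict(x),
    ]
    votes = np.stack([align(p, proposals[0], k) for p in proposals])
    # majority vote per sample; idea 3 (post-hoc annotation) then only
    # requires naming each resulting cluster once
    return np.array([np.bincount(col, minlength=k).argmax() for col in votes.T])
```

With well-separated feature clusters, the vote simply confirms a unanimous result; its value in practice is overriding the occasional clusterer that disagrees.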
DBDC-SSL: Deep Brownian Distance Covariance with Self-supervised Learning for Few-shot Image Classification
Few-shot image classification remains a persistent challenge due to the intrinsic difficulty
faced by visual recognition models in achieving generalization with limited training data. Existing methods
primarily focus on exploiting marginal distributions and overlook the disparity between the product of
marginals and the joint characteristic functions. This can lead to less robust feature representations. In this
paper, we introduce DBDC-SSL, a method that aims to improve few-shot visual recognition models by
learning a feature extractor that produces more robust image representations. To improve the
robustness of the model, we integrate DeepBDC (DBDC) during the training process to learn better
feature embeddings by effectively computing the disparity between the product of the marginals and joint
characteristic functions of the features. To reduce overfitting and improve the generalization of the model,
we utilize an auxiliary rotation loss for self-supervised learning (SSL) in the training of the feature
extractor. The auxiliary rotation loss is derived from a pretext task, where input images undergo rotation
by predefined angles, and the model classifies the rotation angle based on the features it generates.
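A rotation pretext batch of the kind described can be sketched as follows (an illustration under stated assumptions, not the actual DBDC-SSL data pipeline): each input image is rotated by the four right angles, and the rotation index becomes the self-supervised label that the auxiliary head must predict.

```python
import numpy as np

def make_rotation_batch(images):
    """Build a self-supervised rotation batch: each image is rotated by
    0, 90, 180 and 270 degrees; the rotation index k is the pretext label."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):                 # k * 90 degrees
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

img = np.arange(16).reshape(4, 4)          # toy single-channel image
batch, pretext_labels = make_rotation_batch([img])
print(batch.shape)          # (4, 4, 4): four rotated copies
print(pretext_labels)       # [0 1 2 3]
```

The cross-entropy on these pretext labels would be the auxiliary rotation loss added to the main few-shot objective.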
Experimental results demonstrate that DBDC-SSL outperforms current state-of-the-art methods on
4 common few-shot image classification benchmarks: miniImageNet, tieredImageNet, CUB and
CIFAR-FS. For 5-way 1-shot and 5-way 5-shot tasks respectively, the proposed DBDC-SSL achieves
accuracies of 68.64±0.43 and 86.02±0.28 on miniImageNet, 73.88±0.48 and 89.03±0.29 on tieredImageNet,
84.67±0.39 and 94.76±0.16 on CUB, and 75.60±0.44 and 88.49±0.31 on CIFAR-FS.
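The "disparity between the product of the marginals and the joint characteristic functions" is precisely what Brownian distance covariance measures: it is zero iff the two variables are independent. A minimal 1-D empirical version, as a sketch rather than the paper's deep-embedded DeepBDC layer:

```python
import numpy as np

def distance_covariance(x, y):
    """Empirical squared distance covariance of paired 1-D samples.

    Equals the weighted L2 gap between the joint characteristic function
    and the product of the marginal characteristic functions, so it is
    ~0 for independent samples and positive under dependence.
    """
    a = np.abs(x[:, None] - x[None, :])    # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # double-center each distance matrix (row, column and grand means)
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean()                  # V-statistic estimate of dCov^2

rng = np.random.default_rng(1)
x = rng.normal(size=200)
dep = distance_covariance(x, 2.0 * x)             # strongly dependent pair
ind = distance_covariance(x, rng.normal(size=200))  # independent pair
print(dep > ind)  # True: dependence inflates the statistic
```

DeepBDC applies this idea to the channels of deep feature maps rather than scalar samples, but the centering-and-product structure is the same.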