Cats or CAT scans: transfer learning from natural or medical image source datasets?
Transfer learning is a widely used strategy in medical image analysis.
Instead of only training a network with a limited amount of data from the
target task of interest, we can first train the network with other, potentially
larger source datasets, creating a more robust model. The source datasets do
not have to be related to the target task. For a classification task in lung CT
images, we could use either head CT images or images of cats as the source.
While head CT images appear more similar to lung CT images, the number and
diversity of cat images might lead to a better model overall. In this survey we
review a number of papers that have performed similar comparisons. Although the
answer to which strategy is best seems to be "it depends", we discuss a number
of research directions we need to take as a community, to gain more
understanding of this topic.
Comment: Accepted to Current Opinion in Biomedical Engineering
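The survey itself contains no code, but the strategy it compares can be sketched with a toy stand-in: pretrain a model on a larger source dataset, then continue training (fine-tune) on the small target set. The linear model, synthetic data, and hyperparameters below are illustrative assumptions, not anything from the paper:

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, steps=200):
    """Train a logistic-regression 'head' by gradient descent.
    Passing an existing w continues training (fine-tuning)."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

# Large "source" dataset drawn from a related distribution.
Xs = rng.normal(size=(500, 3))
ys = (Xs @ true_w > 0).astype(float)

# Small "target" dataset: only 20 labelled examples.
Xt = rng.normal(size=(20, 3))
yt = (Xt @ true_w > 0).astype(float)

w_scratch = train_logreg(Xt, yt)                 # target-only training
w_src = train_logreg(Xs, ys)                     # pre-training on the source
w_tuned = train_logreg(Xt, yt, w=w_src.copy())   # then fine-tuning

# Held-out evaluation set from the target distribution.
Xe = rng.normal(size=(1000, 3))
ye = (Xe @ true_w > 0).astype(float)
acc = lambda w: np.mean(((Xe @ w) > 0) == ye)
print(f"scratch={acc(w_scratch):.3f}  finetuned={acc(w_tuned):.3f}")
```

Here the source and target tasks share the same labelling rule; the survey's open question is precisely how much this picture changes when the source (cats) is unrelated to the target (lung CT).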
Deep Learning based HEp-2 Image Classification: A Comprehensive Review
Classification of HEp-2 cell patterns plays a significant role in the
indirect immunofluorescence test for identifying autoimmune diseases in the
human body. Many automatic HEp-2 cell classification methods have been proposed
in recent years, amongst which deep learning based methods have shown
impressive performance. This paper provides a comprehensive review of the
existing deep learning based HEp-2 cell image classification methods. These
methods perform HEp-2 image classification at two levels, namely, cell-level
and specimen-level. Both levels are covered in this review. At each level, the
methods are organized with a deep network usage based taxonomy. The core idea,
notable achievements, and key strengths and weaknesses of each method are
critically analyzed. Furthermore, a concise review of the existing HEp-2
datasets that are commonly used in the literature is given. The paper ends with
a discussion on novel opportunities and future research directions in this
field. It is hoped that this paper will provide readers with a thorough reference to this novel, challenging, and thriving field.
Comment: Published in Medical Image Analysis
Deep Active Learning for Automatic Mitotic Cell Detection on HEp-2 Specimen Medical Images
Identifying Human Epithelial Type 2 (HEp-2) mitotic cells is a crucial procedure in anti-nuclear antibody (ANA) testing, which is the standard protocol for detecting connective tissue diseases (CTDs). Due to the low throughput and labor subjectivity of the manual ANA screening test, there is a need to develop a reliable HEp-2 computer-aided diagnosis (CAD) system. The automatic detection of mitotic cells in microscopic HEp-2 specimen images is an essential step to support the diagnosis process and enhance the throughput of this test. This work proposes a deep active learning (DAL) approach to overcome the cell-labeling challenge. Moreover, deep learning detectors are tailored to identify the mitotic cells directly in entire microscopic HEp-2 specimen images, avoiding the segmentation step. The proposed framework is validated on the I3A Task-2 dataset over 5-fold cross-validation trials. Using the YOLO predictor, promising mitotic cell prediction results are achieved, with an average of 90.011% recall, 88.307% precision, and 81.531% mAP; with the Faster R-CNN predictor, average scores of 86.986% recall, 85.282% precision, and 78.506% mAP are obtained. Employing the DAL method over four labeling rounds effectively enhances the accuracy of the data annotation and hence improves the prediction performance. The proposed framework could be practically applied to support medical personnel in making rapid and accurate decisions about the presence of mitotic cells.
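The abstract does not spell out the selection criterion used in each labeling round; a common form of deep active learning is least-confidence sampling, sketched below with hypothetical class probabilities (the scoring rule and the numbers are assumptions, not the authors' exact method):

```python
import numpy as np

def least_confident(probs, k):
    """Pick the k samples whose top predicted probability is lowest,
    i.e. the ones the current model is least confident about.
    These are sent to a human annotator in the next labeling round."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Toy predicted (mitotic vs. non-mitotic) probabilities for 6 unlabeled cells.
probs = np.array([
    [0.98, 0.02],   # confident
    [0.55, 0.45],   # uncertain
    [0.60, 0.40],   # uncertain
    [0.90, 0.10],
    [0.51, 0.49],   # most uncertain
    [0.85, 0.15],
])
to_label = least_confident(probs, k=2)
print(sorted(to_label.tolist()))  # → [1, 4]
```

Repeating this select/label/retrain cycle over several rounds, as the paper does over four, concentrates the annotation effort on the most informative cells.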
Studying the Applicability of Generative Adversarial Networks on HEp-2 Cell Image Augmentation
Anti-Nuclear Antibody (ANA) testing is the primary serological screening test for autoimmune diseases. ANA testing is conducted mainly with the Indirect Immunofluorescence (IIF) on Human Epithelial cell substrate (HEp-2) protocol. However, due to its high variability, human subjectivity, and low throughput, there is a pressing need to develop an efficient computer-aided diagnosis (CAD) system to automate this protocol. Many recently proposed convolutional neural networks (CNNs) have demonstrated promising results in HEp-2 cell image classification, which is the main task of the HEp-2 IIF protocol. However, the lack of large labeled datasets is still the main challenge in this field. This work provides a detailed study of the applicability of generative adversarial networks (GANs) as an augmentation method. Different types of GANs were employed to synthesize HEp-2 cell images to address the data-scarcity problem. For systematic comparison, empirical quantitative metrics were implemented to evaluate how well different GAN models learn the real data representations. The results of this work showed that, despite the high visual similarity to the real images, the GANs' capacity to generate diverse data is still limited. This deficiency in the diversity of the generated data has a crucial impact when GAN synthesis is used as a standalone augmentation method. However, combining limited-size GAN-generated data with classic augmentation improves the classification accuracy across different variants of CNNs. Our results demonstrate competitive performance for both the overall classification accuracy and the mean class accuracy on the HEp-2 cell image classification task.
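As a rough sketch of the strategy the abstract recommends (combining a limited pool of GAN-generated images with classic augmentation rather than using GAN data alone), the toy code below mixes real images, a small synthetic batch, and randomly flipped/rotated copies of the real ones; the image sizes and transforms are illustrative assumptions:

```python
import numpy as np

def classic_augment(img, rng):
    """Random flip/rotation — a stand-in for classic augmentation."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return np.rot90(img, k=rng.integers(0, 4))

def build_training_pool(real, synthetic, n_augmented, rng):
    """Combine real images, a limited batch of GAN-style synthetic
    images, and classically augmented copies of the real ones."""
    augmented = [classic_augment(real[rng.integers(len(real))], rng)
                 for _ in range(n_augmented)]
    return list(real) + list(synthetic) + augmented

rng = np.random.default_rng(0)
real = [rng.random((8, 8)) for _ in range(5)]        # 5 "real" cell images
synthetic = [rng.random((8, 8)) for _ in range(3)]   # small GAN batch
pool = build_training_pool(real, synthetic, n_augmented=12, rng=rng)
print(len(pool))  # → 20
```

Keeping the synthetic batch small relative to the classically augmented copies reflects the paper's finding that GAN output, while visually convincing, lacks the diversity to stand alone.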
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017.
Deep Learning Model Based on ResNet-50 for Beef Quality Classification
Food quality measurement is one of the most essential topics in the agricultural and industrial fields. To classify food as healthy using computer visual inspection, a new architecture was proposed to classify beef images into rancid and healthy ones. With traditional measurements, specialists are unable to classify such images reliably, while building a deep learning model requires a huge number of beef images. In the present study, images of beef covering healthy and rancid cases were collected according to the analysis done by the Laboratory of Food Technology, Faculty of Agriculture, Kafrelsheikh University, in January 2020. The texture of the beef surface in the enrolled images makes it difficult to distinguish between the rancid and healthy ones. A deep learning approach based on ResNet-50 was presented as a promising classifier to grade and classify the beef images. In this work, a limited number of images was used to represent the research problem of image-resource limitation: eight healthy images and ten rancid beef images. This number of images is not sufficient for retraining with deep learning approaches. Thus, a Generative Adversarial Network (GAN) was proposed to augment the enrolled images, producing one hundred and eighty images. The ResNet-50 classifier achieves accuracies of 96.03%, 91.67%, and 88.89% in the training, testing, and validation phases, respectively. Furthermore, the current model (ResNet-50) is compared with classical and deep learning architectures to demonstrate the efficiency of ResNet-50 in image classification.
BCNet: A Novel Network for Blood Cell Classification
The paper was partially supported by: Royal Society International Exchanges Cost Share Award, United Kingdom (RP202G0230); Medical Research Council Confidence in Concept Award, United Kingdom (MC_PC_17171); Hope Foundation for Cancer Research, United Kingdom (RM60G0680); British Heart Foundation Accelerator Award, United Kingdom (AA/18/3/34220); Sino-United Kingdom Industrial Fund, United Kingdom (RP202G0289); Global Challenges Research Fund (GCRF), United Kingdom (P202PF11); Guangxi Key Laboratory of Trusted Software (kx201901).
Aims: Most blood diseases, such as chronic anemia, leukemia (commonly known as blood cancer), and hematopoietic dysfunction, are caused by environmental pollution, substandard decoration materials, radiation exposure, and long-term use of certain drugs. Thus, it is imperative to classify blood cell images. Most cell classification relies on manual features, machine learning classifiers, or deep convolutional neural network models. However, manual feature extraction is a tedious process, and the results are usually unsatisfactory. On the other hand, a deep convolutional neural network is usually composed of many layers, each with many parameters, so producing results takes a lot of time. Another problem is that medical datasets are relatively small, which may lead to overfitting.
Methods: To address these problems, we propose seven models for the automatic classification of blood cells: BCARENet, BCR5RENet, BCMV2RENet, BCRRNet, BCRENet, BCRSNet, and BCNet, of which BCNet performs best. The backbone of our method is ResNet-18, pre-trained on the ImageNet dataset. To improve performance, we replace the last four layers of the transferred ResNet-18 model with three randomized neural networks (RNNs): RVFL, ELM, and SNN. The final outputs of BCNet are generated by an ensemble of the predictions from the three randomized neural networks via majority voting. We use four multi-classification indices to evaluate our model.
Results: The accuracy, average precision, average F1-score, and average recall are 96.78%, 97.07%, 96.78%, and 96.77%, respectively.
Conclusion: We compare our model with state-of-the-art methods. The results of the proposed BCNet model are much better than those of other state-of-the-art methods.
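The majority-voting ensemble described in the Methods can be sketched in a few lines; the per-head predictions below are hypothetical, and breaking ties in favor of the first head is one possible convention (the abstract does not specify tie handling):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-head class predictions by majority vote; ties
    (all heads disagree) fall back to the first head's prediction."""
    top_class, top_count = Counter(predictions).most_common(1)[0]
    if top_count > 1:
        return top_class
    return predictions[0]

# Hypothetical per-image class predictions from the three heads
# (standing in for the RVFL, ELM, and SNN outputs of BCNet).
heads = [
    [0, 2, 1, 3],   # head 1
    [0, 2, 2, 3],   # head 2
    [1, 2, 1, 0],   # head 3
]
final = [majority_vote([h[i] for h in heads]) for i in range(4)]
print(final)  # → [0, 2, 1, 3]
```

Voting over three independently randomized heads lets a single outlier prediction be outvoted, which is the rationale for ensembling the RNN heads rather than using any one of them alone.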
On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator
Deployed image classification pipelines typically depend on images captured in real-world environments. This means that images might be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks, and it has hence attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. Such an operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST dataset with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
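The CORF operator itself is built from oriented contour responses; as a much-simplified 1-D illustration of the push-pull idea only (an excitatory response inhibited by the response to the opposite-polarity filter), the kernel and signals below are toy assumptions, not the paper's operator:

```python
import numpy as np

def push_pull(signal, kernel, alpha=1.0):
    """1-D push-pull response: the rectified excitatory (push) response
    minus a weighted, rectified inhibitory (pull) response to the
    opposite-polarity kernel. Noise tends to excite both branches and is
    suppressed; a genuine edge excites mainly the push branch."""
    relu = lambda x: np.maximum(x, 0.0)
    push = relu(np.convolve(signal, kernel, mode="same"))
    pull = relu(np.convolve(signal, -kernel, mode="same"))
    return relu(push - alpha * pull)

edge = np.concatenate([np.zeros(10), np.ones(10)])   # a clean step edge
kernel = np.array([1.0, 0.0, -1.0])                  # derivative-like filter
clean = push_pull(edge, kernel)
rng = np.random.default_rng(0)
noisy = push_pull(edge + 0.2 * rng.normal(size=20), kernel)
print(clean.argmax(), clean.max())
```

Because the output is never larger than the push response alone, the pull branch can only attenuate, which is what makes the transformed representation more stable under test-time noise.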