19 research outputs found
Feature Representation Analysis of Deep Convolutional Neural Network using Two-stage Feature Transfer: An Application for Diffuse Lung Disease Classification
Transfer learning is a machine learning technique designed to improve generalization performance by using pre-trained parameters obtained from other learning tasks. For image recognition tasks, many previous studies have reported that when transfer learning is applied to deep neural networks, performance improves even with limited training data. This paper proposes a two-stage feature transfer learning method focused on the recognition of textural medical images. In the proposed method, a model is successively trained on massive amounts of natural images, some textural images, and finally the target images. We applied this method to the classification of textural X-ray computed tomography images of diffuse lung diseases (DLDs). In our experiments, two-stage feature transfer achieved the best performance compared with learning from scratch and conventional single-stage feature transfer. We also investigated robustness to the size of the target dataset; two-stage feature transfer was more robust than the other two learning methods. Moreover, using a visualization method, we analyzed the feature representations obtained from DLD image inputs for each feature transfer model. We showed that two-stage feature transfer captures both edge and textural features of DLDs, which conventional single-stage feature transfer models do not.
Comment: Preprint of the journal article to be published in IPSJ TOM-51.
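The staged training procedure described above can be sketched at toy scale. The snippet below uses a logistic-regression model trained by gradient descent as a stand-in for a deep CNN; the three synthetic datasets are hypothetical placeholders for the natural-image, textural-image, and target DLD datasets, and all names and shapes are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def train(X, y, w_init, lr=0.1, epochs=200):
    """Logistic-regression training loop (toy stand-in for CNN fine-tuning)."""
    w = w_init.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)          # gradient step
    return w

rng = np.random.default_rng(0)
d = 8
# Stage 1: large generic source task ("natural images")
X_nat, y_nat = rng.normal(size=(500, d)), rng.integers(0, 2, 500).astype(float)
# Stage 2: intermediate task ("textural images")
X_tex, y_tex = rng.normal(size=(200, d)), rng.integers(0, 2, 200).astype(float)
# Stage 3: small target task (stand-in for DLD CT patches)
X_tgt, y_tgt = rng.normal(size=(50, d)), rng.integers(0, 2, 50).astype(float)

w = np.zeros(d)
w = train(X_nat, y_nat, w)   # pre-train on the large generic dataset
w = train(X_tex, y_tex, w)   # first feature transfer: adapt to textural data
w = train(X_tgt, y_tgt, w)   # second feature transfer: fine-tune on the target set
```

The key design point is that each stage initializes from the previous stage's weights rather than from scratch, so the small target dataset only has to refine, not learn, the representation.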
Image Retrieval Method for Multiscale Objects from Optical Colonoscopy Images
Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct inspection of the colon and rectum. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early stages, while they are still treatable. However, diagnostic accuracy depends heavily on the experience and knowledge of the medical doctor. Moreover, it is extremely difficult, even for specialists, to detect early-stage cancers when they are obscured by inflammation of the colonic mucosa caused by intractable inflammatory bowel diseases such as ulcerative colitis (UC). Thus, to assist UC diagnosis, it is necessary to develop a technology that can retrieve cases similar to the diagnostic target image from past cases in which diagnosed images showing various symptoms of the colonic mucosa were stored. To assist diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects. The proposed method retrieves similar colonoscopy images despite varying visible sizes of the target objects. Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method retrieves objects of any visible size and at any location with a high level of accuracy.
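A minimal sketch of scale-tolerant retrieval, assuming a simplified pipeline: describe each image by grey-level histograms of centred crops at several scales, then rank database images by the best match over all scale pairs. The descriptor and distance here are illustrative stand-ins, not the paper's actual features.

```python
import numpy as np

def multiscale_descriptors(img, scales=(1.0, 0.5, 0.25), bins=16):
    """Grey-level histograms of centred crops at several scales
    (a toy stand-in for the paper's multiscale image features)."""
    h, w = img.shape
    descs = []
    for s in scales:
        ch, cw = max(1, int(h * s)), max(1, int(w * s))
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = img[top:top + ch, left:left + cw]
        hist, _ = np.histogram(crop, bins=bins, range=(0.0, 1.0), density=True)
        descs.append(hist)
    return np.stack(descs)  # shape: (n_scales, bins)

def retrieve(query, database, scales=(1.0, 0.5, 0.25)):
    """Return the index of the database image closest to the query,
    taking the minimum distance over all query/database scale pairs so
    that objects match regardless of their visible size."""
    q = multiscale_descriptors(query, scales)
    best_idx, best_dist = -1, np.inf
    for i, img in enumerate(database):
        d = multiscale_descriptors(img, scales)
        dist = min(np.linalg.norm(qs - ds) for qs in q for ds in d)
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx
```

Comparing every query scale against every database scale is what makes the ranking tolerant to objects appearing larger or smaller between images.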