Unsupervised Body Part Regression via Spatially Self-ordering Convolutional Neural Networks
Automatic body part recognition for CT slices can benefit various medical
image applications. Recent deep learning methods demonstrate promising
performance but require large amounts of labeled images for training, and the
intrinsic structural information in CT volumes, namely the superior-inferior
ordering of slices, is not fully exploited. In this paper, we propose a
convolutional neural network (CNN) based Unsupervised Body part Regression
(UBR) algorithm to address this problem. A novel unsupervised learning method
and two inter-sample CNN loss functions are presented. Distinct from previous
work, UBR builds a coordinate system for the human body and outputs a
continuous score for each axial slice, representing the normalized position of
the body part in the slice. The training process of UBR resembles a
self-organization process: slice scores are learned from inter-slice
relationships. The training samples are unlabeled CT volumes, which are abundant,
so no extra annotation effort is needed. UBR is simple, fast, and accurate.
Quantitative and qualitative experiments validate its effectiveness. In
addition, we show two applications of UBR in network initialization and anomaly
detection.
Comment: Oral presentation in ISBI1
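The key technical ingredient is the pair of inter-sample losses that order slice scores without labels. The sketch below shows one way such losses could be implemented in PyTorch, assuming equidistantly sampled axial slices from a single volume; the function name and the exact loss terms are illustrative, not the paper's verbatim formulation.

```python
# Hedged sketch of an inter-slice ordering loss for unsupervised body part
# regression. The softplus ranking term and the equal-spacing term are
# assumptions for illustration, not the paper's exact losses.
import torch
import torch.nn.functional as F

def ordering_losses(scores: torch.Tensor) -> torch.Tensor:
    """scores: (m,) predicted scores for m >= 3 equidistant axial slices,
    sampled in superior-to-inferior order from one CT volume."""
    diffs = scores[1:] - scores[:-1]            # consecutive score differences
    # Order term: a slice lower in the body should score higher than the one above it.
    order_loss = F.softplus(-diffs).mean()      # log(1 + exp(-diff))
    # Distance term: equidistant slices should have roughly equal score gaps.
    distance_loss = F.smooth_l1_loss(diffs[1:], diffs[:-1])
    return order_loss + distance_loss
```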
Revisiting Rubik's Cube: Self-supervised Learning with Volume-wise Transformation for 3D Medical Image Segmentation
Deep learning relies heavily on the quantity of annotated data. However,
annotating 3D volumetric medical data requires experienced physicians to spend
hours or even days on investigation. Self-supervised learning is a potential
solution that eases this strong requirement for labeled training data by
deeply exploiting the information in raw data. In this paper, we propose a novel
self-supervised learning framework for volumetric medical images. Specifically,
we propose a context restoration task, i.e., Rubik's cube++, to pre-train 3D
neural networks. Different from existing context-restoration-based
approaches, we adopt a volume-wise transformation for context permutation,
which encourages the network to better exploit the inherent 3D anatomical
information of organs. Compared to the strategy of training from scratch,
fine-tuning from the Rubik's cube++ pre-trained weight can achieve better
performance in various tasks such as pancreas segmentation and brain tissue
segmentation. The experimental results show that our self-supervised learning
method can significantly improve the accuracy of 3D deep learning networks on
volumetric medical datasets without the use of extra data.
Comment: Accepted by MICCAI 202
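As a rough illustration of the pretext task, the sketch below scrambles a volume by permuting equally sized sub-cubes, which a 3D network would then learn to restore during pretraining; the 2x2x2 grid, the permutation scheme, and the returned targets are assumptions, not the authors' exact Rubik's cube++ transformation.

```python
# Hedged sketch (NumPy): scramble a 3D volume by permuting sub-cubes as a
# context-restoration pretext task. Grid size and outputs are illustrative.
from itertools import product
import numpy as np

def scramble_volume(volume: np.ndarray, grid: int = 2, rng=None):
    """volume: (D, H, W) array whose sides are divisible by `grid`."""
    rng = rng or np.random.default_rng()
    d, h, w = (s // grid for s in volume.shape)
    coords = list(product(range(grid), repeat=3))
    # Cut the volume into grid**3 equally sized sub-cubes.
    cubes = [volume[i*d:(i+1)*d, j*h:(j+1)*h, k*w:(k+1)*w].copy()
             for i, j, k in coords]
    perm = rng.permutation(len(cubes))          # random sub-cube ordering
    scrambled = np.empty_like(volume)
    for idx, (i, j, k) in enumerate(coords):
        scrambled[i*d:(i+1)*d, j*h:(j+1)*h, k*w:(k+1)*w] = cubes[perm[idx]]
    # A 3D network is pretrained to recover the original context from `scrambled`.
    return scrambled, perm
```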
Deep Learning Body Region Classification of MRI and CT examinations
Standardized body region labelling of individual images provides data that
can improve human and computer use of medical images. A CNN-based classifier
was developed to identify body regions in CT and MRI. 17 CT (18 MRI) body
regions covering the entire human body were defined for the classification
task. Three retrospective databases were built for the AI model training,
validation, and testing, with a balanced distribution of studies per body
region. The test databases originated from a different healthcare network.
Accuracy, recall, and precision of the classifier were evaluated across patient
age, patient gender, institution, scanner manufacturer, contrast, slice thickness,
MRI sequence, and CT kernel. The data included a retrospective cohort of 2,934
anonymized CT cases (training: 1,804 studies, validation: 602 studies, test:
528 studies) and 3,185 anonymized MRI cases (training: 1,911 studies,
validation: 636 studies, test: 638 studies). 27 institutions from primary care
hospitals, community hospitals and imaging centers contributed to the test
datasets. The data included cases of all genders in equal proportions and
subjects ranging in age from a few months to over 90 years. Image-level
prediction accuracies of 91.9% (90.2 - 92.1) for CT and 94.2% (92.0 - 95.6) for
MRI were achieved. The classification results were robust across all body
regions and confounding factors. Due to limited data, performance for subjects
under 10 years old could not be reliably evaluated. We show that deep learning
models can classify CT and MRI images by body region including lower and upper
extremities with high accuracy.
Comment: 21 pages, 2 figures, 4 table
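Since performance is reported per body region and per confounding factor, a stratified evaluation is the natural companion to the classifier. Below is a small sketch of such a subgroup report using pandas and scikit-learn; the column names and metric choices are assumptions, not the paper's evaluation code.

```python
# Hedged sketch: accuracy/recall/precision stratified by a metadata column
# (e.g. scanner manufacturer or slice thickness). Column names are illustrative.
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score

def stratified_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """df must hold 'true_region', 'pred_region', and metadata columns."""
    rows = []
    for value, sub in df.groupby(group_col):
        rows.append({
            group_col: value,
            "n": len(sub),
            "accuracy": accuracy_score(sub.true_region, sub.pred_region),
            "precision": precision_score(sub.true_region, sub.pred_region,
                                         average="macro", zero_division=0),
            "recall": recall_score(sub.true_region, sub.pred_region,
                                   average="macro", zero_division=0),
        })
    return pd.DataFrame(rows)
```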
Self-supervised learning methods for label-efficient dental caries classification
High annotation costs are a substantial bottleneck in applying deep learning architectures to clinically relevant use cases, substantiating the need for algorithms that learn from unlabeled data. In this work, we propose employing self-supervised methods. To that end, we trained three self-supervised algorithms on a large corpus of unlabeled dental images, which contained 38K bitewing radiographs (BWRs). We then applied the learned neural network representations to tooth-level dental caries classification, for which we utilized labels extracted from electronic health records (EHRs). Finally, a holdout test set was established, consisting of 343 BWRs annotated by three dental professionals and approved by a senior dentist. This test set was used to evaluate the fine-tuned caries classification models. Our experimental results demonstrate the gains obtained by pretraining models with self-supervised algorithms: improved caries classification performance (a 6 p.p. increase in sensitivity) and, most importantly, improved label efficiency. In other words, the resulting models can be fine-tuned using only a few labels (annotations). Our results show that using as few as 18 annotations can produce ≥45% sensitivity, which is comparable to human-level diagnostic performance. This study shows that self-supervision can provide gains in medical image analysis, particularly when obtaining labels is costly.
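The label-efficiency claim boils down to fine-tuning a self-supervised pretrained encoder on a small labeled subset. A minimal PyTorch sketch of that step is given below; the ResNet-18 backbone, the checkpoint name, and the hyperparameters are assumptions for illustration, not the study's actual configuration.

```python
# Hedged sketch (PyTorch): fine-tune a self-supervised pretrained encoder for
# binary caries classification with only a handful of labeled tooth crops.
# Backbone, checkpoint path, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision

encoder = torchvision.models.resnet18(weights=None)
encoder.fc = nn.Identity()                 # expose the 512-d feature vector
# state = torch.load("ssl_pretrained.pt")  # hypothetical SSL checkpoint
# encoder.load_state_dict(state, strict=False)

head = nn.Linear(512, 2)                   # caries vs. no caries
model = nn.Sequential(encoder, head)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a small labeled batch (e.g. a few annotations)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```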