Organ Segmentation From Full-size CT Images Using Memory-Efficient FCN
In this work, we present a memory-efficient fully convolutional network (FCN) that incorporates several memory-optimized techniques to reduce run-time GPU memory demand during the training phase. In medical image segmentation tasks, subvolume cropping has become a common preprocessing step: subvolumes (or small patch volumes) are cropped to reduce GPU memory demand. However, small patch volumes capture less spatial context, which leads to lower accuracy. As a pilot study, the purpose of this work is to propose a memory-efficient FCN that enables us to train the model on full-size CT images directly, without subvolume cropping, while maintaining segmentation accuracy. We optimize our network in both architecture and implementation. With the development of computing hardware such as graphics processing units (GPUs) and tensor processing units (TPUs), deep learning applications are now able to train networks on large datasets within acceptable time. Among these applications, semantic segmentation using fully convolutional networks (FCNs) has also achieved significant improvements over traditional image processing approaches in both computer vision and medical image processing. However, unlike the general color images used in computer vision tasks, medical images such as 3D computed tomography (CT) images, micro-CT images, and histopathological images are much larger in scale. For training on these medical images, the large demand for computing resources becomes a severe problem. In this paper, we present a memory-efficient FCN to tackle the high GPU memory demand in organ segmentation from clinical CT images. The experimental results demonstrated that our GPU memory demand is about 40% of the baseline architecture's, and the parameter count is about 30% of the baseline's.
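The subvolume cropping the abstract contrasts against can be illustrated with a minimal NumPy sketch. The function name, patch shape, and stride below are hypothetical choices, not details from the paper; it simply shows how a full 3D CT volume is tiled into smaller patches to fit GPU memory.

```python
import numpy as np

def crop_subvolumes(volume, patch_shape, stride):
    """Crop (possibly overlapping) subvolumes from a 3D CT volume.

    volume:      3D array (depth, height, width)
    patch_shape: size of each subvolume along each axis
    stride:      step between consecutive crop origins
    """
    patches = []
    D, H, W = volume.shape
    pd, ph, pw = patch_shape
    for z in range(0, D - pd + 1, stride):
        for y in range(0, H - ph + 1, stride):
            for x in range(0, W - pw + 1, stride):
                patches.append(volume[z:z + pd, y:y + ph, x:x + pw])
    return np.stack(patches)

# Hypothetical example: a small synthetic volume standing in for a CT scan
ct = np.random.rand(64, 64, 64).astype(np.float32)
patches = crop_subvolumes(ct, patch_shape=(32, 32, 32), stride=32)
print(patches.shape)  # (8, 32, 32, 32)
```

Each patch is trained on independently, which is what discards the wider spatial context the paper's full-size approach preserves.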
Lung Infection Quantification of COVID-19 in CT Images with Deep Learning
CT imaging is crucial for the diagnosis, assessment, and staging of COVID-19 infection. Follow-up scans every 3-5 days are often recommended to monitor disease progression. It has been reported that bilateral and peripheral ground-glass opacification (GGO), with or without consolidation, is the predominant CT finding in COVID-19 patients. However, due to the lack of computerized quantification tools, only qualitative impressions and rough descriptions of infected areas are currently used in radiological reports. In this paper, a deep learning (DL)-based segmentation system is developed to automatically quantify infection regions of interest (ROIs) and their volumetric ratios w.r.t. the lung. The performance of the system was evaluated by comparing the automatically segmented infection regions with manually delineated ones on 300 chest CT scans of 300 COVID-19 patients. For fast manual delineation of training samples and possible manual intervention in automatic results, a human-in-the-loop (HITL) strategy was adopted to assist radiologists with infection region segmentation, which dramatically reduced the total segmentation time to 4 minutes after 3 iterations of model updating. The average Dice similarity coefficient showed 91.6% agreement between automatic and manual infection segmentations, and the mean estimation error of the percentage of infection (POI) was 0.3% for the whole lung. Finally, possible applications, including but not limited to the analysis of follow-up CT scans and of infection distributions in the lobes and segments correlated with clinical findings, were discussed.

Comment: 23 pages, 6 figures