A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
Pre-training autoencoder for lung nodule malignancy assessment using CT images
Late diagnosis of lung cancer has a large impact on mortality, leading to a very low five-year survival rate of 5%. This issue emphasises the importance of developing systems that support diagnosis at earlier stages. Clinicians use Computed Tomography (CT) scans to assess nodules and their likelihood of malignancy. Automatic solutions can help to make a faster and more accurate diagnosis, which is crucial for the early detection of lung cancer. Approaches based on convolutional neural networks (CNNs) have been shown to provide reliable feature extraction for assessing the malignancy risk associated with pulmonary nodules. This type of approach requires a massive amount of data for model training, which is usually a limitation in the biomedical field due to medical data privacy and security issues. Transfer learning (TL) methods have been widely explored in medical imaging applications, offering a solution to the lack of publicly available training data. Moreover, clinical annotation requires experts with a deep understanding of the complex physiological phenomena represented in the data, which represents a huge investment. In this direction, this work explored a TL method based on unsupervised learning, achieved by training a Convolutional Autoencoder (CAE) on images from the same domain. For this, lung nodules from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) were extracted and used to train a CAE. The encoder part was then transferred, and malignancy risk was assessed as a binary classification of benign versus malignant lung nodules, achieving an Area Under the Curve (AUC) value of 0.936. To evaluate the reliability of this TL approach, the same architecture was trained from scratch and achieved an AUC value of 0.928.
The results of this comparison suggest that the feature learning achieved when reconstructing the input with an encoder-decoder architecture can be considered useful knowledge that may help overcome labelling constraints.
This work is financed by National Funds through the Portuguese funding agency, FCT—Fundação para a Ciência e a Tecnologia, within project UIDB/50014/2020
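The pre-train-then-transfer workflow described above can be sketched in miniature. The toy below substitutes a linear autoencoder and random arrays for the paper's convolutional autoencoder and LIDC-IDRI nodule patches, so every name, size, and hyperparameter here is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the real data: flattened unlabelled patches for pre-training
# and a small labelled set for the downstream benign/malignant task.
X_unlab = rng.normal(size=(256, 64))
X_lab = rng.normal(size=(64, 64))
y_lab = (X_lab[:, 0] > 0).astype(float)   # toy labels, for illustration only

# --- Stage 1: unsupervised pre-training of a (linear) autoencoder ---------
d_latent = 16
W_enc = rng.normal(scale=0.1, size=(64, d_latent))
W_dec = rng.normal(scale=0.1, size=(d_latent, 64))
lr, losses = 1e-3, []
for _ in range(200):
    Z = X_unlab @ W_enc                   # encode
    err = Z @ W_dec - X_unlab             # reconstruction error
    losses.append(float((err ** 2).mean()))
    W_dec -= lr * Z.T @ err / len(X_unlab)
    W_enc -= lr * X_unlab.T @ (err @ W_dec.T) / len(X_unlab)

# --- Stage 2: transfer the frozen encoder, fit a logistic head ------------
w_head, b_head = np.zeros(d_latent), 0.0
Z_lab = X_lab @ W_enc                     # transferred representation
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z_lab @ w_head + b_head)))
    g = p - y_lab                         # gradient of the logistic loss
    w_head -= 0.1 * Z_lab.T @ g / len(X_lab)
    b_head -= 0.1 * float(g.mean())

acc = float(((p > 0.5) == y_lab).mean())
```

In the full method, stage 1 would train convolutional encoder/decoder blocks on unlabelled CT patches, and stage 2 would attach a classification head to the transferred encoder under a supervised loss.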
CT-LungNet: A Deep Learning Framework for Precise Lung Tissue Segmentation in 3D Thoracic CT Scans
Segmentation of lung tissue in computed tomography (CT) images is a precursor
to most pulmonary image analysis applications. Semantic segmentation methods
using deep learning have exhibited top-tier performance in recent years;
however, designing accurate and robust segmentation models for lung tissue
remains challenging due to variations in shape, size, and orientation.
Additionally, medical image artifacts and noise can affect lung tissue
segmentation and degrade the accuracy of downstream analysis. The practicality
of current deep learning methods for lung tissue segmentation is limited as
they require significant computational resources and may not be easily
deployable in clinical settings. This paper presents a fully automatic method
that identifies the lungs in three-dimensional (3D) pulmonary CT images using
deep networks and transfer learning. We introduce (1) a novel 2.5-dimensional
image representation from consecutive CT slices that succinctly represents
volumetric information and (2) a U-Net architecture equipped with pre-trained
InceptionV3 blocks to segment 3D CT scans while maintaining the number of
learnable parameters as low as possible. Our method was quantitatively assessed
using one public dataset, LUNA16, for training and testing and two public
datasets, namely, VESSEL12 and CRPF, only for testing. Due to the low number of
learnable parameters, our method achieved high generalizability to the unseen
VESSEL12 and CRPF datasets while obtaining superior performance on LUNA16
compared to existing methods (Dice coefficients of 99.7, 99.1, and 98.8 over
LUNA16, VESSEL12, and CRPF datasets, respectively). We made our method publicly
accessible via a graphical user interface at medvispy.ee.kntu.ac.ir
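The abstract does not detail the 2.5D construction; a common reading, assumed here, is that each target slice is stacked with its neighbouring slices as input channels. The helper below (`to_2p5d` is a hypothetical name, and the 3-slice window is an assumption) sketches that idea with edge replication at volume boundaries:

```python
import numpy as np

def to_2p5d(volume: np.ndarray, n_slices: int = 3) -> np.ndarray:
    """Turn a 3D CT volume (D, H, W) into per-slice 2.5D samples.

    Each output sample stacks `n_slices` consecutive slices as channels,
    centred on the target slice, replicating the first/last slice at the
    volume boundaries. Output shape: (D, n_slices, H, W).
    """
    assert n_slices % 2 == 1, "use an odd window so the target slice is centred"
    half = n_slices // 2
    # Pad along the slice axis so every target slice has full context.
    padded = np.pad(volume, ((half, half), (0, 0), (0, 0)), mode="edge")
    depth = volume.shape[0]
    return np.stack([padded[i : i + n_slices] for i in range(depth)], axis=0)

vol = np.arange(4 * 2 * 2, dtype=np.float32).reshape(4, 2, 2)  # toy 4-slice scan
samples = to_2p5d(vol)  # shape (4, 3, 2, 2)
```

Each (n_slices, H, W) sample can then be fed to a 2D network as a multi-channel image, which is how a 2.5D representation conveys through-plane context without the cost of full 3D convolutions.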
Semi-Supervised Segmentation of Radiation-Induced Pulmonary Fibrosis from Lung CT Scans with Multi-Scale Guided Dense Attention
Computed Tomography (CT) plays an important role in monitoring
radiation-induced Pulmonary Fibrosis (PF), where accurate segmentation of the
PF lesions is highly desired for diagnosis and treatment follow-up. However,
the task is challenging due to the ambiguous boundaries, irregular shapes, and
varying positions and sizes of the lesions, as well as the difficulty of acquiring a large set of
annotated volumetric images for training. To overcome these problems, we
propose a novel convolutional neural network called PF-Net and incorporate it
into a semi-supervised learning framework based on Iterative Confidence-based
Refinement And Weighting of pseudo Labels (I-CRAWL). Our PF-Net combines 2D and
3D convolutions to deal with CT volumes with large inter-slice spacing, and
uses multi-scale guided dense attention to segment complex PF lesions. For
semi-supervised learning, our I-CRAWL employs pixel-level uncertainty-based
confidence-aware refinement to improve the accuracy of pseudo labels of
unannotated images, and uses image-level uncertainty for confidence-based image
weighting to suppress low-quality pseudo labels in an iterative training
process. Extensive experiments with CT scans of Rhesus Macaques with
radiation-induced PF showed that: 1) PF-Net achieved higher segmentation
accuracy than existing 2D, 3D and 2.5D neural networks, and 2) I-CRAWL
outperformed state-of-the-art semi-supervised learning methods for the PF
lesion segmentation task. Our method has the potential to improve the diagnosis
of PF and the clinical assessment of side effects of radiotherapy for lung cancers.
Comment: 12 pages, 9 figures. Submitted to IEEE TM
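The confidence-based refinement and image weighting can be illustrated with a simplified stand-in: where I-CRAWL derives pixel- and image-level uncertainty within its iterative framework, the sketch below uses the binary entropy of a single predicted probability map as a proxy. The function name and the threshold `tau` are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def refine_pseudo_label(prob_map: np.ndarray, tau: float = 0.8):
    """Confidence-based refinement and weighting of one pseudo label.

    prob_map: per-pixel foreground probabilities in [0, 1] predicted for an
    unannotated image. Returns (pseudo_label, pixel_mask, image_weight):
    - pseudo_label: hard 0/1 label from thresholding at 0.5,
    - pixel_mask: True where the prediction is confident (low entropy),
    - image_weight: mean per-pixel confidence, usable to down-weight
      low-quality pseudo labels in the training loss.
    """
    eps = 1e-7
    p = np.clip(prob_map, eps, 1 - eps)
    # Binary entropy in bits: 1 at p = 0.5 (maximal uncertainty), 0 at p = 0 or 1.
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    confidence = 1.0 - entropy
    pseudo_label = (prob_map > 0.5).astype(np.uint8)
    pixel_mask = confidence >= tau           # keep only confident pixels
    image_weight = float(confidence.mean())  # image-level confidence weight
    return pseudo_label, pixel_mask, image_weight

# Toy 2x2 probability map: two confident pixels, two uncertain ones.
pl, mask, w = refine_pseudo_label(np.array([[0.99, 0.5], [0.01, 0.9]]))
```

In an iterative scheme like the one described, the masked, weighted pseudo labels from one round would supervise the next round of training, so that increasingly reliable labels dominate the loss.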