Hierarchical Classification of Pulmonary Lesions: A Large-Scale Radio-Pathomics Study
Diagnosis of pulmonary lesions from computed tomography (CT) is important but
challenging for clinical decision making in lung cancer-related diseases. Deep
learning has achieved great success in the computer-aided diagnosis (CADx) of
lung cancer, but it suffers from label ambiguity due to the difficulty of
radiological diagnosis. Since invasive pathological analysis serves as the
clinical gold standard for lung cancer diagnosis, in this study we address the
label ambiguity issue with a large-scale radio-pathomics dataset
containing 5,134 radiological CT images with pathologically confirmed labels,
including cancers (e.g., invasive/non-invasive adenocarcinoma, squamous
carcinoma) and non-cancer diseases (e.g., tuberculosis, hamartoma). This
retrospective dataset, named Pulmonary-RadPath, enables development and
validation of accurate deep learning systems to predict invasive pathological
labels with a non-invasive procedure, i.e., radiological CT scans. A
three-level hierarchical classification system for pulmonary lesions is
developed, which covers most diseases in cancer-related diagnosis. We explore
several techniques for hierarchical classification on this dataset, and propose
a Leaky Dense Hierarchy approach whose effectiveness is demonstrated in our
experiments. Our study significantly surpasses prior work in data scale (6x
larger), disease coverage, and hierarchy depth. The promising results suggest
the potential to facilitate precision medicine.
Comment: MICCAI 2020 (Early Accepted)
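The abstract does not detail the Leaky Dense Hierarchy architecture, but the general idea of hierarchical classification — letting coarse-level confidence propagate down to fine-grained labels — can be sketched as follows. The class hierarchy, level sizes, and the `leaky_hierarchical_predict` helper here are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical 3-level hierarchy: each fine-grained class has one parent.
# parent_of[level][child_index] gives the parent class index one level up.
parent_of = {
    1: np.array([0, 0, 1]),        # 3 level-1 classes under 2 level-0 classes
    2: np.array([0, 0, 1, 2, 2]),  # 5 level-2 classes under 3 level-1 classes
}

def leaky_hierarchical_predict(logits_per_level):
    """Hierarchy-aware prediction sketch: multiply each level's softmax
    by the probability of its parent class, so coarse-level confidence
    'leaks' into the fine-grained prediction, then renormalize."""
    probs = softmax(logits_per_level[0])
    out = [probs]
    for level in (1, 2):
        child = softmax(logits_per_level[level])
        probs = child * probs[..., parent_of[level]]
        probs /= probs.sum(axis=-1, keepdims=True)
        out.append(probs)
    return out
```

The predicted fine-grained label is then the argmax of the deepest level's probabilities, which by construction cannot disagree strongly with a confident coarse-level prediction.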
Toward accurate quantitative photoacoustic imaging: learning vascular blood oxygen saturation in three dimensions
Significance: Two-dimensional (2-D) fully convolutional neural networks have been shown
to be capable of producing maps of sO2 from 2-D simulated images of simple tissue models.
However, their potential to produce accurate estimates in vivo is uncertain as they are limited
by the 2-D nature of the training data when the problem is inherently three-dimensional (3-D),
and they have not been tested with realistic images.
Aim: To demonstrate the capability of deep neural networks to process whole 3-D images and
output 3-D maps of vascular sO2 from realistic tissue models/images.
Approach: Two separate fully convolutional neural networks were trained to produce 3-D maps
of vascular blood oxygen saturation and vessel positions from multiwavelength simulated
images of tissue models.
Results: The mean of the absolute difference between the true mean vessel sO2 and the network
output for 40 examples was 4.4%, and the standard deviation was 4.5%.
Conclusions: 3-D fully convolutional networks were shown to be capable of producing accurate
sO2 maps using the full extent of the spatial information contained within 3-D images generated
under conditions mimicking real imaging scenarios. We demonstrate that networks can cope with
some of the confounding effects present in real images, such as limited-view artifacts, and
have the potential to produce accurate estimates in vivo.
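The metric reported in the Results section — the mean and standard deviation of the absolute difference between the true mean vessel sO2 and the network estimate over 40 examples — can be computed as below. The synthetic arrays merely stand in for the paper's actual test data:

```python
import numpy as np

# Hypothetical stand-in for the paper's 40 test examples:
# per-example true mean vessel sO2 and the network's estimate (in %).
rng = np.random.default_rng(0)
true_so2 = rng.uniform(60.0, 100.0, size=40)
pred_so2 = true_so2 + rng.normal(0.0, 5.0, size=40)

# Mean and standard deviation of the per-example absolute error,
# matching the form of the 4.4% / 4.5% figures reported above.
abs_err = np.abs(pred_so2 - true_so2)
mean_abs_err = abs_err.mean()
std_abs_err = abs_err.std()
```

Reporting the standard deviation alongside the mean shows how consistent the per-vessel estimates are, not just how good they are on average.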