Anatomy-specific classification of medical images using deep convolutional nets
Automated classification of human anatomy is an important prerequisite for
many computer-aided diagnosis systems. The spatial complexity and variability
of anatomy throughout the human body make classification difficult. "Deep
learning" methods such as convolutional networks (ConvNets) outperform other
state-of-the-art methods in image classification tasks. In this work, we
present a method for organ- or body-part-specific anatomical classification of
medical images acquired using computed tomography (CT) with ConvNets. We train
a ConvNet using 4,298 separate axial 2D key-images to learn 5 anatomical
classes. Key-images were mined from a hospital PACS archive covering a set of
1,675 patients. We show that a data augmentation approach can help to enrich
the data set and improve classification performance. Using ConvNets and data
augmentation, we achieve an anatomy-specific classification error of 5.9% and
average area-under-the-curve (AUC) values of 0.998 in testing. We
demonstrate that deep learning can be used to train very reliable and accurate
classifiers that could initialize further computer-aided diagnosis.

Comment: Presented at the 2015 IEEE International Symposium on Biomedical
Imaging, April 16-19, 2015, New York Marriott at Brooklyn Bridge, NY, US.
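The abstract does not detail which augmentation transforms were applied to the 2D key-images; as an illustrative sketch only (random translation and horizontal flipping are common choices for axial slices, and `augment_axial_slice` is a hypothetical helper, not the authors' code), such an augmentation step could look like:

```python
import numpy as np

def augment_axial_slice(img, rng, n_variants=4, max_shift=8):
    """Generate augmented copies of a 2D axial key-image by random
    circular translation and optional horizontal flipping.
    (Hypothetical scheme; the paper's exact transforms may differ.)"""
    out = []
    for _ in range(n_variants):
        dy = rng.integers(-max_shift, max_shift + 1)
        dx = rng.integers(-max_shift, max_shift + 1)
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        if rng.random() < 0.5:
            shifted = shifted[:, ::-1]  # horizontal flip
        out.append(shifted)
    return out

rng = np.random.default_rng(0)
slice_2d = rng.normal(size=(64, 64))   # stand-in for a CT key-image
variants = augment_axial_slice(slice_2d, rng)
print(len(variants))  # 4 augmented copies per key-image
```

Each key-image then contributes several training samples, which is one way such augmentation can enrich a modest dataset of 4,298 images.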
A Rapid Segmentation-Insensitive "Digital Biopsy" Method for Radiomic Feature Extraction: Method and Pilot Study Using CT Images of Non-Small Cell Lung Cancer.
Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least an order of magnitude faster than current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer and manually adjusted the nodule boundaries in each section to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of less than 3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard, using the intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations (using a sphere of 1.5-mm radius) of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.
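The robustness comparison above rests on the intraclass correlation coefficient. The abstract does not state which ICC variant was used, so as a minimal sketch under that assumption, the two-way random-effects, absolute-agreement, single-measure form ICC(2,1) can be computed in plain NumPy:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random-effects, absolute-agreement, single measure.
    `ratings` has shape (n_subjects, k_raters); here the "raters" would be
    the reference-standard and digital-biopsy values of one feature.
    (Illustrative implementation; the paper's ICC variant is an assumption.)"""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)          # per-subject means
    col_means = ratings.mean(axis=0)          # per-rater means
    ssr = k * ((row_means - grand) ** 2).sum()   # between-subject SS
    ssc = n * ((col_means - grand) ** 2).sum()   # between-rater SS
    sst = ((ratings - grand) ** 2).sum()
    sse = sst - ssr - ssc                        # residual SS
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With this convention, a feature whose two measurements agree perfectly across subjects yields an ICC of 1.0, and the study's >0.7 threshold marks acceptable robustness.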
Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database
Radiologists routinely find and annotate significant abnormalities on a large
number of radiology images in their daily work. Such abnormalities, or
lesions, have accumulated over the years and are stored in hospitals' picture
archiving and communication systems. However, they are largely unsorted and
lack semantic annotations such as type and location. In this paper, we aim to organize
and explore them by learning a deep feature representation for each lesion. A
large-scale and comprehensive dataset, DeepLesion, is introduced for this task.
DeepLesion contains bounding boxes and size measurements of over 32K lesions.
To model their similarity relationships, we leverage multiple sources of
supervision, including lesion types, self-supervised location coordinates,
and sizes. These require little manual annotation effort yet describe useful
attributes of the lesions. Then, a triplet network is used to learn lesion embeddings
with a sequential sampling strategy to depict their hierarchical similarity
structure. Experiments show promising qualitative and quantitative results on
lesion retrieval, clustering, and classification. The learned embeddings can be
further employed to build a lesion graph for various clinically useful
applications. We propose algorithms for intra-patient lesion matching and
missing annotation mining. Experimental results validate their effectiveness.

Comment: Accepted by CVPR 2018. DeepLesion URL added.
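The triplet network above learns embeddings by comparing anchor, positive, and negative lesion examples. As a minimal NumPy sketch of the standard triplet margin loss (the paper's sequential sampling strategy and network architecture are not reproduced here):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors: pull similar
    lesions together and push dissimilar ones at least `margin` apart.
    Inputs have shape (batch, embedding_dim)."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative, axis=-1)  # anchor-negative distance
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

The loss is zero once every negative is farther from the anchor than its positive by at least the margin, which is what shapes the embedding space used for retrieval and clustering.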
Deep learning in medical imaging and radiation therapy
Peer Reviewed
https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd
Quantitative Analysis of Radiation-Associated Parenchymal Lung Change
Radiation-induced lung damage (RILD) is a common consequence of thoracic radiotherapy (RT). We present here a novel classification of the parenchymal features of RILD. We developed a deep learning algorithm (DLA) to automate the delineation of 5 classes of parenchymal texture of increasing density.
200 scans were used to train and validate the network, and a further 30 scans were used as a hold-out test set. The DLA automatically labelled the data with Dice scores of 0.98, 0.43, 0.26, 0.47, and 0.92 for the 5 respective classes.
Qualitative evaluation showed that the automated labels were acceptable in over 80% of cases for all tissue classes and achieved similar ratings to the manual labels. Lung registration was performed, and the effect of radiation dose on each tissue class and its correlation with respiratory outcomes were assessed. The change in volume of each tissue class over time, generated by both manual and automated segmentation, was calculated. The 5 parenchymal classes showed distinct temporal patterns.
We quantified the volumetric change in textures after radiotherapy and correlated these changes with radiotherapy dose and respiratory outcomes. The effect of local dose on tissue class revealed a strong dose-dependent relationship.
We have developed a novel classification of parenchymal changes associated with RILD that shows a convincing dose relationship. The tissue classes are related to both global and local dose metrics and have a distinct evolution over time. Although less strong, there is a relationship between the radiological texture changes we can measure and respiratory outcomes, particularly the MRC score, which directly represents a patient's functional status. We have demonstrated the potential of using our approach to analyse and understand the morphological and functional evolution of RILD in greater detail than previously possible.
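The Dice scores reported above measure volumetric overlap between the automated and manual labels for each tissue class. A minimal sketch of the Dice similarity coefficient for binary masks (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND ref| / (|pred| + |ref|), in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0
```

Because Dice penalises both missed and spurious voxels, sparse or thin texture classes (like the 0.26 and 0.43 classes above) typically score far lower than large, well-defined ones, even for visually acceptable labels.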
Next Generation Reporting and Diagnostic Tools for Healthcare and Biomedical Applications
A systematic review of natural language processing applied to radiology reports
NLP has a significant role in advancing healthcare and has been found to be
key in extracting structured information from radiology reports.
Understanding recent developments in the application of NLP to radiology is
important, but recent reviews of this area are limited. This study
systematically assesses recent literature on NLP applied to radiology
reports. Our automated literature search yields 4,799 results, which are
refined through automated filtering, metadata-enriching steps, and citation
search combined with manual review. Our analysis is based on 21 variables,
including radiology characteristics, NLP methodology, performance, study
characteristics, and clinical application characteristics. We present a
comprehensive analysis of the 164 retained publications, each categorised
into one of 6 clinical application categories. The use of deep learning is
increasing, but conventional machine learning approaches are still prevalent.
Deep learning remains challenged when data are scarce, and there is little
evidence of adoption into clinical practice. Although 17% of studies report
F1 scores greater than 0.85, it is hard to evaluate these approaches
comparatively because most use different datasets. Only 14 studies made their
data available, 15 their code, and only 10 externally validated their
results. Automated understanding of the clinical narratives in radiology
reports has the potential to enhance the healthcare process, but
reproducibility and explainability of models are important if the domain is
to move applications into clinical use. More could be done to share code,
enabling validation of methods on data from different institutions, and to
reduce heterogeneity in the reporting of study properties, allowing
inter-study comparisons. Our results are significant for researchers,
providing a systematic synthesis of existing work to build on and helping to
identify gaps and opportunities for collaboration while avoiding duplication.