Deep learning segmentation of fibrous cap in intravascular optical coherence tomography images
Thin-cap fibroatheroma (TCFA) is a prominent risk factor for plaque rupture.
Intravascular optical coherence tomography (IVOCT) enables identification of
fibrous cap (FC), measurement of FC thicknesses, and assessment of plaque
vulnerability. We developed a fully-automated deep learning method for FC
segmentation. This study included 32,531 images across 227 pullbacks from two
registries. Images were semi-automatically labeled using our OCTOPUS software,
with expert editing following established guidelines. We employed preprocessing
including guidewire shadow detection, lumen segmentation, pixel-shifting, and
Gaussian filtering on raw IVOCT (r,theta) images. Data were augmented in a
natural way by changing theta in spiral acquisitions and by changing intensity
and noise values. We used a modified SegResNet and comparison networks to
segment FCs. We employed transfer learning from our existing, much larger,
fully-labeled calcification IVOCT dataset to reduce deep-learning training time.
Overall, our method consistently delivered better FC segmentation results
(Dice: 0.837+/-0.012) than other deep-learning methods. Transfer learning
reduced training time by 84% and reduced the need for more training samples.
Our method showed a high level of generalizability, evidenced by
highly-consistent segmentations across five-fold cross-validation (sensitivity:
85.0+/-0.3%, Dice: 0.846+/-0.011) and the held-out test (sensitivity: 84.9%,
Dice: 0.816) sets. In addition, we found excellent agreement of FC thickness
with ground truth (2.95+/-20.73 um), giving clinically insignificant bias.
There was excellent reproducibility in pre- and post-stenting pullbacks
(average FC angle: 200.9+/-128.0 deg / 202.0+/-121.1 deg). Our method will be
useful for multiple research purposes and potentially for planning stent
deployments that avoid placing a stent edge over an FC.Comment: 24 pages, 9 figures, 2 tables, 2 supplementary figures, 3
supplementary table
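The theta-shift augmentation mentioned in the abstract exploits the rotational continuity of the spiral acquisition: circularly shifting the A-lines of a polar (r, theta) frame yields a physically plausible new frame. A minimal sketch in Python (function and parameter names are illustrative, not taken from the authors' code):

```python
import numpy as np

def augment_polar_frame(frame, theta_shift, gain=1.0, noise_sigma=0.0, rng=None):
    """Augment a polar (r, theta) IVOCT frame.

    frame: 2D array, rows = A-lines (theta), cols = depth samples (r).
    theta_shift: circular shift in A-lines; valid because the catheter
        acquisition is rotationally continuous.
    gain: multiplicative intensity jitter.
    noise_sigma: std. dev. of additive Gaussian noise (0 disables it).
    """
    rng = rng or np.random.default_rng()
    out = np.roll(frame, theta_shift, axis=0)   # rotate in theta
    out = out * gain                            # intensity jitter
    if noise_sigma > 0:
        out = out + rng.normal(0.0, noise_sigma, out.shape)
    return out
```

Because the shift wraps around in theta, no image content is lost, unlike translation augmentation on Cartesian images.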
Automated classification of coronary plaque calcification in OCT pullbacks with 3D deep neural networks
Significance: Detection and characterization of coronary atherosclerotic plaques often requires review of a large number of optical coherence tomography (OCT) imaging slices to make a clinical decision. However, it is challenging to manually review all slices while accounting for the interrelationships between adjacent slices.
Approach: Inspired by the recent success of deep convolutional networks in medical image classification, we propose a ResNet-3D network for classification of coronary plaque calcification in OCT pullbacks. The ResNet-3D network was initialized from a trained ResNet-50, with its three-dimensional convolutional filters constructed from the 2D filters using either zero padding or non-zero padding. To retrain ResNet-50, we used a dataset of ∼4,860 OCT images derived from 18 entire pullbacks from different patients. In addition, we investigated a two-phase training method to address data imbalance. To improve performance, we evaluated different input sizes for the ResNet-3D network (3, 5, and 7 OCT slices). Furthermore, we integrated all ResNet-3D results by majority voting.
Results: A comparative analysis demonstrated the effectiveness of the proposed ResNet-3D networks over a 2D ResNet on the OCT dataset. The classification performance (F1-score = 94% for non-zero padding and 96% for zero padding) demonstrates the potential of convolutional neural networks (CNNs) for classifying plaque calcification.
Conclusions: This work may provide a foundation for further work in extending the CNN to voxel segmentation, which may lead to a supportive diagnostic tool for the assessment of coronary plaque vulnerability.
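The majority-voting step that combines the outputs of the ResNet-3D variants (e.g., 3-, 5-, and 7-slice inputs) can be sketched as follows. This is a simplified illustration, not the authors' implementation, assuming each model emits one integer class label per frame:

```python
import numpy as np

def majority_vote(pred_lists):
    """Combine per-frame class predictions from several models by
    majority vote; ties resolve to the lowest class index.

    pred_lists: sequence of prediction lists, one per model,
        each of length n_frames.
    """
    preds = np.asarray(pred_lists)            # shape: (n_models, n_frames)
    n_classes = preds.max() + 1
    # Count votes per class for every frame (column).
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)               # winning class per frame
```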
Automated analysis of fibrous cap in intravascular optical coherence tomography images of coronary arteries
Thin-cap fibroatheroma (TCFA) and plaque rupture have been recognized as the
most frequent risk factors for thrombosis and acute coronary syndrome.
Intravascular optical coherence tomography (IVOCT) can identify TCFA and assess
cap thickness, which provides an opportunity to assess plaque vulnerability. We
developed an automated method that can detect lipidous plaque and assess
fibrous cap thickness in IVOCT images. This study analyzed a total of 4,360
IVOCT image frames of 77 lesions among 41 patients. To improve segmentation
performance, preprocessing included lumen segmentation, pixel-shifting, and
noise filtering on the raw polar (r, theta) IVOCT images. We used the
DeepLab-v3 plus deep learning model to classify lipidous plaque pixels. After
lipid detection, we automatically detected the outer border of the fibrous cap
using a special dynamic programming algorithm and assessed the cap thickness.
Our method provided excellent discriminability of lipid plaque with a
sensitivity of 85.8% and A-line Dice coefficient of 0.837. By comparing lipid
angle measurements between two analysts following editing of our automated
software, we found good agreement by Bland-Altman analysis (difference 6.7+/-17
degrees; mean 196 degrees). Our method accurately detected the fibrous cap from
the detected lipid plaque. Automated analysis required a significant
modification for only 5.5% of frames. Furthermore, our method showed good
agreement of fibrous cap thickness between two analysts with Bland-Altman
analysis (4.2+/-14.6 microns; mean 175 microns), indicating little bias between
users and good reproducibility of the measurement. We developed a fully
automated method for fibrous cap quantification in IVOCT images, resulting in
good agreement with determinations by analysts. The method has great potential
to enable highly automated, repeatable, and comprehensive evaluations of TCFAs.
Comment: 18 pages, 9 figures
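The "special dynamic programming algorithm" for tracing the outer fibrous-cap border is not specified in detail here, but a generic DP border trace over a cost image illustrates the idea. This is a hedged sketch; the authors' cost terms and constraints will differ:

```python
import numpy as np

def dp_border(cost, max_jump=1):
    """Trace a left-to-right border through a cost image by dynamic
    programming.

    cost[r, c]: low values where the border is likely (e.g., an edge map).
    max_jump: maximum row change between adjacent columns (smoothness).
    Returns the row index of the border in each column.
    """
    n_rows, n_cols = cost.shape
    acc = cost.astype(float).copy()           # accumulated path cost
    back = np.zeros((n_rows, n_cols), dtype=int)
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo = max(0, r - max_jump)
            hi = min(n_rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]          # reachable predecessors
            k = int(prev.argmin())
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Backtrack from the cheapest endpoint in the last column.
    path = np.empty(n_cols, dtype=int)
    path[-1] = int(acc[:, -1].argmin())
    for c in range(n_cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```

The smoothness constraint (`max_jump`) is what makes DP attractive for anatomical borders: it guarantees a globally optimal, connected contour rather than independent per-column picks.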
Feasibility of Colon Cancer Detection in Confocal Laser Microscopy Images Using Convolution Neural Networks
Histological evaluation of tissue samples is a typical approach to identify
colorectal cancer metastases in the peritoneum. For immediate assessment,
reliable and real-time in-vivo imaging would be required. For example,
intraoperative confocal laser microscopy has been shown to be suitable for
distinguishing organs and also malignant and benign tissue. So far, the
analysis is done by human experts. We investigate the feasibility of automatic
colon cancer classification from confocal laser microscopy images using deep
learning models. We overcome very small dataset sizes through transfer learning
with state-of-the-art architectures. We achieve an accuracy of 89.1% for cancer
detection in the peritoneum which indicates viability as an intraoperative
decision support system.
Comment: Accepted at BVM Workshop 201
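The small-dataset transfer-learning recipe can be sketched in its simplest form: reuse a pretrained backbone as a frozen feature extractor and fit only a lightweight classifier head on the few available samples. This is an illustrative simplification; the paper fine-tunes full state-of-the-art architectures rather than training only a linear head:

```python
import numpy as np

def train_head(features, labels, lr=0.5, epochs=200):
    """Fit a logistic-regression head on frozen backbone features.

    features: (n_samples, n_features) array, e.g., embeddings from a
        pretrained CNN applied to the microscopy images (hypothetical
        pipeline, not the authors' exact setup).
    labels: binary labels (0 = benign, 1 = cancer), as floats.
    """
    n, d = features.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))          # sigmoid probabilities
        grad = p - labels                     # dL/dz for cross-entropy loss
        w -= lr * features.T @ grad / n
        b -= lr * grad.mean()
    return w, b

def predict(features, w, b):
    """Binary prediction from the trained head."""
    return (features @ w + b > 0).astype(int)
```

Training only the head keeps the number of learned parameters tiny, which is the key to avoiding overfitting on very small datasets.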