Improving the resolution of retinal OCT with deep learning
In medical imaging, high resolution can be crucial for identifying pathologies and subtle changes in tissue structure. However, in many scenarios, achievable image resolution is limited by physics or by the available technology. In this paper, we aim to develop an automatic and fast approach to increasing the resolution of Optical Coherence Tomography (OCT) images using only the data available, without any additional information or repeated scans. We adapt a fully connected deep learning network for the super-resolution task, allowing multi-scale similarity to be considered, and create training and testing sets of more than 40,000 sample patches from retinal OCT data. Testing our model, we achieve a root mean squared error of 5.847 and a peak signal-to-noise ratio (PSNR) of 33.28 dB averaged over 8,282 samples. This represents a mean improvement in PSNR of 3.2 dB over nearest-neighbour and 1.4 dB over bilinear interpolation. The results achieved so far improve over commonly used fast techniques for increasing resolution and are encouraging for further development towards fast OCT super-resolution. The ability to quickly increase the resolution of OCT and other medical images could significantly impact medical imaging at the point of care, allowing small but significant details to be revealed efficiently and accurately for inspection by clinicians and graders, and facilitating earlier and more accurate diagnosis of disease.
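The reported figures combine two standard image-quality metrics. As a minimal sketch (assuming 8-bit images with a peak value of 255, which the abstract does not state), RMSE and PSNR can be computed as:

```python
import numpy as np

def rmse(ref, test):
    """Root mean squared error between two images."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    return np.sqrt(np.mean((ref - test) ** 2))

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB for images on a 0..max_val scale."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20.0 * np.log10(max_val / e)
```

Note that PSNR is a monotone function of RMSE for a fixed peak value, which is why the paper can report both numbers for the same test set.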
Using CycleGANs for effectively reducing image variability across OCT devices and improving retinal fluid segmentation
Optical coherence tomography (OCT) has become the most important imaging
modality in ophthalmology. A substantial amount of research has recently been
devoted to the development of machine learning (ML) models for the
identification and quantification of pathological features in OCT images. Among
the several sources of variability the ML models have to deal with, a major
factor is the acquisition device, which can limit the ML model's
generalizability. In this paper, we propose to reduce the image variability
across different OCT devices (Spectralis and Cirrus) by using CycleGAN, an
unsupervised unpaired image transformation algorithm. The usefulness of this
approach is evaluated in the setting of retinal fluid segmentation, namely
intraretinal cystoid fluid (IRC) and subretinal fluid (SRF). First, we train a
segmentation model on images acquired with a source OCT device. Then we
evaluate the model on (1) source, (2) target and (3) transformed versions of
the target OCT images. The presented transformation strategy shows an F1 score
of 0.4 (0.51) for IRC (SRF) segmentations. Compared with traditional
transformation approaches, this means an F1 score gain of 0.2 (0.12).
Comment: * Contributed equally (order was defined by flipping a coin).
Accepted for publication in the IEEE International Symposium on Biomedical Imaging (ISBI) 2019.
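The F1 scores used to evaluate the fluid segmentations can be computed directly from binary masks. A minimal sketch (the paper's exact evaluation protocol, e.g. per-scan versus pooled averaging, is not given here):

```python
import numpy as np

def f1_score(pred, truth):
    """F1 (Dice) score between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # pixels both masks label as fluid
    fp = np.logical_and(pred, ~truth).sum()  # predicted fluid, not in ground truth
    fn = np.logical_and(~pred, truth).sum()  # ground-truth fluid that was missed
    denom = 2 * tp + fp + fn
    return 1.0 if denom == 0 else 2 * tp / denom
```

For segmentation masks, F1 computed this way is identical to the Dice coefficient, a common convention in retinal fluid segmentation work.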
A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head
Purpose: To develop a deep learning approach to de-noise optical coherence
tomography (OCT) B-scans of the optic nerve head (ONH).
Methods: Volume scans consisting of 97 horizontal B-scans were acquired
through the center of the ONH using a commercial OCT device (Spectralis) for
both eyes of 20 subjects. For each eye, single-frame (without signal
averaging), and multi-frame (75x signal averaging) volume scans were obtained.
A custom deep learning network was then designed and trained with 2,328 "clean
B-scans" (multi-frame B-scans) and their corresponding "noisy B-scans" (clean
B-scans + Gaussian noise) to de-noise the single-frame B-scans. The performance
of the de-noising algorithm was assessed qualitatively and quantitatively on
1,552 B-scans using the signal-to-noise ratio (SNR), contrast-to-noise ratio
(CNR), and mean structural similarity index (MSSIM).
Results: The proposed algorithm successfully denoised unseen single-frame OCT
B-scans. The denoised B-scans were qualitatively similar to their corresponding
multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR,
the mean CNR for all the ONH tissues, and the MSSIM all increased from the
single-frame to the denoised B-scans when compared with the corresponding
multi-frame B-scans.
Conclusions: Our deep learning algorithm can denoise a single-frame OCT
B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior
quality OCT B-scans with reduced scanning times and minimal patient discomfort.
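The training-pair construction described above (clean multi-frame B-scans plus synthetic Gaussian noise) and the SNR metric can be sketched as follows; the noise standard deviation and the 0-255 intensity range are assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy(clean, sigma=10.0):
    """Create a synthetic "noisy B-scan" by adding zero-mean Gaussian noise
    to a clean B-scan, clipped to a 0..255 range (sigma is an assumed value)."""
    noisy = clean.astype(np.float64) + rng.normal(0.0, sigma, clean.shape)
    return np.clip(noisy, 0.0, 255.0)

def snr_db(signal, noise):
    """SNR in dB from a signal image and an estimate of the noise image."""
    p_signal = np.mean(signal.astype(np.float64) ** 2)
    p_noise = np.mean(noise.astype(np.float64) ** 2)
    return 10.0 * np.log10(p_signal / p_noise)
```

Training on (clean, clean + noise) pairs like this lets the network learn a denoising mapping without requiring a registered noisy/clean scan pair for every example.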
Open Source Software for Automatic Detection of Cone Photoreceptors in Adaptive Optics Ophthalmoscopy Using Convolutional Neural Networks
Imaging with an adaptive optics scanning light ophthalmoscope (AOSLO) enables direct visualization of the cone photoreceptor mosaic in the living human retina. Quantitative analysis of AOSLO images typically requires manual grading, which is time-consuming and subjective; thus, automated algorithms are highly desirable. Previously developed automated methods are often reliant on ad hoc rules that may not be transferable between different imaging modalities or retinal locations. In this work, we present a convolutional neural network (CNN) based method for cone detection that learns features of interest directly from training data. This cone-identifying algorithm was trained and validated on separate data sets of confocal and split detector AOSLO images, with results showing performance that closely mimics the gold-standard manual process. Further, without any need for algorithmic modifications for a specific AOSLO imaging system, our fully automated multi-modality CNN-based cone detection method achieved results comparable to previous automatic cone segmentation methods that relied on ad hoc rules for different applications. We have made free open-source software for the proposed method and the corresponding training and testing datasets available online.
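Validating automated cone detections against manual grading typically involves matching each detection to a manual mark within a small distance tolerance and counting true positives, false positives, and misses. A minimal greedy-matching sketch (the distance threshold and matching rule here are assumptions, not the paper's protocol):

```python
import numpy as np

def match_detections(auto_pts, manual_pts, max_dist=2.0):
    """Greedily match automatic cone detections to manual marks within
    max_dist pixels (an assumed tolerance). Returns (tp, fp, fn) counts."""
    auto = [np.asarray(p, dtype=float) for p in auto_pts]
    manual = [np.asarray(p, dtype=float) for p in manual_pts]
    used = [False] * len(manual)
    tp = 0
    for a in auto:
        # find the nearest still-unmatched manual mark within tolerance
        best, best_d = None, max_dist
        for i, m in enumerate(manual):
            if used[i]:
                continue
            d = np.linalg.norm(a - m)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            used[best] = True
            tp += 1
    fp = len(auto) - tp   # detections with no nearby manual mark
    fn = len(manual) - tp # manual marks the algorithm missed
    return tp, fp, fn
```

From these counts, sensitivity (tp / (tp + fn)) and false discovery rate (fp / (tp + fp)) quantify how closely an automated method mimics the manual grading it is compared against.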
Predicting optical coherence tomography-derived diabetic macular edema grades from fundus photographs using deep learning
Diabetic eye disease is one of the fastest growing causes of preventable
blindness. With the advent of anti-VEGF (vascular endothelial growth factor)
therapies, it has become increasingly important to detect center-involved
diabetic macular edema (ci-DME). However, center-involved diabetic macular
edema is diagnosed using optical coherence tomography (OCT), which is not
generally available at screening sites because of cost and workflow
constraints. Instead, screening programs rely on the detection of hard exudates
in color fundus photographs as a proxy for DME, often resulting in high false
positive or false negative calls. To improve the accuracy of DME screening, we
trained a deep learning model to use color fundus photographs to predict
ci-DME. Our model had an ROC-AUC of 0.89 (95% CI: 0.87-0.91), which corresponds
to a sensitivity of 85% at a specificity of 80%. In comparison, three retinal
specialists had similar sensitivities (82-85%), but only half the specificity
(45-50%, p<0.001 for each comparison with model). The positive predictive value
(PPV) of the model was 61% (95% CI: 56-66%), approximately double the 36-38%
achieved by the retinal specialists. In addition to predicting ci-DME, our model was able
to detect the presence of intraretinal fluid with an AUC of 0.81 (95% CI:
0.81-0.86) and subretinal fluid with an AUC of 0.88 (95% CI: 0.85-0.91). The
ability of deep learning algorithms to make clinically relevant predictions
that generally require sophisticated 3D imaging equipment from simple 2D images
has broad relevance to many other applications in medical imaging.
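The reported operating point ties together sensitivity, specificity, and PPV; in particular, PPV depends on the prevalence of ci-DME in the test population, which is why the model's higher specificity roughly doubles its PPV at a similar sensitivity. A sketch of the standard definitions (the example counts and prevalence below are illustrative, not from the paper):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

def ppv_at_prevalence(sensitivity, specificity, prevalence):
    """PPV implied by an operating point and a disease prevalence
    (Bayes' rule): true positives over all positive calls."""
    tp_rate = sensitivity * prevalence
    fp_rate = (1.0 - specificity) * (1.0 - prevalence)
    return tp_rate / (tp_rate + fp_rate)
```

Holding sensitivity fixed, raising specificity shrinks the false-positive term in the denominator, which is the mechanism behind the PPV gap between the model and the graders.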
Deep Learning in Cardiology
The medical field is creating large amount of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient in solving complicated medical tasks or for creating insights using
big data. Deep learning has emerged as a more accurate and effective technology
in a wide range of medical problems such as diagnosis, prediction and
intervention. Deep learning is a representation learning method that consists
of layers that transform the data non-linearly, thus, revealing hierarchical
relationships and structures. In this review we survey deep learning
application papers that use structured data, signal and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep learning
in cardiology that also apply in medicine in general, while proposing certain
directions as the most viable for clinical use.
Comment: 27 pages, 2 figures, 10 tables.