
    A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head

    Purpose: To develop a deep learning approach to denoise optical coherence tomography (OCT) B-scans of the optic nerve head (ONH). Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device (Spectralis) for both eyes of 20 subjects. For each eye, single-frame (without signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. A custom deep learning network was then designed and trained with 2,328 "clean B-scans" (multi-frame B-scans) and their corresponding "noisy B-scans" (clean B-scans + Gaussian noise) to denoise the single-frame B-scans. The performance of the denoising algorithm was assessed qualitatively and quantitatively on 1,552 B-scans using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and mean structural similarity index metric (MSSIM). Results: The proposed algorithm successfully denoised unseen single-frame OCT B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR increased from 4.02 ± 0.68 dB (single-frame) to 8.14 ± 1.03 dB (denoised). For all the ONH tissues, the mean CNR increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The MSSIM increased from 0.13 ± 0.02 (single-frame) to 0.65 ± 0.03 (denoised) when compared with the corresponding multi-frame B-scans. Conclusions: Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior-quality OCT B-scans with reduced scanning times and minimal patient discomfort.
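    As an illustration of the training-pair construction and the SNR metric described above, the minimal sketch below adds Gaussian noise to a clean (multi-frame averaged) B-scan and computes an SNR in dB. The noise level, array shape, and the exact SNR/region definitions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_noisy_bscan(clean_bscan: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Corrupt a 'clean' multi-frame-averaged B-scan with additive Gaussian noise
    to form a (noisy, clean) training pair for a denoising network."""
    noisy = clean_bscan + np.random.normal(0.0, sigma, size=clean_bscan.shape)
    return np.clip(noisy, 0.0, 1.0)

def snr_db(signal_region: np.ndarray, background_region: np.ndarray) -> float:
    """A common SNR definition: mean tissue intensity over background noise std, in dB."""
    return 20.0 * np.log10(signal_region.mean() / background_region.std())

# Hypothetical usage with a B-scan scaled to [0, 1]:
clean = np.random.rand(496, 384)            # placeholder for a real multi-frame B-scan
noisy = make_noisy_bscan(clean, sigma=0.1)  # network input during training
```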

    Deep learning algorithms to isolate and quantify the structures of the anterior segment in optical coherence tomography images

    Background/Aims: Accurate isolation and quantification of intraocular dimensions in the anterior segment (AS) of the eye using optical coherence tomography (OCT) images is important in the diagnosis and treatment of many eye diseases, especially angle-closure glaucoma. Method: In this study, we developed a deep convolutional neural network (DCNN) for the localisation of the scleral spur; moreover, we introduced an information-rich segmentation approach for this localisation problem. An ensemble of DCNNs for the segmentation of AS structures (iris, corneosclera shell and anterior chamber) was also developed. Based on the results of these two processes, an algorithm to automatically quantify clinically important measurements was created. 200 images from 58 patients (100 eyes) were used for testing. Results: With limited training data, the DCNN was able to detect the scleral spur on unseen anterior segment optical coherence tomography (ASOCT) images as accurately as an experienced ophthalmologist on the given test dataset, and simultaneously isolated the AS structures with a Dice coefficient of 95.7%. We then automatically extracted eight clinically relevant ASOCT measurements and proposed an automated quality-check process that asserts the reliability of these measurements. When combined with an OCT machine capable of imaging multiple radial sections, the algorithms can provide a more complete objective assessment. The total segmentation and measurement time for a single scan is less than 2 s. Conclusion: This is an essential step towards providing a robust automated framework for reliable quantification of ASOCT scans, for applications in the diagnosis and management of angle-closure glaucoma.
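    The Dice coefficient reported above compares a predicted segmentation mask against a manual reference. A minimal sketch of the standard formulation is given below, assuming binary NumPy masks per AS structure; the paper's exact evaluation protocol may differ.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for a single binary structure mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Hypothetical usage: compute Dice per structure (iris, corneosclera shell,
# anterior chamber) and average across structures and test images.
```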

    DeshadowGAN: a deep learning approach to remove shadows from optical coherence tomography images

    Purpose: To remove blood vessel shadows from optical coherence tomography (OCT) images of the optic nerve head (ONH). Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device for both eyes of 13 subjects. A custom generative adversarial network (named DeshadowGAN) was designed and trained with 2,328 B-scans in order to remove blood vessel shadows in unseen B-scans. Image quality was assessed qualitatively (for artifacts) and quantitatively using the intralayer contrast, a measure of shadow visibility ranging from 0 (shadow-free) to 1 (strong shadow). This was computed in the retinal nerve fiber layer (RNFL), the inner plexiform layer (IPL), the photoreceptor (PR) layer, and the retinal pigment epithelium (RPE) layer. The performance of DeshadowGAN was also compared with that of compensation, the standard for shadow removal. Results: DeshadowGAN decreased the intralayer contrast in all tissue layers. On average, the intralayer contrast decreased by 33.7 ± 6.81%, 28.8 ± 10.4%, 35.9 ± 13.0%, and 43.0 ± 19.5% for the RNFL, IPL, PR layer, and RPE layer, respectively, indicating successful shadow removal across all depths. Output images were also free from artifacts commonly observed with compensation. Conclusions: DeshadowGAN significantly corrected blood vessel shadows in OCT images of the ONH. Our algorithm may be considered as a preprocessing step to improve the performance of a wide range of algorithms, including those currently used for OCT segmentation, denoising, and classification. Translational Relevance: DeshadowGAN could be integrated into existing OCT devices to improve the diagnosis and prognosis of ocular pathologies.
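    One common way to quantify shadow visibility within a layer, consistent with the 0 (shadow-free) to 1 (strong shadow) range mentioned above, is a normalized intensity contrast between shadowed and non-shadowed pixels of the same tissue layer. The sketch below is an assumed formulation for illustration, not the paper's exact definition or region-selection procedure.

```python
import numpy as np

def intralayer_contrast(shadow_pixels: np.ndarray, clear_pixels: np.ndarray) -> float:
    """Normalized contrast between shadowed and non-shadowed pixels of one tissue layer.
    Returns ~0 when the layer is shadow-free and approaches 1 for strong shadows."""
    i_shadow = float(shadow_pixels.mean())
    i_clear = float(clear_pixels.mean())
    return abs(i_clear - i_shadow) / (i_clear + i_shadow)
```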

    OCT-GAN: single step shadow and noise removal from optical coherence tomography images of the human optic nerve head

    Speckle noise and retinal shadows within OCT B-scans occlude important edges, fine textures, and deep tissues, preventing accurate and robust diagnosis by algorithms and clinicians. We developed a single process that successfully removed both noise and retinal shadows from unseen single-frame B-scans within 10.4 ms. Mean average gradient magnitude (AGM) for the proposed algorithm was 57.2% higher than the current state of the art, while mean peak signal-to-noise ratio (PSNR), contrast-to-noise ratio (CNR), and structural similarity index metric (SSIM) increased by 11.1%, 154%, and 187%, respectively, compared to single-frame B-scans. Mean intralayer contrast (ILC) improved for the retinal nerve fiber layer (RNFL), photoreceptor (PR), and retinal pigment epithelium (RPE) layers, decreasing from 0.362 ± 0.133 to 0.142 ± 0.102, from 0.449 ± 0.116 to 0.0904 ± 0.0769, and from 0.381 ± 0.100 to 0.0590 ± 0.0451, respectively. The proposed algorithm reduces the necessity for long image acquisition times, minimizes expensive hardware requirements, and reduces motion artifacts in OCT images.
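    The average gradient magnitude (AGM) and PSNR figures above measure edge sharpness and fidelity to a reference image. The sketch below shows standard formulations of both, assuming float images scaled to [0, 1]; the paper's exact computation may differ.

```python
import numpy as np

def average_gradient_magnitude(bscan: np.ndarray) -> float:
    """Mean magnitude of the intensity gradient; higher values indicate sharper edges."""
    gy, gx = np.gradient(bscan.astype(float))
    return float(np.sqrt(gx ** 2 + gy ** 2).mean())

def psnr_db(reference: np.ndarray, test: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB of a test image against a reference."""
    mse = float(np.mean((reference - test) ** 2))
    return 10.0 * np.log10(max_val ** 2 / mse)
```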

    A Deep Learning Approach to Digitally Stain Optical Coherence Tomography Images of the Optic Nerve Head

    PURPOSE. To develop a deep learning approach to digitally stain optical coherence tomography (OCT) images of the optic nerve head (ONH). METHODS. A horizontal B-scan was acquired through the center of the ONH using OCT (Spectralis) for one eye of each of 100 subjects (40 healthy and 60 glaucoma). All images were enhanced using adaptive compensation. A custom deep learning network was then designed and trained with the compensated images to digitally stain (i.e., highlight) six tissue layers of the ONH. The accuracy of our algorithm was assessed (against manual segmentations) using the Dice coefficient, sensitivity, specificity, intersection over union (IU), and accuracy. We studied the effect of compensation, the number of training images, and the performance difference between glaucoma and healthy subjects. RESULTS. For images it had not yet assessed, our algorithm was able to digitally stain the retinal nerve fiber layer + prelamina, the retinal pigment epithelium (RPE), all other retinal layers, the choroid, and the peripapillary sclera and lamina cribrosa. For all tissues, the mean Dice coefficient, sensitivity, specificity, IU, and accuracy were 0.84 ± 0.03, 0.92 ± 0.03, 0.99 ± 0.00, 0.89 ± 0.03, and 0.94 ± 0.02, respectively. Our algorithm performed significantly better when compensated images were used for training (P < 0.001). Besides offering good reliability, digital staining also performed well on OCT images of both glaucoma and healthy individuals. CONCLUSIONS. Our deep learning algorithm can simultaneously stain the neural and connective tissues of the ONH, offering a framework to automatically measure multiple key structural parameters of the ONH that may be critical to improve glaucoma management.
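    The per-tissue metrics above (sensitivity, specificity, intersection over union, accuracy) can all be derived from a per-class confusion matrix. The sketch below, assuming integer label maps with one class index per tissue layer, illustrates one way to compute them; it is not the authors' code.

```python
import numpy as np

def per_class_metrics(pred: np.ndarray, truth: np.ndarray,
                      n_classes: int = 6, eps: float = 1e-7) -> dict:
    """Sensitivity, specificity, IoU, and accuracy for each tissue class of a label map."""
    metrics = {}
    for c in range(n_classes):
        p, t = (pred == c), (truth == c)
        tp = np.logical_and(p, t).sum()   # true positives for class c
        fp = np.logical_and(p, ~t).sum()  # false positives
        fn = np.logical_and(~p, t).sum()  # false negatives
        tn = np.logical_and(~p, ~t).sum() # true negatives
        metrics[c] = {
            "sensitivity": tp / (tp + fn + eps),
            "specificity": tn / (tn + fp + eps),
            "iou": tp / (tp + fp + fn + eps),
            "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        }
    return metrics
```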