13 research outputs found

    Fully automatic evaluation of the corneal endothelium from in vivo confocal microscopy

    No full text
    Background Manual and semi-automatic analyses of images, acquired in vivo by confocal microscopy, are often used to determine the quality of the corneal endothelium in the human eye. These procedures are highly time consuming. Here, we present two fully automatic methods to analyze and quantify corneal endothelium imaged by in vivo white light slit-scanning confocal microscopy. Methods In the first approach, endothelial cell density is estimated with the help of spatial frequency analysis. We evaluate published methods, and propose a new, parameter-free method. In the second approach, based on the stochastic watershed, cells are automatically segmented and the result is used to estimate cell density, polymegathism (cell size variability) and pleomorphism (cell shape variation). We show how to determine optimal values for the three parameters of this algorithm, and compare its results to a semi-automatic delineation by a trained observer. Results The frequency analysis method proposed here is more precise than any published method. The segmentation method outperforms both previously published methods and the fully automatic method in the NAVIS software (Nidek Technologies Srl, Padova, Italy), which significantly overestimates the number of cells for cell densities below approximately 1200 cells/mm². Conclusions The methods presented here provide a significant improvement over the state of the art, and make in vivo, automated assessment of the corneal endothelium more accessible. The proposed segmentation method paves the way to many possible new morphometric parameters, which can quickly and precisely be determined from the segmented image.
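    As a rough illustration of the frequency-analysis idea (not the parameter-free estimator proposed in the paper), the sketch below estimates cell density from the radial average of the image's power spectrum, where the ring produced by the regular cell mosaic marks the mean cell spacing; the hexagonal-packing conversion and the assumption of a roughly square input image are ours.

```python
import numpy as np

def estimate_cell_density(image, pixel_size_um):
    """Rough endothelial cell density estimate from the radial power spectrum.

    image         : 2-D grayscale endothelial image (roughly square, assumed)
    pixel_size_um : physical size of one pixel in micrometres (assumed known)
    Returns an approximate density in cells/mm^2.
    """
    img = image - image.mean()                       # remove the DC component
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    # Radial average of the power spectrum
    cy, cx = np.array(spec.shape) // 2
    y, x = np.indices(spec.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=spec.ravel()) / np.maximum(counts, 1)

    # The regular cell mosaic produces a ring in the spectrum; its radius
    # (in cycles per image) relates to the mean cell spacing.
    rmax = min(spec.shape) // 2
    peak = np.argmax(radial[2:rmax]) + 2             # skip the low-frequency lobe
    spacing_um = (spec.shape[0] / peak) * pixel_size_um   # approximate cell spacing

    # Hexagonal-packing assumption: cell area = (sqrt(3)/2) * spacing^2
    cell_area_um2 = (np.sqrt(3) / 2.0) * spacing_um ** 2
    return 1e6 / cell_area_um2                       # cells per mm^2
```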

    Loosely coupled level sets for retinal layers and drusen segmentation in subjects with dry age-related macular degeneration

    No full text
    Optical coherence tomography (OCT) is used to produce high-resolution three-dimensional images of the retina, which permit the investigation of retinal irregularities. In dry age-related macular degeneration (AMD), a chronic eye disease that causes central vision loss, disruptions such as drusen and changes in retinal layer thicknesses occur which could be used as biomarkers for disease monitoring and diagnosis. Due to the topology-disrupting pathology, existing segmentation methods often fail. Here, we present a solution for the segmentation of retinal layers in dry AMD subjects by extending our previously presented loosely coupled level sets framework which operates on attenuation coefficients. In eyes affected by AMD, Bruch’s membrane becomes visible only below the drusen and our segmentation framework is adapted to delineate such a partially discernible interface. Furthermore, the initialization stage, which tentatively segments five interfaces, is modified to accommodate the appearance of drusen. This stage is based on Dijkstra's algorithm and combines prior knowledge on the shape of the interface, gradient and attenuation coefficient in the newly proposed cost function. This prior knowledge is incorporated by varying the weights for horizontal, diagonal and vertical edges. Finally, quantitative evaluation of the accuracy shows a good agreement between manual and automated segmentation.
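    The cost-function and edge-weighting idea in the initialization stage can be illustrated with a small shortest-path sketch: a generic Dijkstra over a column-to-column pixel graph, not the paper's exact formulation; the cost image, the weight values and the restriction to rightward steps are assumptions.

```python
import heapq
import numpy as np

def trace_interface(cost, w_h=1.0, w_d=1.4):
    """Trace a retinal interface as a minimum-cost left-to-right path.

    cost : 2-D array; low values where the interface is likely (e.g. derived
           from gradient and attenuation-coefficient terms).
    w_h, w_d : illustrative weights for horizontal and diagonal steps, encoding
               a prior preference for smooth, mostly horizontal paths.
    Returns one row index per column.
    """
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = np.full((rows, cols), -1, dtype=int)
    heap = [(cost[r, 0], r, 0) for r in range(rows)]   # start anywhere in column 0
    dist[:, 0] = cost[:, 0]
    heapq.heapify(heap)

    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c] or c == cols - 1:            # stale entry or last column
            continue
        for dr, w in ((-1, w_d), (0, w_h), (1, w_d)):  # step one column to the right
            nr = r + dr
            if 0 <= nr < rows:
                nd = d + w * cost[nr, c + 1]
                if nd < dist[nr, c + 1]:
                    dist[nr, c + 1] = nd
                    prev[nr, c + 1] = r
                    heapq.heappush(heap, (nd, nr, c + 1))

    # Backtrack from the cheapest end point in the last column
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(dist[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = prev[path[c], c]
    return path
```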

    Automatic estimation of retinal nerve fiber bundle orientation in SD-OCT images using a structure-oriented smoothing filter

    No full text
    Optical coherence tomography (OCT) yields high-resolution, three-dimensional images of the retina. A better understanding of retinal nerve fiber bundle (RNFB) trajectories in combination with visual field data may be used for future diagnosis and monitoring of glaucoma. However, manual tracing of these bundles is a tedious task. In this work, we present an automatic technique to estimate the orientation of RNFBs from volumetric OCT scans. Our method consists of several steps, starting from automatic segmentation of the RNFL. Then, a stack of en face images around the posterior nerve fiber layer interface was extracted. The image showing the best visibility of RNFB trajectories was selected for further processing. After denoising the selected en face image, a semblance structure-oriented filter was applied to probe the strength of local linear structure in a discrete set of orientations, creating an orientation space. Gaussian filtering along the orientation axis in this space was used to find the dominant orientation. Next, a confidence map was created to supplement the estimated orientation. This confidence map was used as pixel weight in normalized convolution to regularize the semblance filter response, after which a new orientation estimate can be obtained. Finally, after several iterations an orientation field corresponding to the strongest local orientation was obtained. The RNFB orientations of six macular scans from three subjects were estimated. For all scans, visual inspection shows a good agreement between the estimated orientation fields and the RNFB trajectories in the en face images. Additionally, a good correlation between the orientation fields of two scans of the same subject was observed. Our method was also applied to a larger field of view around the macula. Manual tracing of the RNFB trajectories shows a good agreement with the streamlines obtained automatically by fiber tracking.
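    The regularization step can be sketched as normalized convolution of the orientation field, using the confidence map as per-pixel weight; representing orientations as doubled-angle vectors (so that 0 and π, which denote the same axis, reinforce rather than cancel) is our assumption about a reasonable implementation, not a detail taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularize_orientation(theta, confidence, sigma=5.0):
    """Regularize a noisy orientation field with normalized convolution.

    theta      : per-pixel orientation estimates in radians (axial data, period pi)
    confidence : per-pixel weights in [0, 1] (e.g. a semblance-based certainty)
    sigma      : width of the Gaussian applicability function, in pixels (assumed)
    """
    # Doubled-angle representation of axial orientations, weighted by confidence
    cx = confidence * np.cos(2 * theta)
    sy = confidence * np.sin(2 * theta)

    num_c = gaussian_filter(cx, sigma)
    num_s = gaussian_filter(sy, sigma)
    den = gaussian_filter(confidence, sigma) + 1e-12   # normalization term of NC

    # Dividing by `den` is the normalization step; the recovered angle itself
    # is insensitive to it, but the normalized vector length can serve as a
    # new certainty estimate in further iterations.
    return 0.5 * np.arctan2(num_s / den, num_c / den)
```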

    Automatic detection of the region of interest in corneal endothelium images using dense convolutional neural networks

    No full text
    In images of the corneal endothelium (CE) acquired by specular microscopy, endothelial cells are commonly only visible in a part of the image due to varying contrast, mainly caused by challenging imaging conditions as a result of a strongly curved endothelium. In order to estimate the morphometric parameters of the corneal endothelium, the analyses need to be restricted to trustworthy regions - the region of interest (ROI) - where individual cells are discernible. We developed an automatic method to find the ROI by Dense U-nets, a densely connected network of convolutional layers. We tested the method on a heterogeneous dataset of 140 images, which contains a large number of blurred, noisy, and/or out of focus images, where the selection of the ROI for automatic biomarker extraction is vital. By using edge images as input, which can be estimated after retraining the same network, Dense U-net detected the trustworthy areas with an accuracy of 98.94% and an area under the ROC curve (AUC) of 0.998, without being affected by the class imbalance (9:1 in our dataset). After applying the estimated ROI to the edge images, the mean absolute percentage error (MAPE) in the estimated endothelial parameters was 0.80% for ECD, 3.60% for CV, and 2.55% for HEX.
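    A minimal sketch of how such a probability map could be thresholded into an ROI mask and scored against a manual mask follows; function and variable names are illustrative, and the network itself is not reproduced here. AUC is reported alongside accuracy because plain accuracy is optimistic under a ~9:1 class imbalance.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_roi(prob_map, gt_mask, threshold=0.5):
    """Evaluate a predicted ROI probability map against a manual ROI mask.

    prob_map : 2-D array of per-pixel probabilities from the network
    gt_mask  : 2-D boolean array, True inside the trusted region
    Returns pixel accuracy and area under the ROC curve.
    """
    pred = prob_map >= threshold
    accuracy = np.mean(pred == gt_mask)
    auc = roc_auc_score(gt_mask.ravel().astype(int), prob_map.ravel())
    return accuracy, auc
```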

    Convolutional neural network-based regression for biomarker estimation in corneal endothelium microscopy images

    No full text
    The morphometric parameters of the corneal endothelium – cell density (ECD), cell size variation (CV), and hexagonality (HEX) – provide clinically relevant information about the cornea. To estimate these parameters, the endothelium is commonly imaged with a non-contact specular microscope and cell segmentation is performed on these images. In previous work, we have developed several methods that, combined, can perform an automated estimation of the parameters: the inference of the cell edges, the detection of the region of interest (ROI), a post-processing method that combines both images (edges and ROI), and a refinement method that removes false edges. In this work, we first explore the possibility of using a CNN-based regressor to directly infer the parameters from the edge images, simplifying the framework. We use a dataset of 738 images coming from a study related to the implantation of a Baerveldt glaucoma device and from standard clinical care regarding DSAEK corneal transplantation, both from the Rotterdam Eye Hospital and both containing images of unhealthy endothelia. This large dataset allows us to build a large training set that makes this approach feasible. We achieved a mean absolute percentage error (MAPE) of 4.32% for ECD, 7.07% for CV, and 11.74% for HEX. These results, while promising, do not outperform our previous work. In a second experiment, we explore the use of the CNN-based regressor to improve the post-processing method of our previous approach in order to adapt it to the specifics of each image. Our results showed no clear benefit and proved that our previous post-processing is already highly reliable and robust.
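    For illustration only, a generic CNN regressor with three linear outputs could look as follows; this is a Keras sketch in which the architecture, input size and training loss are assumptions, not the network used in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_biomarker_regressor(input_shape=(256, 256, 1)):
    """Illustrative CNN regressor mapping an edge image to (ECD, CV, HEX)."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128, 256):                # assumed depth/filter counts
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(128, activation="relu")(x)
    outputs = layers.Dense(3, name="ecd_cv_hex")(x)   # linear outputs for regression

    model = tf.keras.Model(inputs, outputs)
    # MSE as an assumed training loss; MAPE tracked because it is the reported metric
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])
    return model
```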

    Accurate estimation of the attenuation coefficient from axial point spread function corrected OCT scans of a single layer phantom

    No full text
    The attenuation coefficient (AC) is a property related to the microstructure of tissue on a wavelength scale that can be estimated from optical coherence tomography (OCT) data. Since the OCT signal sensitivity is affected by the finite spectrometer/detector resolution called roll-off and the shape of the focused beam in the sample arm, ignoring these effects leads to severely biased estimates of AC. Previously, the signal intensity dependence on these factors has been modeled. In this paper, we study the dependence of the estimated AC on the beam-shape and focus depth experimentally. A method is presented to estimate the axial point spread function model parameters by fitting the OCT signal model for single scattered light to the averaged A-lines of multiple B-scans obtained from a homogeneous single-layer phantom. The estimated model parameters were used to compensate the signal for the axial point spread function and roll-off in order to obtain an accurate estimate of AC. The result shows a significant improvement in the accuracy of the estimation of AC after correcting for the shape of the OCT beam.
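    The fitting step can be sketched as a non-linear least-squares fit of a single-scattering model that multiplies a confocal axial PSF, a sensitivity roll-off term and exponential attenuation; the exact functional forms and the initial guess below are assumptions in the spirit of commonly used OCT signal models, not the paper's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def oct_model(z, amp, mu, z_f, z_r, w):
    """Illustrative single-scattering OCT signal model.

    amp : overall amplitude
    mu  : attenuation coefficient
    z_f : focus depth, z_r : apparent Rayleigh length (confocal axial PSF)
    w   : sensitivity roll-off width of the spectrometer/detector
    """
    confocal = 1.0 / (((z - z_f) / z_r) ** 2 + 1.0)
    rolloff = np.exp(-(z ** 2) / (2.0 * w ** 2))
    return amp * confocal * rolloff * np.exp(-2.0 * mu * z)

def fit_attenuation(z, a_line_avg, p0=(1.0, 2.0, 0.5, 0.2, 1.5)):
    """Fit the model to an averaged A-line of a homogeneous single-layer phantom.

    z in mm, a_line_avg in linear intensity units; p0 is a rough initial guess
    (amp, mu [1/mm], z_f [mm], z_r [mm], w [mm]) and is an assumption.
    """
    popt, _ = curve_fit(oct_model, z, a_line_avg, p0=p0, maxfev=20000)
    return popt   # popt[1] is the attenuation coefficient of interest
```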

    Improved Accuracy and Robustness of a Corneal Endothelial Cell Segmentation Method Based on Merging Superpixels

    No full text
    Clinical parameters related to the corneal endothelium can only be estimated by segmenting endothelial cell images. Specular microscopy is the current standard technique to image the endothelium, but its low SNR makes the segmentation a complicated task. Recently, we proposed a method to segment such images by starting with an oversegmented image and merging the superpixels that constitute a cell. Here, we show how our merging method provides better results than optimizing the segmentation itself. Furthermore, our method provides accurate results regardless of the degree of the initial oversegmentation, resulting in a precision and recall of 0.91 for the optimal oversegmentation.
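    A simplified stand-in for the oversegment-and-merge idea is sketched below: SLIC superpixels are merged whenever the image is not bright along their shared boundary, on the assumption that true cell borders appear as bright ridges. The merging criterion and all parameter values are illustrative, not the criterion proposed in the paper.

```python
import numpy as np
from skimage.segmentation import slic

def merge_superpixels(image, n_segments=600, edge_threshold=0.5):
    """Oversegment a grayscale endothelial image and merge across weak edges."""
    # Grayscale input assumed; channel_axis=None requires scikit-image >= 0.19
    labels = slic(image, n_segments=n_segments, compactness=10,
                  channel_axis=None, start_label=0)

    # Mean image intensity along the shared boundary of each adjacent pair
    boundary_sum, boundary_cnt = {}, {}
    for axis in (0, 1):
        a = np.take(labels, range(labels.shape[axis] - 1), axis=axis)
        b = np.take(labels, range(1, labels.shape[axis]), axis=axis)
        ia = np.take(image, range(image.shape[axis] - 1), axis=axis)
        ib = np.take(image, range(1, image.shape[axis]), axis=axis)
        diff = a != b
        pairs = np.stack([np.minimum(a[diff], b[diff]),
                          np.maximum(a[diff], b[diff])], axis=1)
        strengths = 0.5 * (ia[diff] + ib[diff])
        for (p, q), s in zip(map(tuple, pairs), strengths):
            boundary_sum[(p, q)] = boundary_sum.get((p, q), 0.0) + s
            boundary_cnt[(p, q)] = boundary_cnt.get((p, q), 0) + 1

    # Union-find merge of pairs whose shared boundary is weak (same cell)
    parent = list(range(labels.max() + 1))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for pair, total in boundary_sum.items():
        if total / boundary_cnt[pair] < edge_threshold:
            ra, rb = find(pair[0]), find(pair[1])
            if ra != rb:
                parent[rb] = ra

    return np.vectorize(find)(labels)
```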

    Detection of retinal changes from illumination normalized fundus images using convolutional neural networks

    No full text
    Automated detection and quantification of spatio-temporal retinal changes is an important step to objectively assess disease progression and treatment effects for dynamic retinal diseases such as diabetic retinopathy (DR). However, detecting retinal changes caused by early DR lesions such as microaneurysms and dot hemorrhages from longitudinal pairs of fundus images is challenging due to intra- and inter-image illumination variation between fundus images. This paper explores a method for automated detection of retinal changes from illumination normalized fundus images using a deep convolutional neural network (CNN), and compares its performance with two other CNNs trained separately on color and green channel fundus images. Illumination variation was addressed by correcting for the variability in luminosity and contrast estimated from large-scale retinal regions. The CNN models were trained and evaluated on image patches extracted from a registered fundus image set collected from 51 diabetic eyes that were screened at two different time-points. The results show that using normalized images yields better performance than color and green channel images, suggesting that illumination normalization helps CNNs to quickly and correctly learn distinctive local image features of DR-related retinal changes.
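    The normalization step can be sketched as a luminosity/contrast correction using large-scale Gaussian estimates of background and local contrast; the filter scale and the exact formulation are assumptions along the lines of standard fundus normalization schemes, not necessarily the method used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_illumination(green, sigma=50):
    """Luminosity/contrast normalization of a fundus image (illustrative).

    green : 2-D green-channel image as float
    sigma : scale (in pixels) of the large-scale background estimate; the value
            is an assumption, chosen large compared with lesions.
    """
    luminosity = gaussian_filter(green, sigma)                   # slowly varying background
    contrast = gaussian_filter(np.abs(green - luminosity), sigma) + 1e-6
    return (green - luminosity) / contrast                       # zero-mean, contrast-equalized
```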

    Deep learning for assessing the corneal endothelium from specular microscopy images up to 1 year after ultrathin-DSAEK surgery

    No full text
    Purpose: To present a fully automatic method to estimate the corneal endothelium parameters from specular microscopy images and to use it to study a one-year follow-up after ultrathin Descemet stripping automated endothelial keratoplasty. Methods: We analyzed 383 post-ultrathin Descemet stripping automated endothelial keratoplasty images from 41 eyes acquired with a Topcon SP-1P specular microscope at 1, 3, 6, and 12 months after surgery. The estimated parameters were endothelial cell density (ECD), coefficient of variation (CV), and hexagonality (HEX). Manual segmentation was performed in all images. Results: Our method provided an estimate for ECD, CV, and HEX in 98.4% of the images, whereas Topcon’s software had a success rate of 71.5% for ECD/CV and 30.5% for HEX. For the images with estimates, the percentage error in our method was 2.5% for ECD, 5.7% for CV, and 5.7% for HEX, whereas Topcon’s software provided an error of 7.5% for ECD, 17.5% for CV, and 18.3% for HEX. Our method was significantly better than Topcon’s (P < 0.0001) and was not statistically significantly different from the manual assessments (P > 0.05). At month 12, the subjects presented an average ECD = 1377 ± 483 [cells/mm²], CV = 26.1 ± 5.7 [%], and HEX = 58.1 ± 7.1 [%]. Conclusions: The proposed method obtains reliable and accurate estimations even in challenging specular images of pathologic corneas. Translational Relevance: CV and HEX, not currently used in the clinic owing to a lack of reliability in automatic methods, are useful biomarkers to analyze the postoperative healing process. Our accurate estimations now allow for their clinical use.
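    Once a cell segmentation is available, the three parameters can be computed as sketched below. This is a generic illustration that assumes a space-filling labelling (neighbouring cells touch, with no separating edge pixels) and an isotropic pixel size; it is not the estimation pipeline of the paper.

```python
import numpy as np
from skimage.measure import regionprops

def endothelial_parameters(cell_labels, pixel_size_um):
    """Compute ECD, CV and HEX from a labelled cell segmentation (sketch).

    cell_labels   : 2-D integer array, one label per cell, 0 = background
    pixel_size_um : physical pixel size in micrometres (assumed isotropic)
    """
    props = regionprops(cell_labels)
    areas_um2 = np.array([p.area for p in props]) * pixel_size_um ** 2

    ecd = 1e6 / areas_um2.mean()                       # cells per mm^2
    cv = 100.0 * areas_um2.std() / areas_um2.mean()    # polymegathism, in %

    # Hexagonality: fraction of cells with exactly six neighbours (requires the
    # space-filling labelling assumed above, so that neighbouring cells touch).
    neighbours = [set() for _ in range(cell_labels.max() + 1)]
    for axis in (0, 1):
        a = np.take(cell_labels, range(cell_labels.shape[axis] - 1), axis=axis)
        b = np.take(cell_labels, range(1, cell_labels.shape[axis]), axis=axis)
        touch = (a != b) & (a > 0) & (b > 0)
        for p, q in zip(a[touch], b[touch]):
            neighbours[p].add(q)
            neighbours[q].add(p)
    hex_frac = 100.0 * np.mean([len(neighbours[p.label]) == 6 for p in props])

    return ecd, cv, hex_frac
```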

    An Automated System for the Detection and Classification of Retinal Changes Due to Red Lesions in Longitudinal Fundus Images

    No full text
    People with diabetes mellitus need annual screening to check for the development of diabetic retinopathy. Tracking small retinal changes due to early diabetic retinopathy lesions in longitudinal fundus image sets is challenging due to intra- and inter-visit variability in illumination and image quality, the required high registration accuracy, and the subtle appearance of retinal lesions compared to other retinal features. This paper presents a robust and flexible approach for automated detection of longitudinal retinal changes due to small red lesions by exploiting normalized fundus images that significantly reduce illumination variations and improve the contrast of small retinal features. To detect spatio-temporal retinal changes, the absolute difference between the extremes of the multiscale blobness responses of fundus images from two time-points is proposed as a simple and effective blobness measure. DR-related changes are then identified based on several intensity and shape features by a support vector machine classifier. The proposed approach was evaluated in the context of a regular diabetic retinopathy screening program involving subjects ranging from healthy (no retinal lesion) to moderate (with clinically relevant retinal lesions) DR levels. Evaluation shows that the system is able to detect retinal changes due to small red lesions with a sensitivity of 80% at average false positive rates of 1 and 2.5 lesions per eye on small and large fields-of-view of the retina, respectively.
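    The change measure can be illustrated with scale-normalized Laplacian-of-Gaussian blobness responses; the scale set and the use of the maximum response (dark red lesions produce positive LoG values) are assumptions, and the subsequent feature extraction and SVM classification stage is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def blobness_change(img_t0, img_t1, sigmas=(1.0, 1.5, 2.0, 3.0)):
    """Absolute difference of per-pixel extreme multiscale blobness responses.

    img_t0, img_t1 : registered, illumination-normalized images of the same eye
                     at two time-points (float arrays)
    sigmas         : LoG scales in pixels, roughly matching small red lesions (assumed)
    """
    def extreme_blobness(img):
        # Scale-normalized LoG; dark blobs (red lesions) give positive responses
        stack = np.stack([sigma ** 2 * gaussian_laplace(img, sigma) for sigma in sigmas])
        return stack.max(axis=0)

    return np.abs(extreme_blobness(img_t0) - extreme_blobness(img_t1))
```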