Microaneurysm detection using fully convolutional neural networks
Background and Objectives: Diabetic retinopathy is a microvascular complication of diabetes that can lead to sight loss if not treated early enough. Microaneurysms are the earliest clinical signs of diabetic retinopathy. This paper presents an automatic method for detecting microaneurysms in fundus photographs. Methods: A novel patch-based fully convolutional neural network with batch normalization layers and a Dice loss function is proposed. Compared to other methods that require up to five processing stages, it requires only three. Furthermore, to the best of the authors' knowledge, this is the first paper to show how to successfully transfer knowledge between datasets in the microaneurysm detection domain. Results: The proposed method was evaluated on three publicly available and widely used datasets: E-Ophtha, DIARETDB1, and ROC. It achieved better results than state-of-the-art methods under the FROC metric, attaining the highest sensitivities at low false positive rates, which is particularly important for screening purposes. Conclusions: The performance, simplicity, and robustness of the proposed method demonstrate its suitability for diabetic retinopathy screening applications.
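The Dice loss named in the abstract above is commonly used for segmentation tasks with strong class imbalance, such as microaneurysm detection, where lesion pixels are vastly outnumbered by background. A minimal sketch of a soft Dice loss is given below; this is an illustration of the general technique, not the authors' implementation, and the smoothing term `eps` is an assumption:

```python
def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss for binary segmentation.

    pred: predicted foreground probabilities in [0, 1], flattened.
    target: binary ground-truth labels, same length as pred.
    Returns 1 - Dice coefficient, so lower is better; the eps term
    (an assumption here) guards against division by zero on empty masks.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    dice = (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)
    return 1.0 - dice
```

Because the Dice coefficient is computed over the overlap of predicted and true foreground, the loss is driven by the few lesion pixels rather than the abundant background, which is why it suits rare-lesion segmentation better than plain pixel-wise cross-entropy.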
Spatial distribution of early red lesions is a risk factor for development of vision-threatening diabetic retinopathy
Aims/hypothesis
Diabetic retinopathy is characterised by morphological lesions related to disturbances in retinal blood flow. It has previously been shown that the early development of retinal lesions temporal to the fovea may predict the development of treatment-requiring diabetic maculopathy. The aim of this study was to map accurately the area where lesions could predict progression to vision-threatening retinopathy.
Methods
The predictive value of the location of the earliest red lesions representing haemorrhages and/or microaneurysms was studied by comparing their occurrence in a group of individuals later developing vision-threatening diabetic retinopathy with that in a group matched with respect to diabetes type, age, sex and age of onset of diabetes mellitus who did not develop vision-threatening diabetic retinopathy during a similar observation period.
Results
The probability of progression to vision-threatening diabetic retinopathy was higher in a circular area temporal to the fovea, and the occurrence of the first lesions in this area was predictive of the development of vision-threatening diabetic retinopathy. The calculated peak value showed that the risk of progression was 39.5% higher than the average. There was no significant difference in the early distribution of lesions in participants later developing diabetic maculopathy or proliferative diabetic retinopathy.
Conclusions/interpretation
The location of early red lesions in diabetic retinopathy is predictive of whether or not individuals will later develop vision-threatening diabetic retinopathy. This evidence should be incorporated into risk models used to recommend control intervals in screening programmes for diabetic retinopathy.
Comparing bone shape models from deep learning processing of magnetic resonance imaging to computed tomography-based models.
BACKGROUND: The purpose of this study was to develop a deep learning approach to automatically segment the scapular bone on magnetic resonance imaging (MRI) images and to compare the accuracy of these three-dimensional (3D) models with that of 3D computed tomography (CT). METHODS: Fifty-five patients with high-resolution 3D fat-saturated T2 MRI were retrospectively identified. The underlying pathology included rotator cuff tendinopathy and tears, shoulder instability, and impingement. Two experienced musculoskeletal researchers manually segmented the scapular bone. Five cross-validation training and validation splits were generated to independently train two-dimensional (2D) and 3D models using a convolutional neural network approach. Model performance was evaluated using the Dice similarity coefficient (DSC). All models with DSC > 0.70 were ensembled and used for the test set, which consisted of four patients with matching high-resolution MRI and CT scans. Clinically relevant glenoid measurements, including glenoid height, width, and retroversion, were calculated for two of the patients. Paired t-tests and Wilcoxon signed-rank tests were used to compare the DSC of the models. RESULTS: The 2D and 3D models achieved a best DSC of 0.86 and 0.82, respectively, with no significant difference observed. Augmentation of imaging data significantly improved 3D but not 2D model performance. In comparing clinical measurements from 3D MRI and CT, mean differences ranged from 1.29 mm to 3.46 mm and from 0.05° to 7.47°. CONCLUSION: We have presented a fully automatic, deep learning-based strategy for extracting scapular shape from a high-resolution MRI scan. Further development of this technology has the potential to allow surgeons to obtain all clinically relevant information from MRI scans and reduce the need for multiple imaging studies for patients with shoulder pathology.
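The Dice similarity coefficient (DSC) used to evaluate the segmentation models above measures voxel-wise overlap between a predicted mask and a reference mask. A minimal sketch of its computation on binary masks follows; this is illustrative only, not the study's evaluation code, and the convention for two empty masks is an assumption:

```python
def dice_similarity(pred_mask, ref_mask):
    """Dice similarity coefficient between two binary masks.

    Both masks are flat sequences of 0/1 voxel labels. DSC = 1.0 means
    perfect overlap, 0.0 means none; the study ensembled models with
    DSC > 0.70.
    """
    pred = [bool(v) for v in pred_mask]
    ref = [bool(v) for v in ref_mask]
    denom = sum(pred) + sum(ref)
    if denom == 0:
        # Both masks empty: treated here as perfect agreement (a convention,
        # not something specified in the abstract).
        return 1.0
    overlap = sum(p and r for p, r in zip(pred, ref))
    return 2.0 * overlap / denom
```

Note that the DSC is the overlap counterpart of the Dice loss used in training-time segmentation objectives: the loss minimized during training is typically 1 minus this coefficient.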