
    An ensemble deep learning based approach for red lesion detection in fundus images

    Background and objectives: Diabetic retinopathy (DR) is one of the leading causes of preventable blindness in the world. Its earliest signs are red lesions, a general term that covers both microaneurysms (MAs) and hemorrhages (HEs). In daily clinical practice, these lesions are detected manually by physicians on fundus photographs. However, this task is tedious and time consuming, and requires intensive effort due to the small size of the lesions and their lack of contrast. Computer-assisted diagnosis of DR based on red lesion detection is being actively explored because it improves both the consistency and the accuracy of clinicians. Moreover, it provides comprehensive feedback that is easy for physicians to assess. Several methods for detecting red lesions have been proposed in the literature, most of them based on characterizing lesion candidates with hand-crafted features and classifying them into true or false positive detections. Deep learning based approaches, by contrast, are scarce in this domain due to the high cost of annotating lesions manually. Methods: In this paper we propose a novel method for red lesion detection that combines deep learned features with domain knowledge. Features learned by a convolutional neural network (CNN) are augmented with hand-crafted features, and the resulting ensemble descriptor is then used to identify true lesion candidates with a Random Forest classifier. Results: We empirically observed that combining both sources of information significantly improves results with respect to using either approach separately. Furthermore, our method reported the highest performance on a per-lesion basis on DIARETDB1 and e-ophtha, and for DR screening and need for referral on MESSIDOR, compared against a second human expert. Conclusions: These results highlight that integrating manually engineered approaches with deep learned features improves results when the networks are trained from lesion-level annotated data. An open source implementation of our system is publicly available at https://github.com/ignaciorlando/red-lesion-detection.
    Affiliations: Orlando, José Ignacio (Universidad Nacional del Centro de la Provincia de Buenos Aires, Facultad de Ciencias Exactas, Grupo de Plasmas Densos Magnetizados; Comisión de Investigaciones Científicas de la Provincia de Buenos Aires; Argentina). Prokofyeva, Elena (Scientific Institute of Public Health; Belgium). del Fresno, Mirta Mariana (Universidad Nacional del Centro de la Provincia de Buenos Aires, Facultad de Ciencias Exactas, Grupo de Plasmas Densos Magnetizados; Comisión de Investigaciones Científicas de la Provincia de Buenos Aires; Argentina). Blaschko, Matthew Brian (ESAT Speech Group; Belgium).
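
    The ensemble idea above lends itself to a compact illustration. The authors' open-source implementation linked above is the reference; the snippet below is only a minimal scikit-learn sketch of the core step, concatenating CNN-derived and hand-crafted candidate descriptors and classifying them with a Random Forest. All array shapes, feature dimensions, and the synthetic data are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the ensemble descriptor idea: CNN-learned features for each
# lesion candidate are concatenated with hand-crafted features and classified
# with a Random Forest. Arrays and dimensions are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder feature matrices: one row per lesion candidate.
n_candidates = 1000
cnn_features = rng.normal(size=(n_candidates, 128))   # e.g. CNN embedding per candidate patch
hand_crafted = rng.normal(size=(n_candidates, 30))    # e.g. shape/intensity descriptors
labels = rng.integers(0, 2, size=n_candidates)        # 1 = true red lesion, 0 = false positive

# Ensemble descriptor: simple concatenation of both feature sources.
X = np.hstack([cnn_features, hand_crafted])

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]
print("AUC on held-out candidates:", roc_auc_score(y_test, probs))
```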

    Lesion detection and Grading of Diabetic Retinopathy via Two-stages Deep Convolutional Neural Networks

    We propose an automatic diabetic retinopathy (DR) analysis algorithm based on a two-stage deep convolutional neural network (DCNN). Compared to existing DCNN-based DR detection methods, the proposed algorithm has the following advantages: (1) our method can point out the location and type of lesions in fundus images as well as give the severity grade of DR; moreover, since retinal lesions and DR severity appear at different scales in fundus images, integrating local and global networks learns more complete and specific features for DR analysis; (2) by introducing an imbalanced weighting map, more attention is given to lesion patches for DR grading, which significantly improves the performance of the proposed algorithm. In this study, we label 12,206 lesion patches and re-annotate the DR grades of 23,595 fundus images from the Kaggle competition dataset under the guidance of clinical ophthalmologists. The experimental results show that our local lesion detection network achieves performance comparable to trained human observers, and that the proposed imbalanced weighting scheme significantly improves the capability of our DCNN-based DR grading algorithm.
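
    As a rough illustration of the imbalanced weighting map described in advantage (2), the sketch below up-weights lesion patches, scored by a hypothetical local detection network, when pooling patch features into a global descriptor for grading. The function name, the lesion_boost parameter, and the toy data are assumptions for illustration and are not taken from the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of an imbalanced weighting map:
# per-patch lesion scores from a local detection network are used to up-weight
# lesion-containing patches when aggregating patch features for global DR grading.
import numpy as np

def weighted_patch_pooling(patch_features, lesion_scores, lesion_boost=4.0):
    """Aggregate per-patch features into one image-level descriptor.

    patch_features : (n_patches, d) array of features from the local network.
    lesion_scores  : (n_patches,) array in [0, 1]; higher = more likely lesion.
    lesion_boost   : hypothetical parameter controlling how much extra weight
                     a confident lesion patch receives over a background patch.
    """
    weights = 1.0 + lesion_boost * lesion_scores   # background patches keep weight 1
    weights = weights / weights.sum()              # normalize into a weighting map
    return weights @ patch_features                # weighted average descriptor

# Toy usage with random placeholders.
rng = np.random.default_rng(0)
features = rng.normal(size=(64, 256))   # 64 patches, 256-d features each
scores = rng.random(64)                 # lesion probabilities from the local net
global_descriptor = weighted_patch_pooling(features, scores)
print(global_descriptor.shape)          # (256,)
```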

    Bone marrow edema in sacroiliitis: detection with dual-energy CT

    Objectives: To evaluate the feasibility and diagnostic accuracy of dual-energy computed tomography (DECT) for the detection of bone marrow edema (BME) in patients with suspected sacroiliitis. Methods: Patients aged 18-55 years with clinical suspicion of sacroiliitis were enrolled. All patients underwent DECT and 3.0 T MRI of the sacroiliac joints on the same day. Virtual non-calcium (VNCa) images were calculated from the DECT data to demonstrate BME. VNCa images were scored by two readers independently using a binary system (0 = normal bone marrow, 1 = BME). Diagnostic performance was assessed with fluid-sensitive MRI as the reference standard. ROIs were placed on the VNCa images and the corresponding CT numbers were recorded. Cutoff values for BME detection were determined from ROC curves. Results: Forty patients (16 men, 24 women; mean age 37.1 ± 9.6 years) were included. Overall inter-reader agreement for visual reading of BME on VNCa images was good (kappa = 0.70). The sensitivity and specificity of BME detection by DECT were 65.4% and 94.2% at the quadrant level and 81.3% and 91.7% at the patient level. ROC analyses revealed AUCs of 0.90 and 0.87 for CT numbers in the ilium and sacrum, respectively. Cutoff values of −44.4 HU (for iliac quadrants) and −40.8 HU (for sacral quadrants) yielded sensitivities of 76.9% and 76.7% and specificities of 91.5% and 87.5%, respectively. Conclusions: Inflammatory sacroiliac BME can be detected on VNCa images calculated from DECT, with good interobserver agreement, moderate sensitivity, and high specificity.
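
    To make the reported cutoffs concrete, the sketch below shows how region-specific HU thresholds could be applied to ROI measurements from VNCa images and evaluated against an MRI reference, assuming, as is typical for VNCa bone marrow imaging, that edema raises the CT number above the fatty-marrow baseline. The helper function, the synthetic ROI values, and the evaluation code are illustrative and are not part of the study.

```python
# Sketch (not from the study) of applying the reported VNCa cutoffs: mean CT
# numbers measured in iliac/sacral ROIs are compared against region-specific
# thresholds, flagging values above the cutoff as bone marrow edema (BME).
import numpy as np
from sklearn.metrics import roc_auc_score

CUTOFF_HU = {"ilium": -44.4, "sacrum": -40.8}   # cutoffs reported in the abstract

def classify_bme(mean_hu, region):
    """Return 1 (BME) if the ROI's mean VNCa CT number exceeds the regional cutoff."""
    return int(mean_hu > CUTOFF_HU[region])

# Toy evaluation against an MRI reference standard (random placeholder data).
rng = np.random.default_rng(0)
reference = rng.integers(0, 2, size=100)              # 1 = BME on fluid-sensitive MRI
ct_numbers = np.where(reference == 1,
                      rng.normal(-20, 15, size=100),  # edematous marrow: higher HU
                      rng.normal(-70, 15, size=100))  # normal fatty marrow: lower HU

predictions = [classify_bme(hu, "ilium") for hu in ct_numbers]
sensitivity = np.mean([p for p, r in zip(predictions, reference) if r == 1])
specificity = np.mean([1 - p for p, r in zip(predictions, reference) if r == 0])
print(f"AUC={roc_auc_score(reference, ct_numbers):.2f}, "
      f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```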