
    Automated diagnosing primary open-angle glaucoma from fundus image by simulating human's grading with deep learning

    Primary open-angle glaucoma (POAG) is a leading cause of irreversible blindness worldwide. Although deep learning methods have been proposed to diagnose POAG, it remains challenging to develop a robust and explainable algorithm that automatically facilitates downstream diagnostic tasks. In this study, we present an automated classification algorithm, GlaucomaNet, to identify POAG from fundus photographs that vary across populations and settings. GlaucomaNet consists of two convolutional neural networks that simulate the human grading process: one learns discriminative features and the other fuses those features for grading. We evaluated GlaucomaNet on two datasets: Ocular Hypertension Treatment Study (OHTS) participants and the Large-scale Attention-based Glaucoma (LAG) dataset. GlaucomaNet achieved AUCs of 0.904 and 0.997 for POAG diagnosis on the OHTS and LAG datasets, respectively. An ensemble of network architectures further improved diagnostic accuracy. By simulating the human grading process, GlaucomaNet demonstrated high accuracy with increased transparency in POAG diagnosis (comprehensiveness scores of 97% and 36%). These methods also address two well-known challenges in the field: the need for greater image data diversity and the heavy reliance on perimetry for POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. GlaucomaNet is publicly available at https://github.com/bionlplab/GlaucomaNet
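
    The two-network design described above can be illustrated with a minimal PyTorch sketch. The backbone choice (ResNet-18), feature dimension, and class names below are assumptions for illustration only, not the published GlaucomaNet implementation.

# Hedged sketch of a two-stage grading pipeline: one CNN learns discriminative
# features from a fundus photograph, a second network fuses those features into
# a POAG grade. Backbone, sizes, and names are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class FeatureNet(nn.Module):
    """First stage: learn discriminative features from a fundus photograph."""
    def __init__(self, feature_dim=256):
        super().__init__()
        backbone = models.resnet18()  # backbone choice is an assumption
        backbone.fc = nn.Linear(backbone.fc.in_features, feature_dim)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x)  # (N, feature_dim)

class FusionNet(nn.Module):
    """Second stage: fuse the learned features into a POAG / non-POAG grade."""
    def __init__(self, feature_dim=256, num_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(feature_dim, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_classes),
        )

    def forward(self, features):
        return self.classifier(features)  # (N, num_classes) logits

class TwoStageGrader(nn.Module):
    """Chains the two networks, mirroring the feature-then-grade workflow."""
    def __init__(self):
        super().__init__()
        self.features = FeatureNet()
        self.grader = FusionNet()

    def forward(self, x):
        return self.grader(self.features(x))

if __name__ == "__main__":
    model = TwoStageGrader()
    fundus = torch.randn(4, 3, 224, 224)  # dummy batch of fundus photographs
    print(model(fundus).shape)  # torch.Size([4, 2])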

    Deep Learning for Differentiating Benign From Malignant Parotid Lesions on MR Images

    Purpose/Objective(s): Salivary gland tumors are a rare, histologically heterogeneous group of tumors. The distinction between malignant and benign tumors of the parotid gland is clinically important. This study aims to develop and evaluate a deep-learning network for diagnosing parotid gland tumors from MR images.
    Materials/Methods: Two hundred thirty-three patients with parotid gland tumors were enrolled in this study. Histology results were available for all tumors. All patients underwent MRI scans, including T1-weighted, CE-T1-weighted, and T2-weighted imaging series. The parotid glands and tumors were segmented on all three MR image series by a radiologist with 10 years of clinical experience. A total of 3791 parotid gland region images were cropped from the MR images. A label (pleomorphic adenoma, Warthin tumor, malignant tumor, or free of tumor), based on the histology results, was assigned to each image. To train the deep-learning model, these data were randomly divided into a training dataset (90%, comprising 3035 MR images from 212 patients: 714 pleomorphic adenoma images, 558 Warthin tumor images, 861 malignant tumor images, and 902 tumor-free images) and a validation dataset (10%, comprising 275 images from 21 patients: 57 pleomorphic adenoma images, 36 Warthin tumor images, 93 malignant tumor images, and 89 tumor-free images). A modified ResNet model was developed to classify these images. The input images were resized to 224x224 pixels with four channels (T1-weighted tumor image, T2-weighted tumor image, CE-T1-weighted tumor image, and parotid gland image). Random image flipping and contrast adjustment were used for data augmentation. The model was trained for 1200 epochs with a learning rate of 1e-6 using the Adam optimizer; the whole training procedure took approximately 2 hours. The program was developed with PyTorch (version 1.2).
    Results: The model accuracy on the training dataset was 92.94% (95% CI [0.91, 0.93]), with a micro-AUC of 0.98. The accuracy of the final algorithm in the diagnosis and staging of parotid cancer was 82.18% (95% CI [0.77, 0.86]), with a micro-AUC of 0.93.
    Conclusion: The proposed model may assist clinicians in the diagnosis of parotid tumors. However, larger-scale multicenter studies are required for full validation.
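
    The setup in Materials/Methods can be sketched briefly in PyTorch. The four-channel 224x224 input, four label categories, Adam optimizer, and 1e-6 learning rate follow the abstract; the ResNet depth, the contrast-adjustment helper, and the training-loop details are assumptions for illustration, not the authors' code.

# Hedged sketch: a stock ResNet modified to accept a 4-channel 224x224 input
# (T1, T2, CE-T1, parotid gland) and output 4 classes, trained with Adam at 1e-6.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # pleomorphic adenoma, Warthin tumor, malignant tumor, free of tumor

def build_model():
    """Modify a stock ResNet for a 4-channel input and 4 output classes."""
    model = models.resnet18()  # the abstract says "modified ResNet"; the depth is assumed
    # Replace the stem so the network accepts four image channels instead of RGB.
    model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
    # Replace the classification head for the four label categories.
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model

def random_contrast(x, lo=0.8, hi=1.2):
    """Simple stand-in for the reported contrast-adjustment augmentation."""
    factor = torch.empty(1).uniform_(lo, hi).item()
    mean = x.mean(dim=(-2, -1), keepdim=True)
    return (x - mean) * factor + mean

def train_one_epoch(model, loader, optimizer, criterion, device):
    model.train()
    for images, labels in loader:  # images: (N, 4, 224, 224) float tensors
        if torch.rand(1).item() < 0.5:       # random horizontal flip
            images = torch.flip(images, dims=[-1])
        images = random_contrast(images)      # random contrast adjustment
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = build_model().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)  # learning rate as reported
    criterion = nn.CrossEntropyLoss()
    # DataLoader construction is omitted; it would yield the 4-channel crops described
    # above, e.g. for epoch in range(1200): train_one_epoch(model, loader, optimizer, criterion, device)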

    Spoilage Detection in Raspberry Fruit Based on Spectral Imaging Using Convolutional Neural Networks

    Effective spoilage detection of perishable food items such as fruits and vegetables is essential for retailers who stock and sell large quantities of these items. This research aims to develop a non-destructive, rapid, and accurate method based on Spectral Imaging (SI) used in tandem with a Convolutional Neural Network (CNN) to predict whether a fruit is fresh or rotten. The study also aims to estimate the number of days before the fruit rots. This research employs primary, quantitative, and inductive methods to investigate a deep-learning-based approach to detecting fruit spoilage. Raspberry fruit in particular was chosen for the experiment. Baskets of raspberries from three different stores were bought and stored in a refrigerator at four degrees Celsius. Images of these baskets were captured daily with an RGB digital camera until all the baskets of fruit had rotted. The study employs a supervised classification approach whereby the data are labelled based on the physical appearance of the fruits in each basket. The results show that spectral imaging used along with a CNN yields a good accuracy of 86% and an F1 score of 0.82 in classifying the fruits as Good or Bad, but does not fare well in estimating the number of days before the fruit actually rots. The ability of a CNN to process and identify patterns in spectral images to detect spoilage in fruits would help fruit retail operators optimize their supply chain.
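
    A minimal PyTorch sketch of the kind of CNN classifier used for the Good/Bad decision is given below. The layer layout, input resolution, and channel count are assumptions, since the abstract does not specify the network architecture.

# Hedged sketch of a small CNN for the fresh-vs-rotten ("Good"/"Bad")
# classification described above. Architecture details are illustrative
# assumptions, not the study's actual network.
import torch
import torch.nn as nn

class SpoilageCNN(nn.Module):
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # pool each image to a 64-d vector
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = SpoilageCNN()
    batch = torch.randn(8, 3, 128, 128)  # dummy batch of raspberry basket images
    logits = model(batch)                # (8, 2): Good vs Bad scores
    print(logits.argmax(dim=1))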