11 research outputs found

    A Comparison of Deep Learning Techniques for Glaucoma Diagnosis on Retinal Fundus Images

    Get PDF
    Glaucoma is a serious disorder that causes permanent vision loss if left undetected. The primary cause of the disease is elevated intraocular pressure, which damages the optic nerve head (ONH) originating at the optic disc. Variation in the optic cup to optic disc ratio helps in early detection of the disease. Manual calculation of the Cup-to-Disc Ratio (CDR) is time consuming, and the resulting prediction is not accurate. Using deep learning for automatic detection of glaucoma enables precise and early identification, significantly enhancing the accuracy of glaucoma detection. The deep learning pipeline first pre-processes the images and applies data augmentation, then segments the optic disc and optic cup from the retinal fundus image. Features are selected from the segmented Optic Disc (OD) and Optic Cup (OC) and the CDR is calculated; glaucoma classification is then performed based on the CDR value. Various deep learning techniques, such as CNNs and transfer learning, have been proposed for early detection of glaucoma. In the comparative analysis of glaucoma diagnosis, the proposed Convolutional Neural Network outperforms the other approaches in early diagnosis of glaucoma, providing an accuracy of 99.38%.
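
    The abstract gives no implementation details for the CDR step; as a minimal sketch (assuming a vertical-diameter definition of CDR and an illustrative 0.6 decision threshold, neither of which is stated in the paper), it could be computed from binary segmentation masks like this:

```python
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Vertical extent (in pixels) of the foreground region of a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return int(rows.max() - rows.min() + 1) if rows.size else 0

def cup_to_disc_ratio(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical CDR from segmented optic-disc and optic-cup masks."""
    disc = vertical_diameter(disc_mask)
    cup = vertical_diameter(cup_mask)
    return cup / disc if disc else 0.0

def classify(cdr: float, threshold: float = 0.6) -> str:
    # The 0.6 threshold is an illustrative assumption, not the paper's decision rule.
    return "glaucoma suspect" if cdr >= threshold else "normal"

if __name__ == "__main__":
    # Toy 64x64 masks: a circular optic disc and a smaller concentric cup.
    yy, xx = np.mgrid[:64, :64]
    disc = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
    cup = (yy - 32) ** 2 + (xx - 32) ** 2 <= 13 ** 2
    cdr = cup_to_disc_ratio(disc, cup)
    print(f"CDR = {cdr:.2f} -> {classify(cdr)}")
```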

    Automated diagnosing primary open-angle glaucoma from fundus image by simulating human's grading with deep learning

    Get PDF
    Primary open-angle glaucoma (POAG) is a leading cause of irreversible blindness worldwide. Although deep learning methods have been proposed to diagnose POAG, it remains challenging to develop a robust and explainable algorithm to automatically facilitate the downstream diagnostic tasks. In this study, we present an automated classification algorithm, GlaucomaNet, to identify POAG using variable fundus photographs from different populations and settings. GlaucomaNet consists of two convolutional neural networks that simulate the human grading process: learning the discriminative features and fusing the features for grading. We evaluated GlaucomaNet on two datasets: Ocular Hypertension Treatment Study (OHTS) participants and the Large-scale Attention-based Glaucoma (LAG) dataset. GlaucomaNet achieved AUCs of 0.904 and 0.997 for POAG diagnosis on the OHTS and LAG datasets, respectively. An ensemble of network architectures further improved diagnostic accuracy. By simulating the human grading process, GlaucomaNet demonstrated high accuracy with increased transparency in POAG diagnosis (comprehensiveness scores of 97% and 36%). These methods also address two well-known challenges in the field: the need for increased image data diversity and the heavy reliance on perimetry for POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. GlaucomaNet is publicly available at https://github.com/bionlplab/GlaucomaNet
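
    The paper describes GlaucomaNet as two convolutional networks simulating human grading: one learns discriminative features, the other fuses them for grading. A minimal PyTorch sketch of that two-stage idea follows; the ResNet-18 backbone, layer sizes, and class labels are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FeatureNet(nn.Module):
    """First network: extracts discriminative features from a fundus photograph."""
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)                 # backbone choice is an assumption
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc head

    def forward(self, x):                                 # x: (B, 3, 224, 224)
        return self.encoder(x).flatten(1)                 # (B, 512) feature vectors

class GradingNet(nn.Module):
    """Second network: fuses the features and produces a POAG / non-POAG score."""
    def __init__(self, in_dim=512):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),                            # two classes: POAG vs. non-POAG
        )

    def forward(self, feats):
        return self.fuse(feats)

if __name__ == "__main__":
    images = torch.randn(4, 3, 224, 224)                  # dummy mini-batch of fundus images
    logits = GradingNet()(FeatureNet()(images))
    print(logits.shape)                                   # torch.Size([4, 2])
```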

    Feasibility of atrial fibrillation detection from a novel wearable armband device

    Get PDF
    BACKGROUND: Atrial fibrillation (AF) is the world’s most common heart rhythm disorder and even several minutes of AF episodes can contribute to risk for complications, including stroke. However, AF often goes undiagnosed owing to the fact that it can be paroxysmal, brief, and asymptomatic. OBJECTIVE: To facilitate better AF monitoring, we studied the feasibility of AF detection using a continuous electrocardiogram (ECG) signal recorded from a novel wearable armband device. METHODS: In our 2-step algorithm, we first calculate the R-R interval variability–based features to capture randomness that can indicate a segment of data possibly containing AF, and subsequently discriminate normal sinus rhythm from the possible AF episodes. Next, we use density Poincaré plot-derived image domain features along with a support vector machine to separate premature atrial/ventricular contraction episodes from any AF episodes. We trained and validated our model using the ECG data obtained from a subset of the MIMIC-III (Medical Information Mart for Intensive Care III) database containing 30 subjects. RESULTS: When we tested our model using the novel wearable armband ECG dataset containing 12 subjects, the proposed method achieved sensitivity, specificity, accuracy, and F1 score of 99.89%, 99.99%, 99.98%, and 0.9989, respectively. Moreover, when compared with several existing methods with the armband data, our proposed method outperformed the others, which shows its efficacy. CONCLUSION: Our study suggests that the novel wearable armband device and our algorithm can be used as a potential tool for continuous AF monitoring with high accuracy
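
    As a rough sketch of the first step's idea, rhythm-irregularity features feeding a classifier, the snippet below uses hypothetical features (RMSSD, pNN50, coefficient of variation) and a scikit-learn SVM on synthetic R-R series; the paper's actual features, thresholds, density Poincaré plot step, and training data differ:

```python
import numpy as np
from sklearn.svm import SVC

def rr_variability_features(rr: np.ndarray) -> np.ndarray:
    """Illustrative R-R interval irregularity features (RMSSD, pNN50, and the
    coefficient of variation); stand-ins for the features used in the paper."""
    diffs = np.diff(rr)
    rmssd = np.sqrt(np.mean(diffs ** 2))
    pnn50 = np.mean(np.abs(diffs) > 0.05)          # fraction of successive diffs > 50 ms
    return np.array([rmssd, pnn50, np.std(rr) / np.mean(rr)])

# Synthetic training data: regular (sinus-like) vs. irregular (AF-like) RR series, in seconds.
rng = np.random.default_rng(0)
regular = [0.8 + 0.02 * rng.standard_normal(60) for _ in range(20)]
irregular = [0.8 + 0.15 * rng.standard_normal(60) for _ in range(20)]
X = np.array([rr_variability_features(rr) for rr in regular + irregular])
y = np.array([0] * 20 + [1] * 20)                  # 0 = sinus rhythm, 1 = possible AF

clf = SVC(kernel="rbf").fit(X, y)
test_rr = 0.8 + 0.18 * rng.standard_normal(60)     # a new, irregular series
print(clf.predict(rr_variability_features(test_rr).reshape(1, -1)))  # likely [1] (possible AF)
```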

    Medinoid : computer-aided diagnosis and localization of glaucoma using deep learning

    Get PDF
    Glaucoma is a leading eye disease, causing vision loss by gradually affecting peripheral vision if left untreated. Current diagnosis of glaucoma is performed by ophthalmologists, human experts who typically need to analyze different types of medical images generated by different types of medical equipment: fundus, Retinal Nerve Fiber Layer (RNFL), Optical Coherence Tomography (OCT) disc, OCT macula, perimetry, and/or perimetry deviation. Capturing and analyzing these medical images is labor intensive and time consuming. In this paper, we present a novel approach for glaucoma diagnosis and localization, only relying on fundus images that are analyzed by making use of state-of-the-art deep learning techniques. Specifically, our approach towards glaucoma diagnosis and localization leverages Convolutional Neural Networks (CNNs) and Gradient-weighted Class Activation Mapping (Grad-CAM), respectively. We built and evaluated different predictive models using a large set of fundus images, collected and labeled by ophthalmologists at Samsung Medical Center (SMC). Our experimental results demonstrate that our most effective predictive model is able to achieve a high diagnosis accuracy of 96%, as well as a high sensitivity of 96% and a high specificity of 100% for Dataset-Optic Disc (OD), a set of center-cropped fundus images highlighting the optic disc. Furthermore, we present Medinoid, a publicly-available prototype web application for computer-aided diagnosis and localization of glaucoma, integrating our most effective predictive model in its back-end
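
    The localization step relies on Gradient-weighted Class Activation Mapping (Grad-CAM) applied to a trained CNN. A minimal PyTorch sketch of Grad-CAM is shown below; the ResNet-18 backbone and the choice of layer4 as the target layer are assumptions for illustration, not the Medinoid model:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def grad_cam(model, image, target_layer, class_idx=None):
    """Minimal Grad-CAM: weight the target layer's activations by the
    spatially averaged gradients of the target class score."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

    logits = model(image)                                  # image: (1, 3, 224, 224)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    weights = grads["g"].mean(dim=(2, 3), keepdim=True)           # (1, C, 1, 1)
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))  # (1, 1, H, W)
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalized heatmap

if __name__ == "__main__":
    model = resnet18(weights=None).eval()   # stand-in classifier, not the Medinoid model
    heatmap = grad_cam(model, torch.randn(1, 3, 224, 224), model.layer4)
    print(heatmap.shape)                    # torch.Size([1, 1, 224, 224])
```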

    Automatic CDR Estimation for Early Glaucoma Diagnosis

    Get PDF

    Quantitative analysis with machine learning models for multi-parametric brain imaging data

    Get PDF
    Gliomas are considered the most common primary malignant brain tumor in adults. With the dramatic increase in computational power and improvements in image analysis algorithms, computer-aided medical image analysis has been introduced into clinical applications. Precise tumor grading and genotyping play an indispensable role in clinical diagnosis, treatment, and prognosis. Glioma diagnostic procedures include histopathological imaging tests, molecular imaging scans, and tumor grading. Pathologic review of tumor morphology in histologic sections is the traditional method for cancer classification and grading, yet human review has limitations that can result in low reproducibility and low inter-observer agreement. Compared with histopathological images, magnetic resonance (MR) images present different structural and functional features, which might serve as noninvasive surrogates for tumor genotypes. Computer-aided image analysis has therefore been adopted in clinical applications, as it can partially overcome these shortcomings thanks to its capacity to quantitatively and reproducibly measure multilevel features from multi-parametric medical information. Imaging features obtained from a single modality do not fully represent the disease, so quantitative imaging features derived from multi-modality medical images, including morphological, structural, cellular, and molecular-level features, should be integrated into computer-aided medical image analysis. Differences in image quality across modalities are a challenge in this field. In this thesis, we aim to integrate quantitative imaging data obtained from multiple modalities into mathematical models of tumor response prediction to gain additional insight into their practical predictive value. Our major contributions are:
    1. To address image-quality differences and observer dependence in histological image diagnosis, we propose an automated machine-learning brain tumor-grading platform that investigates the contributions of multiple parameters from multimodal data, including imaging parameters or features from Whole Slide Images (WSI) and the proliferation marker KI-67. For each WSI, we extract both visual parameters, such as morphology parameters, and sub-visual parameters, including first-order and second-order features. A quantitative, interpretable machine learning approach (Local Interpretable Model-Agnostic Explanations) is then used to measure the contribution of each feature for a single case. Most grading systems based on machine learning models are considered "black boxes," whereas with this system the clinically trusted reasoning can be revealed. The quantitative analysis and explanation may help clinicians better understand the disease and choose optimal treatments to improve clinical outcomes.
    2. Building on the automated brain tumor-grading platform, we introduce multimodal Magnetic Resonance Images (MRIs) into our research. A new imaging-tissue correlation-based approach called RA-PA-Thomics is proposed to predict the IDH genotype. Inspired by the concept of image fusion, we integrate multimodal MRIs and scans of histopathological images for indirect, fast, and cost-saving IDH genotyping. The proposed model has been verified against multiple evaluation criteria on the integrated dataset and compared with results from the prior art. The experimental dataset includes public datasets and image information from two hospitals. Experimental results indicate that the proposed model improves the accuracy of glioma grading and genotyping.
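
    The per-case explanation step named in the first contribution uses Local Interpretable Model-Agnostic Explanations (LIME). A minimal sketch with the lime package on synthetic tabular features is shown below; the feature names, classifier, and labels are illustrative stand-ins for the thesis's WSI and KI-67 features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer   # pip install lime

# Toy stand-in for WSI-derived features and KI-67; the real feature set,
# grading labels, and model in the thesis are different.
rng = np.random.default_rng(0)
feature_names = ["nuclei_area", "nuclei_count", "glcm_contrast", "ki67_index"]
X = rng.random((200, 4))
y = (0.6 * X[:, 3] + 0.4 * X[:, 0] + 0.1 * rng.random(200) > 0.55).astype(int)  # 0 = low, 1 = high grade

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["low grade", "high grade"],
    mode="classification",
)
# Per-case explanation: which features pushed this single tumor toward its predicted grade.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```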

    Development and evaluation of a mobile medical decision support system on iOS for the diagnosis of diseases of the anterior pole of the eye

    Get PDF
    The use of mobile applications keeps growing year after year, and this growth is even more noticeable among the adult population, that is, people who are not digital natives but who value the potential these tools have in their lives. This growth is not limited to personal life: professionals increasingly rely on mobile applications to carry out their work. One sector that builds the improvement of its activities on technological advances is medicine. Medical professionals use systems in their daily work that help them perform their tasks, such as electronic health records, appointment management, and diagnostic decision support systems. The latter are increasingly in demand, since one of the main tasks physicians perform is the diagnosis of diseases, and on certain occasions they require external help to carry it out, especially when they are not specialists in the field or when the medical specialty is particularly difficult to master. This scenario arises in primary care services, where good management of diseases is crucial, both for the patient and for the healthcare system. Ophthalmology is one of the most complicated specialties, owing to the great variety of pathologies encountered and the delicacy of the organs treated, since these pathologies directly affect the patient's quality of life. The goal of this work is the development of the OphthalDSS mobile application for the iOS operating system, based on the previous version of the application with the same name developed for Android. The application aims to assist in the diagnosis of ocular diseases of the anterior segment of the eye, as well as to offer users educational content about the pathologies. To this end, the state of the art is reviewed with respect to the literature and commercial mobile applications that provide medical decision support in the field of ophthalmology, and the quality-of-experience assessments from the medical students who tested the Android version of OphthalDSS are taken into account in the development of this new version.
    Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática. Máster en Ingeniería de Telecomunicación