
    Diabetic Retinopathy Classification using Deep Learning

    With diabetes growing at an alarming rate, changes in the retina of diabetic patients cause a condition called diabetic retinopathy, which eventually leads to blindness. Early detection of diabetic retinopathy is the best way to provide timely treatment and thus prevent blindness. Many developed countries have put forward well-structured screening programs that screen every person diagnosed with diabetes at regular intervals. However, the cost of running these programs is rising with the ever-increasing disease burden. These screening programs require well-trained opticians or ophthalmologists, who are expensive, especially in developing countries. A global shortage of health care professionals creates a pressing need for fast and efficient screening methods. Artificial-intelligence-based screening tools can classify the disease and generate a plan for the patient without requiring a health care provider for that step, significantly lowering the burden caused by the shortage of health care professionals. A plethora of research exists on classifying the severity of diabetic retinopathy using both traditional and end-to-end methods. In this thesis, we first trained and compared the performance of the lightweight architecture MobileNetV2 with other classifiers, such as DenseNet121 and VGG16, on the retinal fundus APTOS 2019 Kaggle dataset. We experimented with different image preprocessing techniques and various hyperparameter tuning strategies, and found that the lightweight MobileNetV2 gave better results in terms of AUC score, which measures the ability of the classifier to separate the classes. We then trained MobileNetV2 on a handpicked custom dataset, an amalgamation of three publicly available datasets: the EyePACS Kaggle dataset, the APTOS 2019 Blindness Detection dataset, and the Messidor-2 dataset. We enhanced the retinal features using bio-inspired retinal filters and tuned the hyperparameters to achieve an accuracy of 91.68% and an AUC score of 0.9 when tested on unseen data. The macro precision, recall, and F1-scores are 77.6%, 83.1%, and 80.1%, respectively. Our results demonstrate that our computationally efficient lightweight model achieves promising results and can be deployed as a mobile application for clinical testing.
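As a concrete illustration of the transfer-learning setup described in this abstract, the sketch below fine-tunes an ImageNet-pretrained MobileNetV2 on fundus images and tracks AUC. It is a minimal sketch, not the thesis code: the directory layout, image size, batch size, five-class APTOS grading, and training schedule are all assumptions.

```python
# Minimal transfer-learning sketch (assumed setup, not the thesis code).
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 5  # APTOS 2019 severity grades 0-4 (assumed)

# Hypothetical directory layout: one sub-folder per severity grade.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "aptos2019/train", image_size=IMG_SIZE, batch_size=32, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "aptos2019/val", image_size=IMG_SIZE, batch_size=32, label_mode="categorical")

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone; only the new head is trained here

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=[tf.keras.metrics.AUC(multi_label=True, name="auc")])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Swapping `base` for `tf.keras.applications.DenseNet121` or `VGG16` (with the matching `preprocess_input`) gives the kind of architecture comparison the thesis reports.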

    Development of a virtual reality ophthalmoscope prototype

    The eye examination is an important procedure that provides information about the condition of the eye by observing its fundus, allowing the observation and identification of abnormalities associated with blindness, diabetes, hypertension, and bleeding resulting from trauma, among others. A proper eye fundus examination allows identifying conditions that may compromise sight; however, the examination is challenging because it requires extensive practice to develop the interpretation skills needed to successfully identify abnormalities at the back of the eye seen through an ophthalmoscope. To assist trainees in developing eye examination skills, medical simulation devices provide training opportunities to explore numerous eye cases in simulated, controlled, and monitored scenarios. However, advances in eye simulation have led to expensive simulators with limited access, as practice remains conducted one trainee at a time, in some cases offering the instructor a view of the trainee's interactions. Because of the costs associated with medical simulation, various cost-effective and consumer-level alternatives have been reported in the literature to maximize the effectiveness of eye examination training.
In this work, we present the development of an immersive and a non-immersive augmented reality application for Android mobile devices with interactions through a 3D-printed controller with embedded electronic components that mimics a real ophthalmoscope. The application presents users with a virtual patient visiting the doctor for an eye examination, and requires the trainee to perform the eye fundus examination and diagnose their findings. The immersive version of the application requires the trainee to wear a mobile VR headset and hold the 3D-printed ophthalmoscope, while the non-immersive version only requires them to hold the marker within the field of view of the mobile device.

    Numerical Simulation and Design of Computer Aided Diabetic Retinopathy Using Improved Convolutional Neural Network

    The health sector differs from other sectors: it is a high-priority domain in which quality of care comes first, regardless of cost, yet it often falls short of societal expectations despite consuming a large share of the budget. Much of the medical evidence is interpreted by health specialists, but human image interpretation is limited by subjectivity, the complexity of the images, wide variation among interpreters, and fatigue. Deep learning, which has already achieved strong accuracy in other practical applications, offers a promising solution for medical imaging and is considered an important tool for future healthcare applications. This chapter addresses advanced and optimized deep learning architectures for the segmentation and classification of medical images; the previous section discussed the complexities of deep-learning-based healthcare imaging and open science. Automated diagnosis of diabetic retinopathy is crucial because the disease is the primary cause of permanent vision loss in working-age people in developed countries. Early identification of diabetic retinopathy is extremely helpful in clinical treatment; although many feature-extraction methods have been proposed, classifying retinal images remains tedious even for professional clinicians. Recently, in contrast to earlier hand-crafted, feature-based image classification approaches, deep convolutional neural networks have demonstrated superior performance in image classification. In this research, we therefore explored deep convolutional neural network techniques to automatically identify diabetic retinopathy from color fundus images, achieving high accuracy on our datasets and outperforming classical approaches.

    A Hybrid Convolutional Neural Network Model for Automatic Diabetic Retinopathy Classification From Fundus Images

    Objective: Diabetic retinopathy (DR) is a retinal disease that damages the blood vessels in the eye and is a major cause of impaired vision or blindness if not treated early. Manual detection of diabetic retinopathy is time-consuming and prone to human error due to the complex structure of the eye. Methods & Results: Various automatic techniques have been proposed to detect diabetic retinopathy from fundus images. However, these techniques are limited in their ability to capture the complex features underlying diabetic retinopathy, particularly in the early stages. In this study, we propose a novel approach to detect diabetic retinopathy using a convolutional neural network (CNN) model. The proposed model extracts features with two different deep learning (DL) models, ResNet50 and InceptionV3, and concatenates them before feeding them into the CNN for classification. The proposed model is evaluated on a publicly available dataset of fundus images. The experimental results demonstrate that the proposed CNN model achieves higher accuracy, sensitivity, specificity, precision, and F1 score than state-of-the-art methods, with respective scores of 96.85%, 99.28%, 98.92%, 96.46%, and 98.65%.
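A hedged sketch of the hybrid idea in this abstract is shown below: features are extracted with two ImageNet-pretrained backbones, ResNet50 and InceptionV3, concatenated, and passed to a small classification head. The input size, class count, head layout, and frozen backbones are illustrative assumptions rather than the authors' exact configuration.

```python
# Hybrid feature-concatenation sketch (assumed configuration, for illustration).
import tensorflow as tf
from tensorflow.keras import layers

IMG_SHAPE = (299, 299, 3)   # large enough for both backbones (assumed)
NUM_CLASSES = 5             # DR severity grades (assumed)

inputs = tf.keras.Input(shape=IMG_SHAPE)

resnet = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
inception = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
resnet.trainable = False
inception.trainable = False

# Each backbone uses its own preprocessing convention.
r = resnet(tf.keras.applications.resnet50.preprocess_input(inputs))
i = inception(tf.keras.applications.inception_v3.preprocess_input(inputs))

# Pool each feature map to a vector and concatenate the two descriptors.
features = layers.Concatenate()([layers.GlobalAveragePooling2D()(r),
                                 layers.GlobalAveragePooling2D()(i)])
x = layers.Dense(256, activation="relu")(features)
x = layers.Dropout(0.4)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```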

    Visual Impairment and Blindness

    Blindness and vision impairment affect at least 2.2 billion people worldwide, with most individuals having a preventable vision impairment. The majority of people with vision impairment are older than 50 years; however, vision loss can affect people of all ages. Reduced eyesight can have major and long-lasting effects on all aspects of life, including daily personal activities, interacting with the community, school and work opportunities, and the ability to access public services. This book provides an overview of the effects of blindness and visual impairment in the context of the most common causes of blindness in older adults as well as children, including retinal disorders, cataracts, glaucoma, and macular or corneal degeneration.

    Automatic Segmentation of Retinal Vasculature

    Segmentation of retinal vessels from retinal fundus images is the key step in automatic retinal image analysis. In this paper, we propose a new unsupervised automatic method to segment the retinal vessels from retinal fundus images. Contrast enhancement and illumination correction are carried out through a series of image processing steps followed by adaptive histogram equalization and anisotropic diffusion filtering. The image is then converted to grayscale using weighted scaling. The vessel edges are enhanced by boosting the detail curvelet coefficients. Optic disk pixels are removed before applying fuzzy C-means classification to avoid misclassification. Morphological operations and connected component analysis are applied to obtain the segmented retinal vessels. The performance of the proposed method is evaluated on the DRIVE database so that it can be compared with other state-of-the-art supervised and unsupervised methods. The overall segmentation accuracy of the proposed method is 95.18%, which outperforms the other algorithms. Comment: Published at IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 201
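A pipeline of this kind can be approximated with standard OpenCV building blocks. The sketch below is a simplified stand-in, not the paper's method: CLAHE, black-hat morphology, and Otsu thresholding replace the anisotropic diffusion, curvelet boosting, and fuzzy C-means steps, and all parameters and file paths are assumptions.

```python
# Simplified unsupervised vessel-segmentation sketch (stand-in for the paper's pipeline).
import cv2
import numpy as np

def segment_vessels(path, min_area=50):
    bgr = cv2.imread(path)
    green = bgr[:, :, 1]          # vessels show highest contrast in the green channel

    # Adaptive histogram equalization for contrast enhancement.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)

    # Black-hat morphology highlights thin dark structures (vessels) against the background.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    vessels = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, kernel)

    # Otsu threshold on the enhanced vessel map.
    _, binary = cv2.threshold(vessels, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Connected-component analysis: drop small speckles unlikely to be vessels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    mask = np.zeros_like(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            mask[labels == i] = 255
    return mask

mask = segment_vessels("drive/test/01_test.tif")  # hypothetical DRIVE image path
cv2.imwrite("vessel_mask.png", mask)
```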

    Detection of Macula and Recognition of Age-Related Macular Degeneration in Retinal Fundus Images

    In older people, central vision is affected by Age-Related Macular Degeneration (AMD). AMD can be recognized in digital retinal fundus images by the presence of drusen, choroidal neovascularization (CNV), and geographic atrophy (GA). Monitoring fundus images is time-consuming and costly for ophthalmologists, and an automated digital fundus photography monitoring system can reduce these problems. In this paper, we propose a new macula detection system based on contrast enhancement, top-hat transformation, and a modified Kirsch template method. First, the retinal fundus image is processed through an image enhancement method so that the intensity distribution is improved for finer visualization. The contrast-enhanced image is further improved using the top-hat transformation so that intensity levels become differentiable between the macula and other sections of the image. The retinal vessels are enhanced by employing the modified Kirsch template method, which enhances the vasculature structures and suppresses blob-like structures. Furthermore, Otsu thresholding is used to segment the dark regions and separate the vessels to extract the candidate regions. The dark region and the estimated background image are subtracted from the extracted blood vessel image to obtain the exact location of the macula. The proposed method was applied to 1349 images from the STARE, DRIVE, MESSIDOR, and DIARETDB1 databases and achieved an average sensitivity, specificity, accuracy, positive predictive value, F1 score, and area under the curve of 97.79%, 97.65%, 97.60%, 97.38%, 97.57%, and 96.97%, respectively. Experimental results reveal that the proposed method attains better performance, in terms of visual quality and enriched quantitative analysis, compared with eminent state-of-the-art methods.
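One ingredient of this pipeline, vessel enhancement with Kirsch compass templates, can be illustrated in a few lines. The sketch below convolves a contrast-enhanced green channel with eight rotated 3x3 Kirsch kernels and keeps the maximum response per pixel; it does not reproduce the paper's modified templates or the subsequent macula-localization steps, and the file name and CLAHE settings are assumptions.

```python
# Kirsch compass-template vessel enhancement (illustrative sketch only).
import cv2
import numpy as np

def kirsch_response(gray):
    # Standard Kirsch north kernel; the other seven directions are 45-degree rotations.
    k = np.array([[ 5,  5,  5],
                  [-3,  0, -3],
                  [-3, -3, -3]], dtype=np.float32)
    responses = []
    for _ in range(8):
        responses.append(cv2.filter2D(gray.astype(np.float32), -1, k))
        # Rotate the outer ring of the kernel one step clockwise (45 degrees).
        k = np.array([[k[1, 0], k[0, 0], k[0, 1]],
                      [k[2, 0], 0,       k[0, 2]],
                      [k[2, 1], k[2, 2], k[1, 2]]], dtype=np.float32)
    return np.max(np.stack(responses), axis=0)

green = cv2.imread("fundus.png")[:, :, 1]        # hypothetical input image
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(green)
edges = kirsch_response(clahe)
edges = cv2.normalize(edges, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, vessel_mask = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```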