
    A deep learning model to assess and enhance eye fundus image quality

    Engineering aims to design, build, and implement solutions that increase or improve human quality of life. Likewise, medicine generates solutions for the same purposes, enabling these two knowledge areas to converge on a common goal. In the thesis “A Deep Learning Model to Assess and Enhance Eye Fundus Image Quality”, a model was proposed and implemented to evaluate and enhance the quality of fundus images, which contributes to improving the efficiency and effectiveness of a subsequent diagnosis based on these images. On the one hand, for the evaluation of these images, a model based on a lightweight convolutional neural network architecture was developed, termed the Mobile Fundus Quality Network (MFQ-Net). This model has approximately 90% fewer parameters than state-of-the-art models. For its evaluation, the public Kaggle dataset was used with two sets of quality annotations, binary (good and bad) and three-class (good, usable and bad), obtaining accuracies of 0.911 and 0.856 in the binary and three-class fundus image quality classification tasks respectively. On the other hand, a method for eye fundus quality enhancement was developed, termed Pix2Pix Fundus Oculi Quality Enhancement (P2P-FOQE). This method is based on three stages: pre-enhancement, a colour adjustment; enhancement, with a Pix2Pix network (a Conditional Generative Adversarial Network) as the core of the method; and post-enhancement, a CLAHE adjustment for contrast and detail enhancement. This method was evaluated on a subset of quality annotations for the public Kaggle database, which was re-classified into three categories (good, usable and bad) by a specialist from the Fundación Oftalmológica Nacional. With this method, the quality of these images for the good class was improved by 72.33%. Likewise, image quality improved from the bad class to the usable class, and from the bad class to the good class, by 56.21% and 29.49% respectively.
    Research line: Computer vision for medical image analysis.
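    The three-stage P2P-FOQE pipeline above leaves the pre-enhancement colour adjustment unspecified. The sketch below illustrates one plausible such adjustment, a gray-world white balance in NumPy; the function name and the gray-world choice are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def gray_world_balance(img):
    """Scale each RGB channel so its mean matches the global mean.

    img: float array in [0, 1] with shape (H, W, 3).
    Returns the colour-adjusted image, clipped back to [0, 1].
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gray_mean = channel_means.mean()                  # gray-world target
    gains = gray_mean / np.maximum(channel_means, 1e-8)
    return np.clip(img * gains, 0.0, 1.0)
```

In a pipeline like P2P-FOQE, an adjustment of this kind would run before the Pix2Pix enhancement core, with the CLAHE contrast step applied afterwards.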

    Multi-modal imaging in Ophthalmology: image processing methods for improving intra-ocular tumor treatment via MRI and Fundus image photography

    The most common ocular tumors are retinoblastoma and uveal melanoma, affecting children and adults respectively, and spreading throughout the body if left untreated. To date, detection and treatment of such tumors rely mainly on two imaging modalities: Fundus Image Photography (Fundus) and Ultrasound (US). However, other imaging modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are key to confirming a possible tumor spread outside the eye cavity. Current procedures to select the best treatment and follow-up are based on multimodal measures taken manually by clinicians. These tasks often require the manual annotation and delineation of eye structures and tumors, a rather tedious and time-consuming endeavour, to be performed in multiple medical sequences simultaneously. This work presents a new set of image processing methods for improving multimodal evaluation of intra-ocular tumors in 3D MRI and 2D Fundus. We first introduce a novel technique for the automatic delineation of ocular structures and tumors in 3D MRI. To this end, we present an Active Shape Model (ASM) built from a dataset of healthy patients to demonstrate that the segmentation of ocular structures (e.g. the lens, the vitreous humor, the cornea and the sclera) can be performed in an accurate and robust manner. To validate these findings, we introduce a set of experiments to test the model performance on eyes with endophytic retinoblastoma, and discover that the segmentation of healthy eye structures is possible regardless of the presence of a tumor inside the eye. Moreover, we propose a specific set of patient-specific eye features that can be extracted and used for eye segmentation in 3D MRI, providing rich shape and appearance information about the pathological tissue embedded in the otherwise healthy ocular anatomy. This information is later used to train a set of classifiers (Convolutional Neural Network (CNN), Random Forest, ...) that perform the automatic segmentation of ocular tumors inside the eye. Furthermore, we explore a new method to evaluate multiple image sequences simultaneously, providing clinicians with a tool to observe the extent of the tumor in both Fundus and MRI. To this end, we combine the automatic eye segmentation in MRI described above with a manual delineation of ocular tumors in Fundus. We then register the two imaging modalities using a new landmark basis and fuse them. We use this new method (i) to improve the quality of the delineation in MRI and (ii) to back-project the tumor so as to transfer rich volumetric measures from the MRI onto the Fundus, creating a new 3D shape representing the 2D Fundus in a method we call Topographic Fundus Mapping. For all experiments and contributions, we validate the results with an MRI dataset and a dataset of pathological retinoblastoma fundus images.
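    The landmark-based registration of Fundus to MRI described above can be sketched as a least-squares fit of a 2D similarity transform (rotation, uniform scale, translation) between corresponding landmark points. This is a simplified illustration under that assumption; the thesis uses its own landmark basis and fusion procedure, not this exact formulation.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform mapping src landmarks onto dst.

    src, dst: arrays of shape (N, 2), N >= 2 corresponding points.
    Returns params (a, b, tx, ty) such that
        x' = a*x - b*y + tx,   y' = b*x + a*y + ty.
    """
    x, y = src[:, 0], src[:, 1]
    A = np.zeros((2 * len(src), 4))
    A[0::2] = np.column_stack([x, -y, np.ones_like(x), np.zeros_like(x)])
    A[1::2] = np.column_stack([y,  x, np.zeros_like(x), np.ones_like(x)])
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params

def apply_similarity(params, pts):
    """Apply the fitted similarity transform to an (N, 2) point array."""
    a, b, tx, ty = params
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([a * x - b * y + tx, b * x + a * y + ty])
```

Once the transform is estimated from a handful of anatomical landmarks, the same warp can be applied to whole annotations, allowing delineations to be carried between the two modalities before fusion.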

    NON-INVASIVE IMAGE ENHANCEMENT OF COLOUR RETINAL FUNDUS IMAGES FOR A COMPUTERISED DIABETIC RETINOPATHY MONITORING AND GRADING SYSTEM

    Diabetic Retinopathy (DR) is a sight-threatening complication of diabetes mellitus affecting the retina. The pathologies of DR can be monitored by analysing colour fundus images. However, the low and varied contrast between retinal vessels and the background in colour fundus images remains an impediment to visual analysis, in particular when analysing tiny retinal vessels and capillary networks. To circumvent this problem, fundus fluorescein angiography (FFA), which improves the image contrast, is used. Unfortunately, it is an invasive procedure (injection of contrast dyes) that can lead to other physiological problems and, in the worst case, may cause death. The objective of this research is to develop a non-invasive digital image enhancement scheme that can overcome the problem of varied and low-contrast colour fundus images, so that the contrast produced is comparable to the invasive fluorescein method without introducing noise or artefacts. The developed image enhancement algorithm (called RETICA) is incorporated into a newly developed computerised DR system (called RETINO) that is capable of monitoring and grading DR severity using colour fundus images. RETINO grades DR severity into five stages, namely No DR, Mild Non-Proliferative DR (NPDR), Moderate NPDR, Severe NPDR and Proliferative DR (PDR), by enhancing the quality of the digital colour fundus image with RETICA in the macular region and analysing the enlargement of the foveal avascular zone (FAZ), a region devoid of retinal vessels in the macular region. The importance of this research is to improve image quality in order to increase the accuracy, sensitivity and specificity of DR diagnosis, and to enable DR grading through either direct observation or a computer-assisted diagnosis system

    A systematic collection of medical image datasets for deep learning

    The astounding success achieved by artificial intelligence in healthcare and other fields shows that it can attain human-like performance. However, success always comes with challenges. Deep learning algorithms are data-dependent and require large datasets for training. Many junior researchers face a lack of data for a variety of reasons. Medical image acquisition, annotation, and analysis are costly, and their usage is constrained by ethical restrictions. They also require several other resources, such as professional equipment and expertise. That makes it difficult for novice and non-medical researchers to gain access to medical data. Thus, as comprehensively as possible, this article provides a collection of medical image datasets with their associated challenges for deep learning research. We have collected information on approximately 300 datasets and challenges, mainly reported between 2007 and 2020, and categorized them into four categories: head and neck, chest and abdomen, pathology and blood, and others. The purpose of our work is to provide a list, as up-to-date and complete as possible, that can be used as a reference to easily find datasets for medical image analysis and the information related to these datasets

    Technological Advances in the Diagnosis and Management of Pigmented Fundus Tumours

    Choroidal naevi are the most common intraocular tumour. They can be pigmented or non-pigmented and have a predilection for the posterior uvea. The majority remain undetected and cause no harm but are increasingly found on routine community optometry examinations. Rarely does a naevus demonstrate growth or the onset of suspicious features to fulfil the criteria for a malignant melanoma. Because of this very small risk, optometrists commonly refer these patients to hospital eye units for a second opinion, triggering specialist examination and investigation, causing significant anxiety to patients and stretching medical resources. This PhD thesis introduces the MOLES acronym and scoring system that has been devised to categorise the risk of malignancy in choroidal melanocytic tumours according to Mushroom tumour shape, Orange pigment, Large tumour size, Enlarging tumour and Subretinal fluid. This is a simplified system that can be used without sophisticated imaging, and hence its main utility lies in the screening of patients with choroidal pigmented lesions in the community and general ophthalmology clinics. Under this system, lesions were categorised by a scoring system as ‘common naevus’, ‘low-risk naevus’, ‘high-risk naevus’ and ‘probable melanoma.’ According to the sum total of the scores, the MOLES system correlates well with ocular oncologists’ final diagnosis. The PhD thesis also describes a model of managing such lesions in a virtual pathway, showing that images of choroidal naevi evaluated remotely using a decision-making algorithm by masked non-medical graders or masked ophthalmologists is safe. This work prospectively validates a virtual naevus clinic model focusing on patient safety as the primary consideration. 
The idea of a virtual naevus clinic as a fast, one-stop, streamlined and comprehensive service is attractive for patients and healthcare systems, including an optimised patient experience with reduced delays and less inconvenience from repeated visits. A safe, standardised model ensures homogeneous management of cases and an appropriate, prompt return of care closer to home, to community-based optometrists. This research work and its strategies, such as the MOLES scoring system for triage, could empower community-based providers to manage benign choroidal naevi without referral to specialist units. Based on the positive outcome of this prospective study and the MOLES studies, a ‘Virtual Naevus Clinic’ has been designed and adapted at Moorfields Eye Hospital (MEH) to prove its feasibility, as a response to the COVID-19 pandemic and with the purpose of reducing in-hospital patient journey times and increasing the capacity of the naevus clinics, while providing safe and efficient clinical care for patients. This PhD chapter describes the design, pathways, and operating procedures for the digitally enabled naevus clinics at Moorfields Eye Hospital, including what this service provides and how it will be delivered and supported. The author will share the current experience and future plans. Finally, the PhD thesis will cover a chapter that discusses the potential role of artificial intelligence (AI) in differentiating benign choroidal naevus from choroidal melanoma. The published clinical and imaging risk factors for malignant transformation of choroidal naevus will be reviewed in the context of how AI, applied to existing ophthalmic imaging systems, might be able to determine features on medical images in an automated way. The thesis will include current knowledge to date and describe the potential benefits, limitations and key issues that could arise with this technology in the ophthalmic field. 
Regulatory concerns will be addressed with possible solutions on how AI could be implemented in clinical practice and embedded into existing imaging technology with the potential to improve patient care and the diagnostic process. The PhD will also explore the feasibility of developed automated deep learning models and investigate the performance of these models in diagnosing choroidal naevomelanocytic lesions based on medical imaging, including colour fundus and autofluorescence fundus photographs. This research aimed to determine the sensitivity and specificity of an automated deep learning algorithm used for binary classification to differentiate choroidal melanomas from choroidal naevi and prove that a differentiation concept utilising a machine learning algorithm is feasible

    Development of an automated screening tool for diabetic retinopathy using artificial intelligence

    Diabetic retinopathy is the commonest cause of blindness in the working-age population in the Western world. It is widely recognised that screening for this treatable condition is highly cost-effective. However, there is a shortage of the trained personnel required to screen for sight-threatening forms of the disease. It has been shown that many of the features of diabetic retinopathy, such as microaneurysms, cotton wool spots, exudates and haemorrhages, can be identified automatically with high levels of sensitivity and specificity. This work describes the development of an automated computerised system for the screening of diabetic retinopathy through the integration of an artificial intelligence system and the development of custom-written software (Diabetic Retinopathy Image Classification Programme) to enable image acquisition, image processing, and neural network training and testing to be performed in a structured manner. A combination of conventional image processing and neural network methods is utilised for the identification of the basic features associated with the normal and diabetic fundus image. Preliminary investigations into the identification of sight-threatening features are also described. Identification of normal retinal vasculature and diabetic-associated features was performed using three separately trained back-propagation neural networks. Localisation of the optic disc and macula was achieved by region-of-interest pixel intensity scanning. Assessment of the optic disc for sight-threatening new vessel growth was performed by comparing the variance in circular intensity profiles of normal optic discs to the variance of those with neovascularisation. Patients were classified as having maculopathy if hard exudates were identified within one disc diameter of the fovea. The overall aim of this project is to develop an automated screening programme for diabetic retinopathy. 
The initial phase details the development and comparison of a range of algorithms for the detection of features associated with diabetic retinopathy. The final phase details the clinical evaluation of the current screening system
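    Optic disc localisation by region-of-interest pixel intensity scanning, as described above, can be sketched as a brute-force search for the brightest window in the image. This is a simplified stand-in for the thesis method; the window size and the integral-image speed-up are assumptions for illustration.

```python
import numpy as np

def locate_optic_disc(gray, win=25):
    """Return the (row, col) centre of the win-by-win window with the
    highest total intensity, scanning every window position.

    gray: 2D array of pixel intensities (the optic disc is typically the
    brightest compact region in a fundus image).
    """
    # Integral image (padded with a zero row/column) for O(1) window sums.
    ii = np.pad(gray.astype(float).cumsum(axis=0).cumsum(axis=1),
                ((1, 0), (1, 0)))
    h, w = gray.shape
    best_sum, best_rc = -np.inf, (0, 0)
    for r in range(h - win + 1):
        for c in range(w - win + 1):
            s = ii[r + win, c + win] - ii[r, c + win] - ii[r + win, c] + ii[r, c]
            if s > best_sum:
                best_sum, best_rc = s, (r + win // 2, c + win // 2)
    return best_rc
```

The same scanning idea extends to the macula by searching for the darkest window instead, which is one way the two landmarks described above could be localised with a single primitive.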

    Artificial intelligence extension of the OSCAR-IB criteria

    Artificial intelligence (AI)-based diagnostic algorithms have achieved ambitious aims through automated image pattern recognition. For neurological disorders, this includes neurodegeneration and inflammation. A scalable imaging technology for big data in neurology is optical coherence tomography (OCT). We highlight that OCT changes observed in the retina, as a window to the brain, are small, requiring rigorous quality-control pipelines. There are existing tools for this purpose. First, there are human-led, validated consensus quality-control criteria for OCT (OSCAR-IB). Second, these criteria are embedded into OCT reporting guidelines (APOSTEL). The use of the described annotation of failed OCT scans advances machine learning. This is illustrated through the present review of the advantages and disadvantages of AI-based applications to OCT data. The neurological conditions reviewed here for the use of big data include Alzheimer disease, stroke, multiple sclerosis (MS), Parkinson disease, and epilepsy. It is noted that while big data is relevant for AI, its ownership is complex. For this reason, we also reached out to involve representatives from patient organizations and the public domain, in addition to clinical and research centers. The evidence reviewed can be grouped into a five-point expansion of the OSCAR-IB criteria to embrace AI (OSCAR-AI). The review concludes with specific recommendations on how this can be achieved practically and in compliance with existing guidelines