171 research outputs found

    Towards non-vascular fundus image analysis and disease detection

    Assessment of the retinal fundus image is highly informative for the early detection of ocular disease. This non-invasive examination also supports the early diagnosis of vascular diseases, which makes the fundus image doubly valuable. Combining image enhancement techniques with advanced deep learning helps address this challenging problem, yet most deep learning models produce a diagnosis without attending to the underlying pathological abnormalities. In this thesis, we approach the problem the way ophthalmologists and domain experts do: we build models that detect the optic disc, optic cup, and vascular regions in the image. This work can be integrated into ocular disease detection, such as glaucoma screening, and vascular disease detection, such as diabetes screening. Extensive work was devoted to better sampling, since all models suffered from the scarcity of data typical of medical imaging. The work on retinal fundus images was carried out entirely in 2D; as an extension, we applied the same methodology to 3D brain MRI, attempting to predict attention scores in children, an important factor in identifying children with ADHD. Both the fundus-image and brain-MRI work fall under the umbrella of medical imaging. We believe these advances can be valuable to future researchers in automated medical imaging, especially automated retinal disease diagnosis.
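
    As a rough illustration of the multi-structure detection step described above, one straightforward formulation is a model with one output channel per structure (optic disc, optic cup, vessels). The minimal PyTorch sketch below is hypothetical and far smaller than any real model from the thesis; the class name and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class FundusStructureNet(nn.Module):
    """Minimal fully convolutional sketch that predicts three binary masks
    (optic disc, optic cup, vessels) from an RGB fundus image.
    Hypothetical architecture for illustration only, not the thesis model."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # one output channel per structure: disc, cup, vessels
        self.head = nn.Conv2d(64, 3, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.encoder(x)))

# usage: masks[:, 0] = disc, masks[:, 1] = cup, masks[:, 2] = vessels
model = FundusStructureNet()
masks = model(torch.rand(1, 3, 256, 256))
```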

    Deep Learning Techniques for Automated Analysis and Processing of High Resolution Medical Imaging

    Medical imaging plays a prominent role in modern clinical practice across numerous medical specialties. In ophthalmology, for instance, different imaging techniques are commonly used to visualize and study the eye fundus. In this context, automated image analysis methods are key to facilitating the early diagnosis and adequate treatment of several diseases. Deep learning algorithms have already demonstrated remarkable performance in many image analysis tasks; however, these approaches typically require large amounts of annotated data to train deep neural networks, which complicates their adoption in areas where large-scale annotated datasets are harder to obtain, such as medical imaging. This thesis explores novel approaches for the automated analysis of medical images, particularly in ophthalmology. The main focus is the development of novel deep learning-based approaches that do not require large amounts of annotated training data and can be applied to high-resolution images. For that purpose, we present a novel paradigm that takes advantage of unlabeled complementary image modalities for the training of deep neural networks. Additionally, we develop novel approaches for the detailed analysis of eye fundus images, covering the analysis of relevant retinal structures as well as the diagnosis of different retinal diseases. In general, the developed algorithms provide satisfactory results for the analysis of the eye fundus, even when limited annotated training data is available.
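
    The "unlabeled complementary image modalities" paradigm is not detailed in the abstract; one plausible reading, sketched below purely for illustration, is a self-supervised pretext task in which a network learns to predict a complementary modality (e.g., an angiography-like image) from the fundus photograph, so that no manual annotations are needed. The architecture, loss, and function names are assumptions, not the thesis's actual method.

```python
import torch
import torch.nn as nn

# Hypothetical encoder-decoder: predicts a 1-channel complementary modality
# from a 3-channel fundus photograph (sketch, not the thesis architecture).
encoder = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
decoder = nn.Conv2d(64, 1, 1)
model = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # pixel-wise reconstruction of the unlabeled target modality

def pretext_step(fundus, complementary):
    """One self-supervised training step: no manual labels, only paired images."""
    optimizer.zero_grad()
    loss = loss_fn(model(fundus), complementary)
    loss.backward()
    optimizer.step()
    return loss.item()
```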

    Retinal Fundus Image Analysis for Diagnosis of Glaucoma: A Comprehensive Survey

    © 2016 IEEE. The rapid development of digital imaging and computer vision has increased the potential for applying image processing technologies in ophthalmology. Image processing systems are used in standard clinical practice with the development of medical diagnostic systems. Retinal images provide vital information about the health of the sensory part of the visual system. Retinal diseases such as glaucoma, diabetic retinopathy, age-related macular degeneration, Stargardt's disease, and retinopathy of prematurity can lead to blindness and manifest as artifacts in the retinal image. An automated system can offer standardized large-scale screening at lower cost, reduce human error, provide services to remote areas, and remain free from observer bias and fatigue. Treatment for retinal diseases is available; the challenge lies in finding a cost-effective approach with high sensitivity and specificity that can be applied to large populations in a timely manner to identify those at risk in the early stages of the disease. The progression of glaucoma is often silent in its early stages. The number of people affected has been increasing, and patients are seldom aware of the disease, which can delay treatment. This paper reviews how computer-aided approaches may be applied to the diagnosis and staging of glaucoma. The current status of the technology is reviewed, covering localization and segmentation of the optic nerve head, pixel-level glaucomatous changes, diagnosis using 3-D data sets, and artificial neural networks for detecting the progression of glaucoma.

    Statistical atlas-based descriptor for an early detection of optic disc abnormalities

    Optic disc (OD) appearance in fundus images is one of the clinical indicators considered in the assessment of retinal diseases such as glaucoma. The cup-to-disc ratio (CDR) is the most common clinical measurement used to characterize glaucoma. However, the CDR only evaluates the relative sizes of the cup and the OD via their diameters. We propose to construct an atlas-based shape descriptor (ASD) to statistically characterize the geometric deformations of the OD shape and of the blood vessels' configuration inside the OD region. A local representation of the OD region is proposed to construct a well-defined statistical atlas using nonlinear registration and statistical analysis of deformation fields. The shape descriptor is composed of several statistical measures derived from the atlas. Analysis of the average model and its principal modes of deformation is performed on a healthy population. The components of the ASD show a significant difference between pathological and healthy ODs. We show that the ASD is able to characterize healthy and glaucomatous OD regions. The deviation map extracted from the atlas can be used to assist clinicians in the early detection of deformation abnormalities in the OD region.
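
    For reference, the cup-to-disc ratio mentioned above is commonly taken as the ratio of the vertical cup diameter to the vertical disc diameter. A minimal NumPy sketch, assuming binary segmentation masks of the disc and the cup, could look like this (illustrative only, not part of the paper):

```python
import numpy as np

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary masks (1 = inside structure).
    Illustrative helper only; the clinical measurement is made by an ophthalmologist."""
    disc_rows = np.where(disc_mask.any(axis=1))[0]   # image rows covered by the disc
    cup_rows = np.where(cup_mask.any(axis=1))[0]     # image rows covered by the cup
    disc_height = disc_rows.max() - disc_rows.min() + 1
    cup_height = cup_rows.max() - cup_rows.min() + 1
    return cup_height / disc_height
```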

    Deep learning analysis of eye fundus images to support medical diagnosis

    Machine learning techniques have been successfully applied to support medical decision making in cancer, heart disease, and degenerative brain diseases. In particular, deep learning methods have been used for the early detection of abnormalities in the eye that could improve the diagnosis of different ocular diseases, especially in developing countries, where access to specialized medical treatment is severely limited. However, the early detection of clinical signs such as blood vessel and optic disc alterations, exudates, hemorrhages, drusen, and microaneurysms presents three main challenges: ocular images can be affected by noise artifacts, the features of the clinical signs depend on the acquisition source, and combining local signs with the disease grading label is not an easy task. This research approaches the problem of combining local signs and global labels from different acquisition sources of medical information as a valuable tool to support medical decision making in ocular diseases. Different models were developed for different eye diseases. Four models were developed using eye fundus images. For DME, a two-stage model was designed that uses a shallow model to predict an exudate binary mask; the binary mask is then stacked with the raw fundus image into a 4-channel array that serves as the input of a deep convolutional neural network for diabetic macular edema diagnosis. For glaucoma, three deep learning models were developed. The first is a three-stage model with an initial stage that automatically segments two binary masks containing the optic disc and the physiological cup, followed by an automatic morphometric feature extraction stage applied to these segmentations, and a final classification stage that supports the glaucoma diagnosis with intermediate medical information. The other two are late-data-fusion methods that fuse morphometric features from Cartesian and polar segmentations of the optic disc and physiological cup with features extracted from the raw eye fundus images. On the other hand, two models were defined using optical coherence tomography. The first is a customized convolutional neural network, termed OCT-NET, that extracts features from OCT volumes to classify DME, DR-DME, and AMD conditions; in addition, this model generates images highlighting local information about the clinical signs and estimates the number of slices inside a volume with local abnormalities. The second is a 3D deep learning model that uses OCT volumes as input to estimate the retinal thickness map, which is useful for grading AMD. The methods were systematically evaluated using ten freely available public datasets, compared and validated against other state-of-the-art algorithms, and the results were also qualitatively evaluated by ophthalmology experts from Fundación Oftalmológica Nacional. In addition, the proposed methods were tested as a diagnostic support tool for diabetic macular edema, glaucoma, diabetic retinopathy, and age-related macular degeneration using two different ocular imaging representations. We therefore consider that this research could be a significant step towards building telemedicine tools that support medical personnel in detecting ocular diseases using eye fundus images and optical coherence tomography.
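
    As a concrete illustration of the 4-channel input described for the DME model, the exudate binary mask can be stacked with the RGB fundus image along the channel axis before it is fed to the convolutional network. The NumPy sketch below is an assumption about the exact array layout, not the thesis code:

```python
import numpy as np

def stack_fundus_and_mask(fundus_rgb: np.ndarray, exudate_mask: np.ndarray) -> np.ndarray:
    """Stack an H x W x 3 fundus image with an H x W binary exudate mask
    into a single H x W x 4 array, the kind of input described for the
    DME classifier (sketch; preprocessing details are assumptions)."""
    mask = exudate_mask.astype(fundus_rgb.dtype)[..., np.newaxis]  # H x W x 1
    return np.concatenate([fundus_rgb, mask], axis=-1)             # H x W x 4
```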

    Reconstruction-driven Dynamic Refinement based Unsupervised Domain Adaptation for Joint Optic Disc and Cup Segmentation

    Glaucoma is one of the leading causes of irreversible blindness. Segmentation of the optic disc (OD) and optic cup (OC) in fundus images is a crucial step in glaucoma screening. Although many deep learning models have been constructed for this task, it remains challenging to train an OD/OC segmentation model that can be deployed successfully to different healthcare centers. The difficulty mainly comes from the domain shift issue, i.e., the fundus images collected at these centers usually vary greatly in tone, contrast, and brightness. To address this issue, we propose a novel unsupervised domain adaptation (UDA) method called the Reconstruction-driven Dynamic Refinement Network (RDR-Net), in which we employ a dual-path segmentation backbone for simultaneous edge detection and region prediction and design three modules to alleviate the domain gap. The reconstruction alignment (RA) module uses a variational auto-encoder (VAE) to reconstruct the input image, boosting the image representation ability of the network in a self-supervised way; it also uses a style-consistency constraint to force the network to retain more domain-invariant information. The low-level feature refinement (LFR) module employs input-specific dynamic convolutions to suppress domain-variant information in the obtained low-level features. The prediction-map alignment (PMA) module elaborates entropy-driven adversarial learning to encourage the network to generate source-like boundaries and regions. We evaluated RDR-Net against state-of-the-art solutions on four public fundus image datasets. Our results indicate that RDR-Net is superior to competing models in both segmentation performance and generalization ability.
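
    The entropy-driven adversarial learning in the PMA module presumably operates on pixel-wise entropy maps of the segmentation probabilities, as in ADVENT-style UDA. A minimal PyTorch sketch of such an entropy map is given below; the exact formulation in the paper may differ.

```python
import torch

def prediction_entropy_map(logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Pixel-wise normalized entropy of softmax predictions.
    logits: (N, C, H, W). Returns (N, H, W) with values in [0, 1].
    Sketch of the kind of map a PMA-style module could pass to a
    discriminator; the paper's exact formulation may differ."""
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + eps)).sum(dim=1)
    return entropy / torch.log(torch.tensor(float(logits.shape[1])))
```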