
    A deep learning model to assess and enhance eye fundus image quality

    Engineering aims to design, build, and implement solutions that increase and/or improve human quality of life. Medicine likewise generates solutions for the same purpose, allowing these two areas of knowledge to converge toward a common goal. In the thesis "A Deep Learning Model to Assess and Enhance Eye Fundus Image Quality", a model was proposed and implemented to assess and enhance the quality of fundus images, which contributes to improving the efficiency and effectiveness of any subsequent diagnosis based on these images. For quality assessment, a model based on a lightweight convolutional neural network architecture was developed, termed the Mobile Fundus Quality Network (MFQ-Net). This model has approximately 90% fewer parameters than state-of-the-art models. It was evaluated on the public Kaggle dataset with two sets of quality annotations, binary (good and bad) and three-class (good, usable, and bad), obtaining an accuracy of 0.911 in the binary setting and 0.856 in the three-class setting for fundus image quality classification. In addition, a fundus image quality enhancement method was developed, termed Pix2Pix Fundus Oculi Quality Enhancement (P2P-FOQE). The method consists of three stages: pre-enhancement (color adjustment), enhancement (a Pix2Pix network, a conditional generative adversarial network, as the core of the method), and post-enhancement (a CLAHE adjustment for contrast and detail enhancement). It was evaluated on a subset of the public Kaggle database that was re-classified into three categories (good, usable, and bad) by a specialist from the Fundación Oftalmológica Nacional. With this method, image quality for the good class improved by 72.33%; likewise, image quality improved from the bad class to the usable class and from the bad class to the good class in 56.21% and 29.49% of cases, respectively.
    Research line: computer vision for medical image analysis (Master's thesis).
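    The abstract gives only the stage names, not implementation details. As a hedged illustration of a three-stage pipeline of this shape (color pre-adjustment, a Pix2Pix generator as the core, and CLAHE post-enhancement), the sketch below uses OpenCV; the gray-world color balance and the `pix2pix_generator` placeholder are assumptions for illustration, not the thesis implementation.

```python
import cv2
import numpy as np

def gray_world_balance(img_bgr):
    """Generic gray-world color adjustment (stand-in for the pre-enhancement stage)."""
    img = img_bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)        # per-channel mean
    img *= means.mean() / (means + 1e-6)           # scale channels toward a common mean
    return np.clip(img, 0, 255).astype(np.uint8)

def clahe_post_enhance(img_bgr, clip=2.0, tiles=(8, 8)):
    """CLAHE on the lightness channel for contrast and detail enhancement."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

def enhance_fundus(img_bgr, pix2pix_generator):
    """Three-stage pipeline: pre-enhancement -> Pix2Pix core -> CLAHE post-enhancement.

    `pix2pix_generator` is a hypothetical callable wrapping a trained conditional
    GAN that maps a BGR fundus image to its enhanced version.
    """
    pre = gray_world_balance(img_bgr)
    core = pix2pix_generator(pre)
    return clahe_post_enhance(core)
```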

    Penta-Modal Imaging Platform with OCT- Guided Dynamic Focusing for Simultaneous Multimodal Imaging

    Complex diseases, such as Alzheimer's disease, are associated with sequences of changes in multiple disease-specific biomarkers. These biomarkers may show dynamic changes at specific stages of disease progression; testing or monitoring each biomarker may therefore provide insight into specific disease-related processes, enabling early diagnosis or even the development of preventive measures. Obtaining comprehensive information about biological tissues requires imaging multiple optical contrasts, which a single imaging modality does not typically offer. Combining different contrast mechanisms to achieve simultaneous multimodal imaging is therefore desirable, but highly challenging due to the specific optical and hardware requirements of each optical imaging system. The objective of this dissertation is to develop a novel penta-modal optical imaging system integrating photoacoustic microscopy (PAM), optical coherence tomography (OCT), optical Doppler tomography (ODT), OCT angiography (OCTA), and confocal fluorescence microscopy (CFM) in one platform, providing comprehensive structural, functional, and molecular information about living biological tissues. The system can simultaneously image different biomarkers over a large field of view (FOV) at high speed, achieved by combining optical and mechanical scanning mechanisms. To compensate for the uneven surface of biological samples, which results in images with non-uniform resolution and low signal-to-noise ratio (SNR), we further developed a novel OCT-guided surface contour scanning methodology, a technique that adjusts the objective-lens focus to follow the contour of the sample surface, providing uniform spatial resolution and SNR across the region of interest (ROI). The imaging system was tested on phantoms, ex vivo biological samples, and in vivo. The OCT-guided surface contour scanning methodology was applied to imaging a leaf of the purple queen plant, yielding a significant contrast improvement of 41% and 38% across a large imaging area for CFM and PAM, respectively; nuclei and cell walls were clearly observed in both images. In in vivo imaging of the Swiss Webster mouse ear, the multimodal system provided images with uniform resolution over a 10 mm x 10 mm FOV in an imaging time of around 5 minutes. In addition to measuring blood flow in the mouse ear, the system successfully imaged ear blood vessels, sebaceous glands, and several other tissue structures. We further conducted a comparative study of OCTA for rodent retinal imaging by evaluating three OCTA algorithms: phase variance (PV), improved speckle contrast (ISC), and optical microangiography (OMAG). The OMAG algorithm provided statistically significantly higher mean values of BVD and VPI than the ISC algorithm (0.27±0.07 vs. 0.24±0.05 for BVD; 0.09±0.04 vs. 0.08±0.04 for VPI), while no statistically significant difference was observed among the algorithms for VDI and VCI. Both the ISC and OMAG algorithms were more robust than PV, and they revealed similar vasculature features. Lastly, we used the proposed imaging system to monitor, for the first time, the invasion process of malaria parasites in the mosquito midgut; the system shows promising potential for detecting parasite motion as well as structural changes inside the midgut.
    The multimodal imaging system outlined in this dissertation can be useful in a variety of applications, including retinal and brain imaging, thanks to the specific optical contrast offered by each modality.
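    The dissertation's contour-scanning implementation is not described here; the following is a minimal sketch, assuming the surface can be located as the first above-threshold depth in each A-scan of an OCT B-scan and converted into focus offsets for a motorized objective stage. The threshold rule, smoothing window, and axial pixel size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def surface_contour(bscan, noise_floor_sigma=3.0):
    """Estimate the sample surface (depth index per A-scan) from an OCT B-scan.

    bscan: 2D array, shape (n_depth, n_lateral), linear-intensity OCT image.
    Returns a smoothed depth index for each lateral position.
    """
    threshold = bscan.mean() + noise_floor_sigma * bscan.std()  # simple noise-floor threshold
    above = bscan > threshold
    # First depth index exceeding the threshold; fall back to mid-depth if none found.
    surface = np.where(above.any(axis=0), above.argmax(axis=0), bscan.shape[0] // 2)
    return median_filter(surface.astype(float), size=15)        # reject speckle outliers

def focus_offsets(surface_idx, axial_pixel_um=3.5, reference_idx=None):
    """Convert surface depth indices to objective focus offsets (micrometres).

    The offsets would drive a motorized focusing stage so that the focal plane
    follows the sample contour during lateral scanning.
    """
    if reference_idx is None:
        reference_idx = surface_idx.mean()
    return (surface_idx - reference_idx) * axial_pixel_um
```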

    Microfluidic characterization of cilia-driven fluid flow using optical coherence tomography-based particle tracking velocimetry

    Motile cilia are cellular organelles that generate directional fluid flow across various epithelial surfaces, including the embryonic node and the respiratory mucosa. Proper ciliary function is necessary for normal embryo development and, in the respiratory system, for the clearance of mucus and potentially harmful particulate matter. Here we show that optical coherence tomography (OCT) is well suited to quantitatively characterizing the microfluidic-scale flow generated by motile cilia. Our imaging focuses on the ciliated epithelium of Xenopus tropicalis embryos, a genetically manipulable and experimentally tractable animal model of human disease. We demonstrate qualitative flow profile characterization using OCT-based particle pathline imaging, and quantitative, two-dimensional, two-component flow velocity field characterization using OCT-based particle tracking velocimetry. Quantitative imaging and phenotyping of cilia-driven fluid flow using OCT will enable more detailed research in ciliary biology and in respiratory medicine.
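    As a rough sketch of the particle tracking velocimetry step, assuming tracer particles appear as bright blobs in consecutive OCT frames: particles are detected by thresholding and labeling, matched between frames by nearest neighbour, and converted to velocities. The threshold, matching radius, and pixel size are placeholders, not the authors' processing parameters.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def detect_particles(frame, threshold):
    """Return (row, col) centroids of bright blobs in one OCT frame."""
    labels, n = ndimage.label(frame > threshold)
    if n == 0:
        return np.empty((0, 2))
    return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

def track_velocities(frame_a, frame_b, dt_s, pixel_um, threshold, max_disp_px=10.0):
    """Nearest-neighbour matching between two frames; returns velocity samples in um/s."""
    pa = detect_particles(frame_a, threshold)
    pb = detect_particles(frame_b, threshold)
    if len(pa) == 0 or len(pb) == 0:
        return np.empty((0, 2))
    dist, idx = cKDTree(pb).query(pa, distance_upper_bound=max_disp_px)
    matched = np.isfinite(dist)                      # unmatched particles get inf distance
    displacements = pb[idx[matched]] - pa[matched]   # pixels, (d_row, d_col)
    return displacements * pixel_um / dt_s           # two-component velocity field samples
```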

    Machine learning methods for the characterization and classification of complex data

    This thesis presents novel methods for the analysis and classification of medical images and, more generally, complex data. First, an unsupervised machine learning method is proposed to order anterior chamber OCT (optical coherence tomography) images according to a patient's risk of developing angle-closure glaucoma. In a second study, two outlier-finding techniques are proposed to improve the results of the above machine learning algorithm; we also show that they are applicable to a wide variety of data, including fraud detection in credit card transactions. In a third study, the topology of the retinal vascular network is analyzed as a complex tree-like network, and we show that structural differences reveal the presence of glaucoma and diabetic retinopathy. In a fourth study, we use a model of a laser with optical injection that exhibits extreme events in its intensity time series to evaluate machine learning methods for forecasting such extreme events.
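    The two outlier-finding techniques are not named in the abstract; as a stand-in illustration of the kind of anomaly detection described (applicable both to OCT-derived features and to credit card transactions), the sketch below uses scikit-learn's IsolationForest on a generic feature matrix. Feature extraction from the images is assumed to have been done elsewhere.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def flag_outliers(features, contamination=0.05, seed=0):
    """Flag anomalous rows of a feature matrix.

    features: array of shape (n_samples, n_features), e.g. descriptors computed
    from anterior-chamber OCT images or from credit-card transactions.
    Returns a boolean mask, True where a sample is flagged as an outlier.
    """
    model = IsolationForest(contamination=contamination, random_state=seed)
    return model.fit_predict(features) == -1   # -1 marks outliers, +1 inliers

# Toy usage: 200 inliers plus a handful of injected anomalies.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 8)),
               rng.normal(6, 1, size=(5, 8))])
print(flag_outliers(X).sum(), "samples flagged as outliers")
```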

    Proceedings of ICMMB2014


    Snapshot Hyperspectral Imaging for Complete Fundus Oximetry

    In this work, a snapshot hyperspectral imager capable of tuning its average spectral resolution from 22.7 nm to 13.9 nm in a single integrated form is presented. The principle behind this system will enable future snapshot systems to dynamically adapt to a wide range of imaging situations. Additionally, the system overcomes datacube size limitations imposed by detector array size. This thesis also advances oximetry of the retina using data collected with the Image Mapping Spectrometer (IMS), a snapshot spectrometer. Hyperspectral images of the retina are acquired, and oximetry of individual vessels in four diseased eyes is presented. Further, oximetry of the entire fundus is performed with a novel algorithm applied to data collected with the IMS. We present oxyhemoglobin concentration maps of the eye and demonstrate their oxygen sensitivity by comparing normal and diseased eyes. The aim of this work is to advance the general capabilities of snapshot hyperspectral imagers and to integrate retinal oximetry into the standard ophthalmology instrument repertoire.
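    The fundus oximetry algorithm itself is not detailed in the abstract; the sketch below shows a common baseline, per-pixel linear spectral unmixing of a hyperspectral datacube into oxy- and deoxyhemoglobin contributions, assuming extinction coefficient spectra are supplied from literature tables. It illustrates the general computation, not the novel algorithm developed in the thesis.

```python
import numpy as np

def oxygen_saturation_map(datacube, eps_hbo2, eps_hb, reference):
    """Per-pixel oxygen saturation by linear spectral unmixing.

    datacube:  (rows, cols, n_bands) reflectance from the snapshot spectrometer.
    eps_hbo2, eps_hb: extinction coefficient spectra of oxy-/deoxyhemoglobin,
                      length n_bands, taken from published tables.
    reference: (n_bands,) reference spectrum (e.g. from an avascular region)
               used to convert reflectance to absorbance.
    Returns an (rows, cols) map of estimated SO2 in [0, 1].
    """
    rows, cols, n_bands = datacube.shape
    absorbance = -np.log10(np.clip(datacube / reference, 1e-6, None))
    A = np.column_stack([eps_hbo2, eps_hb, np.ones(n_bands)])   # offset absorbs scattering
    pixels = absorbance.reshape(-1, n_bands).T                  # (n_bands, n_pixels)
    coeffs, *_ = np.linalg.lstsq(A, pixels, rcond=None)         # (3, n_pixels)
    hbo2, hb = np.clip(coeffs[0], 0, None), np.clip(coeffs[1], 0, None)
    so2 = hbo2 / (hbo2 + hb + 1e-9)
    return so2.reshape(rows, cols)
```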