
    Microscopic Inner Retinal Hyper-reflective Phenotypes in Retinal and Neurologic Disease

    Purpose. We surveyed inner retinal microscopic features in retinal and neurologic disease using a reflectance confocal adaptive optics scanning light ophthalmoscope (AOSLO). Methods. Inner retinal images from 101 subjects affected by one of 38 retinal or neurologic conditions and 11 subjects with no known eye disease were examined for the presence of hyper-reflective features other than vasculature, retinal nerve fiber layer, and foveal pit reflex. The hyper-reflective features in the AOSLO images were grouped based on size, location, and subjective texture. Clinical imaging, including optical coherence tomography (OCT), scanning laser ophthalmoscopy, and fundus photography, was analyzed for comparison. Results. Seven categories of hyper-reflective inner retinal structures were identified, namely punctate reflectivity, nummular (disc-shaped) reflectivity, granular membrane, waxy membrane, vessel-associated membrane, microcysts, and striate reflectivity. Punctate and nummular reflectivity were also commonly found in normal volunteers, but the features in the remaining five categories were found only in subjects with retinal or neurologic disease. Some of the features changed substantially between follow-up imaging sessions months apart. Conclusions. Confocal reflectance AOSLO imaging revealed a diverse spectrum of normal and pathologic hyper-reflective inner retinal and epiretinal features, some of which were previously unreported. Notably, these features were not disease-specific, suggesting that they might correspond to common mechanisms of degeneration or repair in pathologic states. Although prospective studies with larger and better-characterized populations, along with imaging of more extensive retinal areas, are needed, the hyper-reflective structures reported here could serve as disease biomarkers, provided their specificity is studied further.

    Multimodal Imaging of Photoreceptor Structure in Choroideremia

    Purpose Choroideremia is a progressive X-linked recessive dystrophy, characterized by degeneration of the retinal pigment epithelium (RPE), choroid, choriocapillaris, and photoreceptors. We examined photoreceptor structure in a series of subjects with choroideremia, with particular attention to areas bordering atrophic lesions. Methods Twelve males with clinically diagnosed choroideremia and confirmed hemizygous mutations in the CHM gene were examined. High-resolution images of the retina were obtained using spectral domain optical coherence tomography (SD-OCT) and both confocal and non-confocal split-detector adaptive optics scanning light ophthalmoscope (AOSLO) techniques. Results Eleven CHM gene mutations (3 novel) were identified; three subjects had the same mutation and one subject had two mutations. SD-OCT findings included interdigitation zone (IZ) attenuation or loss in 10/12 subjects, often in areas with intact ellipsoid zones; RPE thinning in all subjects; interlaminar bridges in the imaged areas of 10/12 subjects; and outer retinal tubulations (ORTs) in 10/12 subjects. Only split-detector AOSLO could reliably resolve cones near lesion borders, and such cones were abnormally heterogeneous in morphology, diameter, and density. On split-detector imaging, the cone mosaic terminated sharply at lesion borders in 5/5 cases examined. Split-detector imaging detected remnant cone inner segments within ORTs, which were generally contiguous with a central patch of preserved retina. Conclusions Early IZ dropout and RPE thinning on SD-OCT are consistent with previously published results. Evidence of remnant cone inner segments within ORTs, and the continuity of the ORTs with preserved retina, suggests that these may represent an intermediate state of retinal degeneration prior to complete atrophy. Taken together, these results support a model of choroideremia in which the RPE degenerates before the photoreceptors.

    Assessing Photoreceptor Structure Associated with Ellipsoid Zone Disruptions Visualized with Optical Coherence Tomography

    Purpose: To compare images of photoreceptor layer disruptions obtained with optical coherence tomography (OCT) and adaptive optics scanning light ophthalmoscopy (AOSLO) in a variety of pathologic states. Methods: Five subjects with photoreceptor ellipsoid zone disruption per OCT and clinical diagnoses of closed-globe blunt ocular trauma (n = 2), macular telangiectasia type 2 (n = 1), blue-cone monochromacy (n = 1), or cone-rod dystrophy (n = 1) were included. Images were acquired within and around photoreceptor lesions using spectral domain OCT, confocal AOSLO, and split-detector AOSLO. Results: There were substantial differences in the extent and appearance of the photoreceptor mosaic as revealed by confocal AOSLO, split-detector AOSLO, and the spectral domain OCT en face view of the ellipsoid zone. Conclusion: Clinically available spectral domain OCT, viewed en face or as B-scans, may lead to misinterpretation of photoreceptor anatomy in a variety of diseases and injuries. This was demonstrated using split-detector AOSLO, which revealed substantial populations of photoreceptors in areas of no, low, or ambiguous ellipsoid zone reflectivity on en face OCT and confocal AOSLO. Although it is unclear whether these photoreceptors are functional, their presence offers hope for therapeutic strategies aimed at preserving or restoring photoreceptor function.

    Deep Learning Techniques for Automated Analysis and Processing of High Resolution Medical Imaging

    Programa Oficial de Doutoramento en Computación. 5009V01. [Abstract] Medical imaging plays a prominent role in modern clinical practice for numerous medical specialties. For instance, in ophthalmology, different imaging techniques are commonly used to visualize and study the eye fundus. In this context, automated image analysis methods are key towards facilitating the early diagnosis and adequate treatment of several diseases. Nowadays, deep learning algorithms have already demonstrated remarkable performance on different image analysis tasks. However, these approaches typically require large amounts of annotated data for the training of deep neural networks. This complicates the adoption of deep learning approaches, especially in areas where large-scale annotated datasets are harder to obtain, such as medical imaging. This thesis aims to explore novel approaches for the automated analysis of medical images, particularly in ophthalmology. In this regard, the main focus is on the development of novel deep learning-based approaches that do not require large amounts of annotated training data and can be applied to high-resolution images. For that purpose, we have presented a novel paradigm that allows taking advantage of unlabeled complementary image modalities for the training of deep neural networks. Additionally, we have also developed novel approaches for the detailed analysis of eye fundus images. In that regard, this thesis explores the analysis of relevant retinal structures as well as the diagnosis of different retinal diseases. In general, the developed algorithms provide satisfactory results for the analysis of the eye fundus, even when limited annotated training data is available.

    MEMO: Dataset and Methods for Robust Multimodal Retinal Image Registration with Large or Small Vessel Density Differences

    The measurement of retinal blood flow (RBF) in capillaries can provide a powerful biomarker for the early diagnosis and treatment of ocular diseases. However, no single modality can determine capillary flow rates with high precision. Combining erythrocyte-mediated angiography (EMA) with optical coherence tomography angiography (OCTA) has the potential to achieve this goal, as EMA can measure the absolute 2D RBF of the retinal microvasculature and OCTA can provide 3D structural images of capillaries. However, multimodal retinal image registration between these two modalities remains largely unexplored. To fill this gap, we establish MEMO, the first public multimodal EMA and OCTA retinal image dataset. A unique challenge in multimodal retinal image registration between these modalities is the relatively large difference in vessel density (VD). To address this challenge, we propose a segmentation-based deep-learning framework (VDD-Reg) and a new evaluation metric (MSD), which provide robust results despite differences in vessel density. VDD-Reg consists of a vessel segmentation module and a registration module. To train the vessel segmentation module, we further designed a two-stage semi-supervised learning framework (LVD-Seg) combining supervised and unsupervised losses. We demonstrate that VDD-Reg outperforms baseline methods quantitatively and qualitatively for cases of both small VD differences (using the CF-FA dataset) and large VD differences (using our MEMO dataset). Moreover, VDD-Reg requires as few as three annotated vessel segmentation masks to maintain its accuracy, demonstrating its feasibility. Comment: Submitted to IEEE JBH
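The two-stage idea behind LVD-Seg, a supervised loss on the few annotated vessel masks combined with an unsupervised agreement term between the two modalities, can be illustrated with a small numpy sketch. The function names, the MSE consistency term, and the weight `lam` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def soft_dice_loss(pred, mask, eps=1e-7):
    """Supervised stage: soft Dice loss against an annotated vessel mask."""
    inter = (pred * mask).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + mask.sum() + eps)

def consistency_loss(pred_a, pred_b):
    """Unsupervised stage: agreement between vessel predictions for the two
    registered modalities (an illustrative stand-in for the unsupervised
    loss; the abstract does not specify its exact form)."""
    return float(np.mean((pred_a - pred_b) ** 2))

def lvd_seg_style_loss(pred, mask, pred_other, lam=0.5):
    # Total loss = supervised Dice + weighted unsupervised consistency.
    # lam is an assumed weighting, not a published value.
    return soft_dice_loss(pred, mask) + lam * consistency_loss(pred, pred_other)
```

In the described framework the supervised stage bootstraps the segmentation module from as few as three annotated masks, and the unsupervised stage refines it without further labels; the sketch above only shows how such losses compose.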

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables

    Color Fundus Image Registration Using a Learning-Based Domain-Specific Landmark Detection Methodology

    Funded for open access publication: Universidade da Coruña/CISUG. [Abstract] Medical imaging, and particularly retinal imaging, enables accurate diagnosis of many eye pathologies as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial to correctly compare key structures, not only within patients, but also to contrast data with a model or across a population. Currently, this field is dominated by complex classical methods, because novel deep learning methods cannot yet compete in terms of results and commonly used general-purpose methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images, building on previous works that employed classical approaches to detect domain-specific landmarks. Instead, we propose to use deep learning methods for the detection of these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. These keypoints are matched using a method based on RANSAC (Random Sample Consensus) without the requirement to calculate complex descriptors. Our method was tested using the public FIRE dataset, although the landmark detection network was trained using the DRIVE dataset. Our method provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P, and 0.660 for category A). 
    Therefore, our proposal can compete with complex classical methods and beats the deep learning methods in the state of the art. This research was funded by Instituto de Salud Carlos III, Government of Spain, DTS18/00136 research project; Ministerio de Ciencia e Innovación y Universidades, Government of Spain, RTI2018-095894-B-I00 research project; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through the predoctoral grant contract ref. ED481A 2021/147 and Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%). The funding institutions had no involvement in the study design; in the collection, analysis, and interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication. Funding for open access charge: Universidade da Coruña/CISUG.
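The matching step this abstract describes, pairing detected vessel bifurcations and crossovers and estimating a transform robustly with RANSAC, can be illustrated with a minimal numpy sketch. This is a generic RANSAC affine fit under assumed names, iteration count, and inlier threshold, not the authors' released code:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N,2) onto dst (N,2)."""
    X = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params  # (3, 2) matrix: [A; t]

def apply_affine(params, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params

def ransac_affine(src, dst, iters=200, thresh=2.0, rng=None):
    """Fit an affine transform from putative landmark correspondences,
    rejecting mismatched bifurcations/crossovers as outliers."""
    rng = np.random.default_rng(rng)
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
        params = fit_affine(src[idx], dst[idx])
        d = np.linalg.norm(apply_affine(params, src) - dst, axis=1)
        n = int((d < thresh).sum())
        if n > best_inliers:
            best, best_inliers = params, n
    # Refine on all inliers of the best minimal-sample model.
    d = np.linalg.norm(apply_affine(best, src) - dst, axis=1)
    return fit_affine(src[d < thresh], dst[d < thresh])
```

A descriptor-free scheme like this works because the minimal sample (three correspondences) fully determines an affine model, so mismatched keypoints simply fail the inlier test rather than corrupting the fit.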