108 research outputs found

    Outer retinal thickness and visibility of the choriocapillaris in four distinct retinal regions imaged with spectral domain optical coherence tomography in dogs and cats

    Full text link
    Purpose: To evaluate the outer retinal band thickness and choriocapillaris (CC) visibility in four distinct retinal regions in dogs and cats imaged with spectral domain optical coherence tomography (SD-OCT), and to attempt delineation of a fovea-like region in canine and feline SD-OCT scans, aided by the identification of outer retinal thickness differences between retinal regions. Methods: Spectralis® HRA + OCT SD-OCT scans from healthy, anesthetized dogs (n = 10) and cats (n = 12) were analyzed. Scanlines on which the CC was identifiable were counted and CC visibility was scored. Outer nuclear layer (ONL) thickness and the distances from the external limiting membrane (ELM) to the retinal pigment epithelium/Bruch's membrane complex (RPE/BM) and from the ELM to the CC were measured in the area centralis (AC), in a visually identified fovea-like region, and in regions superior and inferior to the optic nerve head (ONH). Measurements were analyzed using multilevel regression. Results: The CC was visible in over 90% of scanlines from dogs and cats. The ONL was consistently thinnest in the fovea-like region. The outer retina (ELM-RPE and ELM-CC) was thickest within the AC compared with the regions superior and inferior to the ONH in both dogs and cats (p < .001 for all comparisons). Conclusions: The CC appears to be a valid, albeit less than ideal, outer retinal boundary marker in tapetal species. The AC can be objectively differentiated from the surrounding retina on SD-OCT images of dogs and cats; a fovea-like region was identified in dogs, and its presence was suggested in cats. These findings allow targeted imaging and image evaluation of these regions of retinal specialization.

    Vet Ophthalmol

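The abstract above compares ONL thickness across retinal regions using multilevel regression. As a much simpler illustrative sketch (toy thickness values, not the study's data, and plain per-region averaging instead of multilevel regression), grouping repeated measurements by region already shows how the thinnest region can be identified:

```python
# Hypothetical ONL thickness measurements (micrometres) per dog, per region.
# The paper fitted a multilevel regression; this sketch only groups repeated
# measurements by region and compares the region means.
from collections import defaultdict
from statistics import mean

measurements = [
    # (subject_id, region, onl_thickness_um) -- illustrative values only
    ("dog01", "area_centralis", 58.0),
    ("dog01", "fovea_like", 41.5),
    ("dog01", "superior_onh", 52.0),
    ("dog02", "area_centralis", 60.5),
    ("dog02", "fovea_like", 43.0),
    ("dog02", "superior_onh", 54.5),
]

by_region = defaultdict(list)
for _, region, thickness in measurements:
    by_region[region].append(thickness)

region_means = {r: mean(v) for r, v in by_region.items()}
thinnest = min(region_means, key=region_means.get)
print(thinnest)  # with these toy values, the fovea-like region is thinnest
```

A multilevel model would additionally account for repeated measurements within each subject; the grouping step above is only the descriptive first pass.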

    Optical Methods in Sensing and Imaging for Medical and Biological Applications

    Get PDF
    The recent advances in optical sources and detectors have opened up new opportunities for sensing and imaging techniques which can be successfully used in biomedical and healthcare applications. This book, entitled ‘Optical Methods in Sensing and Imaging for Medical and Biological Applications’, focuses on various aspects of the research and development related to these areas. The book will be a valuable source of information presenting the recent advances in optical methods and novel techniques, as well as their applications in the fields of biomedicine and healthcare, to anyone interested in this subject.

    Data efficient deep learning for medical image analysis: A survey

    Full text link
    The rapid evolution of deep learning has significantly advanced the field of medical image analysis. However, despite these achievements, the further enhancement of deep learning models for medical image analysis faces a significant challenge: the scarcity of large, well-annotated datasets. To address this issue, recent years have witnessed a growing emphasis on the development of data-efficient deep learning methods. This paper conducts a thorough review of data-efficient deep learning methods for medical image analysis. To this end, we categorize these methods based on the level of supervision they rely on, encompassing categories such as no supervision, inexact supervision, incomplete supervision, inaccurate supervision, and only limited supervision. We further divide these categories into finer subcategories. For example, we categorize inexact supervision into multiple instance learning and learning with weak annotations. Similarly, we categorize incomplete supervision into semi-supervised learning, active learning, domain-adaptive learning, and so on. Furthermore, we systematically summarize commonly used datasets for data-efficient deep learning in medical image analysis and investigate future research directions to conclude this survey. Comment: Under Review.
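One of the "incomplete supervision" families the survey covers is semi-supervised learning. A minimal pseudo-labeling sketch (toy 1-D features and a 1-nearest-neighbour rule as stand-ins, not a method from the survey) illustrates the core idea: a model trained on scarce labels annotates abundant unlabelled data, and both pools are then used together:

```python
# Minimal pseudo-labeling sketch (semi-supervised learning). The classifier
# and data are toy stand-ins: a 1-nearest-neighbour rule on 1-D features.

def nearest_label(x, labelled):
    """Return the label of the closest labelled example to x."""
    return min(labelled, key=lambda pair: abs(pair[0] - x))[1]

labelled = [(0.1, "healthy"), (0.9, "diseased")]   # scarce annotated data
unlabelled = [0.15, 0.2, 0.85, 0.8]                # abundant unannotated data

# Step 1: predict pseudo-labels for the unlabelled data.
pseudo = [(x, nearest_label(x, labelled)) for x in unlabelled]
# Step 2: "retrain" on both sets (here: simply extend the labelled pool).
labelled_augmented = labelled + pseudo

print(nearest_label(0.18, labelled_augmented))  # "healthy"
```

Real pipelines add confidence thresholds and iterate the two steps; the sketch keeps only the label-then-augment loop.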

    Machine Learning Approaches for Automated Glaucoma Detection using Clinical Data and Optical Coherence Tomography Images

    Full text link
    Glaucoma is a multi-factorial, progressive blinding optic neuropathy. A variety of factors, including genetics, vasculature, anatomy, and immune factors, are involved. Worldwide more than 80 million people are affected by glaucoma, and around 300,000 in Australia, where 50% remain undiagnosed. Untreated glaucoma can lead to blindness. Early detection by artificial intelligence (AI) is crucial to accelerate the diagnosis process and can prevent further vision loss. Many proposed AI systems have shown promising performance for automated glaucoma detection using two-dimensional (2D) data. However, only a few studies have reported optimistic outcomes for both glaucoma detection and staging. Moreover, automated AI systems still fall short of clinician-level diagnosis due to the limited interpretability of machine learning (ML) algorithms and the difficulty of integrating multiple sources of clinical data. AI technology would be welcomed by doctors and patients if the "black box" notion were overcome by developing an explainable, transparent AI system built on the same pathological markers clinicians use as signs of early detection and progression of glaucomatous damage. Therefore, this thesis aimed to develop a comprehensive AI model to detect and stage glaucoma by incorporating a variety of clinical data and utilising advanced data analysis and ML techniques. The research first focuses on optimising glaucoma diagnostic features by combining structural, functional, demographic, risk factor, and optical coherence tomography (OCT) features. The significant features were evaluated using statistical analysis and trained in ML algorithms to observe the detection performance. Three crucial structural ONH OCT features: cross-sectional 2D radial B-scan, 3D vascular angiography and temporal-superior-nasal-inferior-temporal (TSNIT) B-scan, were analysed and trained in explainable deep learning (DL) models for automated glaucoma prediction.
    The explanation behind the decision-making of the DL models was successfully demonstrated using feature visualisation. The structural features, or distinguished affected regions, of TSNIT OCT scans were precisely localised for glaucoma patients. This is consistent with the concept of explainable DL, which refers to the idea of making the decision-making processes of DL models transparent and interpretable to humans. However, artifacts and speckle noise often result in misinterpretation of the TSNIT OCT scans. This research also developed an automated DL model to remove the artifacts and noise from the OCT scans, facilitating error-free retinal layer segmentation, accurate tissue thickness estimation and image interpretation. Moreover, to monitor and grade glaucoma severity, the visual field (VF) test is routinely used by clinicians for treatment and management. Therefore, this research uses the functional features extracted from VF images to train ML algorithms for staging glaucoma from early to advanced/severe stages. Finally, the selected significant features were used to design and develop a comprehensive AI model to detect and grade glaucoma stages based on the data quantity and availability. In the first stage, a DL model was trained with TSNIT OCT scans, and its output was combined with significant structural and functional features and trained in ML models. The best-performing ML model achieved an area under the curve (AUC) of 0.98, an accuracy of 97.2%, a sensitivity of 97.9%, and a specificity of 96.4% for detecting glaucoma. The model achieved an overall accuracy of 90.7% and an F1 score of 84.0% for classifying normal, early, moderate, and advanced-stage glaucoma. In conclusion, this thesis developed and proposed a comprehensive, evidence-based AI model that will solve the screening problem for large populations and relieve experts from manually analysing a slew of patient data and the associated misinterpretation problems.
    Moreover, this thesis demonstrated three structural OCT features that could be added as excellent diagnostic markers for precise glaucoma diagnosis.
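The thesis reports detection performance as accuracy, sensitivity, and specificity. These follow directly from the confusion matrix; the sketch below uses illustrative labels, not the thesis data:

```python
# How accuracy, sensitivity and specificity follow from a confusion matrix.
# Labels here are illustrative only (1 = glaucoma, 0 = normal).

def detection_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # true-positive rate: glaucoma found
        "specificity": tn / (tn + fp),  # true-negative rate: normals cleared
    }

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]   # one miss, one false alarm
m = detection_metrics(y_true, y_pred)
print(m)  # accuracy 0.75, sensitivity 0.75, specificity 0.75
```

High sensitivity matters for screening (few missed glaucoma cases), while high specificity limits unnecessary referrals, which is why the thesis reports both alongside accuracy.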

    Toward Image-Guided Automated Suture Grasping Under Complex Environments: A Learning-Enabled and Optimization-Based Holistic Framework

    Get PDF
    To realize a higher level of autonomy in surgical knot tying in minimally invasive surgery (MIS), automated suture grasping, which bridges the suture stitching and looping procedures, is an important yet challenging task that needs to be achieved. This paper presents a holistic framework with image-guided and automation techniques to robotize this operation even under complex environments. The whole task is initialized by suture segmentation, in which we propose a novel semi-supervised learning architecture featuring a suture-aware loss to pertinently learn the suture's slender structure using both annotated and unannotated data. With successful segmentation in the stereo camera, we develop a Sampling-based Sliding Pairing (SSP) algorithm to online optimize the suture's 3D shape. By jointly studying the robotic configuration and the suture's spatial characteristics, a target function is introduced to find the optimal grasping pose of the surgical tool under Remote Center of Motion (RCM) constraints. To compensate for inherent errors and practical uncertainties, a unified grasping strategy with a novel vision-based mechanism is introduced to autonomously accomplish this grasping task. Our framework is extensively evaluated on learning-based segmentation, 3D reconstruction, and image-guided grasping on the da Vinci Research Kit (dVRK) platform, where we achieve high performance and success rates in perception and robotic manipulation. These results prove the feasibility of our approach in automating the suture grasping task, and this work fills the gap between automated surgical stitching and looping, stepping towards a higher level of task autonomy in surgical knot tying.
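The paper's target function scores candidate grasping poses against the suture's spatial shape under RCM constraints. A toy sketch of that idea (the cost terms, weights, and geometry are illustrative assumptions, not the paper's formulation) picks the candidate pose minimising a weighted cost:

```python
# Toy grasp-pose selection: trade off proximity to the target point on the
# suture against deviation from the trocar's remote centre of motion (RCM).
# All numbers, weights and the cost form are illustrative assumptions.
import math

def cost(pose, suture_point, rcm, w_dist=1.0, w_rcm=0.5):
    # Distance from the tool tip to the target point on the suture.
    dist = math.dist(pose, suture_point)
    # Penalty for straying from a nominal 0.1 m insertion depth at the RCM.
    rcm_penalty = abs(math.dist(pose, rcm) - 0.1)
    return w_dist * dist + w_rcm * rcm_penalty

suture_point = (0.05, 0.02, 0.00)   # metres, camera frame (illustrative)
rcm = (0.00, 0.00, 0.10)
candidates = [(0.05, 0.02, 0.01), (0.04, 0.03, 0.05), (0.10, 0.10, 0.00)]

best = min(candidates, key=lambda p: cost(p, suture_point, rcm))
print(best)  # (0.05, 0.02, 0.01): closest to the suture, near-nominal depth
```

The actual method optimises over a continuous pose space with the full kinematic RCM constraint; discrete candidate scoring is only the simplest way to show the trade-off the target function encodes.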

    Arteriogenesis – Molecular Regulation, Pathophysiology and Therapeutics I

    Get PDF