Outer retinal thickness and visibility of the choriocapillaris in four distinct retinal regions imaged with spectral domain optical coherence tomography in dogs and cats
Purpose: To evaluate the outer retinal band thickness and choriocapillaris (CC) visibility in four distinct retinal regions in dogs and cats imaged with spectral domain optical coherence tomography (SD-OCT). To attempt delineation of a fovea-like region in canine and feline SD-OCT scans, aided by the identification of outer retinal thickness differences between retinal regions.
Methods: Spectralis® HRA + OCT SD-OCT scans from healthy, anesthetized dogs (n = 10) and cats (n = 12) were analyzed. Scanlines on which the CC was identifiable were counted and CC visibility was scored. Outer nuclear layer (ONL) thickness and the distances from external limiting membrane (ELM) to retinal pigment epithelium/Bruch's membrane complex (RPE/BM) and ELM to CC were measured in the area centralis (AC), a visually identified fovea-like region, and in regions superior and inferior to the optic nerve head (ONH). Measurements were analyzed using a multilevel regression.
Results: The CC was visible in over 90% of scanlines from dogs and cats. The ONL was consistently thinnest in the fovea-like region. The outer retina (ELM-RPE and ELM-CC) was thickest within the AC compared with the regions superior and inferior to the ONH in dogs and cats (p < .001 for all comparisons).
Conclusions: The CC appears to be a valid, albeit less-than-ideal, outer retinal boundary marker in tapetal species. The AC can be objectively differentiated from the surrounding retina on SD-OCT images of dogs and cats; a fovea-like region was identified in dogs and its presence was suggested in cats. These findings allow targeted imaging and image evaluation of these regions of retinal specialization.
Vet Ophthalmol
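The within-subject design behind the multilevel regression can be illustrated with a short sketch. All values and variable names below are illustrative assumptions, not data from the study, and the numpy-only paired contrast is a simplified stand-in for the full multilevel model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ELM-RPE thickness measurements (micrometres) for 10 dogs
# in three regions; the region means and noise levels are invented.
n_dogs = 10
subject_offset = rng.normal(0, 5, n_dogs)  # per-dog random intercept
region_means = {"AC": 95.0, "superior": 85.0, "inferior": 84.0}

data = {region: mu + subject_offset + rng.normal(0, 2, n_dogs)
        for region, mu in region_means.items()}

# Within-subject contrasts mirror the model's fixed effect of region:
# each dog serves as its own control, absorbing the random intercept.
diff_sup = data["AC"] - data["superior"]
diff_inf = data["AC"] - data["inferior"]
print(diff_sup.mean(), diff_inf.mean())
```

A real analysis would fit region as a fixed effect and animal as a random intercept (e.g. with a mixed-effects package); the paired contrast above only conveys the structure of the repeated-measures data.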
Optical Methods in Sensing and Imaging for Medical and Biological Applications
The recent advances in optical sources and detectors have opened up new opportunities for sensing and imaging techniques which can be successfully used in biomedical and healthcare applications. This book, entitled “Optical Methods in Sensing and Imaging for Medical and Biological Applications”, focuses on various aspects of the research and development related to these areas. The book will be a valuable source of information presenting the recent advances in optical methods and novel techniques, as well as their applications in the fields of biomedicine and healthcare, to anyone interested in this subject.
Image analytic tools for tissue characterization using optical coherence tomography
Optical coherence tomography (OCT) has been emerging as a promising imaging technique, with a strong capability for non-invasive, in vivo, high-resolution, depth-resolved imaging. There is great potential to use OCT to guide the treatment of arrhythmias, to prevent preterm birth, and to detect breast cancer. To facilitate these clinical applications, this thesis presents three image analytic tools to characterize biological tissue: 1) automated fiber direction analysis; 2) automated volumetric stitching; 3) automated tissue classification. The fiber direction analysis consists of a particle-filter-based 3D tractography scheme and a pixel-wise fiber analysis scheme. The stitching algorithm enlarges the field of view of the current OCT system from the millimeter to the centimeter level by volumetric stitching using the scale-invariant feature transform. Based on the relevance vector machine, a region-based classification scheme and a grid-based classification scheme are developed to automatically identify tissue composition in human cardiac tissue and human breast tissue. These tools are collaboratively used to study OCT images from cardiac, cervical, and breast tissue.
In cardiac tissue, we apply the fiber orientation analysis to reconstruct 3D cardiac myofiber tractography and perform pixel-wise fiber analysis on the collagen region within the human heart. In addition, we apply the region-based algorithm to segment and classify tissue compositions, such as collagen, adipose tissue, fibrotic myocardium, and normal myocardium, over a single or a stitched OCT volume. Using our algorithm, we observe fiber directionality change over depth and find that the fiber orientation changes more dramatically in the atria than in the ventricles. We also observe different dispersion patterns within the collagen layer.
In cervical tissue, our stitching algorithm enables a panoramic 3D view of entire axial slices. Together with the pixel-wise fiber orientation scheme, we analyze differences in dispersion properties within the inner/outer regions of four quadrants. We observe two dispersion patterns in pregnant and non-pregnant cervical tissue at a location close to the upper cervix. In addition, we discover an increasing trend in both dispersion and penetration depth from the internal orifice (os) to the external os.
In breast tissue, we visualize various features in both benign and malignant tissues, such as invasive ductal carcinoma (IDC), ductal carcinoma in situ, cysts, and the terminal duct lobular unit, in stitched OCT images. Focusing on the automated detection of IDC, we propose a hierarchical classification framework and apply our classifier to two OCT systems, achieving both reasonable sensitivity and specificity in identifying cancerous regions.
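The feature-based volumetric stitching described above can be hinted at with a much simpler translational example. The sketch below uses phase correlation instead of SIFT (an assumption made for brevity, not the thesis's actual method) to recover the integer offset between two overlapping 2D scans:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the (row, col) translation d such that a == np.roll(b, d).

    For purely translational overlap, the normalized cross-power spectrum
    inverts to a delta function located at the shift.
    """
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, a.shape))

# Toy example: a random "B-scan" circularly shifted by (5, 12).
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, (5, 12), axis=(0, 1))
print(phase_correlation_shift(shifted, img))  # (5, 12)
```

A full SIFT-based pipeline additionally handles rotation, scale, and sub-volume matching, but the registration-then-composite idea is the same.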
Data efficient deep learning for medical image analysis: A survey
The rapid evolution of deep learning has significantly advanced the field of medical image analysis. However, despite these achievements, the further enhancement of deep learning models for medical image analysis faces a significant challenge due to the scarcity of large, well-annotated datasets. To address this issue, recent years have witnessed a growing emphasis on the development of data-efficient deep learning methods. This paper conducts a thorough review of data-efficient deep learning methods for medical image analysis. To this end, we categorize these methods based on the level of supervision they rely on, encompassing categories such as no supervision, inexact supervision, incomplete supervision, inaccurate supervision, and only limited supervision. We further divide these categories into finer subcategories. For example, we categorize inexact supervision into multiple instance learning and learning with weak annotations. Similarly, we categorize incomplete supervision into semi-supervised learning, active learning, and domain-adaptive learning, and so on. Furthermore, we systematically summarize commonly used datasets for data-efficient deep learning in medical image analysis and investigate future research directions to conclude this survey.
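One of the incomplete-supervision subcategories, semi-supervised learning, is commonly instantiated as self-training with pseudo-labels. The toy sketch below (hypothetical 2D data, with a nearest-centroid classifier standing in for a deep model) shows a single pseudo-labeling round:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: a handful of labelled points, many unlabelled.
X_lab = np.array([[0.0, 0.0], [0.2, -0.1], [3.0, 3.0], [2.8, 3.2]])
y_lab = np.array([0, 0, 1, 1])
X_unl = np.vstack([rng.normal(0, 0.3, (50, 2)),   # cluster near class 0
                   rng.normal(3, 0.3, (50, 2))])  # cluster near class 1

def nearest_centroid_predict(X, centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Round 1: fit on labelled data only, then pseudo-label the rest.
centroids = np.array([X_lab[y_lab == c].mean(axis=0) for c in (0, 1)])
pseudo = nearest_centroid_predict(X_unl, centroids)

# Round 2: refit on labelled + pseudo-labelled data (one self-training step).
X_all = np.vstack([X_lab, X_unl])
y_all = np.concatenate([y_lab, pseudo])
centroids = np.array([X_all[y_all == c].mean(axis=0) for c in (0, 1)])
```

Practical variants keep only high-confidence pseudo-labels per round; the sketch omits that filtering for brevity.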
Machine Learning Approaches for Automated Glaucoma Detection using Clinical Data and Optical Coherence Tomography Images
Glaucoma is a multifactorial, progressive, blinding optic neuropathy. A variety of factors, including genetics, vasculature, anatomy, and immune factors, are involved. Worldwide, more than 80 million people are affected by glaucoma, around 300,000 of them in Australia, where 50% remain undiagnosed. Untreated glaucoma can lead to blindness. Early detection by artificial intelligence (AI) is crucial to accelerate the diagnosis process and can prevent further vision loss. Many proposed AI systems have shown promising performance for automated glaucoma detection using two-dimensional (2D) data. However, only a few studies have reported optimistic outcomes for glaucoma detection and staging. Moreover, automated AI systems still face challenges in diagnosing at the clinicians' level due to the lack of interpretability of the ML algorithms and the integration of multiple clinical data. AI technology would be welcomed by doctors and patients if the "black box" notion is overcome by developing an explainable, transparent AI system with pathological markers similar to those used by clinicians as signs of early detection and progression of glaucomatous damage.
Therefore, the thesis aimed to develop a comprehensive AI model to detect and stage glaucoma by incorporating a variety of clinical data and utilising advanced data analysis and machine learning (ML) techniques.
The research first focuses on optimising glaucoma diagnostic features by combining structural, functional, demographic, risk factor, and optical coherence tomography (OCT) features. The significant features were evaluated using statistical analysis and trained in ML algorithms to observe the detection performance. Three crucial structural ONH OCT features: cross-sectional 2D radial B-scan, 3D vascular angiography, and temporal-superior-nasal-inferior-temporal (TSNIT) B-scan, were analysed and trained in explainable deep learning (DL) models for automated glaucoma prediction. The explanations behind the decision-making of the DL models were successfully demonstrated using feature visualisation. The structural features, or distinctly affected regions, of TSNIT OCT scans were precisely localised for glaucoma patients. This is consistent with the concept of explainable DL, which refers to making the decision-making processes of DL models transparent and interpretable to humans. However, artifacts and speckle noise often result in misinterpretation of the TSNIT OCT scans. This research therefore also developed an automated DL model to remove artifacts and noise from the OCT scans, facilitating error-free retinal layer segmentation, accurate tissue thickness estimation, and image interpretation.
Moreover, to monitor and grade glaucoma severity, the visual field (VF) test is commonly followed by clinicians for treatment and management. Therefore, this research uses the functional features extracted from VF images to train ML algorithms for staging glaucoma from early to advanced/severe stages.
Finally, the selected significant features were used to design and develop a comprehensive AI model to detect and grade glaucoma stages based on data quantity and availability. In the first stage, a DL model was trained with TSNIT OCT scans, and its output was combined with significant structural and functional features and trained in ML models. The best-performing ML model achieved an area under the curve (AUC) of 0.98, an accuracy of 97.2%, a sensitivity of 97.9%, and a specificity of 96.4% for detecting glaucoma. The model achieved an overall accuracy of 90.7% and an F1 score of 84.0% for classifying normal, early, moderate, and advanced-stage glaucoma.
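The reported sensitivity, specificity, and accuracy follow directly from a binary confusion matrix. The counts below are illustrative, chosen only to show the arithmetic, and are not the thesis's data:

```python
# Hypothetical confusion counts for a binary glaucoma-vs-normal classifier.
tp, fn = 47, 1    # glaucoma cases: correctly / incorrectly classified
tn, fp = 53, 2    # normal cases: correctly / incorrectly classified

sensitivity = tp / (tp + fn)              # true-positive rate
specificity = tn / (tn + fp)              # true-negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sens={sensitivity:.3f} spec={specificity:.3f} acc={accuracy:.3f}")
# sens=0.979 spec=0.964 acc=0.971
```

The AUC, by contrast, is threshold-free: it sweeps the decision threshold over the classifier's scores and integrates the resulting ROC curve.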
In conclusion, this thesis developed and proposed a comprehensive, evidence-based AI model that will solve the screening problem for large populations and relieve experts from manually analysing a slew of patient data and the associated misinterpretation problems. Moreover, this thesis demonstrated three structural OCT features that could be added as excellent diagnostic markers for precise glaucoma diagnosis.
Toward Image-Guided Automated Suture Grasping Under Complex Environments: A Learning-Enabled and Optimization-Based Holistic Framework
To realize a higher level of autonomy in surgical knot tying in minimally invasive surgery (MIS), automated suture grasping, which bridges the suture stitching and looping procedures, is an important yet challenging task that needs to be achieved. This paper presents a holistic framework with image-guided and automation techniques to robotize this operation even under complex environments. The whole task is initialized by suture segmentation, in which we propose a novel semi-supervised learning architecture featuring a suture-aware loss to pertinently learn the suture's slender structure using both annotated and unannotated data. With successful segmentation in the stereo camera, we develop a Sampling-based Sliding Pairing (SSP) algorithm to optimize the suture's 3D shape online. By jointly studying the robotic configuration and the suture's spatial characteristics, a target function is introduced to find the optimal grasping pose of the surgical tool under Remote Center of Motion (RCM) constraints. To compensate for inherent errors and practical uncertainties, a unified grasping strategy with a novel vision-based mechanism is introduced to autonomously accomplish this grasping task. Our framework is extensively evaluated on learning-based segmentation, 3D reconstruction, and image-guided grasping on the da Vinci Research Kit (dVRK) platform, where we achieve high performance and success rates in perception and robotic manipulation. These results prove the feasibility of our approach in automating the suture grasping task; this work fills the gap between automated surgical stitching and looping, stepping towards a higher level of task autonomy in surgical knot tying.
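The stereo 3D reconstruction that feeds the SSP algorithm ultimately rests on triangulation. As a minimal sketch under the simplifying assumption of an ideal rectified stereo pair (the paper's actual pipeline is more elaborate), depth follows from disparity:

```python
import numpy as np

def triangulate_rectified(xl, xr, y, f, baseline):
    """Recover a 3D point (camera frame of the left camera) from a
    rectified stereo pair.

    xl, xr: horizontal pixel coordinates in left/right images (principal
    point at 0); y: shared vertical coordinate; f: focal length in pixels;
    baseline: camera separation in metres. This is textbook disparity-based
    depth, a simplified stand-in for the paper's suture reconstruction.
    """
    disparity = xl - xr
    Z = f * baseline / disparity     # depth is inversely proportional to disparity
    X = xl * Z / f                   # back-project the left-image pixel
    Y = y * Z / f
    return np.array([X, Y, Z])

# Toy check: a point at 0.1 m depth, cameras 5 mm apart, f = 800 px.
p = triangulate_rectified(xl=40.0, xr=0.0, y=16.0, f=800.0, baseline=0.005)
print(p)  # [0.005 0.002 0.1  ]
```

Triangulating many such points along the segmented suture yields the 3D point set that a curve-fitting step like SSP can then order and refine.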