618 research outputs found

    Reconstruction of 3D Surface Maps from Anterior Segment Optical Coherence Tomography Images Using Graph Theory and Genetic Algorithms

    Get PDF
    Automatic segmentation of anterior segment optical coherence tomography images provides an important tool to aid the management of ocular diseases. Previous studies have mainly focused on 2D segmentation of these images. A novel technique capable of producing 3D maps of the anterior segment is presented here. This method uses graph theory and dynamic programming with a shape constraint to segment the anterior and posterior surfaces in individual 2D images. Genetic algorithms are then used to align the 2D images to produce a full 3D representation of the anterior segment. To validate the results of the 2D segmentation, comparison is made to manual segmentation over a set of 39 images. For the 3D reconstruction, a data set of 17 eyes is used; each eye has been imaged twice so that a repeatability measurement can be made. Good agreement was found with manual segmentation: the 2D segmentation method achieved a Dice similarity coefficient of 0.96, comparable to the inter-observer agreement. Good repeatability was demonstrated with the 3D registration method, with a mean difference of 1.77 pixels between the anterior surfaces extracted from repeated scans of the same eye.
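
    The Dice similarity coefficient used for validation above is a standard overlap measure: twice the intersection of two segmentations divided by the sum of their sizes. A minimal sketch of how it could be computed for two binary masks follows; the array shapes and example regions are illustrative assumptions, not details from the paper.

    ```python
    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Dice similarity coefficient: 2*|A intersect B| / (|A| + |B|) for binary masks."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        overlap = np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        return 2.0 * overlap / total if total > 0 else 1.0

    # Hypothetical example: an automatic segmentation compared with a manual one.
    auto_mask = np.zeros((256, 512), dtype=bool)
    manual_mask = np.zeros((256, 512), dtype=bool)
    auto_mask[100:150, :] = True      # region found by the automatic method
    manual_mask[102:150, :] = True    # region outlined by the expert
    print(f"Dice = {dice_coefficient(auto_mask, manual_mask):.3f}")
    ```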

    Automatic segmentation of anterior segment optical coherence tomography images

    Get PDF
    Automatic segmentation of anterior segment optical coherence tomography (AS OCT) images provides an important tool to aid the management of ocular diseases. Having precise details about the topography and thickness of an individual eye enables treatments to be tailored to a specific problem. OCT is an imaging technique that can be used to acquire volumetric data of the anterior segment of the human eye. Fast automatic segmentation of these data, which is not currently available, would allow clinically useful information to be obtained without the need for time-consuming, error-prone manual analysis of the images. This thesis presents newly developed automatic segmentation techniques for OCT images. Segmentation of 2D OCT images is performed first. One of the main challenges in segmenting 2D OCT images is the presence of regions with a low signal-to-noise ratio. This is overcome by the use of shape-based terms. A number of different methods, such as level set, graph cut, and graph theory, are developed for this purpose. The segmentation techniques are validated by comparison to expert manual segmentation and previously published segmentation techniques. The best method, graph theory with shape, achieved segmentation comparable to manual segmentation, with a Dice similarity coefficient of 0.96, which is comparable to inter-observer agreement, and performed significantly better than previously published techniques. The 2D segmentation techniques are then extended to 3D segmentation of OCT images. The challenge here is motion artefact and poor alignment between the 2D images that comprise a 3D image. Different segmentation strategies are investigated, including direct segmentation by level set or graph cut approaches, and segmentation with registration. The latter introduces a registration step that aligns multiple 2D images into a 3D representation, overcoming involuntary motion artefacts, and it produces the best performance. It uses graph theory and dynamic programming with a shape constraint to segment the anterior and posterior surfaces in individual 2D images; genetic algorithms are then used to align the 2D images into a full 3D representation of the anterior segment based on landmarks or geometric constraints. For the 3D segmentation, a data set of 17 eyes is used for validation. Each eye has been imaged twice so that a repeatability measurement can be made. Good repeatability is demonstrated with the 3D alignment method: a mean difference of 1.77 pixels is found between the same surfaces of repeated scans of the same eye. Overall, a new automated method is developed that can produce maps of the anterior and posterior surfaces of the cornea from a 3D image of the anterior segment of a human eye. This will be a valuable tool for patient-specific biomechanical modelling of the human eye.
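
    To illustrate the graph-theory and dynamic-programming step described above, the sketch below traces a minimum-cost surface across the columns of a 2D cost image (for example, one derived from intensity gradients), limiting the row change between neighbouring columns to keep the surface smooth. This is a generic sketch of that class of approach, not the thesis implementation; the cost construction and the jump limit are assumptions.

    ```python
    import numpy as np

    def segment_surface(cost, max_jump=2):
        """Trace a minimum-cost surface (one row per column) through a 2D cost image
        with dynamic programming, limiting the row change between adjacent columns."""
        rows, cols = cost.shape
        acc = np.full((rows, cols), np.inf)        # accumulated path cost
        back = np.zeros((rows, cols), dtype=int)   # back-pointer to the best predecessor row
        acc[:, 0] = cost[:, 0]
        for c in range(1, cols):
            for r in range(rows):
                lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
                prev = acc[lo:hi, c - 1]
                best = int(np.argmin(prev))
                acc[r, c] = cost[r, c] + prev[best]
                back[r, c] = lo + best
        surface = np.zeros(cols, dtype=int)
        surface[-1] = int(np.argmin(acc[:, -1]))   # cheapest end point in the last column
        for c in range(cols - 1, 0, -1):           # backtrack to recover the whole surface
            surface[c - 1] = back[surface[c], c]
        return surface

    # Synthetic B-scan with one bright boundary near row 40.
    image = 0.1 * np.random.rand(100, 200)
    image[40:, :] += 1.0                     # tissue below the boundary is bright
    cost = -np.gradient(image, axis=0)       # dark-to-bright transitions become cheap
    boundary = segment_surface(cost)         # row index of the boundary in every column
    ```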

    Machine Learning Approaches for Automated Glaucoma Detection using Clinical Data and Optical Coherence Tomography Images

    Full text link
    Glaucoma is a multi-factorial, progressive, blinding optic neuropathy. A variety of factors, including genetics, vasculature, anatomy, and immune factors, are involved. Worldwide, more than 80 million people are affected by glaucoma, including around 300,000 in Australia, where 50% remain undiagnosed. Untreated glaucoma can lead to blindness. Early detection by artificial intelligence (AI) is crucial to accelerate the diagnosis process and can prevent further vision loss. Many proposed AI systems have shown promising performance for automated glaucoma detection using two-dimensional (2D) data. However, only a few studies have reported optimistic outcomes for glaucoma detection and staging. Moreover, automated AI systems still face challenges in diagnosing at the clinicians' level due to the limited interpretability of the ML algorithms and the limited integration of multiple clinical data. AI technology would be welcomed by doctors and patients if the "black box" notion were overcome by developing an explainable, transparent AI system that uses the same pathological markers clinicians rely on as signs of early detection and progression of glaucomatous damage. Therefore, this thesis aimed to develop a comprehensive AI model to detect and stage glaucoma by incorporating a variety of clinical data and utilising advanced data analysis and machine learning (ML) techniques. The research first focuses on optimising glaucoma diagnostic features by combining structural, functional, demographic, risk factor, and optical coherence tomography (OCT) features. The significant features were evaluated using statistical analysis and used to train ML algorithms to assess the detection performance. Three crucial structural optic nerve head (ONH) OCT features, the cross-sectional 2D radial B-scan, 3D vascular angiography, and the temporal-superior-nasal-inferior-temporal (TSNIT) B-scan, were analysed and used to train explainable deep learning (DL) models for automated glaucoma prediction. The explanations behind the decision-making of the DL models were successfully demonstrated using feature visualisation. The structural features, or distinguished affected regions, of the TSNIT OCT scans were precisely localised for glaucoma patients. This is consistent with the concept of explainable DL, which refers to making the decision-making processes of DL models transparent and interpretable to humans. However, artifacts and speckle noise often result in misinterpretation of the TSNIT OCT scans. This research also developed an automated DL model to remove artifacts and noise from the OCT scans, facilitating error-free retinal layer segmentation, accurate tissue thickness estimation, and image interpretation. Moreover, to monitor and grade glaucoma severity, the visual field (VF) test is commonly used by clinicians for treatment and management. Therefore, this research uses the functional features extracted from VF images to train ML algorithms for staging glaucoma from early to advanced/severe stages. Finally, the selected significant features were used to design and develop a comprehensive AI model to detect and grade glaucoma stages based on data quantity and availability. In the first stage, a DL model was trained with TSNIT OCT scans, and its output was combined with significant structural and functional features and used to train ML models. The best-performing ML model achieved an area under the curve (AUC) of 0.98, an accuracy of 97.2%, a sensitivity of 97.9%, and a specificity of 96.4% for detecting glaucoma. The model achieved an overall accuracy of 90.7% and an F1 score of 84.0% for classifying normal, early, moderate, and advanced-stage glaucoma. In conclusion, this thesis developed and proposed a comprehensive, evidence-based AI model that will address the screening problem for large populations and relieve experts from manually analysing large volumes of patient data and the associated misinterpretation problems. Moreover, this thesis demonstrated three structural OCT features that could serve as excellent diagnostic markers for precise glaucoma diagnosis.
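
    The detection figures quoted above (AUC, accuracy, sensitivity, specificity) follow their standard definitions. A small sketch of how such metrics could be computed from a trained classifier's outputs is shown below; the labels and probabilities are placeholders, not data from the thesis.

    ```python
    import numpy as np
    from sklearn.metrics import confusion_matrix, roc_auc_score

    # Placeholder labels (1 = glaucoma) and predicted probabilities from some trained model.
    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
    y_prob = np.array([0.10, 0.40, 0.80, 0.90, 0.65, 0.20, 0.70, 0.35])
    y_pred = (y_prob >= 0.5).astype(int)

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    auc = roc_auc_score(y_true, y_prob)
    print(f"AUC={auc:.2f}  acc={accuracy:.1%}  sens={sensitivity:.1%}  spec={specificity:.1%}")
    ```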

    Measurement of the Intertablet Coating Uniformity of a Pharmaceutical Pan Coating Process With Combined Terahertz and Optical Coherence Tomography In-Line Sensing.

    Get PDF
    We present in-line coating thickness measurements acquired simultaneously using two independent sensing modalities: terahertz pulsed imaging (TPI) and optical coherence tomography (OCT). Both techniques are sufficiently fast to resolve the coating thickness of individual pharmaceutical tablets in situ during the film coating operation, and both are direct structural imaging techniques that do not require multivariate calibration. The TPI sensor is suitable for measuring coatings thicker than 50 μm and can penetrate thick coatings, even in the presence of pigments, over a wide range of excipients. Due to its long wavelength, terahertz radiation is not affected by scattering from dust within the coater. In contrast, OCT can resolve coating layers as thin as 20 μm and is capable of measuring the intratablet coating uniformity and the intertablet coating thickness distribution within the coating pan. However, the OCT technique is less robust in terms of its compatibility with excipients, dust, and potentially the maximum coating thickness that can be resolved. Using a custom-built laboratory-scale coating unit, coating thickness measurements were acquired independently by the TPI and OCT sensors throughout a film coating operation. Results of the in-line TPI and OCT measurements were compared against one another and validated with off-line TPI and weight gain measurements. Compared with other process analytical technology sensors, such as near-infrared and Raman spectroscopy, the TPI and OCT sensors can resolve the intertablet thickness distribution by sampling a significant fraction of the tablet population in the process. By combining two complementary sensing modalities, it was possible to seamlessly monitor the coating process over the range of film thickness from 20 μm to greater than 250 μm. The authors would like to acknowledge the financial support from UK EPSRC Research Grants EP/L019787/1 and EP/L019922/1. The authors acknowledge BASF for providing the materials used in this study, Colorcon Ltd. (Dartford, UK) for coating process recommendations, Hüttlin GmbH (Bosch Packaging Technology, Schopfheim, Germany) for advice on the coating unit design, and the staff of the electronics and mechanical workshops in the Department of Chemical Engineering and Biotechnology at the University of Cambridge. HL also acknowledges travel support from the Joy Welch Educational Charitable Trust.
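
    Both sensing modalities derive the physical coating thickness from the optical path between the reflections at the coating surface and at the coating/core interface, scaled by the coating's refractive index. A minimal sketch of that conversion is given below; the refractive index and delay values are placeholder assumptions, not measurements from this study.

    ```python
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def thickness_from_thz_delay(echo_delay_s, n_coating):
        """Coating thickness from the round-trip time delay between the terahertz echoes
        reflected at the coating surface and at the coating/core interface."""
        return C * echo_delay_s / (2.0 * n_coating)

    def thickness_from_oct_optical_path(optical_thickness_m, n_coating):
        """Physical thickness from the optical thickness seen in the OCT depth scan."""
        return optical_thickness_m / n_coating

    # Placeholder values: a 0.6 ps round-trip echo delay and a coating refractive index of 1.8.
    print(thickness_from_thz_delay(0.6e-12, 1.8) * 1e6, "micrometres")   # about 50 micrometres
    ```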

    CAD system for early diagnosis of diabetic retinopathy based on 3D extracted imaging markers.

    Get PDF
    This dissertation makes significant contributions to the field of ophthalmology, addressing the segmentation of retinal layers and the diagnosis of diabetic retinopathy (DR). The first contribution is a novel 3D segmentation approach that leverages the patient-specific anatomy of retinal layers. This approach demonstrates superior accuracy in segmenting all retinal layers from a 3D retinal image compared to current state-of-the-art methods. It also offers enhanced speed, enabling potential clinical applications. The proposed segmentation approach holds great potential for supporting surgical planning and guidance in retinal procedures such as retinal detachment repair or macular hole closure. Surgeons can benefit from the accurate delineation of retinal layers, enabling a better understanding of the anatomical structure and more effective surgical interventions. Moreover, real-time guidance systems can be developed to assist surgeons during procedures, improving overall patient outcomes. The second contribution of this dissertation is the introduction of a novel computer-aided diagnosis (CAD) system for precise identification of diabetic retinopathy. The CAD system utilizes 3D-OCT imaging and employs an innovative approach that extracts two distinct features: first-order reflectivity and 3D thickness. These features are then fused and used to train and test a neural network classifier. The proposed CAD system exhibits promising results, surpassing other machine learning and deep learning algorithms commonly employed in DR detection. This demonstrates the effectiveness of the comprehensive analysis approach employed by the CAD system, which considers both low-level and high-level data from the 3D retinal layers. The CAD system presents a groundbreaking contribution to the field, as it goes beyond conventional methods, optimizing backpropagated neural networks to integrate multiple levels of information effectively. By achieving superior performance, the proposed CAD system showcases its potential in accurately diagnosing DR and aiding in the prevention of vision loss. In conclusion, this dissertation presents novel approaches for the segmentation of retinal layers and the diagnosis of diabetic retinopathy. The proposed methods exhibit significant improvements in accuracy, speed, and performance compared to existing techniques, opening new avenues for clinical applications and advancements in the field of ophthalmology. By addressing future research directions, such as testing on larger datasets, exploring alternative algorithms, and incorporating user feedback, the proposed methods can be further refined and developed into robust, accurate, and clinically valuable tools for diagnosing and monitoring retinal diseases.
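
    As an illustration of the feature-fusion step described above, the sketch below concatenates per-layer reflectivity and thickness features and trains a backpropagated neural network classifier on them. The feature dimensions, the synthetic data, and the scikit-learn MLP are illustrative assumptions, not the dissertation's implementation.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_subjects, n_layers = 200, 12                            # hypothetical data set size
    reflectivity = rng.normal(size=(n_subjects, n_layers))    # first-order reflectivity per layer
    thickness = rng.normal(size=(n_subjects, n_layers))       # 3D thickness per layer
    labels = rng.integers(0, 2, size=n_subjects)              # 0 = normal, 1 = DR

    # Fuse the two feature groups by concatenation before classification.
    features = np.concatenate([reflectivity, thickness], axis=1)
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0)

    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    ```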
