
    Objective localisation of oral mucosal lesions using optical coherence tomography.

    Identification of the most representative location for biopsy is critical in establishing the definitive diagnosis of oral mucosal lesions. Currently, this process involves visual evaluation of the colour characteristics of tissue, aided by topical application of contrast-enhancing agents. Although this approach is widely practised, it remains limited by its lack of objectivity in identifying and delineating suspicious areas for biopsy. Overcoming this drawback requires a technique that provides macroscopic guidance based on microscopic imaging and analysis. Optical coherence tomography (OCT) is an emerging high-resolution biomedical imaging modality that can potentially be used as an in vivo tool for selecting the most appropriate biopsy site. This thesis investigates the use of OCT for qualitative and quantitative mapping of oral mucosal lesions. Feasibility studies were performed on patient biopsy samples prior to histopathological processing using a commercial OCT microscope. Qualitative imaging results examining a variety of normal, benign, inflammatory and premalignant lesions of the oral mucosa are presented. Furthermore, the identification and use of a common quantifiable parameter in OCT and histology images of normal and dysplastic oral epithelium is explored, enabling objective and reproducible mapping of the progression of oral carcinogenesis. Finally, the selection of the most representative biopsy site of oral epithelial dysplasia is investigated using a novel approach, scattering attenuation microscopy. It is hoped this approach may convey more clinical meaning than conventional visualisation of OCT images.
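    The scattering attenuation microscopy pipeline itself is not detailed in this abstract, but one widely used quantifiable OCT parameter of the kind described is the depth attenuation coefficient of an A-scan. A minimal sketch, assuming a single-scattering Beer-Lambert decay model (the function name and synthetic data are illustrative, not taken from the thesis):

```python
import numpy as np

def attenuation_coefficient(a_scan, dz):
    """Estimate the attenuation coefficient (1/mm) of a single OCT A-scan
    by a log-linear fit to the single-scattering decay model
    I(z) = I0 * exp(-2 * mu * z).  `dz` is the axial pixel size in mm."""
    z = np.arange(len(a_scan)) * dz
    # slope of ln(I) vs z equals -2*mu under the assumed model
    slope, _ = np.polyfit(z, np.log(a_scan), 1)
    return -slope / 2.0

# synthetic, noise-free A-scan with mu = 3 /mm
dz = 0.005
z = np.arange(400) * dz
signal = 100.0 * np.exp(-2 * 3.0 * z)
mu = attenuation_coefficient(signal, dz)  # recovers 3.0
```

    In practice the fit would be restricted to the epithelial depth range and repeated per A-scan to build a two-dimensional attenuation map over the lesion.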

    Improving statistical power of glaucoma clinical trials using an ensemble of cyclical generative adversarial networks

    Although spectral-domain OCT (SDOCT) is now in clinical use for glaucoma management, published clinical trials relied on time-domain OCT (TDOCT), which is characterized by a low signal-to-noise ratio, leading to low statistical power. For this reason, such trials require large numbers of patients observed over long intervals, making them more costly. We propose a probabilistic ensemble model and a cycle-consistent perceptual loss for improving the statistical power of trials utilizing TDOCT. TDOCT images are converted to synthesized SDOCT images and segmented via Bayesian fusion of an ensemble of GANs. The final retinal nerve fibre layer segmentation is obtained automatically on an averaged synthesized image using label fusion. We benchmark different networks: (i) GAN, (ii) Wasserstein GAN (WGAN), (iii) GAN + perceptual loss and (iv) WGAN + perceptual loss. For training and validation, an independent dataset is used, while testing is performed on the UK Glaucoma Treatment Study (UKGTS), i.e. a TDOCT-based trial. We quantify the statistical power of the measurements obtained with our method, as compared with those derived from the original TDOCT. The results provide new insights into the UKGTS, showing significantly better separation between treatment arms and improving the statistical power of TDOCT to a level on par with visual field measurements.
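    The exact Bayesian fusion scheme is not given in the abstract; a minimal stand-in for the label-fusion step is per-pixel averaging of the ensemble members' foreground-probability maps followed by thresholding (the function name and toy data are hypothetical):

```python
import numpy as np

def fuse_labels(prob_maps, threshold=0.5):
    """Fuse per-pixel foreground probabilities from an ensemble of
    segmentation networks by averaging, then threshold to a binary
    label map.  A simple stand-in for a Bayesian fusion step."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)

# three toy 2x2 probability maps from hypothetical ensemble members
maps = [np.array([[0.9, 0.2], [0.4, 0.8]]),
        np.array([[0.8, 0.1], [0.6, 0.7]]),
        np.array([[0.7, 0.3], [0.2, 0.9]])]
label = fuse_labels(maps)  # pixels where the ensemble mean >= 0.5
```

    Averaging the members before thresholding is what makes the ensemble probabilistic: disagreement between networks lowers the fused probability and suppresses uncertain pixels.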

    Deep learning-based improvement for the outcomes of glaucoma clinical trials

    Glaucoma is the leading cause of irreversible blindness worldwide. It is a progressive optic neuropathy in which retinal ganglion cell (RGC) axon loss, probably as a consequence of damage at the optic disc, causes a loss of vision, predominantly affecting the mid-peripheral visual field (VF). Glaucoma results in a decrease in vision-related quality of life; early detection and evaluation of disease progression rates is therefore crucial in order to assess the risk of functional impairment and to establish sound treatment strategies. The aim of my research is to improve glaucoma diagnosis by enhancing state-of-the-art analyses of glaucoma clinical trial outcomes using advanced analytical methods. This knowledge would also help better design and analyse clinical trials, providing evidence for re-evaluating existing medications, facilitating diagnosis and suggesting novel disease management. To this end, this thesis provides the following contributions: (i) I developed deep learning-based super-resolution (SR) techniques for optical coherence tomography (OCT) image enhancement and demonstrated that using super-resolved images improves the statistical power of clinical trials; (ii) I developed a deep learning algorithm for segmentation of retinal OCT images, showing that the methodology consistently produces more accurate segmentations than state-of-the-art networks; (iii) I developed a deep learning framework for refining the relationship between structural and functional measurements and demonstrated that the mapping is significantly improved over previous techniques; (iv) I developed a probabilistic method and demonstrated that glaucomatous disc haemorrhages are influenced by a possible systemic factor that makes both eyes bleed simultaneously; (v) I recalculated VF slopes, using the retinal nerve fibre layer thickness (RNFLT) from the super-resolved OCT as a Bayesian prior, and demonstrated that using VF rates with the Bayesian prior as the outcome measure reduces the sample size required to distinguish treatment arms in a clinical trial.
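    The Bayesian-prior construction in contribution (v) can be illustrated with a conjugate normal-normal update, in which the structure-derived prior shrinks a noisy VF slope estimate and reduces its variance; the smaller posterior variance is the mechanism behind the reduced sample size. A sketch under that assumption (all numbers hypothetical):

```python
def posterior_slope(obs_slope, obs_var, prior_slope, prior_var):
    """Combine a least-squares VF progression slope with a
    structure-derived prior (e.g. from RNFLT) under a conjugate
    normal-normal model; returns the posterior mean and variance."""
    precision = 1.0 / obs_var + 1.0 / prior_var
    mean = (obs_slope / obs_var + prior_slope / prior_var) / precision
    return mean, 1.0 / precision

# hypothetical values: noisy VF slope of -1.0 dB/yr (variance 0.5)
# and an RNFLT-derived prior of -0.4 dB/yr (variance 0.25)
mean, var = posterior_slope(-1.0, 0.5, -0.4, 0.25)
# posterior mean -0.6 dB/yr, posterior variance 1/6 < both inputs
```

    Because the posterior variance is strictly smaller than either input variance, trial endpoints built on these slopes need fewer patients to reach the same power.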

    Advancing combined radiological and optical scanning for breast-conserving surgery margin guidance

    Breast cancer is one of the most common types of cancer worldwide, and standard-of-care for early-stage disease typically involves a lumpectomy or breast-conserving surgery (BCS). BCS involves the local resection of cancerous tissue while sparing as much healthy tissue as possible. State-of-the-art methods for intraoperatively evaluating BCS margins are limited. Approximately 20% of BCS cases result in a tissue resection with cancer at or near the resection surface (i.e., a positive margin). A two-fold increase in ipsilateral breast cancer recurrence is associated with the presence of one or more positive margins. Consequently, positive margins often necessitate costly re-excision procedures to achieve a curative outcome. X-ray micro-computed tomography (micro-CT) is emerging as a powerful ex vivo specimen imaging technology, as it rapidly provides robust three-dimensional sensing of tumor morphology. However, X-ray attenuation lacks contrast between soft tissues that are important for surgical decision making during BCS. Optical structured light imaging, including spatial frequency domain imaging and active line scan imaging, can act as an adjunct to micro-CT, providing wide field-of-view, non-contact sensing of relevant breast tissue subtypes on resection margins that cannot be differentiated by micro-CT alone. This thesis is dedicated to multimodal imaging of BCS tissues to ultimately improve intraoperative BCS margin assessment, reducing the number of positive margins after initial surgeries and thereby reducing the need for costly follow-up procedures. Volumetric sensing from micro-CT is combined with surface-weighted, sub-diffuse optical reflectance derived from high-spatial-frequency structured light imaging. Sub-diffuse reflectance plays the key role of providing enhanced contrast to a suite of normal, abnormal benign, and malignant breast tissue subtypes. This finding is corroborated through clinical studies imaging BCS specimen slices post-operatively and is further investigated through an observational clinical trial focused on combined, intraoperative micro-CT and optical imaging of whole, freshly resected BCS tumors. The central thesis of this work is that combining volumetric X-ray imaging and sub-diffuse optical scanning provides a synergistic multimodal imaging solution to margin assessment, one that can be readily implemented or retrofitted in X-ray specimen imaging systems and that could meaningfully improve surgical guidance during initial BCS procedures.
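    The thesis's exact processing chain is not specified here, but the standard way to extract the AC (modulation) amplitude in structured light imaging is three-phase demodulation of sinusoidal projections shifted by 120 degrees; at high spatial frequency this amplitude is the sub-diffuse reflectance signal described above. A minimal sketch on synthetic single-pixel data:

```python
import numpy as np

def demodulate_ac(i1, i2, i3):
    """Recover the AC (modulation) amplitude from three images taken
    with sinusoidal illumination patterns phase-shifted by 120 degrees
    (standard three-phase SFDI demodulation)."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

# synthetic pixel: DC level 1.0, AC amplitude 0.3, arbitrary phase
phase = 0.7
dc, ac = 1.0, 0.3
imgs = [dc + ac * np.cos(phase + k * 2 * np.pi / 3) for k in range(3)]
m_ac = demodulate_ac(*imgs)  # recovers the 0.3 amplitude
```

    Applied pixel-wise to full camera frames, the same expression yields the wide field-of-view sub-diffuse reflectance map, independent of the unknown pattern phase at each pixel.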

    Noninvasive Assessment of Photoreceptor Structure and Function in the Human Retina

    The human photoreceptor mosaic underlies the first steps of vision; thus, even subtle defects in the mosaic can result in severe vision loss. The retina can be examined directly using clinical tools; however, these devices lack the resolution necessary to visualize the photoreceptor mosaic. The primary limiting factor of these devices is the optical aberrations of the human eye. These aberrations are surmountable by incorporating adaptive optics (AO) into ophthalmoscopes, enabling imaging of the photoreceptor mosaic with cellular resolution. Despite the potential of AO imaging, much work remains before this technology can be translated to the clinic. Metrics used in the analysis of AO images are not standardized and are rarely subjected to validation, limiting the ability to reliably track structural changes in the photoreceptor mosaic geometry. Before measurements can be extracted, photoreceptors must be identified within the retinal image itself, which introduces error from both incorrectly identified cells and image distortion. We developed a novel method to extract measures of cell spacing from AO images that does not require identification of individual cells. In addition, we examined the sensitivity of various metrics in detecting changes in the mosaic and assessed the absolute accuracy of measurements made in the presence of image distortion. We also developed novel metrics for describing the mosaic, which may offer advantages over the more traditional metrics of density and spacing. These studies provide a valuable basis for monitoring the photoreceptor mosaic longitudinally. As part of this work, we developed software (Mosaic Analytics) that can be used to standardize analytical efforts across different research groups. In addition, one of the more salient features of the appearance of individual cone photoreceptors is that they vary considerably in their reflectance. It has been proposed that this reflectance signal could be used as a surrogate measure of cone health. As a first step to understanding the cellular origin of these changes, we examined the reflectance properties of the rod photoreceptor mosaic. The observed variation in rod reflectivity over time suggests a common governing physiological process between rods and cones.
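    One standard way to measure photoreceptor spacing without identifying individual cells is to read the dominant peak of the image power spectrum (Yellott's ring); whether this matches the thesis's method is an assumption on my part. A 1D sketch of the idea on a synthetic intensity profile:

```python
import numpy as np

def spacing_from_spectrum(profile, dx):
    """Estimate average cell spacing from a 1D intensity profile
    without locating individual cells: take the power spectrum and
    read off the dominant non-DC spatial frequency (the 1D analogue
    of measuring Yellott's ring).  `dx` is the pixel size in microns."""
    power = np.abs(np.fft.rfft(profile - profile.mean())) ** 2
    freqs = np.fft.rfftfreq(len(profile), d=dx)
    peak = np.argmax(power[1:]) + 1   # skip the DC bin
    return 1.0 / freqs[peak]

# synthetic mosaic profile: cells every 2 microns, 0.1 micron pixels
dx = 0.1
x = np.arange(1000) * dx
profile = 1.0 + np.cos(2 * np.pi * x / 2.0)
spacing = spacing_from_spectrum(profile, dx)  # recovers 2.0 microns
```

    Because the estimate comes from the spectrum of the whole patch, it is insensitive to misidentified individual cells, which is precisely the failure mode of cell-counting metrics.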

    Characterization and Application of Angled Fluorescence Laminar Optical Tomography

    Angled fluorescence laminar optical tomography (aFLOT) is a modified fluorescence tomographic imaging technique that targets the mesoscopic scale (millimetre penetration with resolution in the tens of microns). Traditional FLOT uses multiple detectors to measure a range of scattered fluorescence signals to perform 3D reconstructions. This technology, however, inherently assumes the sample to be scattering. To extend the capability of FLOT to the low-scattering regime, oblique illumination and detection were introduced. The angular degree of freedom for illumination and detection was investigated theoretically and experimentally. It was concluded that aFLOT improved resolution 2.5-fold and enhanced depth selectivity compared with traditional FLOT, and that it enabled a stacking representation, a process that skips the computationally intensive reconstruction usually needed to render the tomogram. Because stacking is enabled, the necessity of a reconstruction process is retrospectively discussed. aFLOT systems were constructed and applied in tissue engineering. Phantoms and engineered tissue models were successfully imaged. aFLOT was shown to perform non-invasive in situ imaging of biologically relevant samples with 1 mm penetration and 9-400 micron resolution, depending on the scattering of the samples. aFLOT demonstrates potential for studying cell-cell and cell-material interactions.

    Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT

    The development of third-generation synchrotron light sources laid the foundation for investigating the 3D structure of opaque samples at micrometre resolution and beyond. This led to the development of X-ray synchrotron micro-computed tomography, which in turn fostered the creation of imaging facilities for studying samples of many kinds, e.g. model organisms, in order to better understand the physiology of complex living systems. The development of modern control systems and robotics enabled full automation of X-ray imaging experiments and on-the-fly calibration of the experimental setup parameters. Advances in digital detector systems brought improvements in resolution, dynamic range, sensitivity and other essential properties. These improvements considerably increased the throughput of the imaging process, but on the other hand the experiments began to generate substantially larger data volumes of up to tens of terabytes, which were subsequently processed manually. These technical advances thus paved the way for more efficient high-throughput experiments that study large numbers of samples and produce higher-quality datasets. There is therefore strong demand in the scientific community for an efficient, automated workflow for X-ray data analysis that can handle such a data load and deliver valuable insights to domain experts. Existing solutions for such a workflow are not directly applicable to high-throughput experiments, since they were developed for ad hoc scenarios in medical imaging; they are therefore neither optimized for high-throughput data streams nor able to exploit the hierarchical nature of samples. The main contribution of this work is a new automated analysis workflow suited to the efficient processing of heterogeneous X-ray datasets of a hierarchical nature. The developed workflow is based on improved methods for data preprocessing, registration, localization and segmentation. Every stage of the workflow that involves a training phase can be automatically fine-tuned to find the best hyperparameters for the specific dataset. For the analysis of fibre structures in samples, a new, highly parallelizable 3D orientation analysis method was developed, based on a novel concept of emitting rays, which enables more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets to quantitatively assess their applicability under different imaging conditions. The workflow was shown to be capable of processing a series of datasets of a similar kind. Furthermore, efficient CPU/GPU implementations of the developed workflow and methods are presented and made available to the community as modules for the Python language. The developed automated analysis workflow was successfully applied to micro-CT datasets acquired in high-throughput X-ray experiments in developmental biology and materials science. In particular, the workflow was applied to the analysis of medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head kidneys and heart. Moreover, the developed 3D orientation analysis method was employed in the morphological analysis of polymer scaffold datasets in order to steer a fabrication process towards desirable properties.
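    The ray-emission orientation method itself is not described in this abstract; for illustration, a standard alternative for local fibre-orientation analysis is the structure tensor, sketched here in 2D (the 3D case is analogous, with a 3x3 tensor):

```python
import numpy as np

def dominant_orientation(image):
    """Estimate the dominant orientation of a 2D image patch from its
    structure tensor.  Returns the angle (radians) of maximal intensity
    variation, i.e. the direction normal to the fibres."""
    gy, gx = np.gradient(image.astype(float))
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    # closed-form orientation of the largest-eigenvalue eigenvector
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)

# synthetic stripes running along -45 degrees: intensity varies with x+y
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
stripes = np.sin(2 * np.pi * (xx + yy) / 8.0)
angle = dominant_orientation(stripes)  # gradient direction: pi/4
```

    The fibre axis is perpendicular to the returned angle; in a volumetric pipeline the tensor would be accumulated over local windows to produce a per-voxel orientation field for morphological analysis.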