
    Development of an Atlas-Based Segmentation of Cranial Nerves Using Shape-Aware Discrete Deformable Models for Neurosurgical Planning and Simulation

    Twelve pairs of cranial nerves arise from the brain or brainstem and control sensory functions such as vision, hearing, smell and taste, as well as several motor functions of the head and neck, including facial expressions and eye movement. These cranial nerves are often difficult to detect in MRI data because of their thin anatomical structure, low imaging resolution and image artifacts, and thus pose problems for neurosurgical planning and simulation. As a result, they may be at risk in neurosurgical procedures around the skull base, with potentially dire consequences such as loss of eyesight or hearing and facial paralysis. It is therefore of great importance to clearly delineate cranial nerves in medical images, both for avoidance when planning neurosurgical procedures and for targeting in the treatment of cranial nerve disorders. In this research, we propose to develop a digital atlas methodology for segmenting the cranial nerves from patient image data. The atlas will be created from high-resolution MRI data based on a discrete deformable contour model called the 1-Simplex mesh. Each cranial nerve will be modeled by its centerline and radius information, where the centerline is estimated semi-automatically by finding a shortest path between two user-defined end points. The cranial nerve atlas is then made more robust by integrating a Statistical Shape Model, so that the atlas can identify and segment nerves in images characterized by artifacts or low resolution. To the best of our knowledge, no such digital atlas methodology exists for segmenting cranial nerves from MRI data. Our proposed system therefore offers important benefits to the neurosurgical community.
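The semi-automatic centerline step described in this abstract amounts to a shortest-path search between two user-defined end points over an image-derived cost map. A minimal sketch of that idea is below; the grid, costs, and endpoints are illustrative inventions, not data from the thesis, and the thesis's actual cost function and 3D search are not specified here:

```python
import heapq

def shortest_path(cost, start, goal):
    """Dijkstra over a 2D cost grid; a path's cost is the sum of entered-cell costs."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk predecessors back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy cost map: low cost along the middle row, standing in for a nerve-like
# structure; the two endpoints play the role of the user-defined seed points.
cost = [[9, 9, 9, 9],
        [1, 1, 1, 1],
        [9, 9, 9, 9]]
path = shortest_path(cost, (1, 0), (1, 3))
print(path)  # follows the low-cost middle row
```

In practice the cost would be derived from a tubular-structure ("vesselness") filter response so that voxels likely to lie on the nerve are cheap to traverse.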

    OCT Signal Enhancement with Deep Learning

    PURPOSE: To establish whether deep learning methods are able to improve the signal-to-noise ratio of time-domain (TD) optical coherence tomography (OCT) images to approach that of spectral-domain (SD) OCT. DESIGN: Method agreement study and progression-detection analysis in a randomized, double-masked, placebo-controlled, multi-centre trial for open-angle glaucoma (OAG) [UK Glaucoma Treatment Study (UKGTS)]. PARTICIPANTS: Cohort for training and validation: 77 stable OAG participants with TDOCT and SDOCT imaging at up to 11 visits within 3 months. Cohort for testing: 284 newly-diagnosed OAG patients with TDOCT from a cohort of 516 recruited at 10 UK centres between 2007 and 2010. METHODS: An ensemble of generative adversarial networks (GANs) was trained on TDOCT and SDOCT image pairs from the training dataset and applied to TDOCT images from the testing dataset. TDOCT images were converted to synthesized SDOCT images and segmented via Bayesian fusion of the GAN outputs. MAIN OUTCOME MEASURES: 1) Bland-Altman analysis to assess the agreement of average retinal nerve fibre layer thickness (RNFLT) measurements from TDOCT and from synthesized SDOCT with those from SDOCT. 2) Comparison of the distributions of rates of RNFLT change from TDOCT and synthesized SDOCT between the two treatment arms of the UKGTS. A Cox model for predictors of time to incident VF progression was computed with both the TDOCT and the synthesized SDOCT measurements. RESULTS: The 95% limits of agreement with SDOCT were [26.64, -22.95] for TDOCT, [8.11, -6.73] for synthesized SDOCT, and [4.16, -4.04] for repeated SDOCT measurements. The mean difference in the rate of RNFLT change between UKGTS treatment and placebo arms was 0.24 with TDOCT (p=0.11) and 0.43 with synthesized SDOCT (p=0.0017). The hazard ratio for RNFLT slope in Cox regression modeling for time to incident VF progression was 1.09 (95% CI 1.02 to 1.21; p=0.035) for TDOCT and 1.24 (95% CI 1.08 to 1.39; p=0.011) for synthesized SDOCT. CONCLUSIONS: Image enhancement significantly improved the agreement of TDOCT RNFLT measurements with SDOCT RNFLT measurements. The difference in rates of RNFLT change between the UKGTS treatment arms, and its statistical significance, were enhanced, and RNFLT change became a stronger predictor of VF progression.
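The 95% limits of agreement reported in the results are the standard Bland-Altman statistics: the mean of the paired differences plus or minus 1.96 times their standard deviation. A minimal sketch, using invented paired RNFLT values rather than study data:

```python
import statistics

def bland_altman_limits(a, b):
    """Mean paired difference and 95% limits of agreement (mean ± 1.96·SD)."""
    diffs = [x - y for x, y in zip(a, b)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return mean, (mean + 1.96 * sd, mean - 1.96 * sd)

# Hypothetical paired average RNFLT measurements (µm) from two devices.
a = [92.0, 88.5, 101.0, 95.5, 90.0]
b = [90.0, 89.0, 99.5, 96.0, 88.5]
mean_diff, (upper, lower) = bland_altman_limits(a, b)
print(round(mean_diff, 2), round(upper, 2), round(lower, 2))
```

Narrower limits of agreement, as seen for the synthesized SDOCT, indicate that two measurement methods can more nearly substitute for one another.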

    Deep learning-based improvement for the outcomes of glaucoma clinical trials

    Glaucoma is the leading cause of irreversible blindness worldwide. It is a progressive optic neuropathy in which retinal ganglion cell (RGC) axon loss, probably as a consequence of damage at the optic disc, causes a loss of vision, predominantly affecting the mid-peripheral visual field (VF). Glaucoma results in a decrease in vision-related quality of life; early detection and evaluation of disease progression rates are therefore crucial in order to assess the risk of functional impairment and to establish sound treatment strategies. The aim of my research is to improve glaucoma diagnosis by enhancing state-of-the-art analyses of glaucoma clinical trial outcomes using advanced analytical methods. This knowledge would also help better design and analyse clinical trials, providing evidence for re-evaluating existing medications, facilitating diagnosis and suggesting novel disease management. Towards this objective, this thesis provides the following contributions: (i) I developed deep learning-based super-resolution (SR) techniques for optical coherence tomography (OCT) image enhancement and demonstrated that using super-resolved images improves the statistical power of clinical trials; (ii) I developed a deep learning algorithm for segmentation of retinal OCT images, showing that the methodology consistently produces more accurate segmentations than state-of-the-art networks; (iii) I developed a deep learning framework for refining the relationship between structural and functional measurements and demonstrated that the mapping is significantly improved over previous techniques; (iv) I developed a probabilistic method and demonstrated that glaucomatous disc haemorrhages are influenced by a possible systemic factor that makes both eyes bleed simultaneously; (v) I recalculated VF slopes, using the retinal nerve fiber layer thickness (RNFLT) from the super-resolved OCT as a Bayesian prior, and demonstrated that using VF rates with the Bayesian prior as the outcome measure reduces the sample size required to distinguish treatment arms in a clinical trial.
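Contribution (v) uses a structural measurement as a Bayesian prior on the VF progression rate. One common way such a prior enters a slope estimate is a conjugate normal-normal update, in which prior and data are combined with precision weights. The sketch below illustrates only that generic mechanism, with invented numbers; it is not the thesis's exact model:

```python
def posterior_slope(prior_mean, prior_sd, data_slope, data_se):
    """Precision-weighted combination of a prior slope and a measured slope
    (conjugate normal-normal update)."""
    w_prior = 1.0 / prior_sd ** 2   # precision of the structural prior
    w_data = 1.0 / data_se ** 2     # precision of the VF-only estimate
    mean = (w_prior * prior_mean + w_data * data_slope) / (w_prior + w_data)
    sd = (w_prior + w_data) ** -0.5  # posterior SD shrinks as precisions add
    return mean, sd

# Hypothetical numbers: an RNFLT-derived prior of -0.5 dB/yr (SD 0.5)
# combined with a noisy VF-only slope of -1.5 dB/yr (SE 1.0).
m, s = posterior_slope(-0.5, 0.5, -1.5, 1.0)
print(round(m, 3), round(s, 3))
```

The posterior slope is pulled toward the prior and has a smaller standard deviation than either source alone, which is the mechanism by which better-behaved rate estimates can reduce trial sample sizes.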

    Improving statistical power of glaucoma clinical trials using an ensemble of cyclical generative adversarial networks

    Although spectral-domain OCT (SDOCT) is now in clinical use for glaucoma management, published clinical trials relied on time-domain OCT (TDOCT), which is characterized by a low signal-to-noise ratio, leading to low statistical power. Such trials therefore require large numbers of patients observed over long intervals and become more costly. We propose a probabilistic ensemble model and a cycle-consistent perceptual loss for improving the statistical power of trials utilizing TDOCT. TDOCT images are converted to synthesized SDOCT images and segmented via Bayesian fusion of an ensemble of GANs. The final retinal nerve fibre layer segmentation is obtained automatically on an averaged synthesized image using label fusion. We benchmark different networks using (i) GAN, (ii) Wasserstein GAN (WGAN), (iii) GAN + perceptual loss and (iv) WGAN + perceptual loss. For training and validation, an independent dataset is used, while testing is performed on the UK Glaucoma Treatment Study (UKGTS), i.e. a TDOCT-based trial. We quantify the statistical power of the measurements obtained with our method, as compared with those derived from the original TDOCT. The results provide new insights into the UKGTS, showing a significantly better separation between treatment arms, while bringing the statistical power of TDOCT on a par with visual field measurements.
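The final segmentation in this abstract is obtained by fusing the labels produced across the ensemble. The simplest form of label fusion is a pixel-wise majority vote, sketched below on invented toy masks; the paper's Bayesian fusion weights members rather than voting uniformly, so this is only the baseline idea:

```python
def fuse_labels(label_maps):
    """Pixel-wise majority vote across an ensemble of binary segmentations."""
    n = len(label_maps)
    fused = []
    # zip(*label_maps) yields, for each row index, that row from every member.
    for rows in zip(*label_maps):
        fused.append([1 if sum(px) * 2 > n else 0 for px in zip(*rows)])
    return fused

# Three hypothetical 2x3 binary masks from ensemble members.
masks = [
    [[1, 1, 0], [0, 1, 1]],
    [[1, 0, 0], [0, 1, 0]],
    [[1, 1, 1], [1, 1, 0]],
]
fused = fuse_labels(masks)
print(fused)  # keeps only pixels labelled by a strict majority of members
```

Fusing an ensemble in this way suppresses the idiosyncratic errors of any single network, which is what stabilizes the downstream thickness measurements.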

    On-scalp MEG sensor localization using magnetic dipole-like coils: A method for highly accurate co-registration

    Source modelling in magnetoencephalography (MEG) requires precise co-registration of the sensor array and the anatomical structure of the measured individual's head. In conventional MEG, the positions and orientations of the sensors relative to each other are fixed and known beforehand, requiring only localization of the head relative to the sensor array. Since the sensors in on-scalp MEG are positioned on the scalp, the locations of the individual sensors depend on the subject's head shape and size. The positions and orientations of on-scalp sensors must therefore be measured at every recording. This can be achieved by inverting conventional head localization, localizing the sensors relative to the head rather than the other way around. In this study we present a practical method for localizing sensors using magnetic dipole-like coils attached to the subject's head. We implement and evaluate the method in a set of on-scalp MEG recordings using a 7-channel on-scalp MEG system based on high critical temperature superconducting quantum interference devices (high-Tc SQUIDs). The method allows individually localizing the sensor positions, orientations, and responsivities with high accuracy using only a short averaging time (<= 2 mm, < 3 degrees and < 3%, respectively, with 1-s averaging), enabling continuous sensor localization. Calibrating and jointly localizing the sensor array can further improve the accuracy of position and orientation (< 1 mm and < 1 degree, respectively, with 1-s coil recordings). We demonstrate source localization of on-scalp recorded somatosensory evoked activity based on co-registration with our method. Equivalent current dipole fits of the evoked responses corresponded well (within 4.2 mm) with those based on a commercial whole-head MEG system.
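Localizing sensors from "magnetic dipole-like coils" rests on the standard point-dipole field model: the field each coil produces at a sensor depends on the sensor's position and orientation, so fitting measured amplitudes to this model recovers them. The sketch below evaluates only the forward model, with an invented dipole moment and distances; the fitting procedure itself is not shown:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T·m/A)

def dipole_field(m, r):
    """Field of a point magnetic dipole m (A·m²) at displacement r (m):
    B = µ0/(4π) · (3 r̂ (r̂·m) − m) / |r|³."""
    rn = math.sqrt(sum(c * c for c in r))
    rhat = [c / rn for c in r]
    mdotr = sum(mi * ri for mi, ri in zip(m, rhat))
    scale = MU0 / (4 * math.pi * rn ** 3)
    return [scale * (3 * rhat[i] * mdotr - m[i]) for i in range(3)]

# Illustrative coil moment of 1e-6 A·m², evaluated 5 cm away on the
# dipole axis and on the equatorial plane.
b_axial = dipole_field([0, 0, 1e-6], [0, 0, 0.05])
b_equatorial = dipole_field([0, 0, 1e-6], [0.05, 0, 0])
print(b_axial[2], b_equatorial[2])
```

The axial field is twice the equatorial magnitude and of opposite sign along the moment axis, and the strong 1/|r|³ falloff is why nearby on-scalp sensors can be localized so precisely from short coil recordings.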

    A novel automated approach of multi-modality retinal image registration and fusion

    Biomedical image registration and fusion are usually scene dependent and require intensive computational effort. A novel automated approach of feature-based control point detection and area-based registration and fusion of retinal images has been successfully designed and developed. The new algorithm, which is reliable and time-efficient, adapts automatically from frame to frame with few tunable threshold parameters. The reference and to-be-registered images come from two different modalities, i.e. grayscale angiogram images and colour fundus images; registering them enhances the fundus image by superimposing information contained in the angiogram. This thesis research makes two new contributions to biomedical image registration and fusion. The first is automatic control point detection at pixels of global direction change using an adaptive exploratory algorithm, with shape similarity criteria employed to match the control points. The second is a heuristic optimization algorithm that maximizes a Mutual-Pixel-Count (MPC) objective function. The initially selected control points are adjusted during the optimization at the sub-pixel level. A result equivalent to the global maximum is achieved by calculating MPC local maxima at an efficient computational cost; the iteration stops either when MPC reaches its maximum value or when the maximum allowable loop count is reached. To our knowledge, this is the first time the MPC concept has been introduced into the biomedical image fusion area as a criterion for fusion accuracy. The fusion image is generated from the control point coordinates current when the iteration stops. A comparative study of the presented registration and fusion scheme against the Centerline Control Point Detection Algorithm, a Genetic Algorithm, an RMSE objective function, and other existing data fusion approaches has shown the advantages of the new approach in terms of accuracy, efficiency, and novelty.
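The abstract does not define the Mutual-Pixel-Count objective precisely; a natural reading is a count of pixels where both (binarized) images agree after a candidate transform, maximized over transform parameters. The sketch below implements that hypothetical reading for pure translations on invented toy masks; the thesis's actual MPC definition and sub-pixel optimization may differ:

```python
def mutual_pixel_count(ref, mov, dx, dy):
    """Count pixels where both binary images are 1 after shifting `mov`
    by (dx, dy) — a hypothetical reading of the MPC objective."""
    rows, cols = len(ref), len(ref[0])
    count = 0
    for y in range(rows):
        for x in range(cols):
            sy, sx = y - dy, x - dx  # source pixel of the shifted image
            if 0 <= sy < rows and 0 <= sx < cols:
                count += ref[y][x] & mov[sy][sx]
    return count

# Toy vessel masks: `mov` is `ref` shifted right by one pixel, so the
# best candidate translation should be dx = -1.
ref = [[0, 1, 1, 0],
       [0, 1, 0, 0]]
mov = [[0, 0, 1, 1],
       [0, 0, 1, 0]]
best = max((mutual_pixel_count(ref, mov, dx, 0), dx) for dx in (-1, 0, 1))
print(best)  # (overlap count, dx) for the best candidate shift
```

A hill-climbing search over such local evaluations, as the abstract describes, avoids exhaustively scoring every transform while still reaching a maximum of the objective.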