
    Automatic analysis of medical images for change detection in prostate cancer

    Prostate cancer is the most common cancer and the second most common cause of cancer death in men in the UK. However, patient risk from the cancer can vary considerably, and the widespread use of prostate-specific antigen (PSA) screening has led to over-diagnosis and over-treatment of low-grade tumours. It is therefore important to be able to differentiate high-grade prostate cancer from slowly-growing, low-grade cancer. Many men with low-grade cancer are placed on active surveillance (AS), which involves continuous monitoring and intervention for risk reclassification, relying increasingly on magnetic resonance imaging (MRI) to detect disease progression in addition to TRUS-guided biopsies, which remain the routine clinical standard. This creates a need for new tools to process these images. For this purpose, it is important to have a good TRUS-MR registration so that corresponding anatomy can be located accurately between the two modalities. Automatic segmentation of the prostate gland on both modalities mitigates some of the challenges of the registration, such as patient motion, tissue deformation, and procedure time. This thesis focuses on the use of deep learning methods, specifically convolutional neural networks (CNNs), for prostate cancer management. Chapters 4 and 5 investigated the use of CNNs for both TRUS and MRI prostate gland segmentation and reported high segmentation accuracies for both: Dice similarity coefficients (DSC) of 0.89 for TRUS segmentations and DSCs of 0.84-0.89 for MRI prostate gland segmentation using a range of networks. Chapter 5 also investigated the impact of these segmentation scores on more clinically relevant measures, such as MRI-TRUS registration errors and volume measures, showing that a statistically significant difference in DSCs did not lead to a statistically significant difference in the clinical measures derived from these segmentations. 
The potential of these algorithms in commercial and clinical systems is summarised, and the use of the MRI prostate gland segmentation in radiological prostate cancer progression prediction for AS patients is investigated and discussed in Chapter 8, which shows statistically significant improvements in accuracy when using spatial priors in the form of prostate segmentations (0.63 ± 0.16 vs. 0.82 ± 0.18 when comparing the whole prostate MRI with only the prostate gland region, respectively).
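Segmentation accuracy in this thesis is reported as a Dice similarity coefficient. As a minimal sketch (not the thesis's actual evaluation code; the function name and the empty-mask convention are my own choices), the coefficient for two binary masks can be computed as:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # convention: two empty masks count as perfect agreement
    return 2.0 * intersection / total
```

The same function applies unchanged to 3-D volumes, since NumPy's reductions are shape-agnostic.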

    Neuroimaging of structural pathology and connectomics in traumatic brain injury: Toward personalized outcome prediction.

    Recent contributions to the body of knowledge on traumatic brain injury (TBI) favor the view that multimodal neuroimaging using structural and functional magnetic resonance imaging (MRI and fMRI, respectively) as well as diffusion tensor imaging (DTI) has excellent potential to identify novel biomarkers and predictors of TBI outcome. This is particularly the case when such methods are appropriately combined with volumetric/morphometric analysis of brain structures and with the exploration of TBI-related changes in brain network properties at the level of the connectome. In this context, our present review summarizes recent developments on the roles of these two techniques in the search for novel structural neuroimaging biomarkers that have TBI outcome prognostication value. The themes being explored cover notable trends in this area of research, including (1) the role of advanced MRI processing methods in the analysis of structural pathology, (2) the use of brain connectomics and network analysis to identify outcome biomarkers, and (3) the application of multivariate statistics to predict outcome using neuroimaging metrics. The goal of the review is to draw the community's attention to these recent advances in TBI outcome prediction methods and to encourage the development of new methodologies whereby structural neuroimaging can be used to identify biomarkers of TBI outcome.

    An Artificial Intelligence Approach to Tumor Volume Delineation

    Postponed access: the file will be accessible after 2023-11-14. Master's thesis for radiographers/biomedical laboratory scientists (RABD395, MAMD-HELS).

    Computational Modeling for Abnormal Brain Tissue Segmentation, Brain Tumor Tracking, and Grading

    This dissertation proposes novel texture feature-based computational models for quantitative analysis of abnormal tissues in two neurological disorders: brain tumor and stroke. Brain tumors are cells with uncontrolled growth in the brain tissues and one of the major causes of cancer death. Brain strokes, on the other hand, occur due to a sudden interruption of the blood supply, which damages normal brain tissue and frequently causes death or persistent disability. Clinical management of brain tumors and stroke lesions critically depends on robust quantitative analysis using different imaging modalities, including Magnetic Resonance (MR) and Digital Pathology (DP) images. Due to uncontrolled growth and infiltration into the surrounding tissues, tumor regions appear with significant texture variation both in the static MRI volume and in longitudinal imaging studies. Consequently, this study developed computational models using novel texture features to segment abnormal brain tissues (tumor and stroke lesions), track the change of tumor volume in longitudinal images, and grade tumors in MR images. Manual delineation and analysis of these abnormal tissues at large scale is tedious, error-prone, and often suffers from inter-observer variability. Therefore, efficient computational models for robust segmentation of different abnormal tissues are required to support the diagnosis and analysis processes. In this study, brain tissues are characterized with novel computational modeling of multi-fractal texture features for multi-class brain tumor tissue segmentation (BTS), and the method is extended to ischemic stroke lesions in MRI. The robustness of the proposed segmentation methods is evaluated on a large amount of private and public-domain clinical data, offering competitive performance when compared with that of state-of-the-art methods. 
Further, I analyze the dynamic texture behavior of tumor volume in longitudinal imaging and develop a post-processing framework using three-dimensional (3D) texture features. These post-processing methods are shown to reduce the false positives in the BTS results and improve the overall segmentation result in longitudinal imaging. Furthermore, using the improved segmentation results, the change of tumor volume is classified into three types, namely stable, progression, and shrinkage, as observed from the volumetric changes of different tumor tissues in longitudinal images. This study also investigates a novel non-invasive glioma grading approach, for the first time in the literature, that uses structural MRI only. Such non-invasive glioma grading may be useful before an invasive biopsy is recommended. This study further developed an automatic glioma grading scheme using invasive cell nuclei morphology in DP images for cross-validation with the same patients. In summary, the texture-based computational models proposed in this study are expected to facilitate the clinical management of patients with brain tumors and strokes by automating large-scale imaging data analysis, reducing human error and inter-observer variability, and producing repeatable brain tumor quantitation and grading.
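To illustrate the kind of texture measure underlying this work, the box-counting estimate of fractal dimension (a simplified, single-scale-exponent relative of the multi-fractal features used in the dissertation; the function name and dyadic scale choice below are my own) can be sketched for a 2-D binary mask as:

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray) -> float:
    """Estimate the fractal (box-counting) dimension of a 2-D binary mask.

    Counts occupied boxes at dyadic box sizes and fits the slope of
    log(count) versus log(1/size): a filled region tends toward 2,
    a thin curve toward 1."""
    assert mask.ndim == 2
    mask = mask.astype(bool)
    n = min(mask.shape)
    sizes, counts = [], []
    size = 1
    while size <= n // 2:
        # Tile the image with boxes of side `size` and count boxes that
        # contain at least one foreground pixel.
        h = mask.shape[0] // size * size
        w = mask.shape[1] // size * size
        blocks = mask[:h, :w].reshape(h // size, size, w // size, size)
        occupied = blocks.any(axis=(1, 3)).sum()
        sizes.append(size)
        counts.append(max(int(occupied), 1))  # avoid log(0)
        size *= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```

Multi-fractal analysis generalizes this by computing a spectrum of such exponents over local neighborhoods rather than a single global slope.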

    Phenomenological model of diffuse global and regional atrophy using finite-element methods

    The main goal of this work is the generation of ground-truth data for the validation of atrophy measurement techniques, commonly used in the study of neurodegenerative diseases such as dementia. Several techniques have been used to measure atrophy in cross-sectional and longitudinal studies, but it is extremely difficult to compare their performance since they have been applied to different patient populations. Furthermore, assessment of performance based on phantom measurements or simple scaled images overestimates these techniques' ability to capture the complexity of neurodegeneration of the human brain. We propose a method for atrophy simulation in structural magnetic resonance (MR) images based on finite-element methods. The method produces cohorts of brain images with known change that is physically and clinically plausible, providing data for objective evaluation of atrophy measurement techniques. Atrophy is simulated in different tissue compartments or in different neuroanatomical structures with a phenomenological model. This model of diffuse global and regional atrophy is based on volumetric measurements, such as those of the whole brain or the hippocampus, from patients with known disease, and is guided by clinical knowledge of the relative pathological involvement of regions and tissues. The consequent biomechanical readjustment of structures is modelled using conventional physics-based techniques based on biomechanical tissue properties, simulating plausible tissue deformations with finite-element methods. A thermoelastic model of tissue deformation is employed, controlling the rate of progression of atrophy by means of a set of thermal coefficients, each one corresponding to a different type of tissue. Tissue characterization is performed by means of the meshing of a labelled brain atlas, creating a reference volumetric mesh that is introduced to a finite-element solver to create the simulated deformations. 
Preliminary work on the simulation of acquisition artefacts is also presented. Cross-sectional and
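The thermoelastic analogy described above can be written compactly. In standard linear thermoelasticity (the notation below is mine and may differ from the thesis's), a prescribed "temperature" change induces an isotropic contraction strain in each tissue type:

\[
\boldsymbol{\varepsilon}_{\mathrm{th}} = \alpha_t \,\Delta T \,\mathbf{I},
\]

where \(\alpha_t\) is the thermal coefficient assigned to tissue type \(t\), \(\Delta T\) is the prescribed temperature change driving the simulated atrophy, and \(\mathbf{I}\) is the identity tensor. For small strains the resulting fractional volume change is

\[
\frac{\Delta V}{V} \approx \operatorname{tr}\!\left(\boldsymbol{\varepsilon}_{\mathrm{th}}\right) = 3\,\alpha_t \,\Delta T,
\]

so choosing the per-tissue coefficients \(\alpha_t\) directly controls how much each compartment shrinks, which is why they serve as the atrophy-rate parameters.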

    DeepEOR: automated perioperative volumetric assessment of variable grade gliomas using deep learning

    PURPOSE Volumetric assessments, such as extent of resection (EOR) or residual tumor volume, are essential criteria in glioma resection surgery. Our goal was to develop and validate machine learning segmentation models for pre- and postoperative magnetic resonance imaging scans, allowing us to assess the percentage tumor reduction after intracranial surgery for gliomas. METHODS For the development of the preoperative segmentation model (U-Net), MRI scans of 1053 patients from the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2021 as well as from patients who underwent surgery at the University Hospital in Zurich were used. Subsequently, the model was evaluated on a holdout set containing 285 images from the same sources. The postoperative model was developed using 72 scans and validated on 45 scans obtained from the BraTS 2015 and Zurich datasets. Performance was evaluated using the Dice similarity score, the Jaccard coefficient, and the 95% Hausdorff distance. RESULTS We achieved overall mean Dice similarity scores of 0.59 and 0.29 on the pre- and postoperative holdout sets, respectively. Our algorithm determined the correct EOR in 44.1% of cases. CONCLUSION Although our models are not suitable for clinical use at this point, the possible applications are vast, ranging from automated lesion detection to disease progression evaluation. Precise determination of EOR is a challenging task, but we have shown that deep learning can provide fast and objective estimates.
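Once pre- and postoperative tumor volumes are available from the segmentations, the percentage EOR is simple arithmetic. This helper (the name, units, and clamping behavior are illustrative choices, not taken from the paper) shows the computation:

```python
def extent_of_resection(pre_volume_ml: float, post_volume_ml: float) -> float:
    """Percentage of tumor volume removed between the pre- and
    postoperative scans: 100 * (pre - post) / pre."""
    if pre_volume_ml <= 0:
        raise ValueError("preoperative tumor volume must be positive")
    # Clamp a (nonphysical) negative residual volume to zero.
    resected = pre_volume_ml - max(post_volume_ml, 0.0)
    return 100.0 * resected / pre_volume_ml
```

A complete resection (zero residual volume) yields an EOR of 100%; segmentation errors in either scan propagate directly into this ratio, which is why the low postoperative Dice score reported above limits EOR accuracy.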

    Longitudinal Brain Tumor Tracking, Tumor Grading, and Patient Survival Prediction Using MRI

    This work aims to develop novel methods for brain tumor classification, longitudinal brain tumor tracking, and patient survival prediction. Consequently, this dissertation addresses three tasks. First, we develop a framework for brain tumor segmentation prediction in longitudinal multimodal magnetic resonance imaging (mMRI) scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with tumor cell density features in order to obtain tumor segmentation predictions in follow-up scans from a baseline pre-operative timepoint. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method and (ii) another state-of-the-art tumor growth and segmentation method known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). With the advantages of feature fusion and label fusion, we achieve state-of-the-art brain tumor segmentation prediction. Second, we propose a deep neural network (DNN)-based method for brain tumor type and subtype grading using phenotypic and genotypic data, following the World Health Organization (WHO) criteria. In addition, the classification method integrates a cellularity feature derived from the morphology of a pathology image to improve classification performance. The proposed method achieves state-of-the-art performance for tumor grading under the new CNS tumor grading criteria. Finally, we investigate brain tumor volume segmentation, tumor subtype classification, and overall patient survival prediction, and propose a new context-aware deep learning method, known as the Context-Aware Convolutional Neural Network (CANet). 
Using the proposed method, we participated in the brain tumor volume segmentation and overall survival prediction tasks of the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019). We also participated in the Radiology-Pathology Challenge 2019 (CPM-RadPath 2019) for brain tumor subtype classification, organized by the Medical Image Computing & Computer Assisted Intervention (MICCAI) Society. The online evaluation results show that the proposed methods offer competitive performance in tumor volume segmentation, promising performance in overall survival prediction, and state-of-the-art performance in tumor subtype classification. Moreover, our result ranked second in the testing phase of CPM-RadPath 2019.
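Joint label fusion weights each candidate segmentation by local image similarity; a plain per-voxel majority vote (a deliberately simplified stand-in for JLF, with illustrative names, that drops the similarity weighting entirely) conveys the basic idea of combining several label maps:

```python
import numpy as np

def majority_label_fusion(label_maps: list) -> np.ndarray:
    """Fuse candidate segmentations by a per-voxel majority vote.

    Each element of `label_maps` is an integer label array of identical
    shape. Joint label fusion would additionally weight each candidate
    by local intensity similarity; here every candidate votes equally."""
    stacked = np.stack(label_maps)            # (n_candidates, *volume_shape)
    labels = np.unique(stacked)
    # Count, for every voxel, how many candidates assigned each label.
    votes = np.stack([(stacked == lbl).sum(axis=0) for lbl in labels])
    # Pick the label with the most votes at each voxel (ties -> lowest label).
    return labels[np.argmax(votes, axis=0)]
```

The same vote-counting structure extends to weighted fusion by replacing the equal per-candidate counts with per-voxel similarity weights.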