
    QUANTITATIVE IMAGING FOR PRECISION MEDICINE IN HEAD AND NECK CANCER PATIENTS

    The purpose of this work was to determine whether prediction models built from quantitative imaging measures in head and neck squamous cell carcinoma (HNSCC) patients could be improved when noise due to imaging was reduced. This was investigated separately for salivary gland function using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), overall survival using computed tomography (CT)-based radiomics, and overall survival using positron emission tomography (PET)-based radiomics. In DCE-MRI, T1-weighted images are acquired serially after injection of contrast, and quantitative measures of perfusion and permeability can be derived from the image series. Radiomics is the study of the relationships among voxels, providing measures of texture within a region of interest. Quantitative information obtained from imaging could aid radiation treatment planning by supplying quantifiable spatial information to computational models that assign dose to regions, improving patient outcomes in both survival and quality of life. By reducing the noise within the quantitative data, prediction accuracy could improve, moving this type of work closer to clinical practice. For each imaging modality, sources of noise that could impact the patient analysis were identified, quantified, and, where possible, minimized during the patient analysis.

    In MRI, a large potential source of uncertainty was image registration. To evaluate this, both physical and synthetic phantoms were used; these showed that registration accuracy of MR images was high, with all root mean square errors below 3 mm. Fifteen HNSCC patients with pre-, mid-, and post-treatment DCE-MRI scans were then evaluated. However, differences in algorithm output were found to be a large source of noise, as different algorithms could not consistently rank patients as above or below the median for quantitative DCE-MRI metrics. Further analysis using this modality was therefore not pursued.

    In CT, a large potential source of noise that could impact patient analysis was inter-scanner variability. To investigate this, a controlled protocol was designed and used, alongside the local head and chest protocols, to image a radiomics phantom on 100 CT scanners. This demonstrated that inter-scanner variability could be reduced by over 50% using the controlled protocol compared with local protocols. Additionally, reconstruction parameters were shown to impact feature values while most acquisition parameters did not; therefore, most of this benefit can be achieved with a dedicated radiomics reconstruction at no additional dose to the patient. To evaluate this impact in patient studies, 726 HNSCC patients with CT images were used to create and test a Cox proportional hazards model for overall survival. Patients imaged under the same protocol were subset, and a new Cox proportional hazards model was created and tested to determine whether the reduction in noise from controlling the imaging protocol translated into improved prediction. However, noise between patient populations from different institutions was shown to be larger than the reduction in noise achieved by a controlled imaging protocol.

    In PET, a large potential source of noise that could impact patient analysis was the imaging protocol. A phantom scanned on three scanners from different vendors demonstrated that, within a single vendor, imaging parameter choices did not affect radiomics feature values, but inter-scanner variances could be large.
    Then, 686 HNSCC patients with PET images were used to create and test a Cox proportional hazards model for overall survival. Patients imaged under the same protocol were subset, and a new Cox proportional hazards model was created and tested to determine whether the reduction in noise from controlling the imaging protocol on a single vendor translated into improved prediction. However, no predictive radiomics signature could be found for any subset of the patient cohort that significantly stratified patients into high- and low-risk groups. This study demonstrated that imaging variability can be quantified and controlled for in each modality. However, for each modality, larger sources of noise were identified that prevented improvement in prediction modeling of salivary gland function or overall survival using quantitative imaging metrics from MRI, CT, or PET.
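    The workflow this abstract repeats for CT and PET (fit a Cox proportional hazards model on radiomics features, then test whether a median split of predicted risk stratifies survival) can be illustrated with a short sketch. This is a generic illustration, not the dissertation's pipeline: the feature names, cohort size, train/test split, and data below are all invented, and the Python lifelines package is assumed.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 200  # hypothetical cohort, not the 726/686 patients of the study

# Synthetic stand-ins for radiomics features and survival outcomes.
df = pd.DataFrame({
    "glcm_contrast": rng.normal(size=n),       # hypothetical texture feature
    "firstorder_entropy": rng.normal(size=n),  # hypothetical texture feature
    "time_months": rng.exponential(scale=36.0, size=n),
    "event": rng.integers(0, 2, size=n),       # 1 = death observed, 0 = censored
})
train, test = df.iloc[:150], df.iloc[150:]

# Fit the Cox proportional hazards model on the training subset.
cph = CoxPHFitter()
cph.fit(train, duration_col="time_months", event_col="event")

# Score the held-out subset and split at the median predicted risk,
# mirroring the above/below-median stratification in the abstract.
risk = cph.predict_partial_hazard(test)
high = risk > risk.median()

# Log-rank test: do the two risk groups separate significantly?
result = logrank_test(
    test.loc[high, "time_months"], test.loc[~high, "time_months"],
    event_observed_A=test.loc[high, "event"],
    event_observed_B=test.loc[~high, "event"],
)
print(f"log-rank p = {result.p_value:.3f}")
```

    On real data the features would come from a radiomics extraction tool and the model would typically involve feature selection or penalization; the sketch only shows the fit, stratify, and test loop.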

    Quantitative imaging analysis: challenges and potentials


    Deep Learning for Medical Imaging in a Biased Environment

    Deep learning (DL) based applications have successfully solved numerous problems in machine perception. In radiology, DL-based image analysis systems are rapidly evolving and show progress in guiding treatment decisions, diagnosing and localizing disease on medical images, and improving radiologists' workflow. However, many DL-based radiological systems fail to generalize when deployed in new hospital settings, and the causes of these failures are not always clear. Although significant effort continues to be invested in applying DL algorithms to radiological data, many open questions and issues arising from incomplete datasets remain. To bridge the gap, we first review the current state of artificial intelligence applied to radiology data, then juxtapose the use of classical computer vision features (i.e., hand-crafted features) with the recent advances brought by deep learning. Using DL, however, is not an excuse for a lack of rigorous study design, which we demonstrate by proposing sanity tests that determine when a DL system is right for the wrong reasons. Having established the appropriate way to assess DL systems, we then turn to improving their efficacy and generalizability by leveraging prior information about human physiology and data derived from dual-energy computed tomography scans. In this dissertation, we address the gaps in the radiology literature by introducing new tools, testing strategies, and methods to mitigate the influence of dataset biases.
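    The abstract does not spell out its sanity tests, but one widely used check in the same spirit (a generic sketch, not the dissertation's actual tests) is a label-permutation test: if a pipeline still beats chance after the labels are shuffled, the model is being driven by leakage or confounds rather than the pathology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for image-derived features (e.g., radiomics vectors).
X = rng.normal(size=(300, 64))
y = (X[:, 0] > 0).astype(int)  # label genuinely driven by one feature

clf = LogisticRegression(max_iter=1000)
true_acc = cross_val_score(clf, X, y, cv=5).mean()

# Sanity test: shuffle labels to break any real image-label relationship.
null_acc = cross_val_score(clf, X, rng.permutation(y), cv=5).mean()

print(f"true labels:     {true_acc:.2f}")  # should be well above 0.5
print(f"shuffled labels: {null_acc:.2f}")  # should sit near chance (0.5)
# A high shuffled-label score means the pipeline is "right for the wrong
# reasons": data leakage, duplicated patients across folds, and so on.
```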

    Quantitative imaging in radiation oncology

    Artificially intelligent eyes, built on machine and deep learning technologies, can empower our capability to analyse patients' images. By revealing information invisible to our own eyes, we can build decision aids that help clinicians provide more effective treatment while reducing side effects. The power of these decision aids rests on biologically unique properties of each patient's tumour, referred to as biomarkers. To fully translate this technology into the clinic, we need to overcome barriers related to the reliability of image-derived biomarkers, trust in AI algorithms, and the privacy-related issues that hamper validation of the biomarkers. This thesis developed methodologies to address these issues, defining a road map for the responsible use of quantitative imaging in the clinic as a decision support system for better patient care.

    Data harmonisation for information fusion in digital healthcare: A state-of-the-art systematic review, meta-analysis and future research directions

    Removing the bias and variance of multicentre data has always been a challenge in large-scale digital healthcare studies, which require the ability to integrate clinical features extracted from data acquired by different scanners and protocols in order to improve stability and robustness. Previous studies have described various computational approaches to fusing single-modality multicentre datasets; however, these surveys rarely focused on evaluation metrics and lacked a checklist for computational data harmonisation studies. In this systematic review, we summarise computational data harmonisation approaches for multi-modality data in the digital healthcare field, including harmonisation strategies and evaluation metrics based on different theories. In addition, we propose a comprehensive checklist that summarises common practices for data harmonisation studies, to guide researchers in reporting their findings more effectively. Finally, flowcharts presenting possible approaches to methodology and metric selection are proposed, and the limitations of different methods are surveyed to guide future research.
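    One of the simplest harmonisation strategies such reviews cover is a per-scanner location-scale adjustment (essentially ComBat without its empirical-Bayes shrinkage of site estimates). The sketch below is a minimal illustration under that assumption; the function name and the choice of a reference site are assumptions, not the review's method.

```python
import numpy as np

def locscale_harmonise(features, sites, ref_site):
    """Align each site's feature means/stds to those of a reference site.

    features: (n_samples, n_features) array, e.g. radiomics features
    sites:    (n_samples,) array of scanner/site labels
    ref_site: label of the site whose distribution is kept fixed
    """
    out = features.astype(float)
    ref = out[sites == ref_site]
    ref_mu, ref_sd = ref.mean(axis=0), ref.std(axis=0)
    for s in np.unique(sites):
        mask = sites == s
        mu, sd = out[mask].mean(axis=0), out[mask].std(axis=0)
        sd[sd == 0] = 1.0  # guard constant features against division by zero
        # z-score within site, then rescale to the reference distribution
        out[mask] = (out[mask] - mu) / sd * ref_sd + ref_mu
    return out

# Toy example: site "B" carries a systematic offset that harmonisation removes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1.0, (50, 3)), rng.normal(2, 1.5, (50, 3))])
sites = np.array(["A"] * 50 + ["B"] * 50)
Xh = locscale_harmonise(X, sites, ref_site="A")
print(Xh[sites == "A"].mean(axis=0).round(2))  # near [0, 0, 0]
print(Xh[sites == "B"].mean(axis=0).round(2))  # aligned to site A's means
```

    Full ComBat additionally preserves known biological covariates and shrinks per-site estimates via empirical Bayes; naive location-scale alignment like this can erase genuine biological differences between cohorts.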