204 research outputs found

    Phenotyping the histopathological subtypes of non-small-cell lung carcinoma: how beneficial is radiomics?

    The aim of this study was to investigate the usefulness of radiomics in the absence of well-defined standard guidelines. Specifically, we extracted radiomics features from multicenter computed tomography (CT) images to differentiate between the four histopathological subtypes of non-small-cell lung carcinoma (NSCLC), and compared how the results varied across radiomics models. We investigated the presence of batch effects and the impact of feature harmonization on model performance, as well as how the composition of the training dataset influenced the selected feature subsets and, consequently, model performance. By combining two publicly available datasets, the study included a total of 152 squamous cell carcinoma (SCC), 106 large cell carcinoma (LCC), 150 adenocarcinoma (ADC), and 58 not otherwise specified (NOS) cases. Using the matRadiomics tool, an Image Biomarker Standardization Initiative (IBSI)-compliant software package, 1781 radiomics features were extracted from each malignant lesion identified in the CT images. After batch analysis and feature harmonization, based on the ComBat tool integrated in matRadiomics, both the harmonized and the non-harmonized datasets were given as input to a machine learning modeling pipeline comprising the following steps: (i) training-set/test-set splitting (80/20); (ii) Kruskal-Wallis analysis and LASSO linear regression for feature selection; (iii) model training; (iv) model validation and hyperparameter optimization; and (v) model testing. Model optimization consisted of a 5-fold cross-validated Bayesian optimization repeated ten times (inner loop). The whole pipeline was repeated 10 times (outer loop) with six different machine learning classification algorithms, and the stability of the feature selection was evaluated.
Results showed that batch effects were present even when voxels were resampled to an isotropic form, and that feature harmonization correctly removed them, although the models' performance decreased. Only a low accuracy (61.41%) was reached when differentiating between the four subtypes, despite a high average area under the curve (AUC) of 0.831; the NOS subtype, however, was classified almost perfectly (true-positive rate of approximately 90%). The accuracy increased to 77.25%, with a high AUC (0.821), when only the SCC and ADC subtypes were considered, although harmonization decreased the accuracy to 58%. The features that contributed most to model performance were those extracted from wavelet-decomposed and Laplacian of Gaussian (LoG)-filtered images, and they belonged to the texture feature class. In conclusion, we showed that our multicenter data were affected by batch effects, that these could significantly alter model performance, and that feature harmonization correctly removed them. Although wavelet features seemed to be the most informative, no absolute subset could be identified, since it changed depending on the training/testing split. Moreover, performance was influenced by the chosen dataset and by the machine learning methods, which could reach a high accuracy in binary classification tasks but could underperform in multiclass problems. It is, therefore, essential that the scientific community propose a more systematic radiomics approach, focusing on multicenter studies, with clear and solid guidelines to facilitate the translation of radiomics to clinical practice.
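For readers unfamiliar with such pipelines, the steps above can be sketched in code. This is a minimal illustration on synthetic stand-in data, not the study's implementation: the feature matrix, classifier, significance threshold, and LASSO penalty are hypothetical (the study used 1781 IBSI-compliant features, six classifiers, and Bayesian hyperparameter optimization).

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score, train_test_split

# Stand-in data: 200 "patients", 50 features, 4 classes (SCC/LCC/ADC/NOS).
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           n_classes=4, random_state=0)

# (i) 80/20 training/test split, stratified by subtype
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)

# (ii) Kruskal-Wallis filter: keep features whose distributions differ by class
keep = [j for j in range(X_tr.shape[1])
        if kruskal(*(X_tr[y_tr == c, j] for c in np.unique(y_tr))).pvalue < 0.05]

# ...followed by LASSO to shrink the filtered set further (the trailing
# "or keep" is only a guard for this synthetic data, in case LASSO zeroes all)
lasso = Lasso(alpha=0.01).fit(X_tr[:, keep], y_tr)
selected = [keep[j] for j, w in enumerate(lasso.coef_) if w != 0] or keep

# (iii)-(v) train, cross-validate, and test a classifier on the selected subset
clf = RandomForestClassifier(random_state=0).fit(X_tr[:, selected], y_tr)
cv_acc = cross_val_score(clf, X_tr[:, selected], y_tr, cv=5).mean()
test_acc = clf.score(X_te[:, selected], y_te)
```

The two-stage selection (a univariate statistical filter followed by an embedded L1 method) mirrors step (ii) of the abstract; repeating the whole procedure over different splits is what exposes the feature-subset instability the authors report.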

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in image acquisition and hardware development over the past three decades have resulted in modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. The applications of medical imaging have therefore become increasingly crucial in clinical oncology routines, providing screening, diagnosis, treatment monitoring, and non- or minimally invasive evaluation of disease prognosis. This essential need for medical images, however, has resulted in a tremendous number of imaging scans. Given the growing role of medical imaging data on the one hand and the challenge of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, it consists of six studies: the first two introduce novel methods for tumor segmentation, and the last four develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy.
The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II addresses a crucial challenge faced by supervised segmentation models: their dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account local intra-nodule heterogeneities and global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features into radiomic features for boosting classification power. Study V focuses on the early assessment of lung tumor response to applied treatments by proposing a novel, physiologically interpretable feature set, which was employed to quantify changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of patients two years after the last treatment session.
The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine learning techniques in methods developed for the quantitative assessment of tumor characteristics, contributing to the essential procedures of cancer diagnosis and prognosis.
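Study IV's fusion idea, combining handcrafted radiomic features with learned deep features before classification, can be sketched as feature-level concatenation. Everything below (array sizes, random stand-in features, the logistic-regression classifier) is hypothetical, not the thesis's actual setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 120
radiomic = rng.normal(size=(n, 30))   # stand-in for handcrafted radiomic features
deep = rng.normal(size=(n, 64))       # stand-in for CNN-derived deep features
y = rng.integers(0, 2, size=n)        # benign (0) vs. malignant (1) nodules

fused = np.hstack([radiomic, deep])   # early (feature-level) fusion
acc = cross_val_score(LogisticRegression(max_iter=1000), fused, y, cv=5).mean()
```

Concatenation is the simplest fusion strategy; the gain reported in Study IV comes from the complementary information the two feature families carry, not from the classifier itself.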

    Cloud-Based Benchmarking of Medical Image Analysis

    Medical imaging

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments, which may be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks, and this challenge has therefore attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. This operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion-MNIST dataset with an AlexNet model. The proposed CORF-augmented pipeline achieved results comparable to a conventional AlexNet classification model without CORF delineation maps on noise-free images, but consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
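A rough sketch of the protocol, with loudly stated assumptions: toy_delineation below is a crude difference-of-Gaussians edge map standing in for the CORF push-pull operator (it does NOT implement CORF), the image is a random stand-in, and the noise functions mirror the Gaussian/uniform perturbations used in the evaluation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(28, 28))   # Fashion-MNIST-sized stand-in

def toy_delineation(x, s1=1.0, s2=2.0):
    # Crude contour map via difference of Gaussians; a placeholder for the
    # CORF push-pull inhibition operator, not its actual implementation.
    return gaussian_filter(x, s1) - gaussian_filter(x, s2)

def add_gaussian_noise(x, sigma=0.1):
    return np.clip(x + rng.normal(0.0, sigma, x.shape), 0.0, 1.0)

def add_uniform_noise(x, level=0.1):
    return np.clip(x + rng.uniform(-level, level, x.shape), 0.0, 1.0)

# Evaluation idea: train the CNN on delineation maps of clean images, then
# test it on delineation maps of noise-perturbed copies.
delineation = toy_delineation(image)
noisy_g = add_gaussian_noise(image)
noisy_u = add_uniform_noise(image)
```

The key design choice is that the delineation transform is applied identically at train and test time, so the CNN never has to generalize across raw-pixel noise distributions directly.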

    Dynamic And Quantitative Radiomics Analysis In Interventional Radiology

    Interventional Radiology (IR) is a subspecialty of radiology that performs invasive procedures guided by diagnostic imaging for predictive and therapeutic purposes. The development of artificial intelligence (AI) has revolutionized the field of IR: researchers have created sophisticated models backed by machine learning algorithms and optimization methodologies for image registration, cellular structure detection, and computer-aided disease diagnosis and prognosis prediction. However, because the human eye cannot detect tiny structural characteristics, and because of inter-radiologist heterogeneity, conventional experience-based visual evaluation in IR has drawbacks. Radiomics, a technique built on machine learning, offers a practical and quantifiable solution to this issue: it has been used to evaluate tumour heterogeneity that is difficult to detect by eye, through an automated pipeline for the extraction and analysis of high-throughput computational imaging characteristics from radiological images. However, applying radiomics directly in IR is demanding because of the heterogeneity and complexity of medical imaging data. Furthermore, recent radiomics studies are based on static images, while many clinical applications (such as detecting the occurrence and development of tumors and assessing patient response to chemotherapy and immunotherapy) are dynamic processes; merely incorporating static features cannot comprehensively reflect the metabolic characteristics and dynamic processes of tumors or soft tissues. To address these issues, we propose a robust feature selection framework to manage high-dimensional, small-sample data. In addition, we explore and propose a descriptor, from the perspectives of computer vision and physiology, that integrates static radiomics features with time-varying information on tumor dynamics.
The major contributions of this study are as follows. First, we construct a result-driven feature selection framework that can efficiently reduce the dimension of the original feature set. The framework integrates different feature selection techniques to ensure the distinctiveness, uniqueness, and generalization ability of the output feature set. In the task of classifying hepatocellular carcinoma (HCC) versus intrahepatic cholangiocarcinoma (ICC) in primary liver cancer, only three radiomics features (chosen by the proposed framework from more than 1,800) achieve an AUC of 0.83 on the independent dataset. We also analyze the features' patterns and contributions to the results, enhancing the clinical interpretability of radiomics biomarkers. Second, we explore and build a pulmonary perfusion descriptor based on 18F-FDG whole-body dynamic PET images. Our major novelties are: 1) a physiology- and computer-vision-interpretable descriptor construction framework based on decomposing spatiotemporal information into three dimensions: shades of grey levels, textures, and dynamics; 2) feasible spatio-temporal comparison of the pulmonary descriptor within and across patients, making it a possible auxiliary diagnostic tool in pulmonary function assessment; 3) incorporation of the image's temporal information, which, compared with traditional PET metabolic biomarker analysis, enables a better understanding of time-varying mechanisms and the detection of visual perfusion abnormalities among different patients; and 4) elimination of the impact of vascular branching structure and the gravity effect by means of time warping algorithms. Our experimental results showed that the proposed framework and descriptor are promising tools for medical image analysis.
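The time warping mentioned in point 4) can be illustrated with a minimal dynamic time warping (DTW) distance in pure NumPy; the time-activity curves below are hypothetical, and the thesis's actual alignment procedure may differ:

```python
import numpy as np

def dtw_distance(a, b):
    # Classic DTW: D[i, j] is the minimal cumulative cost of aligning
    # a[:i] with b[:j], allowing stretches and compressions in time.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical time-activity curves from two patients, sampled at
# different rates / phases.
curve_a = np.array([0.0, 0.2, 0.9, 1.0, 0.8])
curve_b = np.array([0.0, 0.1, 0.3, 0.9, 1.0, 0.7])
d = dtw_distance(curve_a, curve_b)
```

Aligning curves before comparison is what makes the inter-patient comparison in point 2) meaningful: two patients whose perfusion follows the same shape at different speeds get a small distance.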

    Deep learning in medical imaging and radiation therapy

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pdf

    Advanced MRI methods for probing disease severity and functional decline in multiple sclerosis

    Multiple sclerosis (MS) is a chronic and severe disease of the central nervous system characterized by complex pathology, including inflammatory demyelination and neurodegeneration. MS impacts >2.8 million people worldwide, most starting with a relapsing-remitting form (RRMS) in young adulthood and many worsening to a secondary-progressive course (SPMS) despite treatment. There is thus a clear need for improved disease characterization. MRI is an ideal tool for non-invasive assessment of MS pathology, but there is still no established measure of disease activity and its functional consequences. This project aims to overcome that challenge by developing novel imaging measures based on brain diffusion MRI and phase congruency texture analysis of conventional MRI. Through advanced modeling and analysis of clinically feasible brain MRI, this thesis investigates whether and how the derived measures differentiate MS pathology types and disease severity and predict functional outcomes in MS. The overall process has led to important technical innovations in several aspects, including: innovative modeling of simple diffusion acquisitions to generate high angular resolution diffusion imaging (HARDI) measures; new optimization and harmonization techniques for diffusion MRI; innovative neural network models to create new diffusion data for comprehensive HARDI modeling; and novel methods and a graphical user interface for optimizing phase congruency analyses. Assisted by different machine learning methods, the collective findings show that advanced measures from both diffusion MRI and phase congruency are highly sensitive to subtle differences in MS pathology, differentiate disease severity between RRMS and SPMS through multi-dimensional analyses including chronic active lesions, and predict functional outcomes, especially in the physical and neurocognitive domains.
These results are clinically translatable, and the new measures and techniques can help improve the evaluation and management of MS and similar diseases.

    Quantitative imaging in radiation oncology

    Artificially intelligent eyes, built on machine and deep learning technologies, can empower our capability of analysing patients' images. By revealing information invisible to the naked eye, we can build decision aids that help clinicians provide more effective treatment while reducing side effects. These decision aids draw their power from the biologically unique properties of each patient's tumour, referred to as biomarkers. To fully translate this technology into the clinic, we need to overcome barriers related to the reliability of image-derived biomarkers, trust in AI algorithms, and privacy-related issues that hamper the validation of those biomarkers. This thesis developed methodologies to address these issues, defining a road map for the responsible use of quantitative imaging in the clinic as a decision support system for better patient care.

    A Survey of Multimodal Information Fusion for Smart Healthcare: Mapping the Journey from Data to Wisdom

    Multimodal medical data fusion has emerged as a transformative approach in smart healthcare, enabling a comprehensive understanding of patient health and personalized treatment plans. In this paper, the journey from data to information to knowledge to wisdom (DIKW) is explored through multimodal fusion for smart healthcare. We present a comprehensive review of multimodal medical data fusion focused on the integration of various data modalities. The review explores different approaches, such as feature selection, rule-based systems, machine learning, deep learning, and natural language processing, for fusing and analyzing multimodal data, and highlights the challenges associated with multimodal fusion in healthcare. By synthesizing the reviewed frameworks and theories, the paper proposes a generic framework for multimodal medical data fusion that aligns with the DIKW model. Moreover, it discusses future directions related to the four pillars of healthcare: Predictive, Preventive, Personalized, and Participatory approaches. The components of the comprehensive survey presented in this paper form the foundation for more successful implementation of multimodal fusion in smart healthcare. Our findings can guide researchers and practitioners in leveraging the power of multimodal fusion with state-of-the-art approaches to revolutionize healthcare and improve patient outcomes.
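As one concrete (and deliberately simplistic) illustration of the fusion approaches such surveys cover, decision-level (late) fusion combines the outputs of per-modality models rather than their raw inputs; all scores and weights below are made up:

```python
import numpy as np

# Hypothetical per-patient probability scores from three modality-specific
# models: an imaging model, a clinical-notes (NLP) model, and a lab-values model.
imaging = np.array([0.80, 0.30, 0.60])
notes   = np.array([0.70, 0.20, 0.55])
labs    = np.array([0.90, 0.40, 0.50])

# Decision-level fusion: weighted average of the model outputs,
# then a threshold on the fused score.
weights = np.array([0.5, 0.3, 0.2])
fused = weights @ np.vstack([imaging, notes, labs])
decision = (fused >= 0.5).astype(int)   # fused scores: [0.79, 0.29, 0.565]
```

Late fusion is attractive when modalities arrive asynchronously or some are missing, since each model can be trained and validated independently; feature-level (early) fusion trades that modularity for the chance to learn cross-modal interactions.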

    MRI-Based Radiomics in Breast Cancer: Optimization and Prediction
