
    Deep Learning Models for CT Image Standardization

    Multicentric CT imaging studies often include images acquired with scanners from different vendors or reconstructed with different algorithms. This leads to inconsistencies in noise level, sharpness, and edge enhancement, which in turn produce significant variations in radiomic features and ambiguity when data are shared across institutions. Normalizing CT images acquired under non-standardized protocols is therefore vital for decision-making in cross-center, large-scale data sharing and radiomics studies. To address this issue, we present four end-to-end deep-learning-based models for CT image standardization and normalization. The first two models require paired training data and can standardize images acquired from the same scanner but with different non-standardized protocols. The third model requires unpaired training data and can standardize images from one protocol to another. The final model is more robust and can use both paired and unpaired data during training; it can standardize images within a scanner or between scanners. All the models' performances were evaluated based on radiomic features. Our experimental results show that the proposed models can effectively reduce scanner-related radiomic feature variations and improve the reliability of CT imaging radiomic features.
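
    The evaluation criterion described above — reduced scanner-related variation in radiomic features after standardization — can be pictured with a minimal sketch. The feature values and the coefficient-of-variation comparison below are illustrative assumptions, not data or code from the paper:

```python
import statistics

def coefficient_of_variation(values):
    """CV = population std / mean; a lower CV across protocols
    means the radiomic feature is more consistent."""
    return statistics.pstdev(values) / statistics.mean(values)

# Hypothetical first-order radiomic feature (e.g. mean intensity in a ROI)
# measured on the same subject under three reconstruction protocols.
before = [42.0, 55.0, 35.0]   # raw, protocol-dependent values
after  = [44.0, 46.0, 43.0]   # values after image standardization

cv_before = coefficient_of_variation(before)
cv_after = coefficient_of_variation(after)

# Standardization should shrink inter-protocol variation.
assert cv_after < cv_before
```

    The same comparison, computed per feature over a whole radiomic feature set, is one simple way to quantify how much a normalization model reduces scanner-related variability.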

    Radiomics and artificial intelligence in prostate cancer: new tools for molecular hybrid imaging and theragnostics

    In prostate cancer (PCa), the use of new radiopharmaceuticals has improved the accuracy of diagnosis and staging, refined surveillance strategies, and introduced specific and personalized radioreceptor therapies. Nuclear medicine, therefore, holds great promise for improving the quality of life of PCa patients, through managing and processing a vast amount of molecular imaging data and beyond, using a multi-omics approach and improving patients' risk-stratification for tailored medicine. Artificial intelligence (AI) and radiomics may allow clinicians to improve the overall efficiency and accuracy of using these "big data" in both the diagnostic and theragnostic field: from technical aspects (such as semi-automatization of tumor segmentation, image reconstruction, and interpretation) to clinical outcomes, improving a deeper understanding of the molecular environment of PCa, refining personalized treatment strategies, and increasing the ability to predict the outcome. This systematic review aims to describe the current literature on AI and radiomics applied to molecular imaging of prostate cancer.

    Pattern classification approaches for breast cancer identification via MRI: state-of-the-art and vision for the future

    Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multidimensional signal processing and aim to advance the current state-of-the-art in computer-aided detection and analysis of breast tumours observed at various states of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets, as well as Clifford algebra based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised deep learning and self-supervised strategies, as well as generative adversarial networks and algorithms using generated confrontational learning approaches. In order to address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks for a generated confrontation learning methodology applicable to tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions on the rate of proliferation of the disease can be made possible. The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.
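
    The information-fusion step described above — combining features derived from several DCE-MRI parameters before classification — can be pictured with a minimal sketch. The parameter names, array shapes, and the z-score-then-concatenate strategy are illustrative assumptions, not the chapter's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-lesion feature maps from three DCE-MRI derived
# parameters (wash-in slope, peak enhancement, wash-out);
# 8 lesions x 16 features each.
wash_in  = rng.normal(loc=5.0, scale=2.0, size=(8, 16))
peak     = rng.normal(loc=120.0, scale=30.0, size=(8, 16))
wash_out = rng.normal(loc=-1.0, scale=0.5, size=(8, 16))

def zscore(x):
    """Normalize each parameter map so that no single modality's
    scale dominates the fused descriptor."""
    return (x - x.mean()) / x.std()

# Early (feature-level) fusion: normalize each parameter, then
# concatenate into one descriptor per lesion for a downstream classifier.
fused = np.concatenate([zscore(wash_in), zscore(peak), zscore(wash_out)],
                       axis=1)
assert fused.shape == (8, 48)
```

    Tensorial approaches go further by keeping the parameter axis as a separate mode instead of flattening it away, but the normalization concern sketched here applies either way.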

    Radiomics and Deep Learning in Brain Metastases: Current Trends and Roadmap to Future Applications

    Advances in radiomics and deep learning (DL) hold great potential to be at the forefront of precision medicine for the treatment of patients with brain metastases. Radiomics and DL can aid clinical decision-making by enabling accurate diagnosis, facilitating the identification of molecular markers, providing accurate prognoses, and monitoring treatment response. In this review, we summarize the clinical background, unmet needs, and current state of research of radiomics and DL for the treatment of brain metastases. The promises, pitfalls, and future roadmap of radiomics and DL in brain metastases are addressed as well.

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to leverage the segmentation accuracy.
The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on the large-scale labeled dataset. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate the benign from the malignant pulmonary nodules by analyzing the low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV seeks to compare the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features into radiomic features for boosting the classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify the changes in the tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of the patients two years after the last session of treatments.
The discriminative power of the introduced imaging biomarkers was compared against the conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics and contribute to the essential procedures of cancer diagnosis and prognosis.
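
    The autoinpainting idea of Study II — segmenting a tumor as the region where the observed image departs from a synthesized tumor-free image — can be sketched as follows. The image, the idealized inpainting result, and the residual threshold are all stand-ins for the actual trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 32x32 slice: noisy background plus a bright lesion patch.
background = rng.normal(loc=0.0, scale=0.05, size=(32, 32))
image = background.copy()
image[10:16, 12:18] += 1.0          # synthetic "tumor"

# Stand-in for the inpainting network's output: an idealized tumor-free
# reconstruction that recovers only the background tissue.
tumor_free = background

# Unsupervised segmentation: the tumor is wherever the observed image
# deviates strongly from its tumor-free reconstruction.
residual = np.abs(image - tumor_free)
mask = residual > 0.5

assert mask.sum() == 6 * 6          # exactly the lesion pixels
```

    The real pipeline's difficulty lies in making the inpainted reconstruction faithful to healthy anatomy; once it is, segmentation reduces to the residual thresholding shown here.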

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set LNCS 12962 and 12963 constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Brain Tumor Growth Modelling

    Predicting the growth of Glioblastoma tumors is a hard task due to the lack of medical data, which is mostly related to patients' privacy, the cost of collecting a large medical dataset, and the limited availability of expert annotations. In this thesis, we study and propose a Synthetic Medical Image Generator (SMIG) that produces synthetic data based on a Generative Adversarial Network in order to provide anonymized data. In addition, to predict Glioblastoma multiforme (GBM) tumor growth we developed a Tumor Growth Predictor (TGP) based on an end-to-end Convolutional Neural Network architecture, trained on a public dataset from The Cancer Imaging Archive (TCIA) combined with the generated synthetic data. We also highlight the impact of incorporating the synthetic data generated by SMIG as a data augmentation tool. Despite the small size of the TCIA dataset, the obtained results demonstrate valuable tumor growth prediction accuracy.
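
    The augmentation step — enlarging a small pool of real TCIA scans with GAN-generated synthetic scans before training the growth predictor — can be sketched as below. The identifiers, dataset sizes, and 1:4 real-to-synthetic ratio are illustrative assumptions, not the thesis's actual configuration:

```python
import random

random.seed(0)

# Hypothetical training pool: a few real TCIA scans (identifiers only),
# augmented with SMIG-generated synthetic scans at a 1:4 ratio.
real_scans = [f"tcia_{i:03d}" for i in range(20)]
synthetic_scans = [f"smig_{i:03d}" for i in range(4 * len(real_scans))]

# Mix and shuffle so every training batch sees both real and synthetic
# samples; the synthetic scans carry no patient identity, which is what
# makes the augmented pool shareable.
training_pool = real_scans + synthetic_scans
random.shuffle(training_pool)

assert len(training_pool) == 100
```
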

    Deep learning in medical imaging and radiation therapy

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd