2,875 research outputs found

    Radiogenomics Framework for Associating Medical Image Features with Tumour Genetic Characteristics

    Significant progress has been made in understanding human cancers at the molecular genetics level, providing new insights into their underlying pathophysiology. This progress has enabled the subclassification of disease and the development of targeted therapies that address specific biological pathways. However, obtaining genetic information remains invasive and costly. Medical imaging is a non-invasive technique that captures important visual characteristics (i.e. image features) of abnormalities and plays an important role in routine clinical practice. Advances in computerised medical image analysis have enabled quantitative approaches to extract image features that can reflect tumour genetic characteristics, leading to the emergence of ‘radiogenomics’. Radiogenomics investigates the relationships between medical imaging features and tumour molecular characteristics, and enables the derivation of imaging surrogates (radiogenomics features) for genetic biomarkers, providing an alternative route to non-invasive and accurate cancer diagnosis. This thesis presents a new framework combining several novel methods for radiogenomics analysis that associate medical image features with tumour genetic characteristics, with three main objectives: i) a comprehensive characterisation of tumour image features that reflect underlying genetic information; ii) a method that identifies radiogenomics features encoding common pathophysiological information across different diseases, overcoming the dependence on large annotated datasets; and iii) a method that quantifies radiogenomics features from multi-modal imaging data and accounts for the unique information encoded in tumour heterogeneity sub-regions. The presented methods advance radiogenomics analysis and contribute to improving research in computerised medical image analysis.
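
    At its core, a radiogenomics association study reduces to testing how strongly a quantitative image feature tracks a molecular readout across a patient cohort. As an illustrative sketch only (the thesis's actual methods are not reproduced here, and every number below is hypothetical), a single feature-gene association via Pearson correlation could look like this:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical cohort: one radiomic feature per patient (e.g. tumour
# texture entropy) paired with one gene-expression level per patient.
entropy    = [4.1, 5.3, 3.8, 6.0, 5.5]
expression = [0.9, 1.4, 0.7, 1.8, 1.5]
r = pearson_r(entropy, expression)
```

Real radiogenomics pipelines test many feature-gene pairs and must correct for multiple comparisons; this sketch shows only the single-pair association step.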

    Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI

    Di Noia, C., Grist, J. T., Riemer, F., Lyasheva, M., Fabozzi, M., Castelli, M., Lodi, R., Tonon, C., Rundo, L., & Zaccagna, F. (2022). Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI. Diagnostics, 12(9), 1-16. [2125]. https://doi.org/10.3390/diagnostics12092125
    Given growing clinical needs, in recent years Artificial Intelligence (AI) techniques have increasingly been used to define the best approaches for survival assessment and prediction in patients with brain tumors. Advances in computational resources and the collection of (mainly public) databases have promoted this rapid development. This narrative review of the current state of the art aimed to survey current applications of AI in predicting survival in patients with brain tumors, with a focus on Magnetic Resonance Imaging (MRI). An extensive search was performed on PubMed and Google Scholar using a Boolean query based on MeSH terms, restricted to the period between 2012 and 2022. Fifty studies were selected, mainly based on Machine Learning (ML), Deep Learning (DL), radiomics-based methods, and methods that exploit traditional imaging techniques for survival assessment. In addition, we focused on two distinct tasks related to survival assessment: the first is the classification of subjects into survival classes (short- and long-term, or short-, mid- and long-term) to stratify patients into distinct groups; the second is the quantification, in days or months, of the individual survival interval. Our survey showed excellent state-of-the-art methods for the former, with accuracy up to ∼98%. The latter task appears to be the most challenging, but state-of-the-art techniques showed promising results, albeit with limitations, with C-index up to ∼0.91. In conclusion, the available computational methods perform differently according to the specific task, and the choice of the best one is non-univocal and depends on many aspects. Unequivocally, the use of features derived from quantitative imaging has been shown to be advantageous for AI applications, including survival prediction. This evidence from the literature motivates further research in the field of AI-powered methods for survival prediction in patients with brain tumors, in particular using the wealth of information provided by quantitative MRI techniques.
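
    The review's second task (quantifying individual survival) is typically scored with Harrell's concordance index, which measures how often a model's predicted risk ordering agrees with the observed survival ordering across comparable patient pairs. A minimal sketch of that metric, on invented patient data:

```python
from itertools import combinations

def c_index(times, events, scores):
    """Harrell's concordance index: fraction of comparable patient pairs
    whose predicted risk ordering matches the observed survival ordering.
    Higher score = higher predicted risk = shorter expected survival."""
    concordant, ties, comparable = 0, 0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] > times[j]:      # order the pair: i has the earlier time
            i, j = j, i
        if not events[i]:            # earlier time is censored: not comparable
            continue
        if times[i] == times[j]:
            continue
        comparable += 1
        if scores[i] > scores[j]:
            concordant += 1
        elif scores[i] == scores[j]:
            ties += 1
    return (concordant + 0.5 * ties) / comparable

# Hypothetical cohort: survival in months, event flag (1 = death observed,
# 0 = censored), and model-predicted risk scores.
times  = [5, 12, 20, 31]
events = [1, 1, 0, 1]
risk   = [0.9, 0.6, 0.4, 0.2]
ci = c_index(times, events, risk)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect concordance, which puts the review's reported ∼0.91 in context.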

    TEXTURAL CLASSIFICATION OF MULTIPLE SCLEROSIS LESIONS IN MULTIMODAL MRI VOLUMES

    Background and objectives: Multiple Sclerosis is a common relapsing demyelinating disease causing significant degradation of cognitive and motor skills and contributing towards a reduced life expectancy of 5 to 10 years. The identification of Multiple Sclerosis lesions at early stages of a patient’s life can play a significant role in the diagnosis, treatment and prognosis for that individual. In recent years the process of disease detection has been aided by the implementation of radiomic pipelines for texture extraction and classification, utilising Computer Vision and Machine Learning techniques. Eight Multiple Sclerosis patient datasets were supplied, each containing one standard clinical T2 MRI sequence and four diffusion-weighted sequences (T2, FA, ADC, AD, RD). This work proposes a multimodal Multiple Sclerosis lesion segmentation methodology utilising supervised texture analysis, feature selection and classification. Three Machine Learning models were applied to multimodal MRI data and tested on unseen patient datasets to evaluate the classification performance of various extracted features, feature selection algorithms and classifiers on MRI volumes uncommonly applied to MS lesion detection. Method: First Order Statistics, Haralick Texture Features, Gray-Level Run-Lengths, Histogram of Oriented Gradients and Local Binary Patterns were extracted from MRI volumes that were minimally pre-processed using a skull-stripping and background-removal algorithm. mRMR and LASSO feature selection algorithms were applied to identify a subset of rankings for use in Machine Learning with Support Vector Machine, Random Forest and Extreme Learning Machine classification. Results: ELM achieved a top slice classification accuracy of 85%, while SVM achieved 79% and RF 78%. Combining information from all MRI sequences increased classification performance when analysing unseen T2 scans in almost all cases. LASSO and mRMR feature selection methods failed to increase accuracy, and the highest-scoring group of features were the Haralick Texture Features, derived from Grey-Level Co-occurrence Matrices.
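
    The highest-scoring features above are Haralick statistics computed from a Grey-Level Co-occurrence Matrix (GLCM), which counts how often pairs of quantised intensity levels co-occur at a fixed pixel offset. A minimal sketch of a symmetric, normalised GLCM and two classic Haralick features on a toy 2D patch (real pipelines quantise MRI intensities to more grey levels and aggregate several offsets in 3D):

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Symmetric, normalised grey-level co-occurrence matrix for one offset."""
    P = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                a, b = img[r][c], img[r2][c2]
                P[a][b] += 1
                P[b][a] += 1          # count both directions (symmetric)
    total = sum(sum(row) for row in P)
    return [[v / total for v in row] for row in P]

def haralick_contrast(P):
    """High when co-occurring levels differ strongly (coarse texture)."""
    n = len(P)
    return sum(P[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def haralick_homogeneity(P):
    """High when mass concentrates near the diagonal (smooth texture)."""
    n = len(P)
    return sum(P[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))

# Toy 4-level "slice patch": two smooth regions per row direction.
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [2, 2, 3, 3],
         [2, 2, 3, 3]]
P = glcm(patch)
contrast = haralick_contrast(P)
homogeneity = haralick_homogeneity(P)
```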

    3D Convolution Neural Networks for Medical Imaging; Classification and Segmentation: A Doctor’s Third Eye

    Master's thesis in Information and Communication Technology (IKT591)
    In this thesis, we studied and developed 3D classification and segmentation models for medical imaging. The classification targets Alzheimer’s Disease and the segmentation targets brain tumor sub-regions. For the classification task we worked towards a novel deep architecture that can accomplish the complex task of classifying Alzheimer’s Disease volumetrically from MRI scans without the need for any transfer learning. Experiments were performed for both binary classification of Alzheimer’s Disease (AD) versus Normal Cognitive (NC) subjects, and multi-class classification between the three stages of Alzheimer’s: NC, AD and Mild Cognitive Impairment (MCI). We tested our model on the ADNI dataset and achieved mean accuracies of 94.17% and 89.14% for binary and multi-class classification respectively. In the second part of this thesis, the segmentation of tumor sub-regions in brain MRI images, we studied popular architectures for medical image segmentation and, inspired by them, proposed an end-to-end trainable fully convolutional neural network that uses attention blocks to learn the localization of features of the multiple tumor sub-regions. Experiments were also run to assess the effect of a weighted cross-entropy loss function and a dice loss function on the performance of the model and the quality of the output segmentation labels. The results of evaluating our model were obtained through the BraTS’19 dataset challenge. On validation data the model achieves a dice score of 0.80 for the segmentation of the whole tumor, and dice scores of 0.639 and 0.536 for the other two sub-regions within the tumor. In this thesis we successfully applied computer vision techniques to medical imaging analysis, showing that the huge potential and numerous benefits of deep learning for detecting and combating disease open up further avenues for research into automating medical imaging analysis.
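
    The dice scores reported above measure overlap between a predicted mask and the ground truth: twice the intersection divided by the sum of the two mask sizes. A minimal sketch on toy binary masks (flattened; the same formula applies voxel-wise in 3D):

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    return 2 * inter / denom if denom else 1.0  # both empty: perfect overlap

# Toy masks: 3 true positives, 1 false positive, 1 false negative.
pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 1, 1, 0, 0]
d = dice(pred, truth)  # 2*3 / (4+4) = 0.75
```

The dice loss used in training is simply one minus a differentiable (soft) version of this coefficient computed on predicted probabilities.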

    Brain enhancement through cognitive training: A new insight from brain connectome

    Owing to recent advances in neurotechnology and progress in the understanding of brain cognitive functions, improving cognitive performance or accelerating learning with brain enhancement systems is no longer out of reach; on the contrary, it is a tangible target of contemporary research. Although a variety of approaches have been proposed, we mainly focus on cognitive training interventions, in which learners repeatedly perform cognitive tasks to improve their cognitive abilities. In this review article, we propose that the learning process during cognitive training can be facilitated by an assistive system that monitors cognitive workload using electroencephalography (EEG) biomarkers, and that the brain connectome approach can provide additional valuable biomarkers for facilitating learners' progress. For this purpose, we introduce studies on cognitive training interventions, EEG biomarkers for cognitive workload, and the human brain connectome. As cognitive overload and mental fatigue can reduce or even eliminate the gains of cognitive training interventions, real-time monitoring of cognitive workload can facilitate learning by flexibly adjusting the difficulty level of the training task. Moreover, cognitive training interventions should have effects on brain sub-networks, not on a single brain region, and graph-theoretical network metrics quantifying the topological architecture of the brain network can differentiate between individual cognitive states as well as between different individuals' cognitive abilities, suggesting that the connectome is a valuable approach for tracking learning progress. Although only a few studies have so far exploited the connectome approach to study alterations of the brain network induced by cognitive training interventions, we believe it will be a useful technique for capturing improvements in cognitive function.
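
    One of the graph-theoretical network metrics alluded to above is the clustering coefficient, which quantifies how densely a node's neighbours are interconnected. A minimal sketch on a toy binarised connectivity matrix (the matrix is invented; in practice it would come from, e.g., thresholded EEG coherence or tractography):

```python
def local_clustering(adj, v):
    """Fraction of node v's neighbour pairs that are themselves connected."""
    nbrs = [u for u, linked in enumerate(adj[v]) if linked]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(adj[a][b] for i, a in enumerate(nbrs) for b in nbrs[i + 1:])
    return 2 * links / (k * (k - 1))

def mean_clustering(adj):
    """Network-level clustering: average of the per-node coefficients."""
    return sum(local_clustering(adj, v) for v in range(len(adj))) / len(adj)

# Toy 4-node undirected binarised connectivity matrix.
adj = [[0, 1, 1, 0],
       [1, 0, 1, 1],
       [1, 1, 0, 0],
       [0, 1, 0, 0]]
cc = mean_clustering(adj)
```

Tracking how such a metric changes across training sessions is one way a connectome-based system could monitor learning progress.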

    Quantitative analysis with machine learning models for multi-parametric brain imaging data

    Gliomas are considered to be the most common primary adult malignant brain tumor. With dramatic increases in computational power and improvements in image analysis algorithms, computer-aided medical image analysis has been introduced into clinical applications. Precise tumor grading and genotyping play an indispensable role in clinical diagnosis, treatment and prognosis. Glioma diagnostic procedures include histopathological imaging tests, molecular imaging scans and tumor grading. Pathologic review of tumor morphology in histologic sections is the traditional method for cancer classification and grading, yet human review has limitations that can result in low reproducibility and poor inter-observer agreement. Compared with histopathological images, Magnetic Resonance (MR) imaging presents different structural and functional features, which might serve as noninvasive surrogates for tumor genotypes. Therefore, computer-aided image analysis has been adopted in clinical applications, as it can partially overcome these shortcomings thanks to its capacity to quantitatively and reproducibly measure multilevel features from multi-parametric medical information. Imaging features obtained from a single imaging modality do not fully represent the disease, so quantitative imaging features, including morphological, structural, cellular and molecular-level features derived from multi-modality medical images, should be integrated into computer-aided medical image analysis. The difference in image quality between multi-modality images is a challenge in the field. In this thesis, we aim to integrate quantitative imaging data obtained from multiple modalities into mathematical models of tumor response prediction to achieve additional insights of practical predictive value. Our major contributions in this thesis are: 1. 
    Firstly, to address the image quality differences and observer dependence in histological image diagnosis, we propose an automated machine-learning brain tumor-grading platform that investigates the contributions of multiple parameters from multimodal data, including imaging parameters or features from Whole Slide Images (WSI) and the proliferation marker Ki-67. For each WSI, we extract both visual parameters, such as morphology parameters, and sub-visual parameters, including first-order and second-order features. A quantitative, interpretable machine learning approach (Local Interpretable Model-Agnostic Explanations) is then used to measure the contribution of each feature for a single case. Most grading systems based on machine learning models are considered “black boxes,” whereas with this system the clinically trusted reasoning can be revealed. The quantitative analysis and explanation may assist clinicians to better understand the disease and accordingly choose optimal treatments for improving clinical outcomes. 2. Building on the automated brain tumor-grading platform, multimodal Magnetic Resonance Images (MRIs) are introduced in our research. A new imaging–tissue correlation based approach, called RA-PA-Thomics, is proposed to predict the IDH genotype. Inspired by the concept of image fusion, we integrate multimodal MRIs and scans of histopathological images for indirect, fast, and cost-saving IDH genotyping. The proposed model has been verified by multiple evaluation criteria on the integrated dataset and compared with results from the prior art. The experimental dataset includes public datasets and image data from two hospitals. Experimental results indicate that the model improves the accuracy of glioma grading and genotyping.
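
    The sub-visual first-order features mentioned above are simple statistics of a region's intensity distribution. A minimal sketch of a few of them (mean, variance, skewness and histogram entropy) on a toy list of pixel intensities; a real WSI tile would contribute thousands of values:

```python
import math

def first_order_features(intensities):
    """First-order (intensity-histogram) features of an image region."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((x - mean) ** 2 for x in intensities) / n
    std = math.sqrt(var)
    skew = (sum((x - mean) ** 3 for x in intensities) / n) / std ** 3 if std else 0.0
    # Shannon entropy over the discrete intensity values.
    counts = {}
    for x in intensities:
        counts[x] = counts.get(x, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": var, "skewness": skew, "entropy": entropy}

# Toy patch: mostly dark pixels with one bright outlier (positive skew).
feats = first_order_features([10, 10, 12, 12, 12, 40])
```

Second-order features, by contrast, describe spatial co-occurrence of intensities rather than the histogram alone.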

    Machine Learning for Multiclass Classification and Prediction of Alzheimer's Disease

    Alzheimer's disease (AD) is an irreversible neurodegenerative disorder and a common form of dementia. This research aims to develop machine learning algorithms that diagnose and predict the progression of AD from multimodal heterogeneous biomarkers, with a focus on early diagnosis. To meet this goal, several machine learning-based methods, each with unique characteristics for feature extraction and automated classification, prediction, and visualization, have been developed to discern subtle progression trends and predict the trajectory of disease progression. The envisioned methodology aims to enhance both multiclass classification accuracy and prediction outcomes by effectively modeling the interplay between the multimodal biomarkers, handling the missing-data challenge, and adequately extracting all the relevant features to be fed into the machine learning framework, all in order to understand the subtle changes that happen in the different stages of the disease. This research also investigates multitasking, to discover how the two processes of multiclass classification and prediction relate to one another in terms of the features they share, and whether they could learn from one another to optimize multiclass classification and prediction accuracy. This work also delves into predicting the cognitive scores of specific tests over time, using multimodal longitudinal data. The intent is to improve our prospects for analyzing the interplay between the different multimodal features in the input space and the predicted cognitive scores. Moreover, the power of modality fusion, kernelization, and tensorization has also been investigated to efficiently extract important features hidden in the lower-dimensional feature space without being distracted by those deemed irrelevant. 
    With the adage that a picture is worth a thousand words, this dissertation introduces a unique color-coded visualization system with a fully integrated machine learning model for the enhanced diagnosis and prognosis of Alzheimer's disease. The incentive is to show that, through visualization, the challenges imposed by both the variability and the interrelatedness of the multimodal features can be overcome. Ultimately, this form of visualization via machine learning informs on the challenges faced with multiclass classification and adds insight into the decision-making process for diagnosis and prognosis.
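
    Predicting cognitive scores over time can be illustrated, at its very simplest, by fitting a per-subject linear trend to longitudinal test scores and extrapolating to a future visit. The sketch below uses ordinary least squares on invented MMSE values; the dissertation's actual models are far richer (multimodal, kernelized, tensorized), so this shows only the basic idea of score-trajectory prediction:

```python
def ols_slope(times, scores):
    """Ordinary least-squares fit of score = intercept + slope * time
    for one subject's longitudinal cognitive scores."""
    n = len(times)
    mt, ms = sum(times) / n, sum(scores) / n
    slope = (sum((t - mt) * (s - ms) for t, s in zip(times, scores))
             / sum((t - mt) ** 2 for t in times))
    return slope, ms - slope * mt

# Hypothetical subject: MMSE at baseline and 6/12/18-month visits.
months = [0, 6, 12, 18]
mmse   = [28, 27, 25, 24]
slope, intercept = ols_slope(months, mmse)   # negative slope = decline
pred_24m = intercept + slope * 24            # extrapolated 24-month score
```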

    Is attention all you need in medical image analysis? A review

    Medical imaging is a key component in clinical diagnosis, treatment planning and clinical trial design, accounting for almost 90% of all healthcare data. CNNs have achieved performance gains in medical image analysis (MIA) over recent years. CNNs can efficiently model local pixel interactions and can be trained on small-scale MI data. The main disadvantage of typical CNN models is that they ignore global pixel relationships within images, which limits their ability to generalise to out-of-distribution data with different 'global' information. Recent progress in Artificial Intelligence gave rise to Transformers, which can learn global relationships from data; however, full Transformer models need to be trained on large-scale data and involve tremendous computational complexity. Attention and Transformer compartments (Transf/Attention), which preserve the capacity to model global relationships, have been proposed as lighter alternatives to full Transformers. Recently, there has been an increasing trend to cross-pollinate complementary local-global properties from CNN and Transf/Attention architectures, leading to a new era of hybrid models. The past years have witnessed substantial growth in hybrid CNN-Transf/Attention models across diverse MIA problems. In this systematic review, we survey existing hybrid CNN-Transf/Attention models, review and unravel key architectural designs, analyse breakthroughs, and evaluate current and future opportunities as well as challenges. We also introduce a comprehensive analysis framework on generalisation opportunities of scientific and clinical impact, from which new data-driven domain generalisation and adaptation methods can be stimulated.
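
    The global-context property that distinguishes Transformers from CNNs comes from scaled dot-product attention: every query position mixes information from all key positions, not just a local window. A minimal self-attention sketch on three toy token embeddings (illustrative only; production models add learned projections, multiple heads and batching):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends over ALL keys,
    so every output row is a convex combination of the value rows."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# Three toy token embeddings used as queries, keys and values (self-attention).
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = attention(X, X, X)
```

A convolution, by contrast, would only combine each position with its fixed-size neighbourhood, which is exactly the local-global trade-off the hybrid models in this review try to balance.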

    Artificial intelligence for dementia research methods optimization

    Artificial intelligence (AI) and machine learning (ML) approaches are increasingly used in dementia research. However, several methodological challenges exist that may limit the insights we can obtain from high-dimensional data and our ability to translate these findings into improved patient outcomes. To improve reproducibility and replicability, researchers should make their well-documented code and modeling pipelines openly available, and data should also be shared where appropriate. To enhance the acceptability of models and AI-enabled systems to users, researchers should prioritize interpretable methods that provide insights into how decisions are generated. Models should be developed using multiple, diverse datasets to improve robustness and generalizability and to reduce potentially harmful bias. To improve clarity and reproducibility, researchers should adhere to reporting guidelines that are co-produced with multiple stakeholders. If these methodological challenges are overcome, AI and ML hold enormous promise for changing the landscape of dementia research and care. HIGHLIGHTS: Machine learning (ML) can improve the diagnosis, prevention, and management of dementia. Inadequate reporting of ML procedures affects the reproduction and replication of results. ML models built on unrepresentative datasets do not generalize to new datasets. Obligatory metrics for certain model structures and use cases have not been defined. Interpretability and trust in ML predictions are barriers to clinical translation.