
    Differential diagnosis of neurodegenerative dementias with the explainable MRI based machine learning algorithm MUQUBIA

    Biomarker-based differential diagnosis of the most common forms of dementia is becoming increasingly important. Machine learning (ML) may be able to address this challenge. The aim of this study was to develop and interpret an ML algorithm capable of differentiating Alzheimer's dementia, frontotemporal dementia, dementia with Lewy bodies, and cognitively normal control subjects based on sociodemographic, clinical, and magnetic resonance imaging (MRI) variables. A total of 506 subjects from 5 databases were included. MRI images were processed with FreeSurfer, LPA, and TRACULA to obtain brain volumes and thicknesses, white matter lesions, and diffusion metrics. MRI metrics were used in conjunction with clinical and demographic data to perform differential diagnosis based on a Support Vector Machine model called MUQUBIA (Multimodal Quantification of Brain whIte matter biomArkers). Age, gender, the Clinical Dementia Rating (CDR) Dementia Staging Instrument, and 19 imaging features formed the best set of discriminative features. The predictive model achieved an overall Area Under the Curve of 98%, with high overall precision (88%), recall (88%), and F1 score (88%) in the test group, and a good Label Ranking Average Precision score (0.95) in a subset of neuropathologically assessed patients. The results of MUQUBIA were explained by the SHapley Additive exPlanations (SHAP) method. The MUQUBIA algorithm successfully classified various dementias with good performance using cost-effective clinical and MRI information and, with independent validation, has the potential to assist physicians in their clinical diagnosis.
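
    As a rough illustration of the kind of pipeline described above (not the authors' actual implementation), the sketch below trains an RBF-kernel Support Vector Machine on a small synthetic tabular dataset and explains one class probability with SHAP; all feature names, labels, and settings are hypothetical.

    import numpy as np
    import pandas as pd
    import shap
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical feature table: age, sex, CDR, and two example imaging metrics.
    rng = np.random.default_rng(0)
    n = 200
    X = pd.DataFrame({
        "age": rng.normal(70, 8, n),
        "sex": rng.integers(0, 2, n),
        "cdr": rng.choice([0.0, 0.5, 1.0, 2.0], n),
        "hippocampal_volume": rng.normal(3.5, 0.6, n),
        "wm_lesion_load": rng.normal(5.0, 2.0, n),
    })
    y = rng.choice(["CN", "AD", "FTD", "DLB"], n)   # four diagnostic labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # RBF-kernel SVM with probability outputs, so AUC-style metrics can be computed.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    model.fit(X_train, y_train)

    # Model-agnostic SHAP explanation of the predicted probability of one class
    # (here "AD"), which keeps the SHAP output two-dimensional (samples x features).
    ad_index = list(model.classes_).index("AD")

    def predict_ad(data):
        return model.predict_proba(data)[:, ad_index]

    explainer = shap.KernelExplainer(predict_ad, shap.sample(X_train, 50))
    shap_values = explainer.shap_values(X_test.iloc[:20])
    shap.summary_plot(shap_values, X_test.iloc[:20], show=False)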

    Synthetic Post-Contrast Imaging through Artificial Intelligence: Clinical Applications of Virtual and Augmented Contrast Media

    Contrast media are widely used in biomedical imaging, due to their relevance in the diagnosis of numerous disorders. However, the risk of adverse reactions, the concern about potential damage to sensitive organs, and the recently described brain deposition of gadolinium salts limit the use of contrast media in clinical practice. In recent years, the application of artificial intelligence (AI) techniques to biomedical imaging has led to the development of ‘virtual’ and ‘augmented’ contrasts. The idea behind these applications is to generate synthetic post-contrast images through AI computational modeling, starting from the information available in other images acquired during the same scan. In these AI models, non-contrast images (virtual contrast) or low-dose post-contrast images (augmented contrast) are used as input data to generate synthetic post-contrast images, which are often indistinguishable from the native ones. In this review, we discuss the most recent advances in AI applications to biomedical imaging concerning synthetic contrast media.
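
    As a minimal sketch of the ‘virtual contrast’ idea (not a model from the review), the code below defines a small convolutional encoder-decoder in PyTorch that maps a non-contrast image to a synthetic post-contrast image and takes one training step with an L1 reconstruction loss on paired data; the architecture, tensor shapes, and training setup are illustrative assumptions.

    import torch
    import torch.nn as nn

    class VirtualContrastNet(nn.Module):
        """Tiny encoder-decoder mapping a non-contrast image to a synthetic post-contrast image."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = VirtualContrastNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()

    # Dummy paired batch: non-contrast input and post-contrast target, shape (B, 1, H, W).
    pre = torch.randn(4, 1, 128, 128)
    post = torch.randn(4, 1, 128, 128)

    # One optimization step on the paired batch.
    pred = model(pre)
    loss = loss_fn(pred, post)
    loss.backward()
    optimizer.step()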

    Deep Learning Can Differentiate IDH-Mutant from IDH-Wild GBM

    Isocitrate dehydrogenase (IDH)-mutant and wildtype glioblastoma multiforme (GBM) often show overlapping features on magnetic resonance imaging (MRI), representing a diagnostic challenge. Deep learning has shown promising results for IDH identification in mixed low-/high-grade glioma populations; however, a GBM-specific model is still lacking in the literature. Our aim was to develop a GBM-tailored deep-learning model for IDH prediction by applying convolutional neural networks (CNN) to multiparametric MRI. We selected 100 adult patients with pathologically demonstrated WHO grade IV gliomas and IDH testing. MRI sequences included MPRAGE, T1, T2, FLAIR, rCBV, and ADC. The model consisted of a 4-block 2D CNN, applied to each MRI sequence. The probability of IDH mutation was obtained from the last dense layer through a softmax activation function. Model performance was evaluated in the test cohort considering categorical cross-entropy loss (CCEL) and accuracy. Calculated performance was: rCBV (accuracy 83%, CCEL 0.64), T1 (accuracy 77%, CCEL 1.4), FLAIR (accuracy 77%, CCEL 1.98), T2 (accuracy 67%, CCEL 2.41), MPRAGE (accuracy 66%, CCEL 2.55). Lower performance was achieved on ADC maps. We present a GBM-specific deep-learning model for IDH mutation prediction, with a maximal accuracy of 83% on rCBV maps. The highest predictive performance, achieved on perfusion images, possibly reflects the known link between IDH and neoangiogenesis through the hypoxia-inducible factor.
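
    The following sketch shows one way a 4-block 2D CNN with a softmax output could be set up for IDH status prediction, loosely following the architecture described above; block composition, filter counts, input size, and labels are assumptions for illustration, not the published model.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(in_ch, out_ch):
        # One convolutional block: convolution, batch norm, ReLU, downsampling.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )

    class IDHClassifier(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                conv_block(1, 16),
                conv_block(16, 32),
                conv_block(32, 64),
                conv_block(64, 128),
            )
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(128, n_classes),   # final dense layer
            )

        def forward(self, x):
            return self.head(self.features(x))   # raw logits

    model = IDHClassifier()
    slices = torch.randn(8, 1, 224, 224)          # 2D slices from one MRI sequence
    logits = model(slices)
    probs = torch.softmax(logits, dim=1)          # P(IDH-wildtype), P(IDH-mutant)
    labels = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])
    loss = F.cross_entropy(logits, labels)        # categorical cross-entropy loss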

    Sex Differences in Autism Spectrum Disorder: Diagnostic, Neurobiological, and Behavioral Features

    Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder with a worldwide prevalence of about 1%, characterized by impairments in social interaction and communication and by repetitive patterns of behavior, and it can be associated with hyper- or hypo-reactivity to sensory stimulation and cognitive disability. ASD comorbid features include internalizing and externalizing symptoms such as anxiety, depression, hyperactivity, and attention problems. The precise etiology of ASD is still unknown, although the disorder is undoubtedly linked to some extent to both genetic and environmental factors. It is also well documented that one of the most striking and consistent findings in ASD is the higher prevalence in males compared to females, with around 70% of described ASD cases being males. The present review examined the most significant studies that attempted to investigate differences between ASD males and females, trying to shed some light on the peculiar characteristics of this prevalence in terms of diagnosis, imaging, major autistic-like behaviors, and sex-dependent features. The review also discusses sex differences found in animal models of ASD, to provide a possible explanation of the neurological mechanisms underpinning the different presentation of autistic symptoms in males and females.

    3D CT-Inclusive Deep-Learning Model to Predict Mortality, ICU Admittance, and Intubation in COVID-19 Patients

    Chest CT is a useful initial exam in patients with coronavirus disease 2019 (COVID-19) for assessing lung damage. AI-powered predictive models could be useful to better allocate resources in the midst of the pandemic. Our aim was to build a deep-learning (DL) model for COVID-19 outcome prediction inclusive of 3D chest CT images acquired at hospital admission. This retrospective multicentric study included 1051 patients (mean age 69, SD = 15) who presented to the emergency departments of three different institutions between 20th March 2020 and 20th January 2021 with COVID-19 confirmed by real-time reverse transcriptase polymerase chain reaction (RT-PCR). Chest CT scans at hospital admission were evaluated with a 3D residual neural network algorithm. The training, internal validation, and external validation groups included 608, 153, and 290 patients, respectively. Images, clinical, and laboratory data were fed into different customizations of a dense neural network to choose the best-performing architecture for the prediction of mortality, intubation, and intensive care unit (ICU) admission. The AI model tested on CT and clinical features displayed accuracy, sensitivity, specificity, and ROC-AUC of 91.7%, 90.5%, 92.4%, and 95%, respectively, for the prediction of patient mortality; 91.3%, 91.5%, 89.8%, and 95% for intubation; and 89.6%, 90.2%, 86.5%, and 94% for ICU admission in the internal validation (testing) cohort. The performance was lower in the external validation cohort for mortality (71.7%, 55.6%, 74.8%, 72%), intubation (72.6%, 74.7%, 45.7%, 64%), and ICU admission (74.7%, 77%, 46%, 70%) prediction. The addition of the available laboratory data led to an increase in sensitivity for patient mortality (66%) and in specificity for intubation and ICU admission (50% and 52%, respectively), while the other metrics maintained similar performance. We present a deep-learning model to predict mortality, ICU admittance, and intubation in COVID-19 patients.
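
    A minimal sketch of this kind of multimodal setup is shown below: a small 3D convolutional backbone (standing in for the 3D residual network) extracts features from the CT volume, which are concatenated with clinical and laboratory variables and passed to a dense head predicting one binary outcome (e.g. mortality); layer sizes, input dimensions, and variable counts are assumptions, not the published architecture.

    import torch
    import torch.nn as nn

    class CTOutcomeModel(nn.Module):
        def __init__(self, n_clinical=10):
            super().__init__()
            # Small 3D convolutional backbone producing a fixed-size image feature vector.
            self.cnn = nn.Sequential(
                nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),     # -> (B, 32)
            )
            # Dense head combining imaging features with clinical/laboratory data.
            self.head = nn.Sequential(
                nn.Linear(32 + n_clinical, 64), nn.ReLU(),
                nn.Dropout(0.3),
                nn.Linear(64, 1),                          # logit for one outcome
            )

        def forward(self, ct_volume, clinical):
            img_feat = self.cnn(ct_volume)
            return self.head(torch.cat([img_feat, clinical], dim=1))

    model = CTOutcomeModel(n_clinical=10)
    ct = torch.randn(2, 1, 64, 128, 128)      # (B, C, D, H, W) chest CT volumes
    clin = torch.randn(2, 10)                 # clinical + laboratory variables
    logit = model(ct, clin)
    prob = torch.sigmoid(logit)               # predicted probability of the outcome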