3,019 research outputs found

    The neurocognitive gains of diagnostic reasoning training using simulated interactive veterinary cases

    The present longitudinal study ascertained training-associated transformations in the neural underpinnings of diagnostic reasoning, using a simulation game named “Equine Virtual Farm” (EVF). Twenty participants underwent structural, EVF/task-based and resting-state MRI and diffusion tensor imaging (DTI) before and after completing their training on diagnosing simulated veterinary cases. Comparing playing veterinarian versus seeing a colorful image across training sessions revealed the transition of brain activity from scientific-creativity regions pre-training (left middle frontal and temporal gyrus) to insight problem-solving regions post-training (right cerebellum, middle cingulate and medial superior gyrus and left postcentral gyrus). Further, applying linear mixed-effects modelling to graph centrality metrics revealed the central roles of the creative semantic (inferior frontal, middle frontal and angular gyrus and parahippocampus) and reward systems (orbital gyrus, nucleus accumbens and putamen) in driving pre-training diagnostic reasoning, whereas regions implicated in inductive reasoning (superior temporal and medial postcentral gyrus and parahippocampus) were the main post-training hubs. Lastly, resting-state and DTI analyses revealed post-training effects within the occipitotemporal semantic processing region. Altogether, these results suggest that simulation-based training shifts the neural basis of diagnostic reasoning in novices from regions implicated in creative semantic processing to regions implicated in improvised rule-based problem-solving.
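The abstract above applies linear mixed-effects modelling to graph centrality metrics derived from brain networks. As a minimal illustration of the centrality step only (not the study's actual pipeline; the toy connectivity matrix and threshold below are assumptions), degree centrality can be computed from a thresholded functional-connectivity matrix:

```python
import numpy as np
import networkx as nx

def degree_centrality_from_fc(fc, threshold=0.3):
    """Build a binary graph from a functional-connectivity (correlation)
    matrix and return per-region degree centrality.

    fc: (n_regions, n_regions) symmetric correlation matrix.
    threshold: edges are kept where |correlation| exceeds this value
               (an illustrative choice, not the study's threshold).
    """
    adj = (np.abs(fc) > threshold).astype(int)
    np.fill_diagonal(adj, 0)          # no self-loops
    g = nx.from_numpy_array(adj)
    return nx.degree_centrality(g)

# toy 4-region connectivity matrix (hypothetical values)
fc = np.array([
    [1.0, 0.8, 0.1, 0.5],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.1],
    [0.5, 0.1, 0.1, 1.0],
])
print(degree_centrality_from_fc(fc, threshold=0.3))
```

In the study, centrality values like these (per region, per session) would serve as the response in a mixed-effects model with training stage as a fixed effect and subject as a random effect.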

    Modeling and inference of multisubject fMRI data

    Functional magnetic resonance imaging (fMRI) is a rapidly growing technique for studying the brain in action. Since its creation [1], [2], cognitive scientists have been using fMRI to understand how we remember, manipulate, and act on information in our environment. Working with magnetic resonance physicists, statisticians, and engineers, these scientists are pushing the frontiers of knowledge of how the human brain works. The design and analysis of single-subject fMRI studies has been well described. For example, [3], chapters 10 and 11 of [4], and chapters 11 and 14 of [5] all give accessible overviews of fMRI methods for one subject. In contrast, while the appropriate manner to analyze a group of subjects has been the topic of several recent papers, we do not feel it has been covered well in introductory texts and review papers. Therefore, in this article, we bring together old and new work on so-called group modeling of fMRI data, using a consistent notation to make the methods more accessible and comparable.
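Group (multisubject) fMRI modeling of the kind this article surveys is often introduced via the two-stage "summary statistics" approach: estimate each subject's effect separately, then test those estimates at the group level. A minimal sketch for a single voxel, with simulated subject-level contrast estimates (the effect size, noise level, and subject count are illustrative assumptions):

```python
import numpy as np
from scipy import stats

# Stage 1 (simulated here): each subject's first-level GLM yields a
# contrast estimate beta_hat for the voxel of interest.
rng = np.random.default_rng(42)
n_subjects = 12
true_effect = 0.5                       # assumed group-level effect
subject_betas = true_effect + 0.4 * rng.standard_normal(n_subjects)

# Stage 2: a one-sample t-test asks whether the group mean effect
# differs from zero, treating subjects as a random sample.
t_stat, p_value = stats.ttest_1samp(subject_betas, popmean=0.0)
print(f"group t({n_subjects - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```

The article's point is that richer alternatives exist (e.g. modeling first-level variances, or full mixed-effects models); this two-stage test is the simplest member of that family.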

    Neuroconductor: an R platform for medical imaging analysis

    Neuroconductor (https://neuroconductor.org) is an open-source platform for rapid testing and dissemination of reproducible computational imaging software. The goals of the project are to: (i) provide a centralized repository of R software dedicated to image analysis, (ii) disseminate software updates quickly, (iii) train a large, diverse community of scientists using detailed tutorials and short courses, (iv) increase software quality via automatic and manual quality controls, and (v) promote reproducibility of image data analysis. Based on the programming language R (https://www.r-project.org/), Neuroconductor starts with 51 inter-operable packages that cover multiple areas of imaging, including visualization, data processing and storage, and statistical inference. Neuroconductor accepts new R package submissions, which are subject to a formal review and continuous automated testing. We provide a description of the purpose of Neuroconductor and the user and developer experience.

    A comparison of magnetic resonance imaging and neuropsychological examination in the diagnostic distinction of Alzheimer’s disease and behavioral variant frontotemporal dementia

    The clinical distinction between Alzheimer's disease (AD) and behavioral variant frontotemporal dementia (bvFTD) remains challenging and largely dependent on the experience of the clinician. This study investigates whether objective machine learning algorithms using supportive neuroimaging and neuropsychological clinical features can aid the distinction between the two diseases. Retrospective neuroimaging and neuropsychological data of 166 participants (54 AD; 55 bvFTD; 57 healthy controls) were analyzed via a Naïve Bayes classification model. A subgroup of patients (n = 22) had pathologically-confirmed diagnoses. Results show that a combination of gray matter atrophy and neuropsychological features allowed a correct classification of 61.47% of cases at clinical presentation. More importantly, there was a clear dissociation between imaging and neuropsychological features, with the latter having the greater diagnostic accuracy (51.38% vs. 62.39%, respectively). These findings indicate that, at presentation, machine learning classification of bvFTD and AD is mostly based on cognitive and not imaging features. This clearly highlights the urgent need to develop better biomarkers for both diseases, but also emphasizes the value of machine learning in determining the predictive diagnostic features in neurodegeneration.
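A Naïve Bayes classification of combined imaging and neuropsychological features, as used above, can be sketched with a toy stand-in (the synthetic data, feature counts, and cross-validation setup below are assumptions for illustration, not the study's actual protocol):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Toy stand-in: 150 "participants" in 3 groups, with two feature blocks
# loosely mimicking gray-matter atrophy measures and neuropsychological
# scores (all values synthetic).
rng = np.random.default_rng(0)
n_per_class, n_imaging, n_neuropsych = 50, 4, 6
n_features = n_imaging + n_neuropsych
X_parts, y = [], []
for label in range(3):
    centre = 1.5 * rng.standard_normal(n_features)   # class-specific mean
    X_parts.append(centre + rng.standard_normal((n_per_class, n_features)))
    y += [label] * n_per_class
X = np.vstack(X_parts)
y = np.array(y)

# Gaussian Naive Bayes with stratified 5-fold cross-validation.
scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Slicing `X` into its imaging-only and neuropsychological-only columns and scoring each block separately would reproduce the kind of feature-block comparison the study reports.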

    Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreements in anatomical definition. We further assessed the robustness of the method in handling training-set size, differences in head coil usage, and the amount of brain atrophy. High-resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by only ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to a training set of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap by only <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
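The Dice coefficient used above to quantify spatial overlap between segmentations has a simple closed form, 2|A∩B| / (|A| + |B|). A minimal sketch on toy binary masks (the masks are illustrative, not real segmentations):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice overlap between two binary segmentation masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

# two toy 3x3 masks sharing 2 of their 3 foreground voxels each
auto   = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
manual = np.array([[1, 1, 0], [0, 0, 1], [0, 0, 0]])
print(dice_coefficient(auto, manual))  # 2*2 / (3+3) ≈ 0.667
```

A Dice of 1.0 means perfect overlap, so the study's improvement from 0.821 to 0.954 for the brainstem is substantial on this scale.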

    Quantitative Intensity Harmonization of Dopamine Transporter SPECT Images Using Gamma Mixture Models

    PURPOSE: Differences in site, device, and/or settings may cause large variations in the intensity profile of dopamine transporter (DAT) single-photon emission computed tomography (SPECT) images. However, the current standard for evaluating these images, the striatal binding ratio (SBR), does not efficiently account for this heterogeneity, and the assessment may not be equivalent across distinct acquisition pipelines. In this work, we present a voxel-based automated approach to intensity-normalize such data that improves cross-session interpretation. PROCEDURES: The normalization method consists of a reparametrization of the voxel values based on the cumulative distribution function (CDF) of a Gamma distribution modeling the specific region intensity. The harmonization ability was tested in 1342 SPECT images from the PPMI repository, acquired with 7 distinct gamma camera models and at 24 different sites. We compared the striatal quantification across distinct cameras for raw intensities, SBR values, and after applying the Gamma CDF (GCDF) harmonization. As a proof of concept, we evaluated the impact of GCDF normalization on a classification task between controls and Parkinson disease patients. RESULTS: Raw striatal intensities and SBR values presented significant differences across distinct camera models. We demonstrate that GCDF normalization efficiently alleviated these differences in striatal quantification, with values constrained to a fixed interval [0, 1]. Our method also allowed a fully automated image assessment that provided maximal classification ability, with an area under the curve (AUC) of 0.94 when using mean regional variables and 0.98 when using voxel-based variables. CONCLUSION: The GCDF normalization method is useful for standardizing the intensity of DAT SPECT images in an automated fashion and enables the development of unbiased algorithms using multicenter datasets. This method may constitute a key pre-processing step in the analysis of this type of image.
    Funding: Instituto de Salud Carlos III FI14/00497, MV15/00034; Fondo Europeo de Desarrollo Regional FI14/00497, MV15/00034; ISCIII-FEDER PI16/01575; Wellcome Trust UK Strategic Award 098369/Z/12/Z; Netherlands Organisation for Scientific Research NWO-Vidi 864-12-00.
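The GCDF idea, mapping intensities through the CDF of a fitted Gamma distribution so that values land in [0, 1], can be sketched as follows (fitting the Gamma to a whole intensity sample is a simplifying assumption; the paper models a specific region's intensity, and the two-camera data below are synthetic):

```python
import numpy as np
from scipy import stats

def gamma_cdf_normalize(intensities):
    """Map intensities to [0, 1] via the CDF of a fitted Gamma distribution.

    Sketch of the GCDF idea only: fit Gamma parameters to the sample
    (location fixed at 0), then evaluate the fitted CDF at each value.
    """
    shape, loc, scale = stats.gamma.fit(intensities, floc=0)
    return stats.gamma.cdf(intensities, shape, loc=loc, scale=scale)

# toy "striatal" intensities from two cameras with different scaling
rng = np.random.default_rng(1)
camera_a = rng.gamma(shape=3.0, scale=2.0, size=500)
camera_b = rng.gamma(shape=3.0, scale=5.0, size=500)  # same shape, larger scale

norm_a = gamma_cdf_normalize(camera_a)
norm_b = gamma_cdf_normalize(camera_b)
print(round(float(norm_a.mean()), 2), round(float(norm_b.mean()), 2))
```

Because each camera's intensities are passed through its own fitted CDF, the raw scale difference between `camera_a` and `camera_b` is absorbed, which is the harmonization effect the paper reports across gamma camera models.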