
    Functional and structural brain differences associated with mirror-touch synaesthesia

    Observing touch is known to activate regions of the somatosensory cortex, but the interpretation of this finding is controversial (e.g. does it reflect the simulated action of touching or the simulated reception of touch?). For most people, observing touch is not linked to reported experiences of feeling touch, but in some people it is (mirror-touch synaesthetes). We conducted an fMRI study in which participants (mirror-touch synaesthetes and controls) watched movies of stimuli (face, dummy, object) being touched or approached. In addition, we examined whether mirror-touch synaesthesia is associated with local changes in grey and white matter volume in the brain using voxel-based morphometry (VBM). Both synaesthetes and controls activated the somatosensory system (primary and secondary somatosensory cortices, SI and SII) when viewing touch, and the same regions were activated (by a separate localiser) when feeling touch, i.e. there is a mirror system for touch. However, when comparing the two groups, we found evidence that SII plays a particularly important role in mirror-touch synaesthesia: in synaesthetes, but not in controls, posterior SII was active when watching touch to a face (in addition to SI and the posterior temporal lobe); activity in SII correlated with subjective intensity measures of mirror-touch synaesthesia (taken outside the scanner); and we observed an increase in grey matter volume within the SII of the synaesthetes' brains. In addition, the synaesthetes showed hypo-activity in posterior SII when watching touch to a dummy. We conclude that the secondary somatosensory cortex has a key role in this form of synaesthesia.
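
    The correlational analysis mentioned above (relating SII activity to subjective intensity ratings collected outside the scanner) can be illustrated with a minimal sketch; the per-subject values, the region-of-interest extraction step, and the use of a Pearson correlation here are illustrative assumptions rather than the authors' actual pipeline.

    # Minimal sketch, assuming per-subject beta estimates have already been
    # extracted from a posterior SII region of interest for the
    # "watch touch to face" condition. All values are hypothetical.
    import numpy as np
    from scipy.stats import pearsonr

    sii_betas = np.array([0.42, 0.61, 0.35, 0.58, 0.47, 0.66, 0.51, 0.39])
    intensity_scores = np.array([3.1, 4.5, 2.8, 4.2, 3.6, 4.8, 3.9, 3.0])

    r, p = pearsonr(sii_betas, intensity_scores)
    print(f"SII activity vs. reported intensity: r = {r:.2f}, p = {p:.3f}")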

    Development and characterization of deep learning techniques for neuroimaging data

    Deep learning methods are extremely promising machine learning tools for analyzing neuroimaging data. However, their potential use in clinical settings is limited by the challenges of applying these methods to neuroimaging data. In this study, a type of data leakage caused by a slice-level data split, introduced during training and validation of a 2D CNN, is first examined, and a quantitative assessment of the resulting overestimation of model performance is presented. Second, an interpretable, leakage-free deep learning software package, written in Python and offering a wide range of options, was developed to conduct both classification and regression analyses. The software was applied to the study of mild cognitive impairment (MCI) in patients with small vessel disease (SVD) using multi-parametric MRI data: the cognitive performance of 58 patients, measured by five neuropsychological tests, was predicted using a multi-input CNN model that takes brain images and demographic data as inputs. Each of the cognitive test scores was predicted using different MRI-derived features. As MCI due to SVD has been hypothesized to result from white matter damage, the DTI-derived features MD and FA produced the best prediction of the TMT-A score, which is consistent with the existing literature. In a second study, an interpretable deep learning system was developed that aims at 1) classifying Alzheimer's disease (AD) patients and healthy subjects, 2) examining, using CNN visualization tools, the neural correlates of the disease underlying cognitive decline in AD patients, and 3) highlighting the potential of interpretability techniques to detect a biased deep learning model. Structural magnetic resonance imaging (MRI) data from 200 subjects were used by the proposed CNN model, which was trained using a transfer learning-based approach and produced a balanced accuracy of 71.6%. Brain regions in the frontal and parietal lobes showing cortical atrophy were highlighted by the visualization tools.
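
    The slice-level leakage described above arises when 2D slices extracted from the same subject's scan end up in both the training and validation sets. A common remedy is to split at the subject level, as in the sketch below; the array shapes, toy labels, and the use of scikit-learn's GroupShuffleSplit are illustrative assumptions, not the software developed in the thesis.

    # Minimal sketch: keep all slices from a given subject on one side of the
    # train/validation split, so no subject contributes to both sets.
    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit

    rng = np.random.default_rng(0)
    subjects = np.repeat(np.arange(20), 40)            # 20 subjects x 40 slices each (toy data)
    slices = rng.normal(size=(subjects.size, 64, 64))  # hypothetical 2D slices
    labels = (subjects % 2).astype(int)                # toy per-subject labels

    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_idx, val_idx = next(splitter.split(slices, labels, groups=subjects))

    # No subject appears in both sets, so slice-level leakage cannot occur.
    assert not set(subjects[train_idx]) & set(subjects[val_idx])
    print(f"{len(train_idx)} training slices, {len(val_idx)} validation slices")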
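
    The multi-input CNN mentioned above combines an imaging branch with a demographic branch before regressing a cognitive score. The sketch below shows one way such a model could be wired up in Keras; the input shapes, layer sizes, and choice of framework are assumptions for illustration and do not reproduce the thesis implementation.

    # Minimal sketch of a multi-input CNN: a 2D MRI-derived feature map
    # (e.g. an FA or MD slice) plus a small demographic vector, fused to
    # predict a single neuropsychological test score.
    from tensorflow.keras import layers, Model

    image_in = layers.Input(shape=(96, 96, 1), name="mri_slice")
    x = layers.Conv2D(16, 3, activation="relu")(image_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)

    demo_in = layers.Input(shape=(2,), name="demographics")  # e.g. age, years of education
    d = layers.Dense(8, activation="relu")(demo_in)

    merged = layers.concatenate([x, d])
    merged = layers.Dense(32, activation="relu")(merged)
    score_out = layers.Dense(1, name="cognitive_score")(merged)

    model = Model(inputs=[image_in, demo_in], outputs=score_out)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    model.summary()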