16 research outputs found

    Neuronal interactions between mentalizing and action systems during indirect request processing

    Human communication relies on the ability to process linguistic structure and to map words and utterances onto our environment. Furthermore, as what we communicate is often not directly encoded in our language (e.g., in the case of irony, jokes, or indirect requests), we need to extract additional cues to infer the beliefs and desires of our conversational partners. Although the functional interplay between language and the ability to mentalize has been discussed in theoretical accounts in the past, the neurobiological underpinnings of these dynamics are not well understood. Here, we address this issue using functional magnetic resonance imaging (fMRI). Participants listened to question-reply dialogues in which the same reply is interpreted as a direct reply, an indirect reply, or a request for action, depending on the question. We show that inferring meaning from indirect replies engages parts of the mentalizing network (mPFC), while requests for action also activate the cortical motor system (IPL). Subsequent connectivity analysis using Dynamic Causal Modelling (DCM) revealed that this pattern of activation is best explained by an increase in effective connectivity from the mentalizing network (mPFC) to the action system (IPL). These results are an important step towards a more integrative understanding of the neurobiological basis of indirect speech processing.
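
    For readers unfamiliar with DCM, the analysis above rests on the standard bilinear neuronal state equation, sketched in LaTeX below; the modulatory matrices B capture exactly the kind of condition-dependent change in effective connectivity (e.g., from mPFC to IPL during requests for action) that the study reports. The specific model space and parameter estimates are not stated in the abstract and are not assumed here.

        \dot{x} = \Big( A + \sum_{j} u_j\, B^{(j)} \Big) x + C\, u

    Here x denotes the neuronal states of the modelled regions (e.g., mPFC, IPL), A the fixed endogenous connectivity, B^{(j)} the modulation of those connections by the experimental input u_j (e.g., indirect-request trials), and C the direct driving inputs.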

    Cross-modal integration of lexical-semantic features during word processing: evidence from oscillatory dynamics during EEG.

    In recent years, numerous studies have provided converging evidence that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms supporting the integration of this distributed semantic content into coherent conceptual representations. In the current study we aimed to address this issue by using EEG to examine the spatial and temporal dynamics of feature integration during word comprehension. Specifically, participants were presented with two modality-specific features (i.e., visual or auditory features such as silver and loud) and asked to verify whether these two features were compatible with a subsequently presented target word (e.g., WHISTLE). Each pair of features described properties from either the same modality (e.g., silver, tiny = visual features) or different modalities (e.g., silver, loud = visual, auditory). Behavioral and EEG data were collected. The results show that verifying features that are putatively represented in the same modality-specific network is faster than verifying features across modalities. At the neural level, integrating features across modalities induces sustained oscillatory activity in the theta range (4–6 Hz) in the left anterior temporal lobe (ATL), a putative hub for integrating distributed semantic content. In addition, enhanced long-range network interactions in the theta range were seen between left ATL and a widespread cortical network. These results suggest that oscillatory dynamics in the theta range could be involved in integrating multimodal semantic content by creating transient functional networks that link distributed modality-specific networks with multimodal semantic hubs such as left ATL.
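
    The oscillatory effects described above come from a time-frequency decomposition of the EEG. As a rough illustration of how theta-band (4–6 Hz) power of this kind is typically computed, here is a minimal NumPy sketch using convolution with complex Morlet wavelets; the function name, sampling rate, and synthetic data are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def morlet_power(signal, sfreq, freqs, n_cycles=5):
    """Time-frequency power via convolution with complex Morlet wavelets.

    signal : 1-D array (single channel, single trial)
    sfreq  : sampling rate in Hz
    freqs  : iterable of frequencies of interest in Hz
    """
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        # Gaussian envelope wide enough to span n_cycles of frequency f
        sigma_t = n_cycles / (2 * np.pi * f)
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        analytic = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(analytic) ** 2
    return power

# Example on synthetic data: a 5 Hz burst should show up in the 4-6 Hz rows.
sfreq = 250.0
t = np.arange(0, 2, 1 / sfreq)
sig = np.sin(2 * np.pi * 5 * t) * (t > 0.75) * (t < 1.25) + 0.1 * np.random.randn(t.size)
theta_power = morlet_power(sig, sfreq, freqs=np.arange(4, 7))
```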

    Matching of the experimental items.

    Scores were averaged over all items in each condition. P-values were computed using independent-samples t-tests. The standard error of the mean is provided in brackets.
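
    As a small illustration of the statistics mentioned in this caption, the sketch below runs an independent-samples t-test and computes the standard error of the mean with SciPy; the item scores are made up, since the actual matching variables and values are not given here.

```python
import numpy as np
from scipy import stats

# Hypothetical per-item scores for two conditions (e.g., a matching variable
# such as word frequency); the real values are not reported in this caption.
cond_a = np.array([5.1, 4.8, 5.3, 4.9, 5.0, 5.2])
cond_b = np.array([5.0, 4.9, 5.4, 4.7, 5.1, 5.3])

t_val, p_val = stats.ttest_ind(cond_a, cond_b)  # independent-samples t-test
sem_a, sem_b = stats.sem(cond_a), stats.sem(cond_b)  # standard errors of the mean
print(f"t = {t_val:.2f}, p = {p_val:.3f}, SEM: {sem_a:.3f} vs {sem_b:.3f}")
```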

    Experimental design of the dual property verification paradigm.

    (A) The top panel provides an overview of the design, in which a target was paired either with a cross-modal feature pair (visual-haptic [VH; HV], visual-auditory [VA; AV], auditory-haptic [AH; HA]) or with a modality-specific feature pair (Visual [V], Auditory [A], Haptic [H]). The three modalities of interest were visual, haptic, and auditory. (B) The bottom panel depicts the time course of a single trial. All words are presented one after the other, so the features can only be fully integrated once the target appears (e.g., WHISTLE).

    Cross-modal integration costs in verification times.

    Bar graphs depict the mean verification time in the modality-specific (MS; Visual, Auditory, Haptic) and cross-modal (CM; Visual-Auditory, Auditory-Haptic, Visual-Haptic) conditions. Error bars denote the standard error of the mean (*** p<.001; ** p<.01).

    Source reconstruction and connectivity analysis.

    (A) Source reconstruction of the effect in the theta band, depicted as thresholded z-values, reveals peaks in left ATL and MOG. (B) Bar graphs show a significant increase in the number of connections between the ATL and the rest of the brain in the early time window (0–500 ms). In the late time window (500–1000 ms), only the CM condition shows a significant increase in the number of connections relative to baseline. Error bars depict the SEM. (C) Results of the whole-brain connectivity analysis, seeded in the ATL (white dot). Connectivity maps show the difference in absolute, z-transformed imaginary coherence between each condition and the baseline. In the early time window, both conditions show a strong increase in connectivity between the ATL and a widespread cortical network. In the second time window, only the cross-modal condition shows continuing network activity above baseline.
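
    For context, the seeded connectivity maps described above rely on the imaginary part of coherency, a coupling measure that discounts zero-lag (volume-conduction) contributions. Below is a minimal NumPy sketch of how such a measure can be computed between a seed (e.g., an ATL source) and other channels; the z-transformation and baseline subtraction mentioned in the caption are omitted, and the function and its parameters are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def imaginary_coherence(seed, others, sfreq, fmin=4.0, fmax=6.0):
    """Absolute imaginary part of coherency between a seed and other channels.

    seed   : array (n_trials, n_times)           -- e.g., an ATL source/sensor
    others : array (n_channels, n_trials, n_times)
    Returns one value per channel, averaged over the frequency band of interest.
    """
    n_trials, n_times = seed.shape
    freqs = np.fft.rfftfreq(n_times, d=1.0 / sfreq)
    band = (freqs >= fmin) & (freqs <= fmax)

    X = np.fft.rfft(seed, axis=-1)                      # (n_trials, n_freqs)
    Y = np.fft.rfft(others, axis=-1)                    # (n_channels, n_trials, n_freqs)

    sxx = np.mean(np.abs(X) ** 2, axis=0)               # seed auto-spectrum
    syy = np.mean(np.abs(Y) ** 2, axis=1)               # channel auto-spectra
    sxy = np.mean(np.conj(X)[None, :, :] * Y, axis=1)   # cross-spectra over trials

    coherency = sxy / np.sqrt(sxx[None, :] * syy)
    return np.abs(coherency.imag)[:, band].mean(axis=-1)
```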

    Modulation of low-frequency cortical oscillations for the target word in a cross-modal or modality-specific context.

    (A) The top panel shows time-frequency representations, averaged over all significant clusters. The first two panels show the grand-average percent signal change with respect to the baseline. The third panel depicts the masked statistical difference between the two conditions in t-values. The contour plot reveals one significant cluster in the theta range (4–6 Hz). (B) The first two bottom panels depict the topography of the effect in each condition (4–6 Hz, peak at 750–850 ms) relative to baseline. The third panel shows the statistical difference between conditions in t-values. Electrodes within significant clusters are marked with dots (p = .002, cluster-corrected).
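
    The "percent signal change with respect to the baseline" shown in panel A is a simple normalization of time-frequency power. A minimal sketch follows, assuming a hypothetical pre-stimulus baseline window of -500 to 0 ms (the actual window is not stated in the caption).

```python
import numpy as np

def percent_signal_change(tfr, times, baseline=(-0.5, 0.0)):
    """Express time-frequency power as percent change from a pre-stimulus baseline.

    tfr   : array (n_freqs, n_times) of power values
    times : array (n_times,) of time points in seconds
    """
    mask = (times >= baseline[0]) & (times < baseline[1])
    base = tfr[:, mask].mean(axis=1, keepdims=True)  # mean baseline power per frequency
    return 100.0 * (tfr - base) / base
```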

    Time-frequency plots for each of the six ROIs.

    The ROIs were middle anterior (MA), left anterior (LA), right anterior (RA), middle posterior (MP), left posterior (LP), and right posterior (RP) electrodes. Time-frequency representations depict the statistical difference, in t-values, for the target word in the CM versus MS feature context. The contours indicate the peak of the cluster-corrected statistical difference (p = .002).
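
    The cluster-corrected p-values reported for these ROI contrasts typically come from a cluster-based permutation test. The sketch below shows a compact one-dimensional, paired (sign-flip) variant in NumPy/SciPy; the cluster-forming threshold, number of permutations, and pairing scheme are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def cluster_permutation_test(a, b, n_perm=1000, alpha=0.05, seed=0):
    """Minimal 1-D cluster-based permutation test for paired data.

    a, b : arrays (n_subjects, n_times) -- e.g., theta power in CM vs MS condition
    Returns the observed clusters (as slices) and a corrected p-value per cluster.
    """
    rng = np.random.default_rng(seed)
    n_sub, n_times = a.shape
    thresh = stats.t.ppf(1 - alpha / 2, df=n_sub - 1)  # cluster-forming threshold

    def cluster_stats(diff):
        t_vals = stats.ttest_1samp(diff, 0.0, axis=0).statistic
        above = np.abs(t_vals) > thresh
        clusters, sums, start = [], [], None
        for i, flag in enumerate(np.append(above, False)):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                clusters.append(slice(start, i))
                sums.append(np.abs(t_vals[start:i]).sum())
                start = None
        return clusters, np.array(sums)

    obs_clusters, obs_sums = cluster_stats(a - b)

    # Null distribution: randomly flip the sign of each subject's difference.
    null_max = np.empty(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        _, sums = cluster_stats(signs * (a - b))
        null_max[p] = sums.max() if sums.size else 0.0

    p_vals = [(null_max >= s).mean() for s in obs_sums]
    return obs_clusters, p_vals
```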

    Mean of the modality ratings for visual, haptic, and auditory features.

    The three spider plots indicate the mean rating score [29], [30] over all features in each of the three modalities of interest (Visual, Haptic, and Auditory).

    Functional selectivity for face processing in the temporal voice area of early deaf individuals

    Brain systems supporting face and voice processing both contribute to the extraction of information that is important for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magnetoencephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural responses to faces, and to individual face coding, in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization of long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.