
    Interoceptive Ingredients of Body Ownership: Affective Touch and Cardiac Awareness in the Rubber Hand Illusion

    This document is the Accepted Manuscript version of the following article: Laura Crucianelli, Charlotte Krahe, Paul M. Jenkinson, Aikaterini (Katerina) Fotopoulou, 'Interoceptive Ingredients of Body Ownership: Affective Touch and Cardiac Awareness in the Rubber Hand Illusion', Cortex, first published online 1 May 2017, available at doi: https://doi.org/10.1016/j.cortex.2017.04.018. © 2017 Elsevier Ltd. All rights reserved.

    The sense of body ownership represents a fundamental aspect of bodily self-consciousness. Using multisensory integration paradigms, recent studies have shown that both exteroceptive and interoceptive information contribute to our sense of body ownership. Interoception refers to the physiological sense of the condition of the body, comprising afferent signals that originate within the body and at its borders, such as the skin. However, it remains unclear whether individual sensitivity to interoceptive modalities is unitary or differs between modalities. It is also unclear whether the effect of interoceptive information on body ownership is driven by exteroceptive 'visual capture' of these modalities or by bottom-up processing of interoceptive information. This study tested these questions in two separate samples. In the first experiment (N = 76), we examined the relationship between two interoceptive modalities: cardiac awareness, measured with a heartbeat counting task, and affective touch perception, based on stimulation of the specialized C tactile (CT) afferent system, an interoceptive modality of affective and social significance. In a second experiment (N = 63), we explored whether 'off-line' trait interoceptive sensitivity, measured with a heartbeat counting task, would modulate the extent to which CT affective touch influences multisensory integration during the rubber hand illusion (RHI). We found that affective touch enhanced the subjective experience of body ownership during the RHI. Nevertheless, interoceptive sensitivity did not modulate this effect, nor did it relate to the perception of ownership or of CT-optimal affective touch more generally. By contrast, this trait measure of interoceptive sensitivity appeared most relevant when the multisensory context of interoception was ambiguous, suggesting that the perception of interoceptive signals and their effects on body ownership may depend on individual abilities to regulate the balance of interoception and exteroception in a given context. Peer reviewed. Final Accepted Version.
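    Heartbeat counting tasks of the kind referenced above are conventionally scored with a Schandry-style accuracy formula, the mean over trials of 1 - |recorded - counted| / recorded; a minimal sketch in Python (the trial counts below are illustrative, not the study's data):

    ```python
    import numpy as np

    def heartbeat_counting_accuracy(recorded, counted):
        """Schandry-style interoceptive accuracy.

        recorded: actual heartbeats per counting window (e.g., from ECG)
        counted:  heartbeats silently counted by the participant per window

        Returns the mean per-trial accuracy; values near 1 indicate a
        participant who tracks their own heartbeat closely.
        """
        recorded = np.asarray(recorded, dtype=float)
        counted = np.asarray(counted, dtype=float)
        per_trial = 1.0 - np.abs(recorded - counted) / recorded
        return per_trial.mean()

    # Example: three counting windows (numbers invented for illustration)
    print(heartbeat_counting_accuracy([28, 39, 51], [25, 33, 40]))  # ~0.84
    ```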

    Predictive uncertainty in auditory sequence processing

    Copyright © 2014 Hansen and Pearce. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    Contributions of local speech encoding and functional connectivity to audio-visual speech perception

    Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was predicted not by changes in local encoding but by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments.
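    Temporal entrainment of the kind described here is often quantified as spectral coherence between the speech amplitude envelope and neural activity; the sketch below illustrates the idea on synthetic signals (the 4 Hz "syllable-rate" envelope, signal lengths, and noise model are assumptions for illustration; the study's actual MEG pipeline and directed-connectivity analysis are considerably more involved):

    ```python
    import numpy as np
    from scipy.signal import coherence

    fs = 200.0                       # sampling rate (Hz); illustrative
    t = np.arange(0, 60, 1 / fs)     # 60 s of synthetic data

    # A 4 Hz "syllable-rate" speech envelope, and a noisy neural trace
    # that partially follows it with a phase lag (i.e., entrainment).
    envelope = 1.0 + np.sin(2 * np.pi * 4 * t)
    neural = 0.5 * np.sin(2 * np.pi * 4 * t + 0.8) + np.random.randn(t.size)

    # Magnitude-squared coherence; entrainment appears as a peak near 4 Hz.
    f, cxy = coherence(envelope, neural, fs=fs, nperseg=int(4 * fs))
    print("peak coherence at", f[np.argmax(cxy)], "Hz")
    ```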

    Saliency or template? ERP evidence for long-term representation of word stress

    The present study investigated event-related brain potential (ERP) correlates of word stress processing. Previous results showed that the violation of a legal stress pattern elicited two consecutive Mismatch Negativity (MMN) components, synchronized to the changes on the first and second syllables. The aim of the present study was to test whether ERPs reflect only the detection of salient features present on the syllables, or whether they reflect the activation of long-term, stress-related representations. We examined ERPs elicited by pseudowords with no lexical representation in two conditions: in one, the standard had a legal stress pattern and the deviant an illegal one; in the other, the standard had an illegal stress pattern and the deviant a legal one. We found that deviants with an illegal stress pattern elicited two consecutive MMN components, whereas deviants with a legal stress pattern elicited no MMN. Moreover, pseudowords with a legal stress pattern elicited the same ERP responses irrespective of their role in the oddball sequence, i.e., whether they were standards or deviants. These results suggest that stress pattern changes are processed by relying on long-term representations of word stress. To account for these results, we propose that the processing of stress cues is based on language-specific, pre-lexical stress templates.
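    The MMN itself is conventionally computed as a deviant-minus-standard difference wave over the oddball sequence; a minimal sketch with synthetic epochs (the epoch window, oddball proportions, and effect latency are illustrative assumptions, not the authors' parameters):

    ```python
    import numpy as np

    fs = 500                              # sampling rate (Hz); illustrative
    t = np.arange(-0.1, 0.6, 1 / fs)      # epoch from -100 to 600 ms

    rng = np.random.default_rng(0)
    n_std, n_dev = 400, 100               # typical ~80/20 oddball split

    # Synthetic single-trial epochs; deviants carry an extra negativity
    # peaking around 200 ms, mimicking an MMN.
    std_epochs = rng.normal(0.0, 1.0, (n_std, t.size))
    dev_epochs = rng.normal(0.0, 1.0, (n_dev, t.size))
    dev_epochs -= 2.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))

    # ERP = per-condition average; MMN = deviant ERP minus standard ERP.
    mmn = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)
    window = (t >= 0.15) & (t <= 0.25)
    print("mean amplitude 150-250 ms:", mmn[window].mean())  # clearly negative
    ```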

    Self-Supervised Vision-Based Detection of the Active Speaker as Support for Socially-Aware Language Acquisition

    This paper presents a self-supervised method for visual detection of the active speaker in a multi-person spoken-interaction scenario. Active speaker detection is a fundamental prerequisite for any artificial cognitive system attempting to acquire language in social settings. The proposed method is intended to complement acoustic detection of the active speaker, thus improving system robustness in noisy conditions. It can detect an arbitrary number of possibly overlapping active speakers based exclusively on visual information about their faces. Furthermore, the method does not rely on external annotations, consistent with the constraints of cognitive development; instead, it uses information from the auditory modality to supervise learning in the visual domain. The paper reports an extensive evaluation of the proposed method on a large multi-person face-to-face interaction dataset. The results show good performance in a speaker-dependent setting, but significantly lower performance in a speaker-independent setting. We believe the proposed method represents an essential component of any artificial cognitive system or robotic platform engaging in social interactions.

    Comment: 10 pages, IEEE Transactions on Cognitive and Developmental Systems.
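    The core self-supervision idea, using the auditory modality to generate training targets for the visual model, can be illustrated with a toy pipeline in which per-frame voice-activity pseudo-labels are derived from audio energy (the function name, the RMS-energy heuristic, and the threshold are assumptions for illustration, not the paper's method):

    ```python
    import numpy as np

    def audio_pseudo_labels(audio, fs, video_fps, energy_quantile=0.6):
        """Per-video-frame 'speaking' pseudo-labels from audio energy.

        A crude voice-activity proxy: a frame is labelled active when the
        RMS energy of its aligned audio slice exceeds a quantile threshold.
        """
        hop = int(fs / video_fps)            # audio samples per video frame
        n_frames = len(audio) // hop
        rms = np.array([np.sqrt(np.mean(audio[i * hop:(i + 1) * hop] ** 2))
                        for i in range(n_frames)])
        return (rms > np.quantile(rms, energy_quantile)).astype(int)

    # Toy usage: 4 s of noise whose amplitude alternates quiet/loud per second.
    fs, fps = 16000, 25
    audio = np.random.randn(4 * fs) * np.repeat([0.1, 1.0, 0.1, 1.0], fs)
    labels = audio_pseudo_labels(audio, fs, fps)
    print(labels.reshape(4, -1).mean(axis=1))   # roughly [0, 1, 0, 1]
    ```

    Labels of this kind would then supervise a visual classifier on face crops, which is the sense in which the auditory modality bootstraps learning in the visual domain without external annotation.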

    Listeners normalize speech for contextual speech rate even without an explicit recognition task

    Speech can be produced at different rates. Listeners take this rate variation into account by normalizing vowel duration for contextual speech rate: a Dutch word with a vowel ambiguous between short /A/ and long /a:/ is perceived as short /mAt/ when embedded in a slow context, but as long /ma:t/ in a fast context. Whilst some have argued that this rate normalization involves low-level, automatic perceptual processing, there is also evidence that it arises at higher-level cognitive processing stages, such as decision making. Prior research on rate-dependent speech perception has used only explicit recognition tasks to investigate the phenomenon, tasks that involve both perceptual processing and decision making. This study tested whether speech rate normalization can be observed without explicit decision making, using a cross-modal repetition priming paradigm. Results show that a fast precursor sentence makes an embedded ambiguous prime sound (implicitly) more /a:/-like, facilitating lexical access to the long target word "maat" in an (explicit) lexical decision task. This result suggests that rate normalization is automatic, taking place even in the absence of an explicit recognition task. Rate normalization is thus placed within the realm of everyday spoken conversation, where explicit categorization of ambiguous sounds is rare.
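    The normalization logic can be written down compactly: the same physical vowel duration is judged relative to the context's rate, so a fast context pushes the percept toward long /a:/ and a slow context toward short /A/; a toy sketch (the boundary ratio and durations are invented for illustration):

    ```python
    def perceived_category(vowel_ms, context_syllable_ms, ratio_boundary=0.85):
        """Classify an ambiguous Dutch vowel as short /A/ or long /a:/.

        Duration is judged relative to the context's mean syllable duration:
        the identical vowel sounds long after a fast context and short after
        a slow one. The boundary ratio is illustrative, not an empirical value.
        """
        relative = vowel_ms / context_syllable_ms
        return "/ma:t/ (long)" if relative > ratio_boundary else "/mAt/ (short)"

    ambiguous = 160  # ms; the same token in both contexts
    print(perceived_category(ambiguous, context_syllable_ms=140))  # fast -> long
    print(perceived_category(ambiguous, context_syllable_ms=260))  # slow -> short
    ```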

    Production and perception of speaker-specific phonetic detail at word boundaries

    Experiments show that learning about familiar voices affects speech processing in many tasks. However, most studies focus on isolated phonemes or words and do not explore which phonetic properties are learned about or retained in memory. This work investigated inter-speaker phonetic variation involving word boundaries, and its perceptual consequences. A production experiment found significant variation in the extent to which speakers used a number of acoustic properties to distinguish junctural minimal pairs, e.g. 'So he diced them' vs. 'So he'd iced them'. A perception experiment then tested intelligibility in noise of the junctural minimal pairs before and after familiarisation with a particular voice. Subjects who heard the same voice during testing as during the familiarisation period showed significantly more improvement in identification of words and syllable constituents around word boundaries than those who heard different voices. These data support the view that perceptual learning about the particular pronunciations associated with individual speakers helps listeners to identify syllabic structure and the location of word boundaries.
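    The perception experiment's key comparison is a difference of gains: pre-to-post-familiarisation improvement in identification, contrasted between same-voice and different-voice groups; a sketch of that analysis (all proportions below are invented placeholders, not the study's results):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Invented per-subject proportions correct on junctural minimal pairs
    # in noise, before and after familiarisation.
    same_pre = rng.uniform(0.45, 0.55, 20)            # same voice at test
    same_post = same_pre + rng.uniform(0.10, 0.20, 20)
    diff_pre = rng.uniform(0.45, 0.55, 20)            # different voice at test
    diff_post = diff_pre + rng.uniform(0.02, 0.08, 20)

    gain_same = same_post - same_pre
    gain_diff = diff_post - diff_pre
    print("same-voice gain:", gain_same.mean())
    print("different-voice gain:", gain_diff.mean())
    print("familiarisation advantage:", gain_same.mean() - gain_diff.mean())
    ```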