61 research outputs found

    Phonetic recalibration does not depend on working memory

    Listeners use lipread information to adjust the phonetic boundary between two speech categories (phonetic recalibration, Bertelson et al. 2003). Here, we examined phonetic recalibration while listeners were engaged in a visuospatial or verbal working memory task under different memory load conditions. Phonetic recalibration, like selective speech adaptation, was not affected by a concurrent verbal or visuospatial memory task. This result indicates that phonetic recalibration is a low-level process that does not critically depend on processes used in verbal or visuospatial working memory.

    The multimodal trust effects of face, voice, and sentence content

    Trust is an aspect critical to human social interaction, and research has identified many cues that help in the assimilation of this social trait. Two of these cues are the pitch of the voice and the width-to-height ratio of the face (fWHR). Additionally, research has indicated that the content of a spoken sentence itself has an effect on trustworthiness, a finding that has not yet been brought into multisensory research. The current research aims to investigate previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness extending into multimodality. Further, the mean pitch of the voice and the fWHR of the face appeared to be useful indicators in a multimodal setting. These effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.

    No “Self” Advantage for Audiovisual Speech Aftereffects

    Published: 22 March 2019. Although the default state of the world is that we see and hear other people talking, there is evidence that seeing and hearing ourselves rather than someone else may lead to visual (i.e., lip-read) or auditory “self” advantages. We assessed whether there is a “self” advantage for phonetic recalibration (a lip-read driven cross-modal learning effect) and selective adaptation (a contrastive effect in the opposite direction of recalibration). We observed both aftereffects as well as an on-line effect of lip-read information on auditory perception (i.e., immediate capture), but there was no evidence for a “self” advantage in any of the tasks (as additionally supported by Bayesian statistics). These findings strengthen the emerging notion that recalibration reflects a general learning mechanism, and bolster the argument that adaptation depends on rather low-level auditory/acoustic features of the speech signal. This work was supported by the Severo Ochoa program grant SEV-2015-049 awarded to the BCBL. MB and MP were supported by the Spanish Ministry of Economy and Competitiveness (MINECO, grant PSI2014-51874-P), and MB was also supported by the Netherlands Organization for Scientific Research (NWO, VENI grant 275-89-027).

    Semantic priming effects can be modulated by crosslinguistic interactions during second-language auditory word recognition

    Published online by Cambridge University Press: 24 February 2020. The current study investigates how second language auditory word recognition, in early and highly proficient Spanish–Basque (L1-L2) bilinguals, is influenced by crosslinguistic phonological-lexical interactions and semantic priming. Phonological overlap between a word and its translation equivalent (phonological cognate status), and semantic relatedness of a preceding prime were manipulated. Experiment 1 examined word recognition performance in noisy listening conditions that introduce a high degree of uncertainty, whereas Experiment 2 employed clear listening conditions, with low uncertainty. Under noisy listening conditions, semantic priming effects interacted with phonological cognate status: for word recognition accuracy, a related prime overcame inhibitory effects of phonological overlap between target words and their translations. These findings are consistent with models of bilingual word recognition that incorporate crosslinguistic phonological-lexical-semantic interactions. Moreover, they suggest an interplay between L2-L1 interactions and the integration of information across acoustic and semantic levels of processing in flexibly mapping the speech signal onto spoken words under adverse listening conditions. This research was funded by the Spanish Ministry of Science and Innovation (Grant PSI2017-82563-P, awarded to A.G.S.), the Netherlands Organization for Scientific Research (NWO Veni grant 275-89-027, awarded to M.B.), the Basque Government through the BERC 2018-2021 program, the Spanish State Agency Severo Ochoa excellence accreditation SEV-2015-0490 (Programme for Centres/Units of Excellence, awarded to the BCBL), and the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 799554.

    Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception

    Accepted manuscript online: 4 October 2017. Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. MB was supported by the Spanish Ministry of Economy and Competitiveness (MINECO grant FPDI-2013-15661) and the Netherlands Organization for Scientific Research (NWO VENI grant 275-89-027).
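
    The abstract above does not spell out the ERP pipeline. As a rough, hedged illustration of how visual speech-induced P2 suppression could be quantified, the sketch below uses MNE-Python; the epochs file, condition labels ("A", "AV_congruent", "AV_fusion", "AV_combination"), channels, and time window are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the authors' pipeline): quantify P2 suppression as the
# difference in mean amplitude (180-260 ms, fronto-central channels) between an
# auditory-only condition and audiovisual conditions. Names and file are hypothetical.
import mne

epochs = mne.read_epochs("sub01_speech-epo.fif")  # assumed preprocessed epochs

def p2_amplitude(evoked, tmin=0.18, tmax=0.26, chs=("Fz", "FCz", "Cz")):
    """Mean amplitude in the P2 window over fronto-central channels, in microvolts."""
    ev = evoked.copy().pick(list(chs)).crop(tmin, tmax)
    return ev.data.mean() * 1e6

p2_auditory = p2_amplitude(epochs["A"].average())
for cond in ("AV_congruent", "AV_fusion", "AV_combination"):
    suppression = p2_auditory - p2_amplitude(epochs[cond].average())
    print(f"P2 suppression for {cond}: {suppression:.2f} uV")
```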

    Fluidity in the perception of auditory speech: Cross-modal recalibration of voice gender and vowel identity by a talking face

    Article first published online: January 13, 2020. Humans quickly adapt to variations in the speech signal. Adaptation may surface as recalibration, a learning effect driven by error-minimisation between a visual face and an ambiguous auditory speech signal, or as selective adaptation, a contrastive aftereffect driven by the acoustic clarity of the sound. Here, we examined whether these aftereffects occur for vowel identity and voice gender. Participants were exposed to male, female, or androgynous tokens of speakers pronouncing /e/ or /ø/ (embedded in words with a consonant-vowel-consonant structure), or to an ambiguous vowel halfway between /e/ and /ø/ dubbed onto the video of a male or female speaker pronouncing /e/ or /ø/. For both voice gender and vowel identity, we found assimilative aftereffects after exposure to auditory ambiguous adapter sounds, and contrastive aftereffects after exposure to auditory clear adapter sounds. This demonstrates that similar principles of adaptation are at play in these two dimensions. The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this research was supported by Gravitation Grant 024.001.006 of the Language in Interaction Consortium from the Netherlands Organization for Scientific Research. The third author was supported by the Netherlands Organization for Scientific Research (NWO: VENI Grant 275-89-027).

    Implementing EEG hyperscanning setups.

    Hyperscanning refers to obtaining simultaneous neural recordings from more than one person (Montague et al., 2002 [1]), which can be used to study interactive situations. In particular, hyperscanning with electroencephalography (EEG) is becoming increasingly popular, since it allows researchers to explore the interactive brain with high temporal resolution. Notably, there is a 40-year gap between the first mention of simultaneous measurement of EEG activity in the literature (Duane and Behrendt, 1965 [2]) and the first actual description of an implemented EEG hyperscanning setup (Babiloni et al., 2006 [3]). To date, specific EEG hyperscanning devices have not yet been developed, and EEG hyperscanning setups are not usually described in sufficient detail to be easily reproduced. Here, we offer a step-by-step description of solutions to many of these technological challenges. Specifically, we describe and provide customized implementations of EEG hyperscanning setups using hardware and software from different companies: Brain Products, ANT, EGI, and BioSemi.
    • The necessary details to set up a functioning EEG hyperscanning protocol are provided.
    • The setups allow independent measures as well as measures of synchronization between the signals of two different brains.
    • An individual electrical ground and reference are obtained in all of the discussed systems.
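
    The entry above describes vendor-specific setups (Brain Products, ANT, EGI, BioSemi). As a generic, hedged illustration of the core requirement, simultaneous acquisition on a shared clock, here is a minimal sketch assuming both amplifiers expose their data as Lab Streaming Layer (LSL) streams readable with pylsl; stream properties and the loop length are hypothetical, and the paper's own implementations may differ.

```python
# Minimal sketch (assumes LSL/pylsl streams, not necessarily the vendor setups above):
# pull samples from two EEG streams and align them on the shared LSL clock.
from pylsl import StreamInlet, resolve_byprop

# Wait until at least two EEG streams (one per participant) are visible on the network.
streams = resolve_byprop("type", "EEG", minimum=2, timeout=30.0)
inlets = [StreamInlet(s, max_buflen=60) for s in streams[:2]]

# Per-inlet clock offsets map each device's timestamps onto the common LSL time base.
offsets = [inlet.time_correction() for inlet in inlets]

recordings = [[], []]  # one sample buffer per participant
for _ in range(1000):  # fixed-length acquisition loop, just for the example
    for i, (inlet, offset) in enumerate(zip(inlets, offsets)):
        sample, timestamp = inlet.pull_sample(timeout=1.0)
        if sample is not None:
            # Store the sample with its timestamp on the shared clock, so the two
            # participants' data can later be aligned for synchrony analyses.
            recordings[i].append((timestamp + offset, sample))
```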