
    Cross-Modal Prediction in Speech Perception

    Speech perception often benefits from seeing the speaker's lip movements when they are available. One potential mechanism underlying this gain from audio-visual integration is on-line prediction. In this study we ask whether preceding speech context in a single modality can improve audiovisual processing, and whether any such improvement relies on on-line information transfer across sensory modalities. In the experiments presented here, on each trial a speech fragment (context) presented in a single sensory modality (voice or lips) was immediately continued by an audiovisual target fragment. Participants made speeded judgments about whether voice and lips were in agreement in the target fragment. The leading unimodal context and the subsequent audiovisual target could be continuous in one modality only, in both modalities (the context in one modality continues into both modalities of the target), or in neither (i.e., fully discontinuous). The results showed faster audiovisual matching responses when the context was continuous with the target within either the visual or the auditory channel (Experiment 1). Critically, prior visual context also provided an advantage when it was cross-modally continuous (with the auditory channel in the target), whereas auditory-to-visual cross-modal continuity yielded no advantage (Experiment 2). This suggests that visual speech information can provide an on-line benefit for processing the upcoming auditory input through predictive mechanisms. We hypothesize that this benefit is expressed at an early level of speech analysis.
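    To make the continuity design concrete, the sketch below codes a handful of hypothetical trials by context modality and by which target streams continue that context, then compares mean reaction times. The condition labels and RT values are invented for illustration; this is not the authors' data or analysis.

```python
# Illustrative coding of the continuity manipulation (made-up trials and RTs).
from statistics import mean

# Each trial: the unimodal context modality and which target streams continue it.
trials = [
    {"context": "lips",  "continuous_in": {"visual", "auditory"}, "rt_ms": 612},
    {"context": "lips",  "continuous_in": {"visual"},             "rt_ms": 655},
    {"context": "lips",  "continuous_in": {"auditory"},           "rt_ms": 640},  # cross-modal
    {"context": "voice", "continuous_in": {"auditory"},           "rt_ms": 650},
    {"context": "voice", "continuous_in": {"visual"},             "rt_ms": 702},  # cross-modal
    {"context": "voice", "continuous_in": set(),                  "rt_ms": 710},  # discontinuous
]

def mean_rt(predicate):
    """Average RT over trials satisfying the given condition."""
    rts = [t["rt_ms"] for t in trials if predicate(t)]
    return mean(rts) if rts else float("nan")

# Schematic version of the key contrasts: any continuity vs. full discontinuity,
# and visual-to-auditory vs. auditory-to-visual cross-modal continuity.
print("continuous (any):          ", mean_rt(lambda t: t["continuous_in"]))
print("discontinuous:             ", mean_rt(lambda t: not t["continuous_in"]))
print("lips context -> auditory:  ", mean_rt(lambda t: t["context"] == "lips" and t["continuous_in"] == {"auditory"}))
print("voice context -> visual:   ", mean_rt(lambda t: t["context"] == "voice" and t["continuous_in"] == {"visual"}))
```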

    The Gender Congruency Effect across languages in bilinguals: A meta-analysis

    In the study of gender representation and processing in bilinguals, two contrasting perspectives exist: the integrated view vs. the autonomous view (Costa, Kovacic, Fedorenko, & Caramazza, 2003). In the former, cross-linguistic interactions during the selection of grammatical gender values are expected; in the latter, they are not. To address this issue, authors have typically explored the cross-linguistic Gender Congruency Effect (GCE: facilitation in the naming or translation of second-language [L2] nouns when their first-language [L1] translations have the same gender, compared with nouns whose translations have a different gender). However, the literature suggests that this effect is sometimes difficult to observe and may vary as a function of variables such as the syntactic structure produced to translate or name the target (bare nouns vs. noun phrases), the phonological gender transparency of both languages (whether or not they have phonological gender cues associated with the ending letter [e.g., "-a" for feminine words and "-o" for masculine words in Romance languages]), the degree of L2 proficiency, and task requirements (naming vs. translation). The aim of the present quantitative meta-analysis is to examine the robustness of the cross-linguistic GCE obtained during language production. It involves 25 experiments from 11 studies. The results support a bilingual gender-integrated view, in that they show a small but significant GCE regardless of the variables mentioned above. This work was funded by the FCT (Foundation for Science and Technology, Portugal) through the state budget (reference IF/00784/2013/CP1158/CT0013), and partially supported by the FCT and the Portuguese Ministry of Science, Technology and Higher Education through national funds, co-financed by FEDER through COMPETE2020 under the PT2020 Partnership Agreement (POCI-01-0145-FEDER-007653). It was also supported by the Government of Spain (Ministry of Education, Culture and Sports) through the Training Program for Academic Staff (Ayudas para la Formación del Profesorado Universitario, FPU grant BOE-B-2017-2646), the research project PSI2015-65116-P granted by the Spanish Ministry of Economy and Competitiveness, and the grant for research groups (reference ED431B 2019/2020) from the Galician Government.
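    As a rough illustration of the kind of pooling a quantitative meta-analysis performs, the sketch below computes a random-effects (DerSimonian-Laird) pooled effect size from made-up per-experiment effects and variances. It is not the authors' analysis, and all numbers are placeholders.

```python
# Minimal random-effects meta-analysis sketch (DerSimonian-Laird); placeholder data.
import numpy as np

# Hypothetical per-experiment standardized effect sizes (d) and their sampling variances.
effects = np.array([0.12, 0.30, -0.05, 0.22, 0.18])
variances = np.array([0.020, 0.035, 0.028, 0.015, 0.040])

# Fixed-effect weights, then the DL estimate of between-study variance tau^2.
w = 1.0 / variances
fixed_mean = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed_mean) ** 2)          # Cochran's Q
df = len(effects) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled estimate, and a 95% confidence interval.
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled d = {pooled:.3f}, 95% CI = [{pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f}], tau^2 = {tau2:.3f}")
```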

    Event-Related Potentials Reveal Rapid Verification of Predicted Visual Input

    Human information processing depends critically on continuous predictions about upcoming events, but the temporal convergence of expectancy-based top-down and input-driven bottom-up streams is poorly understood. We show that, during reading, event-related potentials to highly predictable and to unpredictable words differ no later than 90 ms after visual input. This result suggests an extremely rapid comparison of expected and incoming visual information and gives an upper temporal bound for theories of top-down and bottom-up interactions in object recognition.
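    For illustration only, the sketch below shows one simple way to locate the earliest post-stimulus time point at which two ERP condition averages diverge, using synthetic single-channel epochs and uncorrected point-wise t-tests. This is not the authors' pipeline, and the 90 ms latency reported above is their empirical result, not something this code derives.

```python
# Locate the earliest divergence between two ERP conditions (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_samples, sfreq = 40, 300, 500       # 300 samples at 500 Hz = 600 ms epoch
times = np.arange(n_samples) / sfreq * 1000.0   # milliseconds from stimulus onset

# Synthetic single-channel epochs: unpredictable words get a small extra deflection.
predictable = rng.normal(0, 1, (n_trials, n_samples))
unpredictable = rng.normal(0, 1, (n_trials, n_samples))
unpredictable[:, times > 100] += 0.8

# Point-wise t-tests across the epoch (uncorrected, purely for illustration).
t_vals, p_vals = stats.ttest_ind(predictable, unpredictable, axis=0)
sig = np.where(p_vals < 0.05)[0]
if sig.size:
    print(f"earliest uncorrected divergence at ~{times[sig[0]]:.0f} ms")
```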