
    Conflicting constraints in resource-adaptive language comprehension

    The primary goal of psycholinguistic research is to understand the architectures and mechanisms that underlie human language comprehension and production. This entails an understanding of how linguistic knowledge is represented and organized in the brain and a theory of how that knowledge is accessed when we use language. Research has traditionally emphasized purely linguistic aspects of on-line comprehension, such as the influence of lexical, syntactic, semantic and discourse constraints, and their time-course. It has become increasingly clear, however, that nonlinguistic information, such as the visual environment, is also actively exploited by situated language comprehenders.

    Different mechanisms for role relations versus verb-action congruence effects: Evidence from ERPs in picture-sentence verification

    Knoeferle P, Urbach TP, Kutas M. Different mechanisms for role relations versus verb-action congruence effects: Evidence from ERPs in picture-sentence verification. Acta Psychologica. 2014;152:133-148.
    Extant accounts of visually situated language processing do make general predictions about visual context effects on incremental sentence comprehension; these, however, are not sufficiently detailed to accommodate potentially different visual context effects (such as a scene-sentence mismatch based on actions versus thematic role relations; e.g., Altmann & Kamide, 2007; Knoeferle & Crocker, 2007; Taylor & Zwaan, 2008; Zwaan & Radvansky, 1998). To provide additional data for theory testing and development, we collected event-related brain potentials (ERPs) as participants read a subject-verb-object sentence (500 ms SOA in Experiment 1 and 300 ms SOA in Experiment 2), and post-sentence verification times indicating whether or not the verb and/or the thematic role relations matched a preceding picture (depicting two participants engaged in an action). Though incrementally processed, these two types of mismatch yielded different ERP effects. Role-relation mismatch effects emerged at the subject noun as anterior negativities to the mismatching noun, preceding action mismatch effects manifest as centro-parietal N400s that were greater to the mismatching verb, regardless of SOA. The two types of mismatch manipulation also yielded different effects post-verbally, correlated differently with participants' mean accuracy, verbal working memory, and visual-spatial scores, and differed in their interactions with SOA. Taken together, these results clearly implicate more than a single mismatch mechanism that extant accounts of picture-sentence processing must accommodate.

    Grounding Word Learning in Space

    Humans and objects, and thus social interactions about objects, exist within space. Words direct listeners' attention to specific regions of space. Thus, a strong correspondence exists between where one looks, one's bodily orientation, and what one sees. This leads to further correspondence with what one remembers. Here, we present data suggesting that children use associations between space and objects and between space and words to link words and objects: space binds labels to their referents. We tested this claim in four experiments, showing that the spatial consistency of where objects are presented affects children's word learning. Next, we demonstrate that a process model that grounds word learning in the known neural dynamics of spatial attention, spatial memory, and associative learning can capture the suite of results reported here. This model also predicts that space is special, a prediction supported in a fifth experiment showing that children do not use color as a cue to bind words and objects. In a final experiment, we ask whether spatial consistency affects word learning in naturalistic word-learning contexts. Children of parents who spontaneously keep objects in a consistent spatial location during naming interactions learn words more effectively. Together, the model and data show that space is a powerful tool that can effectively ground word learning in social contexts.
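    The binding claim lends itself to a toy illustration: if each naming event strengthens both word-location and object-location associations, a word becomes linked to its referent through the location they share. The Python sketch below is a deliberately minimal stand-in under assumed sizes and an assumed learning rule, not the dynamic process model the authors describe.

        # Toy sketch (assumed sizes and learning rule; not the authors' process model):
        # words and objects are each associated with the location of the naming event,
        # and word-object links emerge from the shared spatial association.
        import numpy as np

        N_WORDS = N_OBJECTS = N_LOCATIONS = 3
        word_loc = np.zeros((N_WORDS, N_LOCATIONS))    # word -> location associations
        obj_loc = np.zeros((N_OBJECTS, N_LOCATIONS))   # object -> location associations

        def naming_event(word, obj, location, strength=1.0):
            # Hebbian-style update: strengthen word-location and object-location links.
            word_loc[word, location] += strength
            obj_loc[obj, location] += strength

        # Spatially consistent naming: each object is always named in its own location.
        for _ in range(5):
            for i in range(N_OBJECTS):
                naming_event(word=i, obj=i, location=i)

        # Binding through space: each word ends up most strongly tied to its referent.
        word_obj = word_loc @ obj_loc.T
        print(word_obj.argmax(axis=1))    # -> [0 1 2]

    With spatially inconsistent naming (random locations across events), the resulting word-object matrix is far less selective, which loosely mirrors the consistency effect reported above.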

    Integrating Mechanisms of Visual Guidance in Naturalistic Language Production

    Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study demonstrating that three types of guidance (perceptual, conceptual, and structural) interact to control visual attention. In a cued language production experiment, we manipulate perceptual guidance (scene clutter) and conceptual guidance (cue animacy), and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan-pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
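    To make the first of these measures concrete, the entropy of an attentional landscape can be computed by binning fixation coordinates into a spatial grid, normalizing the counts into a probability distribution, and taking its Shannon entropy; attention spread over many scene regions then yields higher values than attention clustered on a single object. The sketch below is illustrative only, with assumed grid size, screen dimensions, and function name rather than the study's actual analysis settings.

        # Illustrative sketch (assumed parameters): Shannon entropy of an attentional
        # landscape built by binning fixation coordinates into a spatial grid.
        import numpy as np

        def attentional_landscape_entropy(fix_x, fix_y, screen_w=1024, screen_h=768, bins=16):
            # Histogram fixations over a bins x bins grid covering the screen.
            counts, _, _ = np.histogram2d(fix_x, fix_y, bins=bins,
                                          range=[[0, screen_w], [0, screen_h]])
            p = counts.flatten()
            p = p / p.sum()          # probability of attending each grid cell
            p = p[p > 0]             # empty cells contribute nothing (0 * log 0 = 0)
            return float(-(p * np.log2(p)).sum())   # entropy in bits

        # Fixations clustered on one object give lower entropy than fixations
        # distributed over the whole scene.
        rng = np.random.default_rng(0)
        clustered = rng.normal([300.0, 400.0], 20.0, size=(50, 2))
        spread = rng.uniform([0.0, 0.0], [1024.0, 768.0], size=(50, 2))
        print(attentional_landscape_entropy(clustered[:, 0], clustered[:, 1]))  # smaller
        print(attentional_landscape_entropy(spread[:, 0], spread[:, 1]))        # larger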

    The interaction of visual and linguistic saliency during syntactic ambiguity resolution

    Psycholinguistic research using the visual world paradigm has shown that the processing of sentences is constrained by the visual context in which they occur. Recently, there has been growing interest in the interactions observed when both language and vision provide relevant information during sentence processing. In three visual world experiments on syntactic ambiguity resolution, we investigate how visual and linguistic information influence the interpretation of ambiguous sentences. We hypothesize that (1) visual and linguistic information both constrain which interpretation is pursued by the sentence processor, and (2) the two types of information act upon the interpretation of the sentence at different points during processing. In Experiment 1, we show that visual saliency is used to anticipate the upcoming arguments of a verb. In Experiment 2, we operationalize linguistic saliency using intonational breaks and demonstrate that these give prominence to linguistic referents. These results confirm prediction (1). In Experiment 3, we manipulate visual and linguistic saliency together and find that both types of information are used, but at different points in the sentence, to incrementally update its current interpretation. This finding is consistent with prediction (2). Overall, our results suggest an adaptive processing architecture in which different types of information are used as they become available, optimizing different aspects of situated language processing.