23 research outputs found

    Individual differences in subphonemic sensitivity and phonological skills

    Many studies have established a link between phonological abilities (indexed by phonological awareness and phonological memory tasks) and typical and atypical reading development. Individuals who perform poorly on phonological assessments have mostly been assumed to have underspecified (or “fuzzy”) phonological representations, with typical phonemic categories, but with greater category overlap due to imprecise encoding. An alternative posits that poor readers have overspecified phonological representations, with speech sounds perceived allophonically (phonetically distinct variants of a single phonemic category). On both accounts, mismatch between phonological categories and orthography leads to reading difficulty. Here, we consider the implications of these accounts for online speech processing. We used eye tracking and an individual differences approach to assess sensitivity to subphonemic detail in a community sample of young adults with a wide range of reading-related skills. Subphonemic sensitivity inversely correlated with meta-phonological task performance, consistent with overspecification.
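
    The abstract does not spell out how subphonemic sensitivity was quantified, so the following is only a minimal sketch of the individual-differences logic: it assumes sensitivity is summarized per participant as the slope of competitor fixations across a within-category voice-onset-time (VOT) continuum, then correlated with a composite phonological score. All names and data below are illustrative, not the study's measures.

```python
# Illustrative sketch only: sensitivity index = per-participant slope of
# competitor fixations across within-category VOT steps, correlated with a
# meta-phonological composite. Data are simulated placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_participants = 60

# Fixation proportions to a competitor at 5 within-category VOT steps
# (rows: participants, columns: steps).
fixations = rng.uniform(0.1, 0.5, size=(n_participants, 5))
vot_steps = np.arange(5)

# Subphonemic sensitivity: slope of fixations over VOT steps, per participant.
sensitivity = np.polyfit(vot_steps, fixations.T, deg=1)[0]

# Composite score from phonological awareness / memory tasks (placeholder).
phon_skill = rng.normal(0.0, 1.0, n_participants)

r, p = pearsonr(sensitivity, phon_skill)
print(f"r = {r:.2f}, p = {p:.3f}")  # overspecification predicts a negative r
```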

    The interaction of visual and linguistic saliency during syntactic ambiguity resolution

    Psycholinguistic research using the visual world paradigm has shown that the processing of sentences is constrained by the visual context in which they occur. Recently, there has been growing interest in the interactions observed when both language and vision provide relevant information during sentence processing. In three visual world experiments on syntactic ambiguity resolution, we investigate how visual and linguistic information influence the interpretation of ambiguous sentences. We hypothesize that (1) visual and linguistic information both constrain which interpretation is pursued by the sentence processor, and (2) the two types of information act upon the interpretation of the sentence at different points during processing. In Experiment 1, we show that visual saliency is utilized to anticipate the upcoming arguments of a verb. In Experiment 2, we operationalize linguistic saliency using intonational breaks and demonstrate that these give prominence to linguistic referents. These results confirm prediction (1). In Experiment 3, we manipulate visual and linguistic saliency together and find that both types of information are used, but at different points in the sentence, to incrementally update its current interpretation. This finding is consistent with prediction (2). Overall, our results suggest an adaptive processing architecture in which different types of information are used when they become available, optimizing different aspects of situated language processing.

    Integrating Mechanisms of Visual Guidance in Naturalistic Language Production

    Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy), and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan-pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with their latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
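
    The "entropy of attentional landscapes" measure lends itself to a short illustration. Below is a minimal sketch, assuming the landscape is a fixation-probability map over a coarse grid of scene regions; the paper's actual gridding and smoothing are not specified here.

```python
# Shannon entropy of a fixation distribution over an n_bins x n_bins grid.
# Assumption (not from the abstract): the attentional landscape is a simple
# histogram of fixation locations; the paper may smooth or weight fixations.
import numpy as np

def attentional_entropy(fix_x, fix_y, width, height, n_bins=8):
    """Entropy (bits) of the fixation distribution over the scene grid."""
    counts, _, _ = np.histogram2d(
        fix_x, fix_y, bins=n_bins, range=[[0, width], [0, height]]
    )
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                      # 0 * log(0) is taken to be 0
    return float(-(p * np.log2(p)).sum())

# Higher entropy = attention spread over more regions (e.g., in a cluttered
# scene); lower entropy = attention focused on few regions.
xs = np.random.uniform(0, 800, 200)
ys = np.random.uniform(0, 600, 200)
print(attentional_entropy(xs, ys, 800, 600))
```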

    An ear and eye for language: Mechanisms underlying second language word learning

    To become fluent in a second language, learners need to acquire a large vocabulary. However, the cognitive and affective mechanisms that support word learning, particularly among second language learners, are only beginning to be understood. Prior research has focused on intentional learning and small artificial lexicons. In the current study investigating the sources of individual variability in word learning and their underlying mechanisms, participants intentionally and incidentally learned a large vocabulary of Welsh words (i.e., emulating word learning in the wild) and completed a large battery of cognitive and affective measures. The results showed that, for both learning conditions, native language knowledge, auditory/phonological abilities, and orthographic sensitivity all made unique contributions to word learning. Importantly, short-term/working memory played a significantly larger role in intentional learning. We discuss these results in the context of the mechanisms that support both native and non-native language learning.
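
    A brief sketch of what "unique contributions" means operationally: each predictor's effect on word-learning scores is estimated in a multiple regression while controlling for the other predictors. Predictor names and data below are illustrative placeholders, not the study's actual measures.

```python
# Multiple regression: a predictor makes a "unique contribution" if its
# coefficient remains significant with the other predictors in the model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
X = rng.normal(size=(n, 4))  # placeholders: L1 knowledge, phonological
                             # ability, orthographic sensitivity, working memory
y = X @ np.array([0.3, 0.4, 0.25, 0.2]) + rng.normal(scale=0.5, size=n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())  # significant coefficients = unique contributions
```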

    The time course of anticipatory constraint integration

    Several studies have demonstrated that as listeners hear sentences describing events in a scene, their eye movements anticipate upcoming linguistic items predicted by the unfolding relationship between scene and sentence. While this may reflect active prediction based on structural or contextual expectations, the influence of local thematic priming between words has not been fully examined. In Experiment 1, we presented verbs (e.g., arrest) in active (Subject-Verb-Object) sentences with displays containing verb-related patients (e.g., crook) and agents (e.g., policeman). We examined patient and agent fixations following the verb, after the agent role had been filled by another entity, but prior to bottom-up specification of the object. Participants were nearly as likely to fixate agents "anticipatorily" as patients, even though the agent role was already filled. However, the patient advantage suggested simultaneous influences of both local priming and active prediction. In Experiment 2, using passive sentences (Object-Verb-Subject), we found stronger, but still graded, influences of role prediction when more time elapsed between verb and target, and more syntactic cues were available. We interpret anticipatory fixations as emerging from constraint-based processes that involve both non-predictive thematic priming and active prediction.
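
    A toy sketch of the constraint-based interpretation, assuming anticipatory fixation probabilities arise as a weighted blend of non-predictive thematic priming (verb–entity association) and active role prediction (fit to the still-open argument role). Weights and cue values are invented for illustration, not taken from the study.

```python
# Anticipatory fixations as a normalized weighted blend of two constraints.
import numpy as np

entities = ["crook", "policeman", "distractor"]
priming  = np.array([0.9, 0.8, 0.1])  # association with the verb "arrest"
role_fit = np.array([0.9, 0.1, 0.1])  # patient role still open; agent filled

def fixation_probs(w_priming, w_prediction):
    score = w_priming * priming + w_prediction * role_fit
    return score / score.sum()

# Early on, priming dominates (agents draw looks despite the filled role);
# as syntactic cues accrue, prediction gains weight and the patient wins out.
for w_pred in (0.2, 0.8):
    print(dict(zip(entities, fixation_probs(1 - w_pred, w_pred).round(2))))
```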

    The real-time prediction and inhibition of linguistic outcomes: Effects of language and literacy skill

    Recent studies have found considerable individual variation in language comprehenders’ predictive behaviors, as revealed by their anticipatory eye movements during language comprehension. The current study investigated the relationship between these predictive behaviors and the language and literacy skills of a diverse, community-based sample of young adults. We found that rapid automatized naming (RAN) was a key determinant of comprehenders’ prediction ability (e.g., as reflected in predictive eye movements to a WHITE CAKE on hearing “The boy will eat the white…”). Simultaneously, comprehension-based measures predicted participants’ ability to inhibit eye movements to objects that shared features with predictable referents but were implausible completions (e.g., as reflected in eye movements to a white but inedible WHITE CAR). These findings suggest that the excitatory and inhibitory mechanisms that support prediction during language processing are closely linked with specific cognitive abilities that support literacy. We show that a self-organizing cognitive architecture captures this pattern of results.
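
    A toy sketch of the kind of self-organizing dynamic gestured at here: candidate referents receive excitatory support from predictive cues while suppressing one another through lateral inhibition, so feature-matching but implausible competitors (the WHITE CAR) are gradually driven down. This is not the authors' model; parameters and cue values are invented.

```python
# Excitatory support plus lateral inhibition over candidate referents.
import numpy as np

cues = np.array([1.0, 0.6, 0.1])   # WHITE CAKE, WHITE CAR, unrelated object
act = np.full(3, 1 / 3)            # initial activations (uniform)

excitation, inhibition, dt = 1.0, 1.2, 0.1
for step in range(50):
    support = excitation * cues                  # bottom-up/predictive input
    suppress = inhibition * (act.sum() - act)    # lateral inhibition
    act = np.clip(act + dt * (support - suppress) * act, 0, None)
    act /= act.sum()                             # keep as a distribution

print(act.round(2))  # the predictable referent comes to dominate
```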