30 research outputs found

    The role of working memory and contextual constraints in children's processing of relative clauses

    Get PDF
    An auditory sentence comprehension task investigated the extent to which the integration of contextual and structural cues was mediated by verbal memory span in 32 English-speaking 6- to 8-year-old children. Spoken relative clause sentences were accompanied by visual context pictures which fully (depicting the actions described within the relative clause) or partially (depicting several referents) met the pragmatic assumptions of relativisation. Comprehension of the main and relative clauses of centre-embedded and right-branching structures was compared for each context. Pragmatically appropriate contexts exerted a positive effect on relative clause comprehension, but children with higher memory spans demonstrated a further benefit for main clauses. Comprehension of centre-embedded main clauses was very poor, independently of either context or memory span. The results suggest that children have access to adult-like linguistic processing mechanisms, and that sensitivity to extra-linguistic cues is evident in young children and develops as cognitive capacity increases.

    Confidence in uncertainty: Error cost and commitment in early speech hypotheses

    Get PDF
    Interactions with artificial agents often lack immediacy because agents respond more slowly than their users expect. Automatic speech recognisers introduce this delay by analysing a user’s utterance only after it has been completed. Early, uncertain hypotheses from incremental speech recognisers can enable artificial agents to respond in a more timely manner. However, these hypotheses may change significantly with each update, so an already initiated action may turn out to be an error and incur error costs. We investigated whether humans would use uncertain hypotheses for planning ahead and/or initiating their response. We designed a Ghost-in-the-Machine study in a bar scenario: a human participant controlled a bartending robot and perceived the scene only through its recognisers. The results showed that participants used uncertain hypotheses to select the best-matching action, which is comparable to computing the utility of dialogue moves. Participants evaluated the available evidence and the error cost of their actions prior to initiating them. If the error cost was low, participants initiated their response on only suggestive evidence; otherwise, they waited for additional, more confident hypotheses if they still had time to do so. If there was time pressure but only little evidence, participants grounded their understanding with echo questions. These findings contribute to a psychologically plausible policy for human-robot interaction that enables artificial agents to respond in a more timely and socially appropriate manner under uncertainty.
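
    The response policy the abstract describes (weigh recogniser confidence against the cost of acting on a wrong hypothesis, commit early when errors are cheap, otherwise wait for a more confident update, and fall back to an echo question under time pressure) can be summarised in a minimal sketch. This is an illustrative assumption, not the authors' implementation: the Hypothesis record, the choose_action function, and all threshold values are hypothetical.

        # Minimal sketch of an error-cost-aware response policy for incremental
        # speech hypotheses (hypothetical names and thresholds, illustration only).
        from dataclasses import dataclass


        @dataclass
        class Hypothesis:
            text: str          # current best guess from the incremental recogniser
            confidence: float  # recogniser confidence score in [0, 1]


        def choose_action(hyp: Hypothesis, error_cost: float, time_left: float) -> str:
            """Decide whether to act now, wait for a better hypothesis, or ask back."""
            LOW_COST = 0.3   # below this, a wrong action is cheap to repair
            HIGH_CONF = 0.8  # above this, the hypothesis is treated as reliable
            MIN_CONF = 0.4   # weakest evidence worth acting on at all

            if hyp.confidence >= HIGH_CONF:
                return f"act: {hyp.text}"             # strong evidence: commit
            if error_cost <= LOW_COST and hyp.confidence >= MIN_CONF:
                return f"act: {hyp.text}"             # errors are cheap: commit early
            if time_left > 0:
                return "wait"                         # costly errors, no rush: wait for updates
            return f"ask: did you say '{hyp.text}'?"  # time pressure, weak evidence: ground it


        # Example: costly action, weak evidence, no time left -> echo question.
        print(choose_action(Hypothesis("one beer", 0.5), error_cost=0.9, time_left=0.0))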

    The interaction of visual and linguistic saliency during syntactic ambiguity resolution

    Get PDF
    Psycholinguistic research using the visual world paradigm has shown that the processing of sentences is constrained by the visual context in which they occur. Recently, there has been growing interest in the interactions observed when both language and vision provide relevant information during sentence processing. In three visual world experiments on syntactic ambiguity resolution, we investigate how visual and linguistic information influence the interpretation of ambiguous sentences. We hypothesize that (1) visual and linguistic information both constrain which interpretation is pursued by the sentence processor, and (2) the two types of information act upon the interpretation of the sentence at different points during processing. In Experiment 1, we show that visual saliency is utilized to anticipate the upcoming arguments of a verb. In Experiment 2, we operationalize linguistic saliency using intonational breaks and demonstrate that these give prominence to linguistic referents. These results confirm prediction (1). In Experiment 3, we manipulate visual and linguistic saliency together and find that both types of information are used, but at different points in the sentence, to incrementally update its current interpretation. This finding is consistent with prediction (2). Overall, our results suggest an adaptive processing architecture in which different types of information are used when they become available, optimizing different aspects of situated language processing.

    Cognitive control and parsing: Reexamining the role of Broca’s area in sentence comprehension

    Full text link

    A Note on the Interpretation of Negation Scope

    No full text