
    MEG Evidence for Incremental Sentence Composition in the Anterior Temporal Lobe

    Research investigating the brain basis of language comprehension has associated the left anterior temporal lobe (ATL) with sentence‐level combinatorics. Using magnetoencephalography (MEG), we test the parsing strategy implemented in this brain region. The number of incremental parse steps from a predictive left‐corner parsing strategy that is supported by psycholinguistic research is compared with those from a less‐predictive strategy. We test for a correlation between parse steps and source‐localized MEG activity recorded while participants read a story. Left‐corner parse steps correlated with activity in the left ATL around 350–500 ms after word onset. No other correlations specific to sentence comprehension were observed. These data indicate that the left ATL engages in combinatoric processing that is well characterized by a predictive left‐corner parsing strategy.
    Peer Reviewed
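As a rough illustration of the contrast the abstract describes (not the authors' actual model), parsing strategies can be compared by counting, for each word, how many phrase‐structure nodes the strategy would announce at that word: a top‐down parser posits a node before any of its children, a bottom‐up parser only after all of them, and a left‐corner parser after its first child. The tree format, function names, and attribution convention below are all illustrative assumptions:

```python
def node_order(tree, strategy):
    """Yield ('word', w) and ('node', label) events in the order the
    given traversal strategy announces them. Leaves are strings;
    internal nodes are (label, child, child, ...) tuples."""
    if isinstance(tree, str):               # leaf = a word
        yield ("word", tree)
        return
    label, *children = tree
    if strategy == "top-down":              # announce node before any child
        yield ("node", label)
        for c in children:
            yield from node_order(c, strategy)
    elif strategy == "bottom-up":           # announce node after all children
        for c in children:
            yield from node_order(c, strategy)
        yield ("node", label)
    else:                                   # left-corner: after the first child
        yield from node_order(children[0], strategy)
        yield ("node", label)
        for c in children[1:]:
            yield from node_order(c, strategy)

def steps_per_word(tree, strategy):
    """Attribute each announced node to the next word read; sentence-final
    announcements fold into the last word. Returns [(word, count), ...]."""
    counts, pending = [], 0
    for kind, val in node_order(tree, strategy):
        if kind == "node":
            pending += 1
        else:
            counts.append([val, pending])
            pending = 0
    if counts:
        counts[-1][1] += pending
    return [tuple(pair) for pair in counts]
```

For a toy tree `("S", ("NP", "the", "dog"), ("VP", "barked"))`, top‐down front‐loads work onto "the", bottom‐up piles it onto "barked", and left‐corner spreads it across words, which is the kind of incremental profile the study correlates with MEG activity.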

    Naturalistic Sentence Comprehension in the Brain

    The cognitive neuroscience of language relies largely on controlled experiments that are different from the everyday situations in which we use language. This review describes an approach that studies specific aspects of sentence comprehension in the brain using data collected while participants perform an everyday task, such as listening to a story. The approach uses ‘neuro‐computational’ models that are based on linguistic and psycholinguistic theories. These models quantify how a specific computation, such as identifying a syntactic constituent, might be carried out by a neural circuit word‐by‐word. Model predictions are tested for their statistical fit with measured brain data. The paper discusses three applications of this approach: (i) to probe the location and timing of linguistic processing in the brain without requiring unnatural tasks and stimuli, (ii) to test theoretical hypotheses by comparing the fits of different models to naturalistic data, and (iii) to study neural mechanisms for language processing in populations that are poorly served by traditional methods.
    Peer Reviewed

    Lexical Predictability during Natural Reading: Effects of Surprisal and Entropy Reduction

    What are the effects of word‐by‐word predictability on sentence processing times during the natural reading of a text? Although information complexity metrics such as surprisal and entropy reduction have been useful in addressing this question, these metrics tend to be estimated using computational language models, which require some degree of commitment to a particular theory of language processing. Taking a different approach, this study implemented a large‐scale cumulative cloze task to collect word‐by‐word predictability data for 40 passages and compute surprisal and entropy reduction values in a theory‐neutral manner. A separate group of participants read the same texts while their eye movements were recorded. Results showed that increases in surprisal and entropy reduction were both associated with increases in reading times. Furthermore, these effects did not depend on the global difficulty of the text. The findings suggest that surprisal and entropy reduction independently contribute to variation in reading times, as these metrics seem to capture different aspects of lexical predictability.
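A minimal sketch of how the two metrics can be computed from cloze response counts, assuming a simplified next‐word formulation: surprisal is the negative log probability of the word actually read, and entropy reduction is the drop in Shannon entropy of the response distribution from one position to the next, floored at zero. (Published definitions of entropy reduction often operate over full parser states rather than single‐word distributions; the function names and data format here are illustrative.)

```python
import math

def surprisal(cloze_counts, word):
    """Surprisal in bits: -log2 of the word's cloze probability.
    cloze_counts maps candidate words to response counts."""
    total = sum(cloze_counts.values())
    return -math.log2(cloze_counts[word] / total)

def entropy(cloze_counts):
    """Shannon entropy (bits) of the cloze response distribution."""
    total = sum(cloze_counts.values())
    probs = [c / total for c in cloze_counts.values()]
    return -sum(p * math.log2(p) for p in probs)

def entropy_reduction(before, after):
    """Reduction in uncertainty between adjacent positions, floored at 0,
    so only decreases in entropy count as processing cost."""
    return max(0.0, entropy(before) - entropy(after))
```

For example, if half the cloze respondents produced the word that was actually read, its surprisal is 1 bit; if the next position's responses are unanimous, the entropy reduction relative to a two‐way split is also 1 bit.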

    Relative clauses as a benchmark for Minimalist parsing
