
    Relative clauses as a benchmark for Minimalist parsing

    Minimalist grammars have been used recently in a series of papers to explain well-known contrasts in human sentence processing in terms of subtle structural differences. These proposals combine a top-down parser with complexity metrics that relate parsing difficulty to memory usage. So far, though, there has been no large-scale exploration of the space of viable metrics. Building on this earlier work, we compare the ability of 1600 metrics to derive several processing effects observed with relative clauses, many of which have proven difficult to unify. We show that among those 1600 candidates, a few metrics (and only a few) can provide a unified account of all these contrasts. This is a welcome result for two reasons: First, it provides a novel account of extensively studied psycholinguistic data. Second, it significantly limits the number of viable metrics that may be applied to other phenomena, thus reducing theoretical indeterminacy.
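The memory-based complexity metrics in this line of work typically score a top-down parse by how long nodes must be kept in memory before the parser can work on them. A minimal sketch of two such tenure-style metrics, using hypothetical node annotations (the `(index, outdex)` pairs below are illustrative, not taken from the paper):

```python
# Hypothetical sketch of memory-usage complexity metrics for a top-down
# Minimalist parse. Each node is annotated with the step at which the
# parser first predicts it (index) and the step at which it is finally
# worked on (outdex); tenure is the time the node sits in memory between
# those two steps.

def tenure(index, outdex):
    """Memory tenure of a single node: steps spent waiting in memory."""
    return outdex - index

def max_tenure(nodes):
    """MaxTenure-style metric: the longest any node waits in memory."""
    return max(tenure(i, o) for i, o in nodes)

def sum_tenure(nodes):
    """SumTenure-style metric: total waiting time across all nodes."""
    return sum(tenure(i, o) for i, o in nodes)

# Toy parse annotation: (index, outdex) pairs for four nodes.
parse = [(1, 2), (2, 3), (2, 7), (3, 4)]
print(max_tenure(parse))  # → 5 (the node predicted at step 2, used at step 7)
print(sum_tenure(parse))  # → 8
```

Comparing 1600 metrics then amounts to systematically varying choices like these (which quantity is measured, and whether the maximum, sum, or average is taken) and checking which variants rank the relative-clause conditions in the empirically observed order.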

    Towards characterizing incremental structure building during sentence comprehension

    Language comprehension involves incrementally processing sequences of words and generating expectations about upcoming words based on prior context. One of the steps involved in incremental processing is incremental structure building --- i.e., determining the relationship between the words in a sentence as the sentence unfolds. To understand how comprehenders build incremental structures, it is necessary to understand what structures comprehenders build in the first place and why. This dissertation includes three projects that tackle these what and why questions by studying incremental structure building in sentences with reduced relative clauses as a case study. The first project proposes a method for characterizing what incremental structures human comprehenders build. This method involves three steps: first, implement hypotheses from generative syntax about the abstract structure of sentences in a novel computational model; second, use the model to generate quantitative behavioral predictions; and third, test these predictions using a novel web-based experimental paradigm. Applying this approach, we compared two competing theoretical hypotheses about the structure of reduced relative clauses --- Whiz-Deletion and Participial-Phrase --- and demonstrated that the Whiz-Deletion account better characterizes the incremental structures that human comprehenders build. The second project studies why the incremental structures that comprehenders construct can change depending on the environment they are in, testing the following widely debated hypothesis: comprehenders maintain probability distributions over the structures they expect to encounter and rapidly update these distributions to match the statistics of their current environment. Based on a large-scale reading experiment, we find evidence in support of this hypothesis, and also explain why prior work might have failed to find such support.
The third project proposes a method for characterizing what incremental structures Artificial Neural Networks build when processing sentences. Applying this method, we demonstrated that the incremental structures these networks build, like the structures built by human comprehenders, are better characterized by the Whiz-Deletion account than the Participial-Phrase account. Thus, by making it possible to compare the incremental structures that these networks build to the structures that humans build, this method in turn makes it possible to test hypotheses about why humans build the structures they do. I propose several directions for future work which involve applying the methods proposed in these projects to study other phenomena beyond reduced relative clauses.
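The adaptation hypothesis tested in the second project can be illustrated with a simple belief-update sketch: a comprehender starts with prior expectations over structural analyses and blends in the statistics of the current environment. The structure labels, prior values, and counts below are all hypothetical, and the Dirichlet-style update is one standard way to formalize "rapidly update these distributions", not necessarily the dissertation's own model:

```python
# Hypothetical sketch of environment-driven adaptation: a comprehender
# maintains a probability distribution over structural analyses (e.g.,
# main-verb vs. reduced-relative continuations of an ambiguous verb) and
# updates it toward the statistics of the current environment.

def update(prior, counts, strength=1.0):
    """Dirichlet-style update: blend prior beliefs with observed counts.

    `strength` controls how many pseudo-observations the prior is worth;
    larger values make adaptation to the environment slower.
    """
    total = sum(counts.values()) + strength * sum(prior.values())
    return {s: (counts.get(s, 0) + strength * prior[s]) / total
            for s in prior}

# Prior reflecting everyday experience: reduced relatives are rare.
prior = {"main_verb": 0.9, "reduced_relative": 0.1}

# A lab environment in which reduced relatives are frequent.
observed = {"main_verb": 5, "reduced_relative": 15}

posterior = update(prior, observed)
print(posterior["reduced_relative"])  # expectation shifts sharply toward RRs
```

On this view, the same sentence can be easy or hard depending on the environment, because the distribution feeding the comprehender's expectations has changed.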