    Accessibility of referent information influences sentence planning : An eye-tracking study

    Acknowledgments: We thank Phoebe Ye and Gouming Martens for help with data collection for Experiments 1 and 2, respectively. This research was supported by the European Research Council through ERC Starting Grant 206198 to YC.

    A psycholinguistically motivated version of TAG

    We propose a psycholinguistically motivated version of TAG which is designed to model key properties of human sentence processing, viz., incrementality, connectedness, and prediction. We use findings from human experiments to motivate an incremental grammar formalism that makes it possible to build fully connected structures on a word-by-word basis. A key idea of the approach is to explicitly model the prediction of upcoming material and the subsequent verification and integration processes. We also propose a linking theory that links the predictions of our formalism to experimental data such as reading times, and illustrate how it can capture psycholinguistic results on the processing of either... or structures and relative clauses.
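    The predict-then-verify cycle described in the abstract can be illustrated with a minimal sketch. This is not the paper's formalism itself; the trigger table and function names here are illustrative assumptions, showing only the general idea that a trigger word posits upcoming material that later input must verify and integrate.

    ```python
    # Hedged sketch of predict-then-verify processing. On seeing a trigger
    # word (e.g. "either"), the processor posits a predicted upcoming word;
    # when matching input arrives, the prediction is verified and integrated.
    # PREDICTIONS is a toy stand-in, not the paper's grammar.

    PREDICTIONS = {"either": "or"}  # trigger word -> predicted upcoming word

    def process(words):
        pending = []   # predictions awaiting verification
        verified = []  # predictions confirmed by later input
        for w in words:
            if w in pending:
                pending.remove(w)
                verified.append(w)  # verification: prediction matched input
            if w in PREDICTIONS:
                pending.append(PREDICTIONS[w])  # predict upcoming material
        return verified, pending

    verified, pending = process("either tea or coffee".split())
    # "either" predicts "or", which is later verified; nothing stays pending
    ```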

    Introduction to the Special Issue on Incremental Processing in Dialogue

    A brief introduction to the topics discussed in the special issue and to the individual papers.

    Towards Incremental Parsing of Natural Language using Recursive Neural Networks

    In this paper we develop novel algorithmic ideas for building a natural language parser grounded upon the hypothesis of incrementality. Although widely accepted and experimentally supported from a cognitive perspective as a model of the human parser, the incrementality assumption has never been exploited for building automatic parsers of unconstrained real texts. The essentials of the hypothesis are that words are processed in a left-to-right fashion, and that the syntactic structure is kept totally connected at each step. Our proposal relies on a machine learning technique for predicting the correctness of partial syntactic structures that are built during the parsing process. A recursive neural network architecture is employed for computing predictions after a training phase on examples drawn from a corpus of parsed sentences, the Penn Treebank. Our results indicate the viability of the approach and lay out the premises for a novel generation of algorithms for natural language processing which more closely model human parsing. These algorithms may prove very useful in the development of efficient parsers.
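    The core loop the abstract describes, processing words left to right while keeping the partial structure fully connected, can be sketched as follows. This is a toy illustration under stated assumptions: the scoring function here is a trivial recency heuristic standing in for the trained recursive neural network, and all names are hypothetical, not the authors' implementation.

    ```python
    # Hedged sketch: toy incremental parser that keeps the partial structure
    # connected at every word, choosing each attachment via a pluggable
    # scorer (a stand-in for the recursive-neural-network predictor).

    def attach_score(parent, child):
        # Toy heuristic: prefer attaching to the most recent word. In the
        # paper, this role is played by a trained recursive neural network
        # scoring the correctness of the resulting partial structure.
        return -abs(parent["index"] - child["index"])

    def parse_incrementally(words, scorer=attach_score):
        """Process words left to right; every prefix yields one connected tree."""
        root = {"word": words[0], "index": 0, "children": []}
        nodes = [root]
        for i, w in enumerate(words[1:], start=1):
            node = {"word": w, "index": i, "children": []}
            # Rank every existing node as an attachment site and take the
            # best-scoring one, so the structure stays totally connected.
            best = max(nodes, key=lambda p: scorer(p, node))
            best["children"].append(node)
            nodes.append(node)
        return root

    tree = parse_incrementally(["the", "dog", "barked"])
    ```

    With the recency heuristic, "dog" attaches to "the" and "barked" to "dog"; swapping in a learned scorer changes only the attachment choices, not the incremental, connected construction.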

    Incrementality and flexibility in sentence production

    Get PDF