
    Having Your Cake and Eating It Too: Autonomy and Interaction in a Model of Sentence Processing

    Is the human language understander a collection of modular processes operating with relative autonomy, or is it a single integrated process? This ongoing debate has polarized the language processing community, with two fundamentally different types of model posited, and with each camp concluding that the other is wrong. One camp puts forth a model with separate processors and distinct knowledge sources to explain one body of data, and the other proposes a model with a single processor and a homogeneous, monolithic knowledge source to explain the other body of data. In this paper we argue that a hybrid approach which combines a unified processor with separate knowledge sources provides an explanation of both bodies of data, and we demonstrate the feasibility of this approach with the computational model called COMPERE. We believe that this approach brings the language processing community significantly closer to offering human-like language processing systems. Comment: 7 pages, uses aaai.sty macros
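
    A minimal sketch of the hybrid organization the abstract argues for: a single processing loop that consults separate, independently stated knowledge sources through one common interface. This is illustrative only, written for this listing; the class and method names are not taken from COMPERE.

        # Illustrative sketch (hypothetical names): one processor,
        # separate knowledge sources consulted through a uniform interface.

        class SyntaxKnowledge:
            def preferences(self, word, analysis):
                # purely structural preferences, stated on their own
                return {"attach_low": 0.6, "attach_high": 0.4}

        class SemanticsKnowledge:
            def preferences(self, word, analysis):
                # lexical/world-knowledge preferences, also independently applicable
                return {"attach_low": 0.2, "attach_high": 0.8}

        class UnifiedProcessor:
            """A single processor; the knowledge stays in separate sources."""

            def __init__(self, sources):
                self.sources = sources

            def process(self, sentence):
                analysis = []
                for word in sentence.split():
                    # every source is consulted in the same way at every step
                    evidence = [s.preferences(word, analysis) for s in self.sources]
                    analysis.append((word, evidence))
                return analysis

        processor = UnifiedProcessor([SyntaxKnowledge(), SemanticsKnowledge()])
        result = processor.process("the detective examined by the lawyer fled")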

    Uniform Representations for Syntax-Semantics Arbitration

    Psychological investigations have led to considerable insight into the working of the human language comprehension system. In this article, we look at a set of principles derived from psychological findings to argue for a particular organization of linguistic knowledge along with a particular processing strategy and present a computational model of sentence processing based on those principles. Many studies have shown that human sentence comprehension is an incremental and interactive process in which semantic and other higher-level information interacts with syntactic information to make informed commitments as early as possible at a local ambiguity. Early commitments may be made by using top-down guidance from knowledge of different types, each of which must be applicable independently of others. Further evidence from studies of error recovery and delayed decisions points toward an arbitration mechanism for combining syntactic and semantic information in resolving ambiguities. In order to account for all of the above, we propose that all types of linguistic knowledge must be represented in a common form but must be separable so that they can be applied independently of each other and integrated at processing time by the arbitrator. We present such a uniform representation and a computational model called COMPERE based on the representation and the processing strategy. Comment: 7 pages, uses cogsci94.sty macros
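
    As an illustration of the arbitration idea, a hedged sketch follows: candidate analyses at a local ambiguity are all expressed in one common form, each knowledge type scores them independently, and an arbitrator integrates the scores to commit early. The names, numbers, and the attachment example are invented for this listing and are not COMPERE's actual representation.

        # Hypothetical arbitration sketch: uniform candidates, separable scorers.

        # Candidates at the attachment point of "on the shelf" in
        # "put the book on the shelf", all in the same representation.
        candidates = [
            {"attachment": "verb", "gloss": "'on the shelf' is where the book is put"},
            {"attachment": "noun", "gloss": "'on the shelf' describes which book"},
        ]

        def syntactic_score(cand):
            # structural preference, applicable without semantics
            return 0.6 if cand["attachment"] == "verb" else 0.4

        def semantic_score(cand):
            # fit with knowledge that 'put' requires a destination
            return 0.9 if cand["attachment"] == "verb" else 0.1

        def arbitrate(cands, scorers, weights):
            # integrate the independent knowledge sources at processing time
            def total(cand):
                return sum(w * score(cand) for score, w in zip(scorers, weights))
            return max(cands, key=total)

        choice = arbitrate(candidates, [syntactic_score, semantic_score], [0.5, 0.5])
        print(choice["gloss"])  # the verb attachment wins under both sources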

    Robust Processing of Natural Language

    Previous approaches to robustness in natural language processing usually treat deviant input by relaxing grammatical constraints whenever a successful analysis cannot be provided by "normal" means. This scheme implies that error detection always comes prior to error handling, a behaviour which can hardly compete with its human model, where many erroneous situations are handled without even being noticed. The paper analyses the necessary preconditions for achieving a higher degree of robustness in natural language processing and suggests a quite different approach based on a procedure for structural disambiguation. It not only offers the possibility of coping with robustness issues in a more natural way but might eventually be suited to accommodating quite different aspects of robust behaviour within a single framework. Comment: 16 pages, LaTeX, uses pstricks.sty, pstricks.tex, pstricks.pro, pst-node.sty, pst-node.tex, pst-node.pro. To appear in: Proc. KI-95, 19th German Conference on Artificial Intelligence, Bielefeld (Germany), Lecture Notes in Computer Science, Springer 1995
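
    A rough sketch of the contrast the abstract draws, with invented scoring details: if every grammatical constraint contributes a soft penalty to a structural-disambiguation score, the best analysis of deviant input is selected in the same pass as for well-formed input, and no separate error-detection step has to fire first.

        # Hypothetical soft-constraint scoring; not the paper's actual procedure.

        def score(analysis, penalty=1.0):
            # coverage rewards accounting for more of the input;
            # constraint violations lower the score instead of causing failure
            return analysis["covered_words"] - penalty * analysis["violations"]

        def best_analysis(candidates):
            # structural disambiguation: pick the best-scoring candidate even if
            # none is perfectly grammatical, rather than failing and re-parsing
            return max(candidates, key=score)

        candidates = [
            {"label": "strictly grammatical but partial", "covered_words": 4, "violations": 0},
            {"label": "full parse with one agreement error", "covered_words": 7, "violations": 1},
        ]
        print(best_analysis(candidates)["label"])  # the full parse wins despite the error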

    The brain is a prediction machine that cares about good and bad - Any implications for neuropragmatics?

    Experimental pragmatics asks how people construct contextualized meaning in communication. So what does it mean for this field to add ‘neuro’ as a prefix to its name? After analyzing the options for any subfield of cognitive science, I argue that neuropragmatics can and occasionally should go beyond the instrumental use of EEG or fMRI and beyond mapping classic theoretical distinctions onto Brodmann areas. In particular, if experimental pragmatics ‘goes neuro’, it should take into account that the brain evolved as a control system that helps its bearer negotiate a highly complex, rapidly changing and often not so friendly environment. In this context, the abilities to predict current unknowns and to rapidly tell good from bad are essential ingredients of processing. Using insights from non-linguistic areas of cognitive neuroscience as well as from EEG research on utterance comprehension, I argue that for a balanced development of experimental pragmatics, these two characteristics of the brain cannot be ignored.

    Linguistic Variation from Cognitive Variability: The Case of English 'Have'

    In this dissertation, I seek to construct a model of meaning variation built upon variability in linguistic structure, conceptual structure, and cognitive makeup, and in doing so, exemplify an approach to studying meaning that is both linguistically principled and neuropsychologically grounded. As my test case, I make use of the English lexical item 'have' by proposing a novel analysis of its meaning based on its well-described variability in English and its embedding into crosslinguistically consistent patterns of variation and change. I support this analysis by investigating its real-time comprehension patterns through behavioral, electropsychophysiological, and hemodynamic brain data, thereby incorporating dimensions of domain-general cognitive variability as crucial determinants of linguistic variability. Per my account, 'have' retrieves a generalized relational meaning which can give rise to a conceptually constrained range of readings, depending on the degree of causality perceived from either linguistic or contextual cues. Results show that comprehenders can make use of both for 'have'-sentences, though they vary in the degree to which they rely on each. At the very broadest level, the findings support a model in which the semantic distribution of 'have' is inherently principled due to a unified conceptual structure. This underlying conceptual structure and relevant context cooperate in guiding comprehension by modulating the salience of potential readings as comprehension unfolds, though this ability to use relevant context (context-sensitivity) is variable but systematic across comprehenders. These linguistic and cognitive factors together form the core of normal language processing and, with a gradient conceptual framework, the minimal infrastructure for meaning variation and change.
