5,601 research outputs found

    Syntactic Computation as Labelled Deduction: WH a case study

    This paper addresses the question "Why do WH phenomena occur with the particular cluster of properties observed across languages -- long-distance dependencies, WH-in-situ, partial movement constructions, reconstruction, crossover, etc.?" These phenomena have been analysed by invoking a number of discrete principles and categories, but have so far resisted a unified treatment. The explanation proposed is set within a model of natural language understanding in context, where the task of understanding is taken to be the incremental building of a structure over which the semantic content is defined. The formal model is a composite of a labelled type-deduction system, a modal tree logic, and a set of rules describing the interpretation of the string as a set of transition states. A dynamic concept of syntax results, in which, in addition to an output structure associated with each string (analogous to the level of LF), there is an explicit meta-level description of the process whereby this incremental building takes place. This paper argues that WH-related phenomena can be unified by adopting this dynamic perspective. The main focus of the paper is on WH-initial structures, WH-in-situ structures, partial movement phenomena, and crossover phenomena. In each case, an analysis is proposed which emerges from the general characterisation of WH structures without construction-specific stipulation.
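    To make the transition-state idea concrete, here is a minimal sketch, in the spirit of (but not taken from) the paper's formal model: parsing proceeds word by word over a growing partial structure, and a WH item introduces a node whose tree position is initially unfixed and is only fixed by a later transition, mimicking a long-distance dependency. All names and classes here are illustrative assumptions.

```python
# Hypothetical sketch of parsing as a sequence of transition states over a
# partial tree; not the paper's labelled deduction system, just the control
# flow it implies.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str          # node decoration, e.g. a formula or type requirement
    fixed: bool = True  # an unfixed node models a pending WH dependency

@dataclass
class State:
    nodes: list = field(default_factory=list)

def scan(state, word):
    """One transition: consume a word and update the partial structure."""
    if word.lower() in {"what", "who", "which"}:
        # WH item: introduce a node with an as-yet unfixed tree position.
        state.nodes.append(Node(label=f"WH:{word}", fixed=False))
    else:
        state.nodes.append(Node(label=word))
    return state

def merge_unfixed(state):
    """Final transition: fix any unfixed node at the open argument slot."""
    for n in state.nodes:
        n.fixed = True
    return state

state = State()
for w in "what did John see".split():
    scan(state, w)
merge_unfixed(state)
print([(n.label, n.fixed) for n in state.nodes])
```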

    Islands in the grammar? Standards of evidence

    When considering how a complex system operates, the observable behavior depends upon both the architectural properties of the system and the principles governing its operation. As a simple example, the behavior of computer chess programs depends upon both the processing speed and resources of the computer and the programmed rules that determine how the computer selects its next move. Despite having very similar search techniques, a computer from the 1990s might make a move that its 1970s forerunner would overlook, simply because it had more raw computational power. From the naïve observer’s perspective, however, it is not superficially evident whether a particular move is dispreferred or overlooked because of computational limitations or because of the search strategy and decision algorithm. In the case of computers, evidence for the source of any particular behavior can ultimately be found by inspecting the code and tracking the decision process of the computer. But with the human mind, such options are not yet available. The preference for certain behaviors and the dispreference for others may theoretically follow from cognitive limitations, from task-related principles that preclude certain kinds of cognitive operations, or from some combination of the two. This uncertainty gives rise to the fundamental problem of finding evidence for one explanation over the other. Such a problem arises in the analysis of syntactic island effects – the focus of this chapter.
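    The chess point can be made concrete with a toy sketch: the decision algorithm below (a bare minimax over a hand-built game tree) is held constant, and only the resource budget (search depth) varies, yet the chosen move changes. The tree, scores, and depths are invented purely for illustration.

```python
# Same rules, different compute budget: minimax over a tiny hand-built tree.
def value(node, depth, maximizing):
    if isinstance(node, (int, float)):
        return node                 # leaf: exact payoff
    if depth == 0:
        return 0                    # budget exhausted: crude static score
    agg = max if maximizing else min
    return agg(value(c, depth - 1, not maximizing) for c in node.values())

def best_move(tree, depth):
    # Our move maximizes; the opponent at the next ply minimizes.
    return max(tree, key=lambda m: value(tree[m], depth - 1, maximizing=False))

TREE = {
    "a": {"a1": 5},                       # payoff visible within two plies
    "b": {"b1": {"b1x": 9, "b1y": 8}},    # better payoff, but one ply deeper
}

print(best_move(TREE, depth=2))  # "a": the deeper, better line is invisible
print(best_move(TREE, depth=3))  # "b": more raw power, same rules, new move
```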

    Review. William D. Davies & Stanley Dubinsky (eds.), New horizons in the analysis of control and raising (Studies in Natural Language & Linguistic Theory 71). Dordrecht: Springer, 2007. Pp. x+347.

    This article is a review of William D. Davies & Stanley Dubinsky (eds.), "New horizons in the analysis of control and raising" (Studies in Natural Language & Linguistic Theory 71), Dordrecht: Springer, 2007.

    The source ambiguity problem: Distinguishing the effects of grammar and processing on acceptability judgments

    Judgments of linguistic unacceptability may theoretically arise from either grammatical deviance or significant processing difficulty. Acceptability data are thus naturally ambiguous in theories that explicitly distinguish formal and functional constraints. Here, we consider this source ambiguity problem in the context of Superiority effects: the dispreference for ordering a wh-phrase in front of a syntactically “superior” wh-phrase in multiple wh-questions, e.g., What did who buy? More specifically, we consider the acceptability contrast between such examples and so-called D-linked examples, e.g., Which toys did which parents buy? Evidence from acceptability and self-paced reading experiments demonstrates that (i) judgments and processing times for Superiority violations vary in parallel, as determined by the kind of wh-phrases they contain, (ii) judgments increase with exposure while processing times decrease, (iii) reading times are highly predictive of acceptability judgments for the same items, and (iv) the effects of the complexity of the wh-phrases combine in both acceptability judgments and reading times. This evidence supports the conclusion that D-linking effects are likely reducible to independently motivated cognitive mechanisms whose effects emerge in a wide range of sentence contexts. This in turn suggests that Superiority effects in general may owe their character to differential processing difficulty.
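    Finding (iii) amounts to an item-level regression of judgments on reading times. A minimal sketch of that analysis, using hypothetical numbers rather than the paper's data, might look like this:

```python
# Toy item-level correlation between mean reading times and mean ratings;
# the values below are invented for illustration, not the paper's results.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-item means: critical-region reading time (ms), 1-7 rating.
reading_times = [520, 480, 610, 590, 450, 700]
ratings       = [4.8, 5.3, 3.6, 3.9, 5.6, 2.9]

r = pearson_r(reading_times, ratings)
print(f"item-level correlation: r = {r:.2f}")  # negative: slower -> less acceptable
```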

    A comparative evaluation of deep and shallow approaches to the automatic detection of common grammatical errors

    This paper compares a deep and a shallow processing approach to the problem of classifying a sentence as grammatically well-formed or ill-formed. The deep processing approach uses the XLE LFG parser and English grammar: two versions are presented, one which uses the XLE directly to perform the classification, and another which uses a decision tree trained on features consisting of the XLE’s output statistics. The shallow processing approach predicts grammaticality based on n-gram frequency statistics: we present two versions, one which uses frequency thresholds and one which uses a decision tree trained on the frequencies of the rarest n-grams in the input sentence. We find that the use of a decision tree improves on the basic approach only for the deep parser-based approach. We also show that combining both the shallow and deep decision tree features is effective. Our evaluation is carried out using a large test set of grammatical and ungrammatical sentences. The ungrammatical test set is generated automatically by inserting grammatical errors into well-formed BNC sentences.
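    The frequency-threshold variant of the shallow approach can be sketched as follows; the details (tokenisation, n-gram order, threshold, toy corpus) are assumptions for illustration, not the paper's implementation:

```python
# Shallow grammaticality check: flag a sentence as ill-formed if its rarest
# n-gram falls below a corpus-frequency threshold. Details are assumed.
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def train(corpus_sentences, n=2):
    counts = Counter()
    for sent in corpus_sentences:
        counts.update(ngrams(sent.lower().split(), n))
    return counts

def is_grammatical(sentence, counts, n=2, threshold=1):
    """Well-formed iff even the rarest n-gram clears the threshold."""
    grams = ngrams(sentence.lower().split(), n)
    rarest = min((counts[g] for g in grams), default=0)
    return rarest >= threshold

corpus = ["the cat sat on the mat", "the dog sat on the rug",
          "a cat saw the dog"]
counts = train(corpus)
print(is_grammatical("the cat sat on the rug", counts))  # True
print(is_grammatical("the cat sat the on rug", counts))  # False: unseen bigram
```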