
    Feasible reactivity in a synchronous pi-calculus

    Reactivity is an essential property of a synchronous program. Informally, it guarantees that at each instant the program, fed with an input, will `react' by producing an output. In the present work, we consider a refined property that we call `feasible reactivity'. Beyond reactivity, this property guarantees that at each instant both the size of the program and its reaction time are bounded by a polynomial in the size of the parameters at the beginning of the computation and the size of the largest input. We propose a method to annotate programs and we develop related static analysis techniques that guarantee feasible reactivity for programs expressed in the S-pi-calculus. The latter is a synchronous version of the pi-calculus based on the SL synchronous programming model.

    The forgotten monoid

    We study properties of the forgotten monoid, which appeared in work of Lascoux and Schützenberger and recently resurfaced in the construction of dual equivalence graphs by Assaf. In particular, we provide an explicit characterization of the forgotten classes in terms of inversion numbers and show that there are n^2-3n+4 forgotten classes in the symmetric group S_n. Each forgotten class contains a canonical element that can be characterized by pattern avoidance. We also show that the sum of Gessel's quasi-symmetric functions over a forgotten class is a 0-1 sum of ribbon-Schur functions.
    Comment: 13 pages; in version 3 the proof of Proposition 3 is corrected
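    The class count n^2-3n+4 stated in the abstract can be tabulated directly. A minimal sketch (the function name is ours, not the paper's) that evaluates the formula for small n alongside the order n! of S_n, the set the classes partition:

```python
# Tabulate the abstract's forgotten-class count n^2 - 3n + 4 for small
# symmetric groups S_n, next to |S_n| = n!, which the classes partition.
from math import factorial

def forgotten_class_count(n):
    """Number of forgotten classes in S_n, per the abstract's formula."""
    return n * n - 3 * n + 4

for n in range(2, 7):
    print(n, forgotten_class_count(n), factorial(n))
```

    For example, S_3 splits into 4 forgotten classes covering its 6 permutations.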

    Frequency vs. Association for Constraint Selection in Usage-Based Construction Grammar

    A usage-based Construction Grammar (CxG) posits that slot-constraints generalize from common exemplar constructions. But what is the best model of constraint generalization? This paper evaluates competing frequency-based and association-based models across eight languages using a metric derived from the Minimum Description Length paradigm. The experiments show that association-based models produce better generalizations across all languages by a significant margin.
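    The general shape of such an evaluation is a two-part code length: a candidate model pays for encoding its constraints plus for encoding the corpus under them. A minimal sketch, not the paper's actual metric; the constraint sets, bit costs, and corpus below are hypothetical toy data:

```python
# Two-part MDL comparison sketch: total cost = model bits + data bits,
# where data bits follow from the model's slot-filler probabilities.
from math import log2

def data_cost(corpus, probs):
    """Bits to encode the corpus under the model's probabilities."""
    return sum(-log2(probs.get(token, 1e-6)) for token in corpus)

def mdl_score(corpus, probs, model_bits):
    """Two-part code length: model cost plus data cost."""
    return model_bits + data_cost(corpus, probs)

corpus = ["give", "give", "send", "give", "hand"]
# A coarse model: cheap to state (few bits), fits the data loosely.
coarse = ({"give": 0.5, "send": 0.25, "hand": 0.25}, 8.0)
# A finer model: more bits to state, fits the data more tightly.
fine = ({"give": 0.6, "send": 0.2, "hand": 0.2}, 12.0)

for name, (probs, bits) in [("coarse", coarse), ("fine", fine)]:
    print(name, round(mdl_score(corpus, probs, bits), 2))
```

    The model with the lower total score generalizes better under MDL: extra model bits are only justified when they save at least as many data bits.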

    The effect of informational load on disfluencies in interpreting: a corpus-based regression analysis

    This article attempts to measure the cognitive or informational load in interpreting by modelling the occurrence rate of the speech disfluency uh(m). In a corpus of 107 interpreted and 240 non-interpreted texts, informational load is operationalized in terms of four measures: delivery rate, lexical density, percentage of numerals, and average sentence length. The occurrence rate of this disfluency was modelled using a rate model. Interpreted texts are analyzed on the basis of the interpreter's output and compared with non-interpreted texts, and a second analysis measures the effect of source-text features. The results demonstrate that interpreters produce significantly more uh(m)s than non-interpreters and that this difference is mainly due to the effect of lexical density on the output side. The main source predictor of uh(m)s in the target text was shown to be the delivery rate of the source text. At a less strict level of significance, the second analysis also revealed an increasing effect of numerals in the source texts and a decreasing effect of numerals in the target texts.
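    Two of the four load measures lend themselves to a simple operationalization. A rough sketch under our own assumptions; the function-word list and the example utterance are illustrative, not the article's actual operationalization:

```python
# Illustrative operationalization of two informational-load measures:
# delivery rate (words per minute) and lexical density (share of
# content words). The function-word list is a small, hypothetical stand-in.

FUNCTION_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that", "into"}

def delivery_rate(word_count, duration_seconds):
    """Words per minute of a stretch of speech."""
    return word_count / (duration_seconds / 60.0)

def lexical_density(tokens):
    """Share of content (non-function) words among all tokens."""
    content = [t for t in tokens if t.lower() not in FUNCTION_WORDS]
    return len(content) / len(tokens)

tokens = "the interpreter renders the source speech into the target language".split()
print(round(delivery_rate(len(tokens), 4.0), 1))  # words per minute
print(round(lexical_density(tokens), 2))          # proportion of content words
```

    A rate model then relates uh(m) counts per text to such predictors, with text length as exposure.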

    Principles and Implementation of Deductive Parsing

    We present a system for generating parsers based directly on the metaphor of parsing as deduction. Parsing algorithms can be represented directly as deduction systems, and a single deduction engine can interpret such deduction systems so as to implement the corresponding parser. The method generalizes easily to parsers for augmented phrase structure formalisms, such as definite-clause grammars and other logic grammar formalisms, and has been used for rapid prototyping of parsing algorithms for a variety of formalisms including variants of tree-adjoining grammars, categorial grammars, and lexicalized context-free grammars.
    Comment: 69 pages, includes full Prolog code
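    The paper's engine is written in Prolog; the idea can be sketched in a few lines of Python. A generic agenda-based engine closes a set of axiom items under inference rules, and a CKY-style completion rule turns it into a recognizer. The grammar, lexicon, and item encoding below are our own toy assumptions:

```python
# Parsing as deduction, minimally: a generic agenda-driven engine that
# saturates a chart under inference rules, instantiated with a CKY-style
# completion rule. Items are (category, start, end) spans.

def deduce(axioms, rules):
    """Close the axiom set under the inference rules; return the chart."""
    chart, agenda = set(), list(axioms)
    while agenda:
        item = agenda.pop()
        if item in chart:
            continue
        chart.add(item)
        # Try the new item as left and as right premise against the chart.
        for other in list(chart):
            for left, right in ((item, other), (other, item)):
                for rule in rules:
                    new = rule(left, right)
                    if new is not None and new not in chart:
                        agenda.append(new)
    return chart

# Toy CNF grammar: S -> NP VP, NP -> Det N, VP -> V NP
binary = {("NP", "VP"): "S", ("Det", "N"): "NP", ("V", "NP"): "VP"}
lexicon = {"the": "Det", "dog": "N", "saw": "V", "cat": "N"}

def complete(left, right):
    """Completion: [A, i, j] and [B, j, k] yield [C, i, k] when C -> A B."""
    (a, i, j), (b, j2, k) = left, right
    if j == j2 and (a, b) in binary:
        return (binary[(a, b)], i, k)
    return None

sentence = "the dog saw the cat".split()
axioms = [(lexicon[w], i, i + 1) for i, w in enumerate(sentence)]
chart = deduce(axioms, [complete])
print(("S", 0, len(sentence)) in chart)  # goal item: a full parse exists
```

    Swapping in a different rule set (e.g. Earley prediction and scanning) reuses the same engine unchanged, which is the point of the deduction metaphor.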