9,607 research outputs found

    A Fuzzy Approach to Erroneous Inputs in Context-Free Language Recognition

    Using fuzzy context-free grammars one can easily describe a finite number of ways to derive incorrect strings together with their degree of correctness. However, in general there is an infinite number of ways to perform a certain task wrongly. In this paper we introduce a generalization of fuzzy context-free grammars, the so-called fuzzy context-free K-grammars, to model the situation of making a finite choice out of an infinity of possible grammatical errors during each context-free derivation step. Under minor assumptions on the parameter K this model happens to be a very general framework to describe correctly as well as erroneously derived sentences by a single generating mechanism. Our first result characterizes the generating capacity of these fuzzy context-free K-grammars. As consequences we obtain: (i) bounds on modeling grammatical errors within the framework of fuzzy context-free grammars, and (ii) the fact that the family of languages generated by fuzzy context-free K-grammars shares closure properties very similar to those of the family of ordinary context-free languages. The second part of the paper is devoted to a few algorithms to recognize fuzzy context-free languages: viz. a variant of a functional version of Cocke-Younger-Kasami's algorithm and some recursive descent algorithms. These algorithms turn out to be robust in some very elementary sense and they can easily be extended to corresponding parsing algorithms.
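    As a concrete illustration of the degree-valued recognition such algorithms compute, the sketch below is a minimal CYK-style recognizer for a fuzzy context-free grammar in Chomsky normal form, combining degrees with min along a derivation and max over alternative derivations. The grammar, the degree values, and all function names are illustrative assumptions, not the paper's functional formulation or its K-grammar machinery.

```python
# Minimal sketch of a fuzzy CYK recognizer for an assumed CNF grammar with
# max-min semantics.  Rule degrees and the example grammar are illustrative.

def fuzzy_cyk(word, unary, binary):
    """word:   list of terminal symbols
    unary:  dict  A -> list of (terminal, degree)
    binary: dict  A -> list of (B, C, degree)
    Returns a dict mapping nonterminals to their membership degree for `word`."""
    n = len(word)
    if n == 0:
        return {}
    # chart[i][j] maps a nonterminal to the best degree with which it derives word[i..j]
    chart = [[dict() for _ in range(n)] for _ in range(n)]
    for i, a in enumerate(word):
        for A, rules in unary.items():
            for t, d in rules:
                if t == a and d > chart[i][i].get(A, 0.0):
                    chart[i][i][A] = d
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                     # split word[i..k] / word[k+1..j]
                for A, rules in binary.items():
                    for B, C, d in rules:
                        deg = min(d, chart[i][k].get(B, 0.0), chart[k + 1][j].get(C, 0.0))
                        if deg > chart[i][j].get(A, 0.0):   # max over alternative derivations
                            chart[i][j][A] = deg
    return chart[0][n - 1]

# Tiny grammar in which 'd' is a tolerated misspelling of 'b', with degree 0.5.
binary = {"S": [("A", "B", 1.0)]}
unary  = {"A": [("a", 1.0)], "B": [("b", 1.0), ("d", 0.5)]}
print(fuzzy_cyk(list("ab"), unary, binary))   # {'S': 1.0}
print(fuzzy_cyk(list("ad"), unary, binary))   # {'S': 0.5}
```

    On the second, erroneous input the start symbol is still recognized, but only to degree 0.5; this graded acceptance is what makes such recognizers usable on erroneous inputs.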

    Grammars with two-sided contexts

    In a recent paper (M. Barash, A. Okhotin, "Defining contexts in context-free grammars", LATA 2012), the authors introduced an extension of context-free grammars equipped with an operator for referring to the left context of the substring being defined. This paper proposes a more general model, in which context specifications may be two-sided, that is, both the left and the right contexts can be specified by the corresponding operators. The paper gives the definitions and establishes the basic theory of such grammars, leading to a normal form and a parsing algorithm working in time O(n^4), where n is the length of the input string. (Comment: In Proceedings AFL 2014, arXiv:1405.527)

    Towards Robustness in Parsing - Fuzzifying Context-Free Language Recognition

    We discuss the concept of robustness with respect to parsing a context-free language. Our approach is based on the notions of fuzzy language, (generalized) fuzzy context-free grammar, and parser/recognizer for fuzzy languages. As concrete examples we consider a robust version of Cocke-Younger-Kasami's algorithm and a robust kind of recursive descent recognizer.
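    The following sketch conveys the flavour of a robust recursive descent recognizer in this graded sense: an expected terminal that is not found is consumed anyway, but at a reduced membership degree, so near-miss inputs are accepted with a degree below 1 instead of being rejected. The toy grammar S -> a S b | c, the mismatch degree 0.5, and the function names are assumptions for illustration, not the construction used in the paper.

```python
# Sketch of a "robust" recursive descent recognizer for the toy grammar
#     S -> a S b | c
# A mismatching terminal is consumed at a reduced degree instead of causing failure.
# The grammar, the degree 0.5 and all names are assumptions for illustration.

MISMATCH_DEGREE = 0.5

def match(s, i, terminal):
    """Consume one symbol: full degree on a match, reduced degree on a mismatch."""
    if i >= len(s):
        return None                                   # nothing left to read
    return (i + 1, 1.0 if s[i] == terminal else MISMATCH_DEGREE)

def parse_S(s, i):
    """All ways of recognizing S starting at i, as a dict end_position -> best degree."""
    results = {}
    def add(end, deg):
        if deg > results.get(end, 0.0):
            results[end] = deg
    r = match(s, i, "a")                              # alternative  S -> a S b
    if r:
        j, d1 = r
        for k, d2 in parse_S(s, j).items():
            r2 = match(s, k, "b")
            if r2:
                add(r2[0], min(d1, d2, r2[1]))        # degrees combined with min
    r = match(s, i, "c")                              # alternative  S -> c
    if r:
        add(*r)
    return results

def recognize(s):
    """Degree with which the whole string is accepted (0.0 if it is not)."""
    return parse_S(s, 0).get(len(s), 0.0)

print(recognize("aacbb"))   # 1.0  -- grammatical input
print(recognize("aadbb"))   # 0.5  -- one erroneous symbol, accepted at a lower degree
```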

    Parallel on-line parsing in constant time per word

    An on-line parser processes each word as soon as it is typed by the user, without waiting for the end of the sentence. Thus, in an interactive system, a sentence will be parsed almost immediately after the last word has been presented. The complexity of an on-line parser is determined by the resources needed for the analysis of a single word, as it is assumed that previous words have been processed already. Sequential parsing algorithms like CYK or Earley need O(n^2) time for the n-th word. A parallel implementation in O(n) time on O(n) processors is straightforward. In this paper a novel parallel on-line parser is presented that needs O(1) time on O(n^2) processors.
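    To make the per-word cost concrete, the sketch below is a sequential on-line CYK recognizer for an assumed grammar in Chomsky normal form: when the j-th word arrives, only the chart entries for spans ending at position j are computed, which is the O(n^2)-work-per-word behaviour mentioned above. The paper's contribution is to spread that work over O(n^2) processors so each word costs O(1) time; that construction is not reproduced here, and the class and grammar names are illustrative.

```python
# Sketch of a *sequential* on-line CYK recognizer for an assumed CNF grammar:
# read_word() touches only the chart entries for spans that end at the new position,
# i.e. O(n^2) work for the n-th word.

class OnlineCYK:
    def __init__(self, unary, binary, start="S"):
        self.unary = unary        # A -> set of terminals
        self.binary = binary      # A -> set of (B, C) pairs
        self.start = start
        self.chart = {}           # (i, j) -> set of nonterminals deriving word[i..j]
        self.n = 0                # number of words read so far

    def read_word(self, a):
        """Process the next word; report whether the input read so far is a sentence."""
        j = self.n
        self.chart[(j, j)] = {A for A, ts in self.unary.items() if a in ts}
        for i in range(j - 1, -1, -1):                # longer spans ending at j
            cell = set()
            for k in range(i, j):                     # split word[i..k] / word[k+1..j]
                for A, rhss in self.binary.items():
                    for B, C in rhss:
                        if B in self.chart[(i, k)] and C in self.chart[(k + 1, j)]:
                            cell.add(A)
            self.chart[(i, j)] = cell
        self.n += 1
        return self.start in self.chart[(0, j)]

# Toy CNF grammar for { a^m b^m | m >= 1 }:  S -> A X | A B,  X -> S B,  A -> a,  B -> b.
binary = {"S": {("A", "X"), ("A", "B")}, "X": {("S", "B")}}
unary  = {"A": {"a"}, "B": {"b"}}
parser = OnlineCYK(unary, binary)
print([parser.read_word(w) for w in "aabb"])   # [False, False, False, True]
```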

    Probabilistic Parsing Strategies

    We present new results on the relation between purely symbolic context-free parsing strategies and their probabilistic counterparts. Such parsing strategies are seen as constructions of push-down devices from grammars. We show that preservation of probability distribution is possible under two conditions, viz. the correct-prefix property and the property of strong predictiveness. These results generalize existing results in the literature that were obtained by considering parsing strategies in isolation. From our general results we also derive negative results on so-called generalized LR parsing. (Comment: 36 pages, 1 figure)

    Computation of distances for regular and context-free probabilistic languages

    Several mathematical distances between probabilistic languages have been investigated in the literature, motivated by applications in language modeling, computational biology, syntactic pattern matching and machine learning. In most cases, only pairs of probabilistic regular languages were considered. In this paper we extend the previous results to pairs of languages generated by a probabilistic context-free grammar and a probabilistic finite automaton.

    An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities

    We describe an extension of Earley's parser for stochastic context-free grammars that computes the following quantities given a stochastic context-free grammar and an input string: a) probabilities of successive prefixes being generated by the grammar; b) probabilities of substrings being generated by the nonterminals, including the entire string being generated by the grammar; c) most likely (Viterbi) parse of the string; d) posterior expected number of applications of each grammar production, as required for reestimating rule probabilities. (a) and (b) are computed incrementally in a single left-to-right pass over the input. Our algorithm compares favorably to standard bottom-up parsing methods for SCFGs in that it works efficiently on sparse grammars by making use of Earley's top-down control structure. It can process any context-free rule format without conversion to some normal form, and combines computations for (a) through (d) in a single algorithm. Finally, the algorithm has simple extensions for processing partially bracketed inputs, and for finding partial parses and their likelihoods on ungrammatical inputs. (Comment: 45 pages; slightly shortened version to appear in Computational Linguistics 2)
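    For readers who want the control structure in front of them, here is a compact plain Earley recognizer; it is only the symbolic skeleton, not the probabilistic extension described above, and the comments merely indicate where the forward and inner probabilities of (a) and (b) would be attached. The grammar encoding and all names are assumptions.

```python
# A compact plain (non-probabilistic) Earley recognizer, shown only to recall the
# predict / scan / complete control structure that the probabilistic extension above
# builds on; the comments mark where forward and inner probabilities would be kept.

from collections import namedtuple

Item = namedtuple("Item", "lhs rhs dot start")       # lhs -> rhs with a dot, origin index

def earley_recognize(words, grammar, start="S"):
    """grammar: dict  lhs -> list of right-hand sides (tuples of symbols);
    a symbol is a nonterminal iff it is a key of `grammar` (no epsilon rules)."""
    n = len(words)
    chart = [set() for _ in range(n + 1)]
    for rhs in grammar[start]:
        chart[0].add(Item(start, rhs, 0, 0))
    for i in range(n + 1):
        agenda = list(chart[i])
        while agenda:
            item = agenda.pop()
            if item.dot < len(item.rhs):
                sym = item.rhs[item.dot]
                if sym in grammar:                    # PREDICT (forward probs summed here)
                    for rhs in grammar[sym]:
                        new = Item(sym, rhs, 0, i)
                        if new not in chart[i]:
                            chart[i].add(new)
                            agenda.append(new)
                elif i < n and words[i] == sym:       # SCAN (probabilities carried over)
                    chart[i + 1].add(Item(item.lhs, item.rhs, item.dot + 1, item.start))
            else:                                     # COMPLETE (inner probs multiplied in)
                for parent in list(chart[item.start]):
                    if parent.dot < len(parent.rhs) and parent.rhs[parent.dot] == item.lhs:
                        new = Item(parent.lhs, parent.rhs, parent.dot + 1, parent.start)
                        if new not in chart[i]:
                            chart[i].add(new)
                            agenda.append(new)
    return any(it.lhs == start and it.dot == len(it.rhs) and it.start == 0
               for it in chart[n])

grammar = {"S": [("S", "+", "M"), ("M",)], "M": [("M", "*", "a"), ("a",)]}
print(earley_recognize(list("a+a*a"), grammar))      # True
```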

    Principles and Implementation of Deductive Parsing

    We present a system for generating parsers based directly on the metaphor of parsing as deduction. Parsing algorithms can be represented directly as deduction systems, and a single deduction engine can interpret such deduction systems so as to implement the corresponding parser. The method generalizes easily to parsers for augmented phrase structure formalisms, such as definite-clause grammars and other logic grammar formalisms, and has been used for rapid prototyping of parsing algorithms for a variety of formalisms including variants of tree-adjoining grammars, categorial grammars, and lexicalized context-free grammars. (Comment: 69 pages, includes full Prolog code)
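    The core idea of a single deduction engine interpreting a deduction system can be sketched in a few lines: an agenda-driven loop closes a set of axiom items under inference rules, and a particular parser is obtained by choosing the item form and the rules. The sketch below instantiates this with CYK-style items (A, i, j); it is a Python illustration of the scheme, not the paper's Prolog implementation, and the item and rule encodings are assumptions.

```python
# Sketch of "parsing as deduction": a generic agenda-driven deduction engine that
# closes a set of axioms under inference rules; a parser is obtained by choosing the
# item form and the rules.  Here the rules encode CYK with items (A, i, j).

def deduce(axioms, rules):
    """Close `axioms` under `rules`; each rule maps (new_item, chart) -> new items."""
    chart, agenda = set(), list(axioms)
    while agenda:
        item = agenda.pop()
        if item in chart:
            continue
        chart.add(item)
        for rule in rules:
            for new in rule(item, chart):
                if new not in chart:
                    agenda.append(new)
    return chart

def cyk_rule(binary):
    """From (B, i, k) and (C, k, j) infer (A, i, j) whenever A -> B C is a rule."""
    productions = [(A, B, C) for A, rhss in binary.items() for (B, C) in rhss]
    def combine(item, chart):
        X, i, j = item
        for (Y, k, l) in chart:
            for A, B, C in productions:
                if (B, C) == (X, Y) and j == k:       # the new item is the left child
                    yield (A, i, l)
                if (B, C) == (Y, X) and l == i:       # the new item is the right child
                    yield (A, k, j)
    return combine

# Grammar: S -> A B, A -> A A, A -> a, B -> b  (language a+ b); axioms come from scanning.
binary = {"S": {("A", "B")}, "A": {("A", "A")}}
unary  = {"A": {"a"}, "B": {"b"}}
words = list("aab")
axioms = [(A, i, i + 1) for i, w in enumerate(words)
          for A, ts in unary.items() if w in ts]
chart = deduce(axioms, [cyk_rule(binary)])
print(("S", 0, len(words)) in chart)   # True: the goal item is derivable
```

    Swapping in a different item form and rule set yields other parsers within the same engine, which is the generality the abstract describes.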

    Time-Bounded Controlled Bidirectional Grammars

    We study regularly controlled bidirectional (RCB) grammars from the viewpoint of time-bounded grammars. RCB-grammars are context-free grammars whose rules can be used in a productive and in a reductive fashion, while the application of these rules is controlled by a regular language. Several modes of derivation can be distinguished for this kind of grammar. A time-bound on such a grammar is a measure of its derivational complexity. For some families of time bounds and for some modes of derivation we establish closure properties and a normal form theorem. In addition, parsing algorithms are given for some modes of derivation. We conclude by considering generalizations with respect to the family of control languages and the family of bounding functions.