
    The information rate and other parameters of probabilistic context free grammars and their parsers

    Probabilistic context-free languages are defined by giving predetermined probabilities (preprobabilities) for the choices that their grammars make when generating. Chapter 1 shows how to carry out this definition, and how to calculate some parameters of the language; for instance: average length of word, mean square length, digraph probabilities, entropy. Chapter 2 introduces generating functions related to grammars. It uses them to derive a condition for which preprobabilities give rise to well-formed probability spaces. Two functions, the length and entropy generating functions, are studied in detail. They are algebraic functions, can in general only be defined implicitly, but can be used to give unified explicit methods of calculating all the parameters of Chapter 1 (and more). Chapter 3 defines and shows how to calculate the information rate of a language. As a by-product, McMillan's theorem is extended (for a small class of processes) to an analogue of the Central Limit Theorem. Chapter 4 compares the efficiencies of different parsing algorithms. In a reasonable sense, all deterministic parsers take equal average time to parse, any backtracking parser is slower, but there is no general algorithm for calculating the speed of a backtracking parser.
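    To illustrate the notions of preprobabilities and average word length, the following sketch uses a toy one-nonterminal grammar (an illustrative example of mine, not a grammar from the thesis). For S -> a S with preprobability 1/2 and S -> a with preprobability 1/2, the length generating function satisfies L(z) = (z/2) L(z) + z/2, so L(z) = z / (2 - z) and the mean word length is L'(1) = 2; the sampler below checks this estimate empirically.

    ```python
    import random

    # Toy PCFG (hypothetical, for illustration only):
    #   S -> a S   with preprobability 1/2
    #   S -> a     with preprobability 1/2
    # Word lengths are geometric with mean 2, matching L'(1) for
    # the length generating function L(z) = z / (2 - z).

    def sample_length(rng, p_continue=0.5):
        """Length of the terminal string from one random derivation of S."""
        n = 1                          # each rule application emits one 'a'
        while rng.random() < p_continue:
            n += 1
        return n

    rng = random.Random(42)
    trials = 200_000
    mean = sum(sample_length(rng) for _ in range(trials)) / trials
    print(round(mean, 1))              # should be close to the analytic value 2
    ```

    Whether a choice of preprobabilities yields a well-formed probability space at all (the condition derived in Chapter 2) is a separate question; for this toy grammar the derivation terminates with probability 1, so the sampler always halts.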