5,792 research outputs found

    Generating similar images using bag context picture grammars

    A Dissertation submitted to the Faculty of Science in partial fulfilment of the requirements for the degree of Master of Science, University of the Witwatersrand, Johannesburg, February 2018. Formal language theory was born in the middle of the 20th century as a tool for modelling and investigating the syntax of natural languages, and was developed further in connection with the handling of programming languages. Bag context grammars are a fairly new grammar class, for which bag context tree grammars have been defined; the bag context is used to regulate rewriting in tree grammars. In this dissertation we use bag context to regulate rewriting in picture grammars and thus to generate similar pictures. The work is exploratory, since bag context picture grammars had not previously been defined. We define bag context picture grammars and use examples to show how they can generate pictures. Pictures generated by random context picture grammars and three of their sub-classes are selected, and bag context picture grammars are used to generate the same pictures. A lemma is given that converts random context picture grammars and three of their sub-classes into equivalent bag context picture grammars. For each grammar selected, an equivalent bag context picture grammar is created and used to generate several pictures that are similar to each other; similarity is judged by noting the small differences between pictures that belong to the same gallery. In generating similar pictures with bag context picture grammars, we find that bag context gives a useful level of control over which rules are applied in a grammar.
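    To make the bag-context mechanism concrete, the following is a minimal, hypothetical sketch of bag-regulated rewriting over strings rather than pictures (the dissertation's grammars rewrite variables inside subdivided squares; the rule set, limits, and example grammar here are invented for illustration). Each rule carries a lower-limit vector, an upper-limit vector, and an adjustment vector; a rule may fire only while the bag lies within its limits, and firing it updates the bag, which is how the grammar controls which rules are applied and hence how similar the generated outputs are.

```python
# Illustrative sketch of bag-context-regulated rewriting (strings, not pictures).
from dataclasses import dataclass

@dataclass
class Rule:
    lhs: str        # variable to rewrite
    rhs: list       # replacement symbols
    lower: tuple    # componentwise lower limit on the bag
    upper: tuple    # componentwise upper limit on the bag
    adjust: tuple   # added to the bag after the rule is applied

def applicable(rule, bag):
    return all(lo <= b <= hi for lo, b, hi in zip(rule.lower, bag, rule.upper))

def derive(start, rules, bag, max_steps=20):
    """Repeatedly rewrite the leftmost symbol that has an applicable rule."""
    form = [start]
    for _ in range(max_steps):
        for i, sym in enumerate(form):
            matching = [r for r in rules if r.lhs == sym and applicable(r, bag)]
            if matching:
                rule = matching[0]
                form[i:i + 1] = rule.rhs
                bag = tuple(b + a for b, a in zip(bag, rule.adjust))
                break
        else:
            break  # no variable could be rewritten
    return form, bag

# Example: the bag counts expansions of S; the growing rule may fire while the
# counter is at most 2 (three times in total), then only the terminating rule applies.
INF = float("inf")
rules = [
    Rule("S", ["a", "S"], lower=(0,), upper=(2,),   adjust=(1,)),
    Rule("S", ["b"],      lower=(3,), upper=(INF,), adjust=(0,)),
]
print(derive("S", rules, bag=(0,)))   # (['a', 'a', 'a', 'b'], (3,))
```

    Changing the limit vectors while keeping the rules fixed yields different but related derivations, which is the sense in which the bag steers the generation of similar outputs.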

    Search and Result Presentation in Scientific Workflow Repositories

    We study the problem of searching a repository of complex hierarchical workflows whose component modules, both composite and atomic, have been annotated with keywords. Since keyword search does not use the graph structure of a workflow, we develop a model of workflows using context-free bag grammars. We then give efficient polynomial-time algorithms that, given a workflow and a keyword query, determine whether some execution of the workflow matches the query. Based on these algorithms we develop a search and ranking solution that efficiently retrieves the top-k grammars from a repository. Finally, we propose a novel result presentation method for grammars matching a keyword query, based on representative parse-trees. The effectiveness of our approach is validated through an extensive experimental evaluation.
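    One way to read the matching problem: treat the workflow as a grammar whose modules are annotated with keywords, and ask whether some single derivation (execution) from the start module collects every query keyword. The sketch below is a hypothetical reconstruction under that simplified reading, not the paper's algorithm or data model; it computes, for every symbol, the query-keyword subsets achievable by some expansion, using fixed-point iteration.

```python
# Hypothetical sketch: does some execution of a hierarchical workflow, modelled as
# a context-free grammar with keyword-annotated modules, cover every query keyword?
from itertools import product

def coverable_subsets(grammar, annotations, query):
    """For each symbol, the sets of query keywords that some expansion of it can cover."""
    query = frozenset(query)
    def own(sym):
        return frozenset(annotations.get(sym, ())) & query
    # Every symbol mentioned anywhere; symbols without productions are atomic modules.
    symbols = set(grammar) | {s for alts in grammar.values() for rhs in alts for s in rhs}
    cover = {s: (set() if s in grammar else {own(s)}) for s in symbols}
    changed = True
    while changed:                                   # fixed-point iteration
        changed = False
        for nt, alternatives in grammar.items():
            for rhs in alternatives:
                if not all(cover[s] for s in rhs):
                    continue                         # a component has no known expansion yet
                # Pick one achievable keyword set per component and combine them.
                for choice in product(*(cover[s] for s in rhs)):
                    combined = own(nt).union(*choice)
                    if combined not in cover[nt]:
                        cover[nt].add(combined)
                        changed = True
    return cover

def matches(grammar, annotations, start, query):
    """True iff some execution of the workflow rooted at `start` covers the whole query."""
    return frozenset(query) in coverable_subsets(grammar, annotations, query)[start]

# Invented example: a composite Analyze step that either aligns or clusters, then plots.
grammar = {"Analyze": [["Align", "Plot"], ["Cluster", "Plot"]]}
annotations = {"Align": {"alignment"}, "Cluster": {"clustering"}, "Plot": {"visualization"}}
print(matches(grammar, annotations, "Analyze", {"alignment", "visualization"}))  # True
print(matches(grammar, annotations, "Analyze", {"alignment", "clustering"}))     # False
```

    Because only subsets of the query are tracked, the table holds at most 2^|query| entries per symbol, so the computation stays polynomial in the grammar for a fixed, small keyword query.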

    Attribute Multiset Grammars for Global Explanations of Activities


    Data-Oriented Language Processing. An Overview

    During the last few years, a new approach to language processing has started to emerge, which has become known under various labels such as "data-oriented parsing", "corpus-based interpretation", and "tree-bank grammar" (cf. van den Berg et al. 1994; Bod 1992-96; Bod et al. 1996a/b; Bonnema 1996; Charniak 1996a/b; Goodman 1996; Kaplan 1996; Rajman 1995a/b; Scha 1990-92; Sekine & Grishman 1995; Sima'an et al. 1994; Sima'an 1995-96; Tugwell 1995). This approach, which we will call "data-oriented processing" or "DOP", embodies the assumption that human language perception and production work with representations of concrete past language experiences, rather than with abstract linguistic rules. The models that instantiate this approach therefore maintain large corpora of linguistic representations of previously occurring utterances. When processing a new input utterance, analyses of this utterance are constructed by combining fragments from the corpus; the occurrence-frequencies of the fragments are used to estimate which analysis is the most probable one. In this paper we give an in-depth discussion of a data-oriented processing model which employs a corpus of labelled phrase-structure trees. Then we review some other models that instantiate the DOP approach. Many of these models also employ labelled phrase-structure trees, but use different criteria for extracting fragments from the corpus or employ different disambiguation strategies (Bod 1996b; Charniak 1996a/b; Goodman 1996; Rajman 1995a/b; Sekine & Grishman 1995; Sima'an 1995-96); other models use richer formalisms for their corpus annotations (van den Berg et al. 1994; Bod et al. 1996a/b; Bonnema 1996; Kaplan 1996; Tugwell 1995). Comment: 34 pages, Postscript
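    As a toy illustration of the frequency-based disambiguation that DOP relies on, the sketch below assumes a DOP1-style model (the fragments, counts, and sentence are invented): each corpus fragment is weighted by its count divided by the total count of fragments with the same root label, and a derivation's probability is the product of its fragments' weights. A full DOP model would also sum the probabilities of all derivations that yield the same parse tree, which this sketch does not attempt.

```python
# Toy DOP1-style sketch: derivation probability from corpus fragment frequencies.
from collections import Counter

def fragment_probability(fragment, counts, root_totals):
    """P(fragment) = count(fragment) / total count of fragments with the same root label."""
    return counts[fragment] / root_totals[fragment[0]]

def derivation_probability(fragments, counts):
    """Probability of a derivation as the product of its fragments' probabilities."""
    root_totals = Counter()
    for (root, _), c in counts.items():
        root_totals[root] += c
    p = 1.0
    for frag in fragments:
        p *= fragment_probability(frag, counts, root_totals)
    return p

# Invented corpus statistics: fragments are (root label, bracketed shape) pairs
# with their occurrence counts in the treebank.
counts = {
    ("S",  "(S (NP she) VP)"): 3,
    ("S",  "(S NP VP)"):       7,
    ("VP", "(VP (V saw) NP)"): 4,
    ("VP", "(VP V NP)"):       6,
    ("NP", "(NP the dress)"):  2,
    ("NP", "(NP Det N)"):      8,
}

# One derivation of "she saw the dress", built from three corpus fragments.
derivation = [
    ("S",  "(S (NP she) VP)"),
    ("VP", "(VP (V saw) NP)"),
    ("NP", "(NP the dress)"),
]
print(derivation_probability(derivation, counts))  # 0.3 * 0.4 * 0.2 = 0.024
```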

    Probabilistic parsing

    Postprint