
    Linear data structures for storage allocation in attribute evaluators

    Practical and theoretical results are presented concerning the use of global storage allocation for the instances of applied occurrences of attributes. The practical results focus on necessary and sufficient conditions, decidable at evaluator construction time, under which an evaluator can allocate the instances of an applied occurrence to a number of global variables, stacks and queues. Checking these conditions takes polynomial time for a simple multi-visit evaluator and exponential time for an absolutely noncircular evaluator. The theoretical results concern the data structures required for the global storage allocation of the instances of applied occurrences in simple multi-X evaluators, where X ∈ {pass, sweep, visit}. For this purpose, the general class of basic linear data structures is introduced. This class of data structures can also be used to explore the theoretical possibilities and limitations of storage allocation techniques in domains other than attribute grammars.
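    As a rough illustration of the idea (not taken from the paper), the Python sketch below allocates all instances of a single inherited attribute to one global stack during a depth-first visit, instead of storing one field per node; the Node type, the "depth" attribute, and the evaluation order are all hypothetical.

```python
# Hypothetical sketch: an inherited attribute "depth" whose instances have
# properly nested lifetimes, so all of them can share a single global stack
# instead of one storage field per tree node.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

depth_stack = []          # one global stack replaces per-node "depth" fields
max_depth_seen = 0        # a synthesized result computed from the attribute

def evaluate(node):
    """Push the attribute instance on entry, pop it on exit (nested lifetime)."""
    global max_depth_seen
    current = (depth_stack[-1] + 1) if depth_stack else 0
    depth_stack.append(current)            # allocate this node's instance
    max_depth_seen = max(max_depth_seen, current)
    for child in node.children:
        evaluate(child)
    depth_stack.pop()                      # the instance is dead after the visit

tree = Node("a", [Node("b", [Node("c")]), Node("d")])
evaluate(tree)
print(max_depth_seen)   # -> 2
```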

    Fast and Tiny Structural Self-Indexes for XML

    XML document markup is highly repetitive and therefore well compressible using dictionary-based methods such as DAGs or grammars. In the context of selectivity estimation, grammar-compressed trees have previously been used as a synopsis for structural XPath queries. Here a fully-fledged index over such grammars is presented. The index allows arbitrary tree algorithms to be executed with a slow-down comparable to the space improvement. More interestingly, certain algorithms execute much faster over the index (because no decompression occurs). For example, structural XPath count queries evaluate faster over the index than in previous XPath implementations, often by two orders of magnitude. The index also allows XML results (including texts) to be serialized faster than in previous systems, by a factor of roughly 2-3. This is due to efficient copy handling of grammar repetitions, and because materialization is avoided entirely. In order to compare with twig join implementations, we implemented a materializer which writes out pre-order numbers of result nodes, and show its competitiveness.
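    A minimal sketch of the underlying idea, assuming a toy hash-consed DAG rather than the paper's grammar-based index: identical subtrees are shared, and a structural count query is answered with one memoized count per shared node, so the tree never has to be decompressed.

```python
# Hypothetical sketch: DAG-compress a tree by sharing identical subtrees,
# then answer a "count nodes with label X" query directly on the DAG.

def build_dag(label, children=()):
    """Hash-consing constructor: identical subtrees become one shared node."""
    key = (label, children)
    node = build_dag.pool.get(key)
    if node is None:
        node = key
        build_dag.pool[key] = node
    return node
build_dag.pool = {}

def count_label(node, target, memo=None):
    """Count occurrences of `target` in the tree that the DAG represents."""
    if memo is None:
        memo = {}
    if id(node) in memo:
        return memo[id(node)]
    label, children = node
    total = (label == target) + sum(count_label(c, target, memo) for c in children)
    memo[id(node)] = total               # computed once per shared DAG node
    return total

# <a><b/><b/></a>, where both <b/> subtrees are a single shared DAG node
b = build_dag("b")
root = build_dag("a", (b, b))
print(count_label(root, "b"))   # -> 2, without expanding the shared subtree twice
```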

    Memoized zipper-based attribute grammars and their higher order extension

    Attribute grammars are a powerful, well-known formalism to implement and reason about programs which, by design, are conveniently modular. In this work we focus on a state-of-the-art zipper-based embedding of classic attribute grammars and higher-order attribute grammars. We improve their execution performance by controlling attribute (re)evaluation by means of memoization techniques. We present the results of our optimizations by comparing their impact on various implementations of different, well-studied attribute grammars and their higher-order extensions.
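    The sketch below illustrates the memoization idea only; it is a plain Python caricature, not the zipper-based Haskell embedding the paper builds on. The Node type, the `total` attribute, and the call counter are all hypothetical.

```python
# Hypothetical sketch: caching attribute values per (node, attribute) so that
# repeated demands for the same attribute instance do not recompute it.

class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

calls = {"count": 0}     # counts how often the attribute is actually computed
memo = {}

def total(node):
    """Synthesized attribute: sum of values in the subtree, memoized."""
    key = (id(node), "total")
    if key in memo:
        return memo[key]
    calls["count"] += 1
    result = node.value + sum(total(c) for c in node.children)
    memo[key] = result
    return result

shared = Node(1, [Node(2)])
root = Node(0, [shared, shared])     # the same subtree occurs twice
print(total(root))                   # -> 6
print(calls["count"])                # -> 3: the shared subtree is evaluated once
```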

    Simple multi-visit attribute grammars

    An attribute grammar is simple multi-visit if each attribute of a nonterminal has a fixed visit-number associated with it such that, during attribute evaluation, the attributes of a node which have visit-number j are computed at the jth visit to the node. An attribute grammar is l-ordered if for each nonterminal a linear order of its attributes exists such that the attributes of a node can always be evaluated in that order (cf. the work of Kastens). An attribute grammar is simple multi-visit if and only if it is l-ordered. Every noncircular attribute grammar can be transformed into an equivalent simple multi-visit attribute grammar which uses the same semantic operations. For a given distribution of visit-numbers over the attributes, it can be decided in polynomial time whether the attributes can be evaluated according to these visit-numbers. The problem whether an attribute grammar is simple multi-visit is NP-complete.
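    As a hedged illustration of what a fixed visit-number assignment means (using hypothetical names, not the paper's formalism): in the sketch below, attributes with visit-number 1 are computed during a first visit to every node, and attributes with visit-number 2 are computed during a second visit that may use results of the first.

```python
# Hypothetical sketch of a two-visit evaluation: the synthesized attribute
# "height" has visit-number 1; the attribute "label" has visit-number 2 and
# depends on an inherited path as well as on "height" from the first visit.

class Node:
    def __init__(self, children=()):
        self.children = list(children)
        self.attrs = {}

def visit1(node):
    """Visit-number 1: synthesized attribute `height` (bottom-up)."""
    h = 1 + max((visit1(c) for c in node.children), default=0)
    node.attrs["height"] = h
    return h

def visit2(node, path="r"):
    """Visit-number 2: attribute `label`, needing an inherited `path`."""
    node.attrs["label"] = f"{path}(h={node.attrs['height']})"
    for i, c in enumerate(node.children):
        visit2(c, f"{path}.{i}")

tree = Node([Node(), Node([Node()])])
visit1(tree)          # first visit to every node
visit2(tree)          # second visit uses results of the first
print(tree.children[1].children[0].attrs["label"])   # -> "r.1.0(h=1)"
```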

    Formal Languages and Compilation

    This textbook describes the essential principles and methods used for defining the syntax of artificial languages, and for designing efficient parsing algorithms and syntax-directed translators with semantic attributes. A comprehensive selection of topics is presented within a rigorous, unified framework, illustrated by numerous practical examples. Features and topics: presents a novel conceptual approach to parsing algorithms that applies to extended BNF grammars, together with a parallel parsing algorithm; supplies supplementary teaching tools, including course slides and exercises with solutions, at an associated website; unifies the concepts and notations used in different approaches, enabling an extended coverage of methods with a reduced number of definitions; systematically discusses ambiguous forms, allowing readers to avoid pitfalls when designing grammars; describes all algorithms in pseudocode, so that detailed knowledge of a specific programming language is not necessary; makes extensive use of theoretical models of automata, transducers and formal grammars; includes concise coverage of algorithms for processing regular expressions and finite automata; and introduces static program analysis based on flow equations. This clearly written, classroom-tested textbook is an ideal guide to the fundamentals of this field for advanced undergraduate and graduate students in computer science and computer engineering. Some background in programming is required, and readers should also be familiar with basic set theory, algebra and logic.

    Introduction

    This chapter motivates why it is useful to consider the topic of derivations and filtering in more detail. We argue against the popular belief that the minimalist program and optimality theory are incompatible theories in that the former places the explanatory burden on the generative device (the computational system) whereas the latter places it on the filtering device (the OT evaluator). Although this belief may be correct insofar as it describes existing tendencies, we argue that minimalist and optimality-theoretic approaches normally adopt more or less the same global architecture of grammar: both assume that a generator defines a set S of potentially well-formed expressions that can be generated on the basis of a given input, and that an evaluator selects the expressions from S that are actually grammatical in a given language L. For this reason, we believe it is a high priority to investigate the role of the two components in more detail, in the hope that this will provide a better understanding of the differences and similarities between the two approaches. We conclude this introduction with a brief review of the studies collected in this book.