Left Recursion in Parsing Expression Grammars
Parsing Expression Grammars (PEGs) are a formalism that can describe all
deterministic context-free languages through a set of rules that specify a
top-down parser for some language. PEGs are easy to use, and there are
efficient implementations of PEG libraries in several programming languages.
A frequently missed feature of PEGs is left recursion, which is commonly used
in Context-Free Grammars (CFGs) to encode left-associative operations. We
present a simple conservative extension to the semantics of PEGs that gives
useful meaning to direct and indirect left-recursive rules, and show that our
extensions make it easy to express left-recursive idioms from CFGs in PEGs,
with similar results. We prove the conservativeness of these extensions, and
also prove that they work with any left-recursive PEG.
PEGs can also be compiled to programs in a low-level parsing machine. We
present an extension to the semantics of the operations of this parsing machine
that let it interpret left-recursive PEGs, and prove that this extension is
correct with regard to our semantics for left-recursive PEGs.
Comment: Extended version of the paper "Left Recursion in Parsing Expression
Grammars", published at the 2012 Brazilian Symposium on Programming
Languages.
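As a hedged illustration of the general idea (not the paper's formal semantics), left recursion in packrat-style parsers is often supported by "growing the seed": the left-recursive rule is first seeded with its non-recursive alternative, then the recursive alternative is re-applied while the match keeps lengthening. A minimal Python sketch, with a toy grammar of our own choosing:

```python
# Toy grammar (ours, not the paper's):  expr <- expr '-' num / num
# Left-associative subtraction: "8-3-2" must parse as (8-3)-2 = 3.

def parse_num(s, pos):
    # num <- [0-9]  (single digit, for brevity)
    if pos < len(s) and s[pos].isdigit():
        return int(s[pos]), pos + 1
    return None

def parse_expr(s, pos):
    # Seed: behave as if the recursive call failed, so only the
    # non-recursive alternative 'num' can match.
    seed = parse_num(s, pos)
    if seed is None:
        return None
    value, end = seed
    # Grow: re-apply the recursive alternative while it consumes more
    # input; each iteration plays the role of one memo-table update.
    while end < len(s) and s[end] == '-':
        nxt = parse_num(s, end + 1)
        if nxt is None:
            break
        value, end = value - nxt[0], nxt[1]
    return value, end
```

Here `parse_expr("8-3-2", 0)` yields `(3, 5)`: the left-associative result a left-recursive CFG rule would give, rather than the right-associative one.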
Precedence Automata and Languages
Operator precedence grammars define a classical Boolean and deterministic
context-free family (called Floyd languages or FLs). FLs have been shown to
strictly include the well-known visibly pushdown languages, and enjoy the same
nice closure properties. We introduce here Floyd automata, an equivalent
operational formalism for defining FLs. This also makes it possible to extend
the class to infinite strings, enabling, for instance, model checking.
Comment: Extended version of the paper which appeared in Proceedings of CSR
2011, Lecture Notes in Computer Science, vol. 6651, pp. 291-304, 2011.
Theorem 1 has been corrected and a complete proof is given in the Appendix.
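To make the operator-precedence idea concrete, here is a toy sketch (our own table and names, not the paper's construction) of a parse driven purely by precedence relations between terminals: '<' means the left terminal yields precedence, '>' that it takes precedence (signalling a reduction), and the parser never needs to consult nonterminal context:

```python
# Precedence relations for a toy grammar E -> E+E | E*E | n, where the
# relations (not the grammar) disambiguate: '*' binds tighter than '+'.
PREC = {
    ('+', '+'): '>', ('+', '*'): '<', ('+', 'n'): '<', ('+', '$'): '>',
    ('*', '+'): '>', ('*', '*'): '>', ('*', 'n'): '<', ('*', '$'): '>',
    ('n', '+'): '>', ('n', '*'): '>', ('n', '$'): '>',
    ('$', '+'): '<', ('$', '*'): '<', ('$', 'n'): '<',
}

def op_parse(tokens):
    # tokens: list of ('n', value) or ('+', None) / ('*', None)
    stack, vals = ['$'], []
    toks = list(tokens) + [('$', None)]
    i = 0
    while True:
        top = next(t for t in reversed(stack) if t != 'E')  # topmost terminal
        kind, val = toks[i]
        if top == '$' and kind == '$':
            return vals[-1]
        if PREC[(top, kind)] in ('<', '='):      # shift
            stack.append(kind)
            if kind == 'n':
                vals.append(val)
            i += 1
        else:                                    # '>': reduce topmost handle
            if stack[-1] == 'n':                             # E -> n
                stack[-1] = 'E'
            elif stack[-3:] == ['E', '+', 'E']:              # E -> E + E
                stack[-3:] = ['E']
                b, a = vals.pop(), vals.pop()
                vals.append(a + b)
            elif stack[-3:] == ['E', '*', 'E']:              # E -> E * E
                stack[-3:] = ['E']
                b, a = vals.pop(), vals.pop()
                vals.append(a * b)
```

For "2+3*4" this yields 14: the table alone forces the multiplication to reduce first, which is the determinism the abstract attributes to FLs.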
Parallel parsing made practical
The property of local parsability makes it possible to parse an input by inspecting only a bounded-length string around the current token. This in turn enables the construction of a scalable, data-parallel parsing algorithm, which is presented in this work. Such an algorithm lends itself to automatic generation via a parser generator tool, which we have implemented and also present here. Furthermore, to complete the framework of a parallel input analysis, a parallel scanner can also be combined with the parser. To demonstrate the practicality of parallel lexing and parsing, we report the results of adapting JSON and Lua to a form fit for parallel parsing (i.e., an operator-precedence grammar) through simple grammar changes and scanning transformations. The approach is validated with performance figures from both high-performance and embedded multicore platforms, obtained by analyzing real-world inputs as a test bench. The results show that our approach matches or exceeds the performance of production-grade LR parsers in sequential execution, and achieves significant speedups and good scaling on multicore machines. The work concludes with a broad and critical survey of past work on parallel parsing and future directions for integration with semantic analysis and incremental parsing.
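The core of the local-parsability argument can be sketched with a deliberately simplified example (our own code, not the paper's algorithm): in an operator-precedence setting, a handle such as n*n can be reduced by looking only at its bounded neighborhood, so each chunk of the input can be reduced independently before a cheap merge pass:

```python
def reduce_mul(tokens):
    # Reduce every multiplicative handle (n '*' n) that lies entirely
    # inside this token list; partial handles at chunk edges are kept.
    out = []
    for tok in tokens:
        if (isinstance(tok, int) and len(out) >= 2
                and out[-1] == '*' and isinstance(out[-2], int)):
            out[-2:] = [out[-2] * tok]   # n '*' n  ->  n
        else:
            out.append(tok)
    return out

def parallel_eval(tokens, split):
    # The two chunk reductions are independent, so they could run on
    # separate cores; the merge pass only re-examines the boundary.
    left = reduce_mul(tokens[:split])
    right = reduce_mul(tokens[split:])
    merged = reduce_mul(left + right)
    return sum(t for t in merged if isinstance(t, int))  # fold '+' last
```

For the token list of "2+3*4+5*6" the result is 44 regardless of where the input is split, even in the middle of a multiplicative handle: the unfinished handle simply survives the chunk pass and is reduced at the merge.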
Practical LR Parser Generation
Parsing is a fundamental building block in modern compilers, and for
industrial programming languages, it is a surprisingly involved task. There are
known approaches to generate parsers automatically, but the prevailing
consensus is that automatic parser generation is not practical for real
programming languages: LR/LALR parsers are considered far too restrictive
in the grammars they support, and often too inefficient in practice. As a
result, virtually all modern languages use
recursive-descent parsers written by hand, a lengthy and error-prone process
that dramatically increases the barrier to new programming language
development.
In this work we demonstrate that, contrary to the prevailing consensus, we
can have the best of both worlds: for a very general, practical class of
grammars -- a strict superset of Knuth's canonical LR -- we can generate
parsers automatically, and the resulting parser code, as well as the generation
procedure itself, is highly efficient. This advance relies on several new
ideas, including novel automata optimization procedures; a new grammar
transformation ("CPS"); per-symbol attributes; recursive-descent actions; and
an extension of canonical LR parsing, which we refer to as XLR, which endows
shift/reduce parsers with the power of bounded nondeterministic choice.
With these ingredients, we can automatically generate efficient parsers for
virtually all programming languages that are intuitively easy to parse -- a
claim we support experimentally, by implementing the new algorithms in a new
software tool called langcc, and running them on syntax specifications for
Golang 1.17.8 and Python 3.9.12. The tool handles both languages automatically,
and the generated code, when run on standard codebases, is 1.2x faster than the
corresponding hand-written parser for Golang and 4.3x faster than the CPython
parser.
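As background for the shift/reduce model these results build on (this is the textbook mechanism, not langcc's algorithm), note that an LR-style parser handles left-recursive rules directly, which is exactly what naive recursive descent cannot do. A minimal hand-coded sketch for an illustrative grammar:

```python
def shift_reduce(tokens):
    # Tiny shift/reduce loop for the left-recursive grammar
    #     E -> E '+' n | n
    # The growing E simply stays on the stack and each new n extends
    # it, so the parse tree comes out left-associative for free.
    stack = []
    for tok in tokens:
        stack.append(tok)                          # shift
        if isinstance(tok, int):                   # a handle is on top
            if len(stack) >= 3 and stack[-2] == '+':
                right = stack.pop()
                stack.pop()                        # discard '+'
                left = stack.pop()
                stack.append(('+', left, right))   # reduce E -> E '+' n
            # else: reduce E -> n (the int itself stands for E)
    assert len(stack) == 1
    return stack[0]
```

For `[1, '+', 2, '+', 3]` this builds `('+', ('+', 1, 2), 3)`, i.e., (1+2)+3, with a constant-size lookahead and no backtracking.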