105 research outputs found

    Eelco Visser - An Exceptional SLE Researcher

    On Solving Solved Problems

    Some problems are considered solved by the research community. But are they really, and does that mean we should stop investigating them? In this essay, I argue that "solved" problems often only appear solved on the surface, while fundamental open research problems lurk below it. It requires dedicated researchers to discover those open problems by applying the existing solutions and putting them to the test.

    A personal retrospective on language workbenches

    Model-driven software engineering, and domain-specific languages in particular, have contributed to improving the quality of software and the efficiency of software development. However, the design and implementation of domain-specific languages still requires an enormous investment. Language workbenches are the most important tools in the field of software language engineering. Their introduction has partly alleviated the development effort, but a few major challenges remain. This paper presents a personal perspective on the development of tools for language engineering, language workbenches in particular, and the challenges that lie ahead.

    Languages of games and play: A systematic mapping study

    Digital games are a powerful means for creating enticing, beautiful, educational, and often highly addictive interactive experiences that impact the lives of billions of players worldwide. We explore what informs the design and construction of good games in order to learn how to speed up game development. In particular, we study to what extent languages, notations, patterns, and tools can offer experts the theoretical foundations, systematic techniques, and practical solutions they need to raise their productivity and improve the quality of games and play. Despite the growing number of publications on this topic, there is currently no overview describing the state of the art that relates research areas, goals, and applications. As a result, efforts and successes are often one-off, lessons learned go overlooked, language reuse remains minimal, and opportunities for collaboration and synergy are lost. We present a systematic map that identifies relevant publications and gives an overview of research areas and publication venues. In addition, we categorize research perspectives along common objectives, techniques, and approaches, illustrated by summaries of selected languages. Finally, we distill challenges and opportunities for future research and development.

    Towards Zero-Overhead Disambiguation of Deep Priority Conflicts

    **Context** Context-free grammars are widely used for language prototyping and implementation. They allow formalizing the syntax of domain-specific or general-purpose programming languages concisely and declaratively. However, the natural and concise way of writing a context-free grammar is often ambiguous. Therefore, grammar formalisms support extensions in the form of *declarative disambiguation rules* to specify operator precedence and associativity, solving ambiguities caused by the subset of the grammar that corresponds to expressions. **Inquiry** Implementing support for declarative disambiguation within a parser typically comes with one or more of the following limitations in practice: a lack of parsing performance, or a lack of modularity (i.e., disallowing the composition of grammar fragments of potentially different languages). The latter concern is generally addressed by scannerless generalized parsers. We aim to equip scannerless generalized parsers with novel disambiguation methods that are inherently performant, without compromising modularity and language composition. **Approach** In this paper, we present a novel low-overhead implementation technique for disambiguating deep associativity and priority conflicts in scannerless generalized parsers using lightweight data dependencies. **Knowledge** Ambiguities with respect to operator precedence and associativity arise from combining the various operators of a language. While *shallow conflicts* can be resolved efficiently by one-level tree patterns, *deep conflicts* require more elaborate techniques, because they can occur arbitrarily nested in a tree. Current state-of-the-art approaches to solving deep priority conflicts come with a severe performance overhead. **Grounding** We evaluated our new approach against state-of-the-art declarative disambiguation mechanisms. By parsing a corpus of popular open-source repositories written in Java and OCaml, we found that our approach yields speedups of up to 1.73x over a grammar rewriting technique when parsing programs with deep priority conflicts, with a modest overhead of 1-2% when parsing programs without deep conflicts. **Importance** A recent empirical study shows that deep priority conflicts are indeed widespread in real-world programs: in a corpus of popular OCaml projects on GitHub, up to 17% of the source files contain deep priority conflicts. However, no solution in the literature addresses efficient disambiguation of deep priority conflicts while supporting modular and composable syntax definitions.
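
    To make the shallow/deep distinction concrete, here is a minimal, hypothetical Haskell sketch (it is not the paper's data-dependent technique, and the constructor names are invented for illustration): a one-level parent/child pattern is enough to reject shallow priority violations, but a conflict caused by a low-priority prefix construct can sit arbitrarily deep inside the left operand of an infix operator, so the filter has to follow a spine through the tree.

```haskell
-- Hypothetical sketch of shallow vs. deep priority conflicts.
data Expr
  = Num Int
  | Add Expr Expr      -- infix "+"
  | Mul Expr Expr      -- infix "*", binds tighter than "+"
  | IfThen Expr Expr   -- low-priority prefix construct ("if c then e")
  deriving Show

-- Shallow conflict: a one-level parent/child pattern suffices, e.g. "+" may
-- not appear as a direct child of "*".
shallowViolation :: Expr -> Bool
shallowViolation (Mul (Add _ _) _)    = True
shallowViolation (Mul _ (Add _ _))    = True
shallowViolation (Add (IfThen _ _) _) = True
shallowViolation _                    = False

-- Deep conflict: the low-priority IfThen may sit arbitrarily deep in the
-- rightmost position of the left operand of "+", e.g.
--   Add (Mul x (IfThen c a)) b
-- which a one-level pattern never sees; the check must follow the spine.
deepViolation :: Expr -> Bool
deepViolation (Add l _) = endsInIfThen l
  where
    endsInIfThen (IfThen _ _) = True
    endsInIfThen (Add _ r)    = endsInIfThen r
    endsInIfThen (Mul _ r)    = endsInIfThen r
    endsInIfThen _            = False
deepViolation _ = False
```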

    Scope Graphs: The Story so Far

    Static name binding (i.e., associating references with appropriate declarations) is an essential aspect of programming languages. However, it is usually treated in an unprincipled manner, often leaving a gap between formalization and implementation. The scope graph formalism mitigates these deficiencies by providing a well-defined, first-class, language-parametric representation of name binding. Scope graphs serve as a foundation for deriving type checkers from declarative type system specifications, reasoning about type soundness, and implementing editor services and refactorings. In this paper, we present an overview of scope graphs and, using examples, show how the ideas and notation of the formalism have evolved. We also briefly discuss follow-up research beyond type checking, and evaluate the formalism.
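
    As a rough illustration of the core idea, the following Haskell sketch models a drastically simplified scope graph: scopes are nodes, parent edges point to the lexically enclosing scope, and resolving a reference walks outward until a matching declaration is found. The types and function names are invented for this example; real scope graphs use labeled edges, resolution paths, and visibility orderings that are omitted here.

```haskell
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

-- A toy scope graph: scopes are nodes, each with an optional lexical parent
-- edge and a list of declared names.
type ScopeId = Int

data ScopeGraph = ScopeGraph
  { parents :: Map ScopeId ScopeId   -- lexical parent edges
  , decls   :: Map ScopeId [String]  -- declarations per scope
  }

-- Resolve a reference by walking outward: a declaration in the reference's
-- own scope shadows one in an enclosing scope.
resolve :: ScopeGraph -> ScopeId -> String -> Maybe ScopeId
resolve g s name
  | name `elem` Map.findWithDefault [] s (decls g) = Just s
  | otherwise = Map.lookup s (parents g) >>= \p -> resolve g p name

-- Example: scope 1 (inner) declares "x"; scope 0 (outer) declares "x" and "y".
example :: ScopeGraph
example = ScopeGraph (Map.fromList [(1, 0)])
                     (Map.fromList [(0, ["x", "y"]), (1, ["x"])])

-- resolve example 1 "x" == Just 1   (inner x shadows outer x)
-- resolve example 1 "y" == Just 0   (found via the parent edge)
-- resolve example 1 "z" == Nothing  (unbound reference)
```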

    Measuring quality of grammars for procedural level generation

    Grammar-based procedural level generation raises the productivity of level designers for games such as dungeon crawlers and platform games. However, the improved productivity comes at the cost of level quality assurance. Authoring, improving, and maintaining grammars is difficult because it is hard to predict how each grammar rule impacts the overall level quality, and tool support is lacking. We propose a novel metric called Metric of Added Detail (MAD) that indicates whether a rule adds or removes detail with respect to its phase in the transformation pipeline, and Specification Analysis Reporting (SAnR) for expressing level properties and analyzing how qualities evolve in level generation histories. We demonstrate MAD and SAnR using a prototype level generator called Ludoscope Lite. Our preliminary results show that problematic rules tend to break SAnR properties and that MAD intuitively raises flags. MAD and SAnR augment existing approaches, and can ultimately help designers make better levels and level generators.
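
    To give a flavour of grammar-based level generation, here is a small hypothetical Haskell sketch (it is not Ludoscope Lite and does not implement MAD or SAnR): rewrite rules refine abstract placeholder tiles into concrete detail, and a crude per-rule count of concrete tiles hints at the kind of signal a metric such as MAD could build on.

```haskell
-- A toy grammar-based level generator, for illustration only.
-- A level is a row of tile symbols; each rule rewrites an abstract
-- placeholder into more concrete tiles.
type Level = String

data Rule = Rule { lhs :: Char, rhs :: String }

-- Rewrite the first occurrence of the rule's left-hand symbol, if any.
applyRule :: Rule -> Level -> Maybe Level
applyRule (Rule l r) lvl =
  case break (== l) lvl of
    (_,   [])       -> Nothing
    (pre, _ : post) -> Just (pre ++ r ++ post)

-- Crude "added detail": concrete tiles introduced by the right-hand side.
-- Uppercase letters stand for abstract placeholders and do not count.
detailAdded :: Rule -> Int
detailAdded (Rule _ r) = length (filter (`notElem` ['A' .. 'Z']) r)

-- Example rules: refine an abstract layout "RCR" step by step.
roomRule, corridorRule :: Rule
roomRule     = Rule 'R' "#.#"   -- a room becomes wall, floor, wall
corridorRule = Rule 'C' "..."   -- a corridor becomes three floor tiles

-- applyRule roomRule "RCR" == Just "#.#CR";  detailAdded roomRule == 3
```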

    Type Theory as a Language Workbench

    Language workbenches offer language designers an expressive environment in which to create their Domain-Specific Languages (DSLs). Similarly, research into mechanised meta-theory has shown how dependently typed languages provide expressive environments in which to formalise and study DSLs and their meta-theoretical properties. But can we claim that dependently typed languages qualify as language workbenches? We argue yes! We have developed an exemplar DSL called Vélo that not only showcases dependently typed techniques for realising and manipulating Intermediate Representations (IRs), but also demonstrates that dependently typed languages make fine language workbenches. Vélo is a simple verified language with well-typed holes and comes with a complete compiler pipeline: parser, elaborator, REPL, evaluator, and compiler passes. Specifically, we describe our design choices for a well-typed IR that includes support for well-typed holes, how CSE is achieved in a well-typed setting, and how the mechanised type-soundness proof for Vélo is the source of the evaluator.
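
    The general technique of an intrinsically typed ("well-typed by construction") IR can be approximated outside a dependently typed language with Haskell GADTs, as in the hedged sketch below. This is not Vélo's IR: it omits well-typed holes and binders, and only illustrates why an evaluator and compiler passes over such a representation need no failure cases.

```haskell
{-# LANGUAGE GADTs #-}

-- Expressions indexed by the Haskell type of the value they compute, so
-- ill-typed terms cannot even be constructed.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Type soundness is baked into the representation, so the evaluator has no
-- error cases, mirroring how a mechanised soundness proof can double as an
-- evaluator.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e

-- A simple type-preserving rewrite (constant folding for '+'), in the spirit
-- of well-typed compiler passes such as CSE.
constFold :: Expr a -> Expr a
constFold (Add x y) =
  case (constFold x, constFold y) of
    (IntLit a, IntLit b) -> IntLit (a + b)
    (x', y')             -> Add x' y'
constFold (If c t e) = If (constFold c) (constFold t) (constFold e)
constFold e          = e
```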