Tracing program transformations with string origins
Program transformations play an important role in domain-specific languages and model-driven development. Tracing the execution of such transformations has well-known benefits for debugging, visualization and error reporting. In this paper we introduce string origins as a lightweight, generic and portable technique to establish a tracing relation between the textual fragments in the input and output of a program transformation. We discuss the semantics and the implementation of string origins using the Rascal metaprogramming language as an example. Furthermore, we illustrate the utility of string origins by presenting data structures and operations for tracing generated code, implementing protected regions, performing name resolution, and fixing inadvertent name capture in generated code.
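
To make the idea concrete, here is a minimal sketch (plain Python with illustrative names such as OStr and Fragment; the paper itself works in Rascal) of an origin-tracked string type in which every fragment remembers where in the input it came from, and concatenation preserves that relation:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Fragment:
        text: str
        origin: tuple   # (source_name, offset), or None for synthesised text

    class OStr:
        """A string whose fragments carry their input origins."""
        def __init__(self, fragments):
            self.fragments = list(fragments)

        @staticmethod
        def lit(text, source, offset):
            return OStr([Fragment(text, (source, offset))])

        def __add__(self, other):
            # Concatenation preserves the tracing relation fragment-wise.
            return OStr(self.fragments + other.fragments)

        def __str__(self):
            return "".join(f.text for f in self.fragments)

        def origin_at(self, pos):
            # Map a position in the output back to its input origin.
            for f in self.fragments:
                if pos < len(f.text):
                    return (f.origin[0], f.origin[1] + pos) if f.origin else None
                pos -= len(f.text)
            raise IndexError(pos)

    name = OStr.lit("Point", "model.dsl", 42)            # fragment from the input model
    code = OStr.lit("class ", "template.tpl", 0) + name
    print(str(code))                                     # class Point
    print(code.origin_at(6))                             # ('model.dsl', 42)

Because origins survive composition, any position in the generated output can be mapped back to the input fragment that produced it, which is the tracing relation the abstract describes.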
Using standard typing algorithms incrementally
Modern languages are equipped with static type checking/inference that helps programmers to keep a clean programming style and to reduce errors. However, the ever-growing size of programs and their continuous evolution require building fast and efficient analysers. A promising solution is incrementality, aiming at only re-typing the diffs, i.e. those parts of the program that change or are inserted, rather than the entire codebase. We propose an algorithmic schema that drives an incremental usage of existing, standard typing algorithms with no changes. Ours is a grey-box approach: just the shape of the input, that of the results and some domain-specific knowledge are needed to instantiate our schema. Here, we present the foundations of our approach and the conditions for its correctness. We show it at work to derive two different incremental typing algorithms. The first type checks an imperative language to detect information flow and non-interference, and the second infers types for a functional language. We assessed our proposal on a prototypical implementation of an incremental type checker. Our experiments show that using the type checker incrementally is (almost) always rewarding.
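
As a rough illustration of the grey-box idea (the toy language, the cache, and the id()-based keys below are assumptions for exposition, not the paper's construction), an unmodified checker can be driven through a memo table so that only new or changed subtrees are re-typed:

    def check(node, env, recur):
        # A standard, non-incremental checker for a toy expression language;
        # the schema treats it as a black box apart from its input/output shape.
        tag = node[0]
        if tag == "int":
            return "int"
        if tag == "var":
            return env[node[1]]
        if tag == "add":
            if recur(node[1], env) == recur(node[2], env) == "int":
                return "int"
            raise TypeError("add expects ints")
        raise ValueError(tag)

    cache = {}

    def check_incremental(node, env):
        # Re-type a subtree only on a cache miss, i.e. when it is new or its
        # typing context changed; id()-based keys stand in for real diffing,
        # and a real implementation would also invalidate edited subtrees.
        key = (id(node), tuple(sorted(env.items())))
        if key not in cache:
            cache[key] = check(node, env, check_incremental)
        return cache[key]

    prog = ("add", ("int", 1), ("var", "x"))
    print(check_incremental(prog, {"x": "int"}))   # int; subtrees are now cached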
Using Standard Typing Algorithms Incrementally
Modern languages are equipped with static type checking/inference that helps programmers to keep a clean programming style and to reduce errors. However, the ever-growing size of programs and their continuous evolution require building fast and efficient analysers. A promising solution is incrementality, so one only re-types those parts of the program that are new, rather than the entire codebase. We propose an algorithmic schema driving the definition of an incremental typing algorithm that exploits the existing, standard ones with no changes. Ours is a grey-box approach, meaning that just the shape of the input, that of the results and some domain-specific knowledge are needed to instantiate our schema. Here, we present the foundations of our approach and we show it at work to derive three different incremental typing algorithms. The first two implement type checking and inference for a functional language. The last one type-checks an imperative language to detect information flow and non-interference. We assessed our proposal on a prototypical implementation of an incremental type checker. Our experiments show that using the type checker incrementally is (almost) always rewarding.
One Parser to Rule Them All
Despite the long history of research in parsing, constructing parsers for real programming languages remains a difficult and painful task. Over the last decades, various parser generators have emerged that allow parsers to be constructed from a BNF-like specification. Even so, many parsers today are handwritten, or only partly generated, and include various hacks to deal with peculiarities of programming languages. The main problem is that current declarative syntax definition techniques are based on pure context-free grammars, while many constructs found in programming languages require context information.
In this paper we propose a parsing framework that embraces context information in its core. Our framework is based on data-dependent grammars, which extend context-free grammars with arbitrary computation, variable binding and constraints. We present an implementation of our framework on top of the Generalized LL (GLL) parsing algorithm, and show how common idioms in the syntax of programming languages such as (1) lexical disambiguation filters, (2) operator precedence, (3) indentation-sensitive rules, and (4) conditional preprocessor directives can be mapped to data-dependent grammars. We report on initial experience with our framework by parsing more than 20,000 Java, C#, Haskell, and OCaml source files.
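
As a hedged sketch of what a data-dependent rule buys (illustrative Python, not the authors' GLL-based implementation), an indentation-sensitive block rule can bind the column of each line from the input and constrain it against the enclosing block's column:

    def parse_block(lines, i, col):
        # block(col) ::= { line(c) : c > col }*   -- c is bound from the input
        stmts = []
        while i < len(lines):
            c = len(lines[i]) - len(lines[i].lstrip(" "))   # bind c = column
            if not lines[i].strip() or c <= col:            # constraint: c > col
                break
            stmts.append(lines[i].strip())
            i += 1
        return stmts, i

    src = ["if x:", "    a = 1", "    b = 2", "done"]
    body, nxt = parse_block(src, 1, 0)
    print(body)   # ['a = 1', 'b = 2']; 'done' at column 0 closes the block

The binding of c and the c > col constraint are exactly the kind of computation that data-dependent grammars add on top of a context-free rule.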
Domain Specific Languages for Managing Feature Models: Advances and Challenges
Managing multiple and complex feature models is a tedious and error-prone activity in software product line engineering. Despite many advances in formal methods and analysis techniques, the supporting tools and APIs are neither easily usable together nor unified. In this paper, we report on the development and evolution of the Familiar domain-specific language (DSL). Its toolset is dedicated to the large-scale management of feature models, with good support for separating concerns, composing feature models and scripting manipulations. We give an overview of various applications of Familiar and discuss both its advantages and identified drawbacks. We then identify salient challenges for improving such DSL support in the near future.
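
For flavour, the following sketch (plain Python with invented names; it does not reproduce Familiar's concrete syntax) shows the kind of manipulation such a DSL scripts, here a union-style merge of two feature models:

    from dataclasses import dataclass, field

    @dataclass
    class Feature:
        name: str
        optional: bool = False
        children: list = field(default_factory=list)

    def names(fm):
        return {fm.name} | {n for c in fm.children for n in names(c)}

    def merge(a, b):
        # Union merge: shared features are merged recursively; a feature
        # missing from one input becomes optional in the result.
        index = {c.name: c for c in b.children}
        merged = []
        for c in a.children:
            if c.name in index:
                merged.append(merge(c, index.pop(c.name)))
            else:
                merged.append(Feature(c.name, True, c.children))
        merged += [Feature(c.name, True, c.children) for c in index.values()]
        return Feature(a.name, a.optional or b.optional, merged)

    phone_a = Feature("Phone", children=[Feature("Calls"), Feature("GPS", True)])
    phone_b = Feature("Phone", children=[Feature("Calls"), Feature("Camera")])
    print(sorted(names(merge(phone_a, phone_b))))
    # ['Calls', 'Camera', 'GPS', 'Phone']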
OrgML - a domain specific language for organisational decision-making
Effective decision-making based on a precise understanding of the organisation is critical for modern organisations to stay competitive in a dynamic and uncertain business environment. However, the state-of-the-art technologies relevant in this context are not adequate to capture and quantitatively analyse complex organisations. This paper identifies the information necessary for organisational decision-making from a management viewpoint, discusses the inadequacy of existing enterprise modelling and specification techniques, proposes a domain-specific language to capture the necessary information in machine-processable form, and demonstrates how the collected information can be used for simulation-based, evidence-driven organisational decision-making.
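
Purely as an illustration of simulation-based, evidence-driven decision-making (the model, parameters and dynamics below are invented for exposition; the abstract does not show OrgML's notation), a candidate organisational lever can be compared against another by Monte Carlo simulation over a machine-processable model:

    import random

    def simulate(hiring_per_quarter, runs=500, quarters=8):
        # Toy dynamics, entirely made up: headcount grows by hiring and
        # shrinks by stochastic attrition; 'output' is a proxy measure of
        # staffed capacity accumulated over the planning horizon.
        total = 0.0
        for _ in range(runs):
            headcount, output = 100.0, 0.0
            for _ in range(quarters):
                headcount += hiring_per_quarter - random.gauss(0.05, 0.01) * headcount
                output += headcount
            total += output
        return total / runs

    # Evidence-driven comparison of two candidate levers.
    for rate in (5, 10):
        print(rate, round(simulate(rate)))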
The State of the Art in Language Workbenches. Conclusions from the Language Workbench Challenge
Language workbenches are tools that provide high-level mechanisms for the implementation of (domain-specific) languages. Language workbenches are an active area of research that also receives many contributions from industry. To compare and discuss existing language workbenches, the annual Language Workbench Challenge was launched in 2011. Each year, participants are challenged to realize a given domain-specific language with their workbenches as a basis for discussion and comparison. In this paper, we describe the state of the art of language workbenches as observed in the previous editions of the Language Workbench Challenge. In particular, we capture the design space of language workbenches in a feature model and show where in this design space the participants of the 2013 Language Workbench Challenge reside. We compare these workbenches based on a DSL for questionnaires that was realized in all of them.
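
The shared benchmark is a questionnaire DSL; as a rough sketch of the behaviour each workbench had to realise (the question set and helper names below are assumptions, not the challenge's concrete syntax), a questionnaire with conditionally visible questions can be interpreted as follows:

    questions = [
        # (id, label, type, visibility condition over earlier answers)
        ("hasSoldHouse", "Did you sell a house in 2010?", bool, None),
        ("sellingPrice", "What was the selling price?", int,
         lambda a: a.get("hasSoldHouse")),
    ]

    def run(questions, ask=input):
        answers = {}
        for qid, label, qtype, cond in questions:
            if cond and not cond(answers):
                continue                      # hidden until its condition holds
            reply = ask(label + " ")
            answers[qid] = (reply.strip().lower() in ("y", "yes", "true")
                            if qtype is bool else qtype(reply))
        return answers

    # run(questions) asks for the selling price only after a 'yes'.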