
    Proceedings of the 3rd Workshop on Domain-Specific Language Design and Implementation (DSLDI 2015)

    The goal of the DSLDI workshop is to bring together researchers and practitioners interested in sharing ideas on how DSLs should be designed, implemented, supported by tools, and applied in realistic application contexts. We are interested both in discovering how already established domains such as graph processing or machine learning can best be supported by DSLs, and in exploring new domains that could be targeted by DSLs. More generally, we are interested in building a community that can drive forward the development of modern DSLs. These informal post-proceedings contain the talk abstracts submitted to the 3rd DSLDI workshop (DSLDI'15), and a summary of the panel discussion on Language Composition.

    The Rascal Language Workbench

    Rascal is a programming language for source code analysis and transformation. This means that typically the input of a Rascal program is a program in some programming language, and the output is often yet another program. Rascal is thus a meta-programming language: source code is the primary object of manipulation. Many of the use cases that Rascal is designed to address follow the Extract-Analyze-SYnthesize, or EASY, paradigm (shown in Figure 1.1). Meta programs often start by extracting information (facts) from the input program; this is the extraction phase. An example could be the call graph of a program. The extracted information is then often subject to analysis: derived facts are computed and the information is enriched. For the call graph, a simple analysis is determining the root or leaf routines in a source program by analysing the extracted call graph. Another analysis could be concerned with identifying routines that are never called (dead code). Finally, the meta program synthesizes some kind of result. This can be transformed source code (e.g., removal of dead code from the input program), a report (e.g., statistics on the number of root and leaf routines), or a visualization (e.g., a graphical depiction of the call graph). Of course, these phases are not strictly sequential: there may be feedback loops. Some analysis leads to new extraction, synthesis of a result may lead to new analyses, and so on. Rascal has elaborate features to support each of the phases of the EASY paradigm, fully integrated in the language.
    Naturally, the implementation of domain-specific languages (DSLs), or more generally, model-driven engineering (MDE), fits the EASY paradigm very well. When implementing a DSL compiler or interpreter the input is, of course, DSL source code. Extraction could, for instance, include the derivation of an AST from the concrete syntax tree. Another extracted model could be a graph-like structure representing the input in a more abstract way, or a performance model. Such abstractions are input to analyses such as constraint checking or type checking, verification, quality-of-service analysis, etc. Finally, synthesis covers tasks such as graphical visualization, code generation, and optimization. To conclude, in the context of Rascal, we see DSL implementation as an instance of source code analysis and transformation.
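    As a rough illustration of the EASY phases described above, the following Python sketch (illustrative only: the fact encoding and routine names are hypothetical, and Rascal itself provides dedicated language features for this) extracts a call graph, analyses it for root, leaf, and dead routines, and synthesises a textual report.

from collections import defaultdict

def extract_call_graph(facts):
    """Extract phase: turn (caller, callee) facts into a call graph."""
    graph = defaultdict(set)
    routines = set()
    for caller, callee in facts:
        graph[caller].add(callee)
        routines.update((caller, callee))
    return routines, graph

def analyze(routines, graph, entry_points):
    """Analyze phase: compute root, leaf, and dead (unreachable) routines."""
    called = {callee for callees in graph.values() for callee in callees}
    roots = routines - called                       # called by nobody
    leaves = {r for r in routines if not graph[r]}  # call nothing themselves
    reachable, todo = set(entry_points), list(entry_points)
    while todo:                                     # reachability from the entry points
        for callee in graph[todo.pop()]:
            if callee not in reachable:
                reachable.add(callee)
                todo.append(callee)
    dead = routines - reachable
    return roots, leaves, dead

def synthesize(roots, leaves, dead):
    """Synthesize phase: here simply a textual report."""
    return f"roots: {sorted(roots)}  leaves: {sorted(leaves)}  dead: {sorted(dead)}"

facts = [("main", "parse"), ("main", "report"), ("parse", "lex"), ("helper", "lex")]
routines, graph = extract_call_graph(facts)
print(synthesize(*analyze(routines, graph, entry_points={"main"})))

    In Rascal the extraction would come from parsing real source code, and the synthesis could just as well produce transformed source code or a visualization instead of a report.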

    Composing configurable Java components

    This paper presents techniques to reason about the composition of configurable components and to automatically derive consistent compositions. The reasoning is achieved by describing components in a formal component description language that allows the description of component variability, dependencies, and configuration actions. It also enables the automatic, configuration-driven derivation of product instances. To illustrate the approach we instantiate the abstract component model for Java components (packages
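    As a rough sketch of the kind of reasoning involved, the Python snippet below models components with provided features, dependencies, and configuration options, and derives a composition only when it is consistent. The field names and the resolution strategy are assumptions for illustration, not the paper's actual component description language.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    provides: set = field(default_factory=set)   # features this component offers
    requires: set = field(default_factory=set)   # features it needs from others
    options: dict = field(default_factory=dict)  # configurable variation points

def derive_composition(components, configuration):
    """Select components whose options are consistent with the requested
    configuration, then check that every required feature is provided."""
    selected = [c for c in components
                if all(configuration.get(k, v) == v for k, v in c.options.items())]
    provided = set().union(*(c.provides for c in selected)) if selected else set()
    required = set().union(*(c.requires for c in selected)) if selected else set()
    missing = required - provided
    if missing:
        raise ValueError(f"inconsistent composition, unresolved dependencies: {missing}")
    return selected

logging = Component("logging", provides={"log"}, options={"level": "info"})
core = Component("core", provides={"api"}, requires={"log"})
print([c.name for c in derive_composition([logging, core], {"level": "info"})])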

    Generating an IDE using Rascal


    Backtracking Incremental Continuous Integration

    Failing integration builds are show stoppers. Development activity stalls because developers have to wait to integrate new changes until the problem is fixed and a successful build has been run. We show how backtracking can be used to mitigate the impact of build failures in the context of component-based software development. This way, even in the face of failure, development may continue and a working version is always available.
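    A minimal Python sketch of the backtracking idea, assuming a per-component revision map and a toy build predicate (both hypothetical): when the integration build of the newest revisions fails, components are reverted one by one to their last good revisions until a working combination is found.

def integrate(candidate, last_good, build):
    """Try the full set of newest revisions; on failure, backtrack component by
    component to its last good revision until the integration build passes."""
    revisions = dict(candidate)
    if build(revisions):
        return revisions                      # everything integrates cleanly
    for name in candidate:
        revisions[name] = last_good[name]     # revert this component
        if build(revisions):
            return revisions                  # working combination found
    return dict(last_good)                    # worst case: full fallback

# Toy build predicate: the build "fails" if any component is at a broken revision.
broken = {("parser", 7)}
def build(revisions):
    return not any((name, rev) in broken for name, rev in revisions.items())

last_good = {"parser": 6, "ui": 3}
candidate = {"parser": 7, "ui": 4}            # parser revision 7 breaks the build
print(integrate(candidate, last_good, build)) # {'parser': 6, 'ui': 4}

    Reverting one component at a time keeps as many of the new changes as possible while still guaranteeing a working baseline.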