11 research outputs found

    Synthesising middleware components for reusable software


    The State of the Art in Language Workbenches. Conclusions from the Language Workbench Challenge

    Language workbenches are tools that provide high-level mechanisms for the implementation of (domain-specific) languages. Language workbenches are an active area of research that also receives many contributions from industry. To compare and discuss existing language workbenches, the annual Language Workbench Challenge was launched in 2011. Each year, participants are challenged to realize a given domain-specific language with their workbenches as a basis for discussion and comparison. In this paper, we describe the state of the art of language workbenches as observed in the previous editions of the Language Workbench Challenge. In particular, we capture the design space of language workbenches in a feature model and show where in this design space the participants of the 2013 Language Workbench Challenge reside. We compare these workbenches based on a DSL for questionnaires that was realized in all workbenches.

    Bifröst: debugging web applications as a whole

    Even though web application development is supported by professional tooling, debugging support is lacking. When debugging a web application, hardly any tooling support exists: only core components such as server processes and the web browser are exposed. Developers must manually weave the available information into a single representation while juggling various separate tools. To mitigate this fragmentation, we demonstrate a prototype debugger that combines these core components, exposing a single, unified tool to support the developer.

    Scriptless GUI Testing on Mobile Applications

    Traditionally, end-to-end testing of mobile apps is either performed manually or automated with test scripts. However, manual GUI testing is expensive and slow, and test scripts are fragile to GUI changes, resulting in high maintenance costs. Scriptless testing attempts to address the costs associated with GUI testing. Existing scriptless approaches for mobile testing do not seem to fit the requirements of industry, specifically those of ING. This study presents an extension to the open-source TESTAR tool to support scriptless GUI testing of Android and iOS applications. We present an initial validation of the tool in an industrial setting at ING. From the validation, we determine that the extended TESTAR outperforms two other state-of-the-art scriptless testing tools for Android in terms of code coverage, and achieves performance similar to the scripted test automation already in use at ING. Moreover, the scriptless approach covers parts of the application under test that the existing test scripts did not, showing the complementarity of the two approaches and providing more value for testers.

    Evaluating and comparing language workbenches. Existing results and benchmarks for the future

    Language workbenches are environments for simplifying the creation and use of computer languages. The annual Language Workbench Challenge (LWC) was launched in 2011 to give the many academic and industrial researchers in this area an opportunity to quantitatively and qualitatively compare their approaches. We first describe all four LWCs to date, before focussing on the approaches used, and results generated, during the third LWC. We give various empirical data for ten approaches from the third LWC. We present a generic feature model within which the approaches can be understood and contrasted. Finally, based on our experiences of the existing LWCs, we propose a number of benchmark problems for future LWCs.