
    A Domain-Specific Language and Editor for Parallel Particle Methods

    Domain-specific languages (DSLs) are of increasing importance in scientific high-performance computing to reduce development costs, raise the level of abstraction and, thus, ease scientific programming. However, designing and implementing DSLs is not an easy task, as it requires knowledge of the application domain and experience in language engineering and compilers. Consequently, many DSLs follow a weak approach using macros or text generators, which lack many of the features that make a DSL comfortable for programmers. Some of these features---e.g., syntax highlighting, type inference, error reporting, and code completion---are easily provided by language workbenches, which combine language engineering techniques and tools in a common ecosystem. In this paper, we present the Parallel Particle-Mesh Environment (PPME), a DSL and development environment for numerical simulations based on particle methods and hybrid particle-mesh methods. PPME uses the Meta Programming System (MPS), a projectional language workbench. PPME is the successor of the Parallel Particle-Mesh Language (PPML), a Fortran-based DSL that used conventional implementation strategies. We analyze and compare both languages and demonstrate how the programmer's experience can be improved using static analyses and projectional editing. Furthermore, we present an explicit domain model for particle abstractions and the first formal type system for particle methods. Comment: Submitted to ACM Transactions on Mathematical Software on Dec. 25, 201
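
    As a rough illustration of the kind of abstraction such particle-method DSLs provide, the following Java sketch models particles and a naive pairwise interaction loop. It is a hypothetical example written for this summary; the names and the update rule are assumptions and are not taken from PPML or PPME.

```java
import java.util.List;

// Hypothetical illustration only: a minimal particle abstraction and a pairwise
// interaction loop of the kind that particle-method DSLs abstract away.
public class ParticleSketch {

    // A particle carries a position and a property being simulated (e.g., a concentration).
    record Particle(double x, double y, double concentration) {}

    // Naive O(n^2) neighbour interaction; real particle-mesh frameworks replace this
    // with cell lists and mesh interpolation, which the DSL hides from the user.
    static double[] diffuse(List<Particle> particles, double cutoff, double dt, double diffusivity) {
        double[] delta = new double[particles.size()];
        for (int i = 0; i < particles.size(); i++) {
            for (int j = 0; j < particles.size(); j++) {
                if (i == j) continue;
                Particle p = particles.get(i), q = particles.get(j);
                double dx = q.x() - p.x(), dy = q.y() - p.y();
                double r2 = dx * dx + dy * dy;
                if (r2 < cutoff * cutoff) {
                    // Schematic exchange-style update between neighbouring particles.
                    delta[i] += dt * diffusivity * (q.concentration() - p.concentration()) / (r2 + 1e-12);
                }
            }
        }
        return delta;
    }
}
```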

    Use of domain-specific language in test automation

    The primary aim of this research project was to investigate techniques to replace the complicated process of testing embedded systems in the automotive domain. The multi-component domain involved different hardware used in the testing procedure, which increased the level of difficulty for an operator. As a result, an existing semi-automated testing procedure was replaced by a simpler and more efficient framework (ViBATA). A key step taken in this scenario was the replacement of the manual GUI interface with a scriptable one to enhance automation. This was achieved by building a domain-specific language which allows test definitions in the form of human-readable scripts that can be stored for later use. A DSL is a scripting language defined for a particular domain with compact expressiveness; in this case the domain is testing embedded systems in general and automotive systems in particular. The final product is a test case specification document in XML, produced as the output of code generated from this DSL, which is fed into ViBATA to automate the test specification component. This research also includes a comparative analysis of existing DSLs for alternative domains and an investigation of their applicability to the presented domain. The technologies used in this project are Xtext to define the DSL grammar, Xtend to generate Java code, and the Simple framework to generate XML output. The stages involved in DSL development and how these stages were implemented are covered in this thesis. The developed DSL is tested on automotive and calculator systems, which demonstrates that it is general and flexible. The DSL provides a consistent, efficient, and automated test specification component for testing frameworks in embedded systems.
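
    The final step described above, emitting an XML test specification, can be illustrated with the Simple framework's annotation-based serialization. The sketch below is a minimal, hypothetical example: the TestCase and TestStep classes and their element names are assumptions, not ViBATA's actual schema or the code generated by the DSL.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.simpleframework.xml.Attribute;
import org.simpleframework.xml.Element;
import org.simpleframework.xml.ElementList;
import org.simpleframework.xml.Root;
import org.simpleframework.xml.core.Persister;

// Hypothetical test-case model serialized to XML with the Simple framework.
@Root(name = "testCase")
public class TestCase {

    @Attribute(name = "id")
    private String id;

    @Element(name = "description")
    private String description;

    @ElementList(name = "steps")
    private List<TestStep> steps = new ArrayList<>();

    public TestCase() {}                        // no-arg constructor required by Simple

    public TestCase(String id, String description) {
        this.id = id;
        this.description = description;
    }

    public void addStep(TestStep step) { steps.add(step); }

    @Root(name = "step")
    public static class TestStep {
        @Attribute(name = "device")
        private String device;                  // hardware component addressed by the step

        @Element(name = "command")
        private String command;

        public TestStep() {}
        public TestStep(String device, String command) {
            this.device = device;
            this.command = command;
        }
    }

    public static void main(String[] args) throws Exception {
        TestCase tc = new TestCase("TC-001", "Switch headlights on and verify current draw");
        tc.addStep(new TestStep("lightSwitch", "setPosition(ON)"));
        tc.addStep(new TestStep("currentSensor", "assertRange(0.4, 0.6)"));
        new Persister().write(tc, new File("tc-001.xml"));  // emits the XML test specification
    }
}
```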

    Comparative study of DSL tools.

    An increasingly wide range of tools based on different approaches is being used to implement Domain Specific Languages (DSLs), yet there is little agreement as to which approach is, or approaches are, the most appropriate for any given problem. We believe this can in large part be explained by a lack of understanding of these approaches within the DSL community. In this paper we aim to increase the understanding of the relative strengths and weaknesses of three approaches by implementing a common DSL case study. In addition, we present a comparative study of the three approaches.

    Domain Specific Language for Magnetic Measurements at CERN

    CERN, the European Organization for Nuclear Research, is one of the world’s largest and most respected centres for scientific research. Founded in 1954, the CERN Laboratory sits astride the Franco–Swiss border near Geneva. It was one of Europe’s first joint ventures and now has 20 Member States. Its main purpose is fundamental research in particle physics, namely investigating what the Universe is made of and how it works. At CERN, the design and realization of the new particle accelerator, the Large Hadron Collider (LHC), has required a remarkable technological effort in many areas of engineering. In particular, the tests of the LHC superconducting magnets opened new horizons for magnetic measurements. At CERN, the large R&D effort of the Technology Department/Magnets, Superconductors and Cryostats (TE/MSC) group identified areas where further work is required in order to assist the LHC commissioning and start-up, to provide continuity in the instrumentation for the LHC magnets maintenance, and to achieve more accurate magnet models for the LHC exploitation. In view of future projects, a wide range of software requirements has recently been satisfied by the Flexible Framework for Magnetic Measurements (FFMM), designed also to integrate more flexible and better-performing hardware. FFMM software applications control several devices, such as encoder boards, digital integrators, motor controllers, and transducers. In addition, they synchronize and coordinate different measurement tasks and actions.
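
    The device-control and task-coordination responsibilities mentioned above can be illustrated with a minimal sketch. The interfaces and device behaviors below are assumptions made for this example; they are not FFMM's actual API.

```java
import java.util.List;

// Hypothetical illustration of controlling several measurement devices and
// coordinating them within one measurement task.
public class MeasurementSketch {

    interface Device {
        void configure();
        void start();
        void stop();
    }

    static class DigitalIntegrator implements Device {
        public void configure() { System.out.println("integrator: set gain and trigger source"); }
        public void start()     { System.out.println("integrator: acquiring flux increments"); }
        public void stop()      { System.out.println("integrator: acquisition stopped"); }
    }

    static class MotorController implements Device {
        public void configure() { System.out.println("motor: set rotation speed"); }
        public void start()     { System.out.println("motor: rotating measurement coil"); }
        public void stop()      { System.out.println("motor: halted"); }
    }

    // A measurement task configures all devices, then starts and stops them together,
    // which is the kind of synchronization a measurement framework must coordinate.
    static void runTask(List<Device> devices) {
        devices.forEach(Device::configure);
        devices.forEach(Device::start);
        devices.forEach(Device::stop);
    }

    public static void main(String[] args) {
        runTask(List.of(new DigitalIntegrator(), new MotorController()));
    }
}
```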

    Towards a Universal Variability Language: Master's Thesis

    While feature diagrams have become the de facto standard to graphically describe variability models in Software Product Line Engineering (SPLE), none of the many textual notations have gained widespread adoption. However, a common textual language would be beneficial for better collaboration and exchange between tools. The main goal of this thesis is to propose a language for this purpose, along with fundamental tool support. The language should meet the needs and preferences of the community, so it can attain acceptance and adoption, without becoming yet another variability language. Its guiding principles are simplicity, familiarity, and flexibility. These enable the language to be easy to learn and to integrate into different tools, while still being expressive enough to represent existing and future models. We incorporate general design principles for Domain-Specific Languages (DSLs), discuss usage scenarios collected by the community, analyze existing languages, and gather feedback directly through questionnaires submitted to the community. In the initial questionnaire, the community was in disagreement on whether to use nesting or references to represent the hierarchy. Thus, we presented two proposals to be compared side by side. Of those, the community clearly prefers the one using nesting, as determined by a second questionnaire. We call that proposal the Universal Variability Language (UVL). The community awards good ratings to this language, deems it suitable for teaching and learning, and estimates that it can represent most existing models. Evaluations reconsidering the requirements show that it enables the relevant scenarios and can support the editing of large-scale real-world feature models, such as the Linux kernel. We provide a small default library that can be used in Java, containing a parser and a printer for the language. We integrated it into the variability tool FeatureIDE, demonstrating its utility in quickly adding support for the proposed language. Overall, we can conclude that UVL is well-suited as a base language level of a universal textual variability language. Along with the acquired insights into the requirements for such a language, it can serve as the basis for the SPLE community to commit to a common language. As exchange and collaboration would be simplified, higher-quality research could be conducted and better tools developed, serving the whole community.
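
    As a rough illustration of the nesting-based hierarchy the community preferred, the sketch below models a small feature tree whose indentation mirrors the nesting of a textual variability model. It is an assumption-laden example for this summary and does not use the UVL Java library mentioned above.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a minimal nested feature tree with common feature-diagram
// group semantics; not the API of the UVL library.
public class FeatureTreeSketch {

    enum GroupType { AND, OR, ALTERNATIVE }

    static class Feature {
        final String name;
        final boolean mandatory;
        final GroupType childGroup;
        final List<Feature> children = new ArrayList<>();

        Feature(String name, boolean mandatory, GroupType childGroup) {
            this.name = name;
            this.mandatory = mandatory;
            this.childGroup = childGroup;
        }

        Feature add(Feature child) { children.add(child); return this; }
    }

    // Print the hierarchy through indentation, mirroring nesting in a textual notation.
    static void print(Feature f, int depth) {
        System.out.println("  ".repeat(depth) + (f.mandatory ? "" : "[optional] ") + f.name);
        f.children.forEach(c -> print(c, depth + 1));
    }

    public static void main(String[] args) {
        Feature root = new Feature("Car", true, GroupType.AND)
                .add(new Feature("Engine", true, GroupType.ALTERNATIVE)
                        .add(new Feature("Petrol", false, GroupType.AND))
                        .add(new Feature("Electric", false, GroupType.AND)))
                .add(new Feature("Sunroof", false, GroupType.AND));
        print(root, 0);
    }
}
```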

    Extensible Languages for Flexible and Principled Domain Abstraction

    Most programming languages are designed as general-purpose languages. Regardless of the application being developed, they provide the same language features and constructs. Such universal language features, however, ignore the specific requirements that many software projects entail. As a counterforce to general-purpose languages, domain-specific programming languages, model-driven software development, and language-oriented programming promote the use of domain abstraction, which enables the use of domain-specific language features and constructs. In particular, domain abstraction allows programmers to program at the same level of abstraction at which they think, and thereby avoids the need to encode domain concepts with general-purpose language features. Unfortunately, current approaches to domain abstraction do not allow its full potential to unfold. On the one hand, approaches for internal domain-specific languages lack flexibility regarding syntax, static analyses, and tool support, which limits the level of abstraction that is actually achieved. On the other hand, approaches for external domain-specific languages lack important principles, such as modular reasoning or the composition of domain abstractions, which limits the applicability of these approaches in the development of larger software systems. In this dissertation, we pursue a novel approach that combines the advantages of internal and external domain-specific languages in order to support flexible and principled domain abstraction. We propose library-based extensible programming languages as a foundation for domain abstraction. In an extensible language, domain abstraction can be achieved by extending the language with domain-specific syntax, static analysis, and tool support. This gives domain abstractions the same flexibility as external domain-specific languages. To ensure adherence to common principles, we organize language extensions as libraries and use simple import statements to activate extensions. This allows modular reasoning (by inspecting the import statements), supports the composition of domain abstractions (by importing multiple extensions), and enables the uniform self-applicability of language extensions in the development of future extensions (by importing extensions in an extension definition). Organizing extensions as libraries gives domain abstractions the same adherence to principles as internal domain-specific languages. We have designed and implemented the library-based extensible programming language SugarJ. SugarJ libraries can declare extensions of SugarJ's syntax, static analysis, and tool support. A syntactic extension consists of an extended syntax and a transformation of the extended syntax into SugarJ's base syntax. An analysis extension checks parts of the abstract syntax tree of the current file and produces a list of errors. A tool-support extension declares services such as syntax coloring or code completion for particular language constructs.
SugarJ extensions are fully self-applicable: an extended syntax can be transformed into an extension definition, an extended analysis can check extension definitions, and extended tool support can assist developers in defining extensions. To process a source file with extensions, the SugarJ compiler and the SugarJ IDE inspect the imported libraries to determine the active extensions. The compiler and the IDE adapt the parser, the code generator, the analysis routine, and the tool support for the source file according to the active extensions. In this dissertation, we not only describe the design and implementation of SugarJ but also report on extensions of our original design. In particular, we have designed and implemented a generalization of the SugarJ compiler that supports alternative base languages besides Java. We have used this generalization to develop the library-based extensible programming languages SugarHaskell, SugarProlog, and SugarFomega. Furthermore, we have augmented SugarJ to support polymorphic domain abstraction and communication integrity. Polymorphic domain abstraction enables programmers to provide multiple transformations for the same domain-specific syntax. This increases the flexibility of SugarJ and supports well-known scenarios from model-driven development. Communication integrity specifies that the components of a software system may communicate only via explicit channels. In the context of code generation this is an interesting property, which prohibits the generation of implicit module dependencies. We have added communication integrity as a further principle to SugarJ. Based on SugarJ and numerous case studies, we argue that flexible and principled domain abstraction constitutes a scalable programming model for the development of complex software systems.
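
    The following schematic sketch illustrates the shape of a library-based extension as described above: a syntactic extension bundles extended syntax with a desugaring into base syntax, an analysis extension checks an AST and reports errors, and an editor-service extension declares services such as coloring or completion. All names are hypothetical and do not reflect SugarJ's actual implementation.

```java
import java.util.List;

// Schematic illustration of the library-based extension idea; names are hypothetical.
public interface ExtensionSketch {

    // Placeholder for an abstract syntax tree node of the (extended) language.
    interface SyntaxTree {}

    // A syntactic extension: extended syntax plus a desugaring into base syntax.
    interface SyntacticExtension {
        String grammarFragment();                 // the extended concrete syntax
        SyntaxTree desugar(SyntaxTree extended);  // transformation into base syntax
    }

    // An analysis extension: checks parts of the current file's AST and reports errors.
    interface AnalysisExtension {
        List<String> check(SyntaxTree ast);
    }

    // An editor-service extension: e.g. coloring or completion for specific constructs.
    interface EditorServiceExtension {
        String colorFor(String constructName);
        List<String> completionsAt(SyntaxTree ast, int offset);
    }

    // A library-shaped extension bundles all three kinds; a compiler/IDE would inspect
    // import statements to decide which such bundles are active for a given file.
    record ExtensionLibrary(String importName,
                            List<SyntacticExtension> syntax,
                            List<AnalysisExtension> analyses,
                            List<EditorServiceExtension> editorServices) {}
}
```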

    Contrasting dedicated model transformation languages versus general purpose languages: a historical perspective on ATL versus Java based on complexity and size

    Model transformations are among the key concepts of model-driven engineering (MDE), and dedicated model transformation languages (MTLs) emerged with the popularity of the MDE paradigm about 15 to 20 years ago. MTLs claim to increase the ease of development of model transformations by abstracting from recurring transformation aspects and hiding complex semantics behind a simple and intuitive syntax. Nonetheless, MTLs are rarely adopted in practice, there is still no empirical evidence for the claim of easier development, and the argument of abstraction deserves a fresh look in the light of modern general purpose languages (GPLs), which have undergone a significant evolution in the last two decades. In this paper, we report about a study in which we compare the complexity and size of model transformations written in three different languages, namely (i) the Atlas Transformation Language (ATL), (ii) Java SE5 (2004–2009), and (iii) Java SE14 (2020); the Java transformations are derived from an ATL specification using a translation schema we developed for our study. In a nutshell, we found that some of the new features in Java SE14 compared to Java SE5 help to significantly reduce the complexity of transformations written in Java, by as much as 45%. At the same time, however, the relative amount of complexity that stems from aspects that ATL can hide from the developer, which is about 40% of the total complexity, stays about the same. Furthermore, we discovered that while transformation code in Java SE14 requires up to 25% fewer lines of code, the number of words written in both versions stays about the same. And while the number of words stays about the same, their distribution throughout the code changes significantly. Based on these results, we discuss the concrete advancements in newer Java versions. We also discuss to which extent new language advancements justify writing transformations in a general purpose language rather than a dedicated transformation language. We further indicate potential avenues for future research on the comparison of MTLs and GPLs in a model transformation context.
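
    To illustrate the kind of stylistic difference the study measures, the sketch below writes the same trivial transformation step once in a Java SE5 style and once using features available in newer Java versions (local type inference and streams). The Member and Person types are invented for this example and are not the metamodels or the translation schema used in the paper.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative only: the same "model transformation" step in two Java styles.
public class TransformationStyles {

    record Member(String name, int birthYear) {}
    record Person(String fullName, boolean adult) {}

    // Java SE5 style: explicit loops and intermediate collections.
    static List<Person> transformOldStyle(List<Member> members) {
        List<Person> result = new ArrayList<Person>();
        for (Member m : members) {
            boolean adult = (2024 - m.birthYear()) >= 18;
            result.add(new Person(m.name(), adult));
        }
        return result;
    }

    // Newer style: local type inference and streams remove much of the plumbing
    // that a dedicated MTL like ATL hides behind declarative rules.
    static List<Person> transformNewStyle(List<Member> members) {
        var currentYear = 2024;
        return members.stream()
                .map(m -> new Person(m.name(), currentYear - m.birthYear() >= 18))
                .collect(Collectors.toList());
    }
}
```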

    Generator-Composition for Aspect-Oriented Domain-Specific Languages

    Software systems are complex, as they must cover a diverse set of requirements describing functionality and the environment. Software engineering addresses this complexity with Model-Driven Engineering (MDE). MDE utilizes different models and metamodels to specify views and aspects of a software system. Subsequently, these models must be transformed into code and other artifacts, which is performed by generators. Information systems and embedded systems are often used over decades. Over time, they must be modified and extended to fulfill new and changed requirements. These alterations can be triggered by the modeling domain and by technology changes in both the platform and programming languages. In MDE these alterations result in changes of syntax and semantics of metamodels, and subsequently of generator implementations. In MDE, generators can become complex software applications. Their complexity depends on the semantics of source and target metamodels, and the number of involved metamodels. Changes to metamodels and their semantics require generator modifications and can cause architecture and code degradation. This can result in errors in the generator, which have a negative effect on development costs and time. Furthermore, these errors can reduce quality and increase costs in projects utilizing the generator. Therefore, we propose the generator construction and evolution approach GECO, which supports decoupling of generator components and their modularization. GECO comprises three contributions: (a) a method for metamodel partitioning into views, aspects, and base models together with partitioning along semantic boundaries, (b) a generator composition approach utilizing megamodel patterns for generator fragments, which are generators depending on only one source and one target metamodel, (c) an approach to modularize fragments along metamodel semantics and fragment functionality. All three contributions together support modularization and evolvability of generators.
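
    The notion of a generator fragment that depends on exactly one source and one target metamodel, and the composition of such fragments, can be sketched as follows. The interface and the composition helper are assumptions for illustration and are not GECO's actual API.

```java
import java.util.function.Function;

// Hypothetical sketch of generator fragments and their composition.
public final class GeneratorFragments {

    // A fragment maps instances of one source metamodel to one target metamodel.
    interface GeneratorFragment<S, T> extends Function<S, T> {}

    // Composing two fragments via an intermediate metamodel yields a new fragment.
    static <S, M, T> GeneratorFragment<S, T> compose(GeneratorFragment<S, M> first,
                                                     GeneratorFragment<M, T> second) {
        return source -> second.apply(first.apply(source));
    }

    // Example run with plain strings standing in for metamodel instances.
    public static void main(String[] args) {
        GeneratorFragment<String, String> viewToIntermediate = view -> "IR(" + view + ")";
        GeneratorFragment<String, String> intermediateToCode = ir -> "code generated from " + ir;

        GeneratorFragment<String, String> generator = compose(viewToIntermediate, intermediateToCode);
        System.out.println(generator.apply("DeploymentView"));
    }

    private GeneratorFragments() {}
}
```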

    Role-Modeling in Round-Trip Engineering for Megamodels

    Software is becoming more and more a part of our daily life and makes it easier, e.g., in the areas of communication and infrastructure. Model-driven software development forms the basis for the development of software through the use and combination of different models, which serve as central artifacts in the software development process. In this respect, model-driven software development comprises the process from requirement analysis through design to software implementation. This set of models, with their relationships to each other, forms a so-called megamodel. Due to the overlapping of the models, inconsistencies occur between the models, which must be removed. Round-trip engineering is therefore a mechanism for synchronizing models and is the foundation for ensuring consistency between models. Most of the current approaches in this area, however, work with outdated batch-oriented transformation mechanisms, which no longer meet the requirements of more complex, long-living, and ever-changing software. In addition, the creation of megamodels is time-consuming and complex, and they represent unmanageable constructs for a single user. The aim of this thesis is to create a megamodel by means of easy-to-learn mechanisms and to achieve its consistency by removing redundancy on the one hand and by incrementally managing consistency relationships on the other hand. In addition, views must be created on parts of the megamodel to extract them across internal model boundaries. To achieve these goals, the role concept of Kühn (2014) is used in the context of model-driven software development; it was developed in the Research Training Group 'Role-based Software Infrastructures for continuous-context-sensitive Systems.' A contribution of this work is a role-based single underlying model approach, which enables the generation of views on heterogeneous models. Besides, an approach for the synchronization of different models has been developed, which enables the role-based single underlying model approach to be extended by new models. The combination of these two approaches creates a runtime-adaptive megamodel approach that can be used in model-driven software development. The resulting approaches are evaluated based on an example from the literature, which covers all areas of the work. In addition, the model synchronization approach is evaluated in connection with the Transformation Tool Contest Case from 2019.
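
    The role concept referred to above can be illustrated with a generic sketch in which a core element dynamically plays roles and a view extracts only the role it needs. This is a textbook-style illustration under assumed names, not the thesis's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Generic illustration of the role pattern: a player object can dynamically
// acquire roles, and views query only the roles they care about.
public class RoleSketch {

    interface Role {}

    static class Player {
        private final String name;
        private final Map<Class<? extends Role>, Role> roles = new HashMap<>();

        Player(String name) { this.name = name; }

        <R extends Role> void play(Class<R> type, R role) { roles.put(type, role); }

        <R extends Role> Optional<R> as(Class<R> type) {
            return Optional.ofNullable(type.cast(roles.get(type)));
        }

        String name() { return name; }
    }

    // Two roles the same model element might play in different views of a megamodel.
    record RequirementRole(String requirementId) implements Role {}
    record ImplementationRole(String className) implements Role {}

    public static void main(String[] args) {
        Player element = new Player("LoginFeature");
        element.play(RequirementRole.class, new RequirementRole("REQ-42"));
        element.play(ImplementationRole.class, new ImplementationRole("LoginController"));

        // A requirements view extracts only the role it needs.
        element.as(RequirementRole.class)
               .ifPresent(r -> System.out.println(element.name() + " traces to " + r.requirementId()));
    }
}
```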