
    Resources Events Agents (REA), a text DSL for OMNIA Entities

    Numbersbelieve has been developing the OMNIA platform, a web application platform for building applications using low-code principles and Agile approaches. Modeling Entities is an application used on the platform to create new entities. The OMNIA Entity concept has the following properties: Agents, Commitments, Documents, Events, Generic entities, Resources or Series. Most of these concepts are in accordance with the Resources Events Agents (REA) ontology but are not formalized. One of the goals of Numbersbelieve is a formalization of the REA concepts according to the ontology, first for the application that creates entities on the OMNIA platform and later for other applications. REA is an enterprise ontology developed by McCarthy (1979, 1982) that has its origin in accounting database systems; later, Geerts and McCarthy (2002, 2006) extended the original model with new concepts. To formalize the concepts of the REA ontology, this research shows the development of a textual Domain-Specific Language (DSL) based on Model-Driven Engineering (MDE), a development methodology that centers software development on models. This simplifies the engineering process, as models represent the actions and behaviors of a system even before the coding phase starts. The research is structured according to the Design Science Research Methodology (DSRM). Design Science (DS) is a methodology for solving problems that seeks to innovate by creating useful artifacts that define practices, projects and implementations, and is therefore suitable for this research. Three artifacts were developed for the formalization of the DSL: a meta-model (the abstract syntax), a textual language (the concrete syntax) and a JSON file for interaction with OMNIA. The first phase of the DSRM was to identify the problem mentioned above. The following phase focuses on the identification of requirements, which determined the REA concepts to be included in the meta-model and the textual language.
Subsequently, the artifacts and the language editor were developed. The editor allows use cases provided by the Numbersbelieve team to be defined with the DSL, faults to be corrected and the language to be improved. The results were evaluated against the objectives and requirements, all of which were successfully met. Based on the analysis of the artifacts, the use of the language and the interaction with the OMNIA platform through the JSON file, it is concluded that the DSL is suitable for interacting with the OMNIA platform through its Application Programming Interface (API), and it helped demonstrate that other applications on the platform could be modeled using a REA approach.
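The entity-to-JSON flow described above can be sketched in miniature. The following Python fragment is a hypothetical illustration only: the entity kinds come from the abstract, but the class names, attribute format and JSON layout are assumptions, not the thesis's actual DSL syntax or the OMNIA API schema.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical sketch: REA-style entity kinds as plain dataclasses,
# serialized to JSON for a platform API. All names here are assumptions
# for illustration, not the thesis's actual DSL or schema.

@dataclass
class Attribute:
    name: str
    type: str          # e.g. "string", "decimal", "date"
    required: bool = True

@dataclass
class Entity:
    name: str
    kind: str          # one of the REA-inspired kinds below
    attributes: list = field(default_factory=list)

REA_KINDS = {"Agent", "Commitment", "Document", "Event",
             "GenericEntity", "Resource", "Series"}

def to_json(entity: Entity) -> str:
    """Validate the entity kind and emit the JSON payload."""
    if entity.kind not in REA_KINDS:
        raise ValueError(f"unknown entity kind: {entity.kind}")
    return json.dumps(asdict(entity), indent=2)

customer = Entity("Customer", "Agent",
                  [Attribute("name", "string"), Attribute("vatNumber", "string")])
print(to_json(customer))
```

A concrete textual DSL would put a grammar and editor in front of this object model; the JSON emitted at the end plays the role of the interaction file sent to the platform API.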

    Applying MDE tools to defining domain specific languages for model management

    In model-driven engineering (MDE), modeling languages play a central role. They range from the most generic languages, such as UML, to more specialized ones, called domain-specific modeling languages (DSMLs). These languages are used to create and manage models and must accompany them throughout their life cycle and evolution. In this paper we propose a domain-specific language for model management, developed with techniques and tools used in the MDE paradigm, to facilitate the user's task. Authors: Gabriela Pérez; Jerónimo Irazábal; Claudia Fabiana Pons; Roxana Silvia Giandini (Laboratorio de Investigación y Formación en Informática Avanzada, Facultad de Informática, Universidad Nacional de La Plata, Argentina).

    Software development environments and tools in MDE

    Abstract. Model-Driven Engineering (MDE) is the notion that we can construct a model of a system and then transform it into the real thing. The development of software in MDE using Domain-Specific Languages (DSLs) has two phases. First, modeling experts develop artifacts such as DSLs and transformation mechanisms. Second, non-technical experts (domain experts or end users) use the created artifacts to develop applications, thanks to the high level of abstraction the technology allows. Several factors are considered to limit the use of MDE; one of them is the lack of knowledge of the tools and of the development activities in MDE. To support the MDE initiative, the present work describes the theoretical foundations of MDE, as well as the main activities involved in building several MDE artifacts with some of the best-known tools in this technology.

    Teaching Model Driven Language Handling

    Many universities teach computer language handling by mainly focussing on compiler theory, although MDD (model-driven development) and meta-modelling are increasingly important in the software industry as well as in computer science. In this article, we share some experiences from teaching a course in computer language handling where the focus is on MDD principles. We discuss the choice of tools and technologies used in demonstrations and exercises, and also give a brief glimpse of a prototype for a simple meta-model-based language handling tool that is currently being designed and considered for future use in teaching.

    From a data-model to generated access-and store-patterns

    This report describes the design and implementation of a repository generation tool that is used to generate repositories from domain models of the ASML TWINSCAN system. The TWINSCAN system handles a huge volume of data. In the current TWINSCAN SW architecture, data transfer is combined with control flow. Data transfer to a component that is not under the sender’s control must be performed through a common parent in the hierarchy. There are several problems with this approach with respect to execution, encapsulation, and locality of change. These problems drive the need to separate data, control, and algorithms in the scanner’s software architecture. To tackle these problems, the main objective of this project was to design and implement a repository generation tool for generating data repositories from domain models. The structure of this data is defined by a domain model in an implementation-independent formalism. The tool supports several flavors of repositories. As a result of the flexibility of the architecture, it is possible to switch between technologies and implementation patterns without touching domain models. The repository generation tool was tested through continuous architecture and design reviews by supervisors, unit tests, and tests by stakeholders in the real environment. The results obtained in this project are being used in an active ASML project within the Metrology group. The results have improved productivity and increased efficiency.
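The core idea of generating an access-and-store pattern from an implementation-independent domain model can be sketched as follows. This is a toy Python illustration under assumed names: the `DOMAIN_MODEL` format, the template, and the generated class are inventions for illustration, not the actual TWINSCAN tool or its formalism.

```python
# Hypothetical sketch of repository generation from a domain model: a tiny
# "domain model" described as data, from which Python source for a typed
# in-memory repository class is generated. The real tool targets the ASML
# TWINSCAN architecture and supports several repository flavors; switching
# flavor would mean swapping the template, not the domain model.

DOMAIN_MODEL = {
    "entity": "Wafer",
    "fields": {"id": "int", "lot": "str", "diameter_mm": "float"},
}

TEMPLATE = '''\
class {entity}Repository:
    """Generated access-and-store pattern for {entity}."""
    def __init__(self):
        self._store = {{}}

    def store(self, key, {args}):
        self._store[key] = dict({kwargs})

    def access(self, key):
        return self._store[key]
'''

def generate(model: dict) -> str:
    """Emit Python source for a repository class from the domain model."""
    names = list(model["fields"])
    return TEMPLATE.format(
        entity=model["entity"],
        args=", ".join(names),
        kwargs=", ".join(f"{n}={n}" for n in names),
    )

# Generate and load the repository class, then exercise it.
namespace = {}
exec(generate(DOMAIN_MODEL), namespace)
repo = namespace["WaferRepository"]()
repo.store(1, id=1, lot="A17", diameter_mm=300.0)
print(repo.access(1)["lot"])   # prints "A17"
```

Because the data layout lives only in the generated class, the separation of data access from control flow that the report argues for falls out naturally: callers see `store`/`access`, never the underlying structure.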

    Megamodeling Software Platforms: Automated Discovery of Usable Cartography from Available Metadata

    Model-driven reverse engineering focuses on automatically discovering models from different kinds of available information on existing software systems. Although the source code of an application is often used as a basic input, this information may take various forms such as: design "models", bug reports, or any kind of documentation in general. All this metadata may have been either built manually or generated (semi)automatically during the whole software life cycle, from the specification and development phase to the effective running of the system. This paper proposes an automated and extensible MDE approach to build a usable cartography of a given platform from available metadata by combining several MDE techniques. As a running example, the approach has been applied to the TopCased MDE platform for Embedded & Real-Time Systems.

    Parsing and Printing Java 7-15 by Extending an Existing Metamodel

    Many technologies and frameworks are built upon the open source Eclipse Modeling Framework (EMF) to provide model-based software development or even model-based consistency preservation of software artifacts. In this context, not only EMF-based modeling of the source code but also parsing of the source code and printing the model back into source code files are required. The Java Model Parser and Printer (JaMoPP) provides an EMF-based environment for modeling, parsing and printing Java source code. However, it supports only the syntax of Java 5 and 6. Moreover, JaMoPP is based on technologies that have technical problems and are no longer maintained. In this work, we extend the metamodel of JaMoPP to support Java versions 7-15. Our extensions expand the metamodel with new features, for instance, the diamond operator, lambda expressions, and modules. Moreover, we implemented a new parser and printer. The parser implementation is based on the well-maintained Eclipse Java Development Tools (JDT), which reduces the maintenance effort of extending JaMoPP for new versions of Java.
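The parse-model-print round trip that JaMoPP provides for Java can be illustrated with Python's own stdlib `ast` module as an analogy. This is explicitly a stand-in: JaMoPP itself is EMF-based and parses Java via the Eclipse JDT, none of which appears here; only the shape of the workflow is the same.

```python
import ast

# Analogy for the JaMoPP workflow, using Python's stdlib `ast` module:
# parse source text into a tree-structured model, inspect or transform
# the model, then print it back out as source code.

source = "def area(r):\n    return 3.14159 * r ** 2\n"

# Parse: source text -> model (an object graph of AST nodes).
tree = ast.parse(source)

# The model can be navigated and modified like any object graph;
# in JaMoPP the analogous graph is an EMF model of the Java program.
func = tree.body[0]
print(func.name)            # prints "area"

# Print: model -> source text again. As with most printers, the output
# is normalized; original formatting is not preserved.
print(ast.unparse(tree))
```

The hard part the paper addresses is keeping the metamodel (the node types above) in sync with an evolving language, which is exactly where new constructs like lambdas and modules force extensions.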

    Well-Formed and Scalable Invasive Software Composition

    Software components provide essential means to structure and organize software effectively. However, frequently, required component abstractions are not available in a programming language or system, or are not adequately combinable with each other. Invasive software composition (ISC) is a general approach to software composition that unifies component-like abstractions such as templates, aspects and macros. ISC is based on fragment composition, and composes programs and other software artifacts at the level of syntax trees. Therefore, a unifying fragment component model is related to the context-free grammar of a language to identify extension and variation points in syntax trees as well as valid component types. By doing so, fragment components can be composed by transformations at the respective extension and variation points, so that composition always yields results that are valid with respect to the underlying context-free grammar. However, given a language’s context-free grammar, the composition result may still be incorrect. Context-sensitive constraints such as type constraints may be violated so that the program cannot be compiled and/or interpreted correctly. While a compiler can detect such errors after composition, it is difficult to relate them back to the original transformation step in the composition system, especially in the case of complex compositions with several hundreds of such steps. To tackle this problem, this thesis proposes well-formed ISC—an extension to ISC that uses reference attribute grammars (RAGs) to specify fragment component models and fragment contracts to guard compositions with context-sensitive constraints. Additionally, well-formed ISC provides composition strategies as a means to configure composition algorithms and handle interferences between composition steps. Developing ISC systems for complex languages such as programming languages is a complex undertaking.
Composition-system developers need to supply or develop adequate language and parser specifications that can be processed by an ISC composition engine. Moreover, the specifications may need to be extended with rules for the intended composition abstractions. Current approaches to ISC require complete grammars to be able to compose fragments in the respective languages. Hence, the specifications need to be developed exhaustively before any component model can be supplied. To tackle this problem, this thesis introduces scalable ISC—a variant of ISC that uses island component models as a means to define component models for partially specified languages while the whole language is still supported. Additionally, a scalable workflow for agile composition-system development is proposed which supports the development of ISC systems in small increments using modular extensions. All theoretical concepts introduced in this thesis are implemented in the Skeletons and Application Templates framework SkAT. It supports “classic”, well-formed and scalable ISC by leveraging RAGs as its main specification and implementation language. Moreover, several composition systems based on SkAT are discussed, e.g., a well-formed composition system for Java and a C preprocessor-like macro language. In turn, those composition systems are used as composers in several example applications such as a library of parallel algorithmic skeletons.
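The two central ideas above, composing fragments at variation points and guarding each composition step with a contract, can be sketched in a few lines. This is a toy model under assumed names, not the SkAT framework's actual API, and the "contract" here is a simple set check standing in for RAG-evaluated context-sensitive constraints.

```python
# Invasive software composition in miniature: fragments are composed at
# named variation points ("slots"), and a fragment contract rejects an
# invalid composition at the step where it happens, rather than leaving
# the error for the compiler to find after hundreds of steps.
# A toy sketch, not SkAT's actual API.

class Fragment:
    def __init__(self, name, slots=(), provides=()):
        self.name = name
        self.slots = dict.fromkeys(slots)   # open variation points
        self.provides = set(provides)       # context-sensitive facts, e.g. declared names

def bind(host: Fragment, slot: str, part: Fragment, requires=()):
    """Compose `part` into `host` at `slot`, guarded by a simple contract."""
    if slot not in host.slots:
        raise ValueError(f"no such variation point: {slot}")
    # Fragment contract: everything the part requires must already be
    # provided by the host, otherwise this composition step is rejected.
    missing = set(requires) - host.provides
    if missing:
        raise ValueError(f"contract violated, missing: {sorted(missing)}")
    host.slots[slot] = part
    host.provides |= part.provides
    return host

cls = Fragment("class Stack", slots=["methods"], provides={"field items"})
push = Fragment("method push", provides={"method push"})
bind(cls, "methods", push, requires={"field items"})
print(cls.slots["methods"].name)   # prints "method push"
```

In the real system the grammar guarantees syntactic validity of every composition, while the RAG-based contracts cover the context-sensitive part this sketch only gestures at.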

    Kevoree Modeling Framework (KMF): Efficient modeling techniques for runtime use

    The creation of Domain-Specific Languages (DSLs) counts as one of the main goals in the field of Model-Driven Software Engineering (MDSE). The main purpose of these DSLs is to facilitate the manipulation of domain-specific concepts by providing developers with specific tools for their domain of expertise. A natural approach to creating DSLs is to reuse existing modeling standards and tools. In this area, the Eclipse Modeling Framework (EMF) has rapidly become the de facto standard in MDSE for building DSLs and tools based on generative techniques. However, the use of EMF-generated tools in domains like the Internet of Things (IoT), Cloud Computing or Models@Runtime reaches several limitations. In this paper, we identify several properties the generated tools must comply with to be usable in domains other than desktop-based software systems. We then challenge EMF on these properties and describe our approach to overcoming the limitations. Our approach, implemented in the Kevoree Modeling Framework (KMF), is finally evaluated according to the identified properties and compared to EMF. (Technical report TR-SnT-2014-11, ISBN 978-2-87971-131-7, 2014.)