
    Achieving QVTO & ATL Interoperability: An Experience Report on the Realization of a QVTO to ATL Compiler

    With the emergence of a number of model transformation languages, the need for interoperability among them increases. The degree to which this interoperability can be achieved between two given languages depends heavily on their paradigms (declarative vs. imperative). Previous studies have indicated that the QVT and ATL languages are compatible. In this paper we study the possibility of compiling QVT Operational to the ATL virtual machine and describe our experience of developing such a compiler. The resulting compiled QVT transformations can run on top of existing ATL tools. Thereby we achieve not only QVT/ATL interoperability but also QVT conformance for the ATL tools, as defined in the QVT specification.
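    The abstract does not detail the compilation scheme, but its core idea, re-expressing an imperative QVT-O mapping as a declarative rule structure that ATL machinery can execute, can be pictured with a toy sketch. Every class and field name below is invented for illustration; this is not the actual design of the compiler described in the paper.

```java
// Toy illustration: turn a QVT-O-style "mapping" description into an
// ATL-style matched-rule description. All types here are hypothetical.
import java.util.ArrayList;
import java.util.List;

class QvtoMapping {
    String sourceType;          // e.g. "Family"
    String targetType;          // e.g. "Person"
    List<String[]> assignments = new ArrayList<>(); // {targetFeature, sourceExpr}
    QvtoMapping(String src, String tgt) { sourceType = src; targetType = tgt; }
}

class AtlRule {
    String name, fromType, toType;
    List<String[]> bindings = new ArrayList<>();    // {feature, expression}
}

public class QvtoToAtl {
    // Translate one imperative mapping into one declarative matched rule.
    static AtlRule compile(QvtoMapping m) {
        AtlRule r = new AtlRule();
        r.name = m.sourceType + "2" + m.targetType;
        r.fromType = m.sourceType;
        r.toType = m.targetType;
        r.bindings.addAll(m.assignments); // feature assignments become bindings
        return r;
    }

    public static void main(String[] args) {
        QvtoMapping m = new QvtoMapping("Family", "Person");
        m.assignments.add(new String[] {"name", "self.lastName"});
        AtlRule r = compile(m);
        System.out.printf("rule %s { from s : %s to t : %s (%s <- %s) }%n",
                r.name, r.fromType, r.toType, r.bindings.get(0)[0], r.bindings.get(0)[1]);
    }
}
```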

    Simulation of Road Traffic Applying Model-Driven Engineering

    Road traffic is an important phenomenon in modern societies, and the study of its different aspects in the many scenarios where it occurs is relevant to a wide range of problems. At the same time, its scale and complexity make it hard to study. Traffic simulations can alleviate these difficulties by simplifying the scenarios under consideration and controlling their variables. However, their development presents difficulties of its own, chiefly the need to integrate the ways of working of researchers and developers from multiple fields. Model-Driven Engineering (MDE) addresses these problems using Modelling Languages (MLs) and semi-automatic transformations to organise and describe the development, from requirements to code. This paper presents a domain-specific MDE framework for simulations of road traffic. It comprises an extensible ML, support tools, and development guidelines. The ML adopts an agent-based approach, which focuses on the roles of individuals in road traffic and their decision-making. A case study shows the process of modelling a traffic theory with the ML, and how to specialise that specification for an existing target platform and its simulations. The results are the basis for comparison with related work.
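    As a rough illustration of the agent-based view the ML adopts (individual road users with a role and a decision-making step), here is a minimal, self-contained sketch. The agent types and the driving rule are invented for illustration and are not taken from the framework.

```java
// Hypothetical agent-based traffic step: each road user is an agent with
// a decision rule. The car-following rule below is a deliberately toy one.
import java.util.List;

interface TrafficAgent {
    void decide(double gapToLeaderMeters); // one decision-making step
}

class CarDriver implements TrafficAgent {
    double speed = 13.9; // m/s (~50 km/h)
    public void decide(double gap) {
        // Toy rule: brake when the gap is short, otherwise accelerate mildly.
        speed = (gap < 10) ? Math.max(0, speed - 2.0) : Math.min(16.7, speed + 0.5);
    }
}

public class TrafficSim {
    public static void main(String[] args) {
        List<TrafficAgent> agents = List.of(new CarDriver(), new CarDriver());
        for (int step = 0; step < 3; step++)
            for (TrafficAgent a : agents) a.decide(8.0); // all agents see an 8 m gap
    }
}
```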

    A Catalog of Reusable Design Decisions for Developing UML/MOF-based Domain-specific Modeling Languages

    In model-driven development (MDD), domain-specific modeling languages (DSMLs) act as a communication vehicle for aligning the requirements of domain experts with the needs of software engineers. With the rise of the UML as a de facto standard, UML/MOF-based DSMLs are now widely used for MDD. This paper documents design decisions collected from 90 UML/MOF-based DSML projects. These recurring design decisions were gained, on the one hand, by performing a systematic literature review (SLR) on the development of UML/MOF-based DSMLs; via the SLR, we retrieved 80 related DSML projects for review. On the other hand, we collected decisions from developing ten DSML projects ourselves. The design decisions are presented in the form of reusable decision records, with each decision record corresponding to a decision point in DSML development processes. Furthermore, we also report on frequently observed (combinations of) decision options as well as on associations between options, which may occur within a single decision point or between two decision points. This collection of decision-record documents targets decision makers in DSML development (e.g., DSML engineers, software architects, domain experts). Series: Technical Reports / Institute for Information Systems and New Media.
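    A reusable decision record of the kind described above can be pictured as a small data structure: one record per decision point, holding its options and observed associations between options. The field names below are assumptions for illustration, not the catalog's actual schema.

```java
// Hypothetical shape of a decision record from such a catalog.
import java.util.List;
import java.util.Map;

record DecisionOption(String name, String rationale) {}

record DecisionRecord(
        String decisionPoint,                // e.g. a concrete-syntax decision
        List<DecisionOption> options,
        Map<String, String> associations) {} // option -> frequently co-occurring option

public class Catalog {
    public static void main(String[] args) {
        DecisionRecord r = new DecisionRecord(
                "Concrete syntax style",
                List.of(new DecisionOption("UML profile", "reuses UML editors"),
                        new DecisionOption("MOF-based metamodel", "full notational freedom")),
                Map.of("UML profile", "Diagrammatic notation"));
        System.out.println(r.decisionPoint() + ": " + r.options());
    }
}
```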

    The caCORE Software Development Kit: Streamlining construction of interoperable biomedical information services

    BACKGROUND: Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. RESULTS: The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. CONCLUSION: The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development.
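    The actual caCORE APIs are not reproduced here; the hypothetical sketch below only illustrates the general payoff of such code generation: one uniform, object-oriented retrieval syntax for every class that appeared in the original UML model.

```java
// NOT the real caCORE API: an invented facade showing how generated
// middleware can expose one consistent query syntax across model classes.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

class Gene { String symbol; Gene(String s) { symbol = s; } } // sample model class

class ModelService {
    private final List<Object> store = new ArrayList<>();
    void register(Object o) { store.add(o); }
    // Same retrieval call shape for every class of the domain model.
    <T> List<T> query(Class<T> type, Predicate<T> criteria) {
        List<T> out = new ArrayList<>();
        for (Object o : store)
            if (type.isInstance(o) && criteria.test(type.cast(o)))
                out.add(type.cast(o));
        return out;
    }
}

public class Demo {
    public static void main(String[] args) {
        ModelService svc = new ModelService();
        svc.register(new Gene("TP53"));
        List<Gene> hits = svc.query(Gene.class, g -> g.symbol.equals("TP53"));
        System.out.println(hits.size()); // 1
    }
}
```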

    A Model Driven Approach to Model Transformations

    The OMG's Model Driven Architecture (MDA) initiative has been the focus of much attention in both academia and industry, due to its promise of more rapid and consistent software development through the increased use of models. In order for MDA to reach its full potential, the ability to manipulate and transform models, most obviously from the Platform Independent Model (PIM) to Platform Specific Models (PSM), is vital. Recognizing this need, the OMG issued a Request For Proposals (RFP) largely concerned with finding a suitable mechanism for transforming models. This paper outlines the relevant background material, summarizes the approach taken by the QVT-Partners (to whom the authors belong), presents a non-trivial example using the QVT-Partners approach, and finally sketches out what the future holds for model transformations.
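    As a loose illustration of the PIM-to-PSM step mentioned above (not of the QVT-Partners notation itself, which is a dedicated transformation language), the toy mapping below refines a platform-independent class description into a platform-specific relational one. All names are invented.

```java
// Toy PIM-to-PSM refinement: class -> table, attribute -> column.
import java.util.Map;

record PimClass(String name, Map<String, String> attributes) {}
record PsmTable(String tableName, Map<String, String> columns) {}

public class Pim2Psm {
    static PsmTable toRelational(PimClass c) {
        // Invented platform mapping rule for illustration only.
        return new PsmTable("TBL_" + c.name().toUpperCase(), c.attributes());
    }
    public static void main(String[] args) {
        PimClass c = new PimClass("Customer", Map.of("name", "String"));
        System.out.println(toRelational(c).tableName()); // TBL_CUSTOMER
    }
}
```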

    Migrating C/C++ Software to Mobile Platforms in the ADM Context

    Software technology is constantly evolving, and the development of applications therefore requires adapting software components and applications to new paradigms such as Pervasive Computing, Cloud Computing and the Internet of Things. In particular, many desktop software components need to be migrated to mobile technologies. This migration faces many challenges due to the proliferation of different mobile platforms. Developers usually build applications tailored to each type of device, expending time and effort. As a result, new programming languages are emerging to integrate the native behaviors of the different platforms targeted in development projects. In this direction, the Haxe language allows writing mobile applications that target all major mobile platforms. Novel technical frameworks for information integration and tool interoperability, such as Architecture-Driven Modernization (ADM) proposed by the Object Management Group (OMG), can help to manage a huge diversity of mobile technologies. The Architecture-Driven Modernization Task Force (ADMTF) was formed to create specifications and promote industry consensus on the modernization of existing applications. In this work, we propose a migration process from C/C++ software to different mobile platforms that integrates ADM standards with Haxe. We exemplify the different steps of the process with a simple case study, the migration of the "Mandelbrot Set" C++ application. The proposal was validated in the Eclipse Modeling Framework, considering that some of its tools and run-time environments are aligned with ADM standards.
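    The following toy pipeline only hints at the ADM-style stages such a migration implies: reverse engineering the C/C++ source into a model, then forward engineering to Haxe source. The real process relies on OMG ADM standards (e.g. KDM models) and EMF tooling; every name below is a stand-in.

```java
// Hypothetical two-stage migration sketch; not the paper's toolchain.
public class MigrationPipeline {
    record CodeModel(String functionName, String body) {} // stand-in for a KDM-like model

    static CodeModel reverseEngineer(String cppSource) {
        // Toy extraction: pretend we parsed one function out of the C++ source.
        return new CodeModel("mandelbrot", cppSource);
    }

    static String generateHaxe(CodeModel m) {
        // Toy forward engineering: emit a Haxe function skeleton as text.
        return "function " + m.functionName() + "() { /* migrated body */ }";
    }

    public static void main(String[] args) {
        CodeModel model = reverseEngineer("int mandelbrot() { ... }");
        System.out.println(generateHaxe(model));
    }
}
```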

    Automatic generation of software applications: a platform-based MDA approach

    The Model Driven Architecture (MDA) allows moving software development from the time-consuming and error-prone level of writing program code to the next higher level of modeling. In order to benefit from this technology, two requirements must be satisfied: first, the creation of compact, complete and correct platform independent models (PIMs), and second, the development of a flexible and extensible model transformation framework that takes into account frequent changes of the target platform. In this thesis a platform-based methodology is developed to create PIMs by abstracting common modeling elements into a platform independent modeling library called the Design Platform Model (DPM). The DPM contains OCL-based types for modeling primitive and collection types, a platform independent GUI toolkit, as well as other common modeling elements, such as those for IO operations. Furthermore, a DPM profile containing diverse domain-specific and design-pattern-based stereotypes is developed to create PIMs with high-level semantics. The behavior in PIMs is specified using an OCL-like action language called eXecutable OCL (XOCL), which is also developed in this thesis. For model transformation, the model compiler MOCCA is developed on a flexible and extensible architecture. The model mapper components in the current version of MOCCA are able to map desktop applications onto the JSE platform, and both the business object layer and the persistence layer of three-layered enterprise applications onto the JEE platform and the SAP ABAP platform. The entire model transformation process finishes with complete code generation.
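    The "flexible and extensible" mapper architecture can be pictured as one mapper interface with one implementation per target platform; the interface and class names below are assumptions for illustration, not MOCCA's actual API.

```java
// Hypothetical pluggable-mapper architecture: adding a target platform
// means adding one implementation, without touching the compiler core.
interface ModelMapper {
    String targetPlatform();
    String map(String pimElement); // returns generated code for one PIM element
}

class JseMapper implements ModelMapper {
    public String targetPlatform() { return "JSE"; }
    public String map(String e) { return "// Java SE code for " + e; }
}

class AbapMapper implements ModelMapper {
    public String targetPlatform() { return "SAP ABAP"; }
    public String map(String e) { return "\" ABAP code for " + e; }
}

public class ModelCompiler {
    public static void main(String[] args) {
        ModelMapper[] mappers = { new JseMapper(), new AbapMapper() };
        for (ModelMapper m : mappers)
            System.out.println(m.targetPlatform() + ": " + m.map("OrderForm"));
    }
}
```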

    Compilation of Heterogeneous Models: Motivations and Challenges

    The widespread use of model driven engineering in the development of software-intensive systems, including high-integrity embedded systems, gave rise to a "Tower of Babel" of modeling languages. System architects may use languages such as OMG SysML and MARTE, SAE AADL or EAST-ADL; control and command engineers tend to use graphical tools such as MathWorks Simulink/Stateflow or Esterel Technologies SCADE, or textual languages such as MathWorks Embedded Matlab; software engineers usually rely on OMG UML; and, of course, many in-house domain specific languages are equally used at any step of the development process. This heterogeneity of modeling formalisms raises several questions on verification and code generation for systems described using heterogeneous models: How can we ensure consistency across multiple modeling views? How can we generate code that is optimized with respect to multiple modeling views? How can we ensure model-level verification is consistent with the run-time behavior of the generated executable application? In this position paper we describe the motivations and challenges of analysis and code generation from heterogeneous models when intra-view consistency, optimization and safety are major concerns. We then introduce Project P and Hi-MoCo, respectively FUI- and Eurostars-funded collaborative projects tackling the challenges above. This work continues and extends, in a wider context, the work carried out by the Gene-Auto project [1], [2]. We present the key elements of Project P and Hi-MoCo, in particular: (i) the philosophy for the identification of safe and minimal practical subsets of input modeling languages; (ii) the overall architecture of the toolsets, the supported analysis techniques and the target languages for code generation; and finally, (iii) the approach to cross-domain qualification for an open-source, community-driven toolset.

    Resources Events Agents (REA), a text DSL for OMNIA Entities

    Numbersbelieve has been developing the OMNIA platform, a web application platform for building applications following Low-code principles and Agile approaches. Modeling Entities is the application used on the platform to create new entities. The OMNIA Entity concept has the following properties: Agents, Commitments, Documents, Events, Generic Entities, Resources or Series. Most of these concepts are in accordance with the Resources Events Agents (REA) ontology but are not formalized. One of the goals of Numbersbelieve is to formalize the REA concepts according to the ontology, first for the application that creates entities on the OMNIA platform and later for other applications. REA is an enterprise ontology developed by McCarthy (1979, 1982) that has its origin in accounting database systems; Geerts and McCarthy (2002, 2006) later extended the original model with new concepts. To formalize the concepts of the REA ontology, this research presents the development of a textual Domain-Specific Language (DSL) based on the Model Driven Engineering (MDE) methodology, which focuses software development on models. This simplifies the engineering process, as it represents the actions and behaviors of a system even before the coding phase starts. The research is structured according to the Design Science Research Methodology (DSRM). Design Science (DS) is a methodology for solving problems that seeks to innovate by creating useful artifacts that define practices, projects and implementations, and is therefore suitable for this research. Three artifacts were developed for the formalization of the DSL: a meta-model (the abstract syntax), a textual language (the concrete syntax) and a JSON file for interaction with OMNIA. The first phase of DSRM was to identify the problem, as stated above. The next phase focused on the identification of requirements, which determined the REA concepts to be included in the meta-model and the textual language. Subsequently, the artifacts and the language editor were developed. The editor allows use cases provided by the Numbersbelieve team to be defined with the DSL, faults to be corrected and the language to be improved. The results were evaluated against the objectives and requirements, all of which were successfully met. Based on the analysis of the artifacts, the use of the language and the interaction with the OMNIA platform through the JSON file, it is concluded that the DSL is suitable for interacting with the OMNIA platform through its Application Program Interface (API), and it helped demonstrate that other applications on the platform could be modeled using a REA approach.
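    As an illustration of the core REA concepts the DSL formalizes (agents, resources, economic events and the duality between events), here is a plain-Java sketch. The names follow the REA literature, but the thesis's actual artifact is a meta-model and textual grammar, so this is only an approximation.

```java
// Core REA concepts as plain types: an exchange pairs an increment event
// with a decrement event (REA duality). Illustrative only.
import java.util.List;

record Agent(String name) {}
record Resource(String name) {}
record EconomicEvent(String name, Agent provider, Agent receiver, Resource resource) {}
record Duality(EconomicEvent increment, EconomicEvent decrement) {}

public class ReaExample {
    public static void main(String[] args) {
        Agent buyer = new Agent("Customer"), seller = new Agent("Enterprise");
        EconomicEvent sale = new EconomicEvent("Sale", seller, buyer, new Resource("Goods"));
        EconomicEvent payment = new EconomicEvent("Payment", buyer, seller, new Resource("Cash"));
        Duality exchange = new Duality(sale, payment);
        System.out.println(List.of(exchange.increment().name(), exchange.decrement().name()));
    }
}
```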

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, the two problems occur together: data classes lack functionality that has been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
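    A toy version of such a heuristic detector is sketched below: flag classes whose metrics suggest a god class (many methods, many collaborators) or a data class (fields with almost no behaviour). The metrics and thresholds are invented placeholders, not the paper's calibrated heuristics.

```java
// Toy metrics-based smell detection over per-class measurements.
import java.util.List;

record ClassMetrics(String name, int methods, int fields, int collaborators) {}

public class SmellDetector {
    static boolean isGodClass(ClassMetrics m) {
        return m.methods() > 40 && m.collaborators() > 10; // placeholder thresholds
    }
    static boolean isDataClass(ClassMetrics m) {
        return m.fields() > 5 && m.methods() <= 2;          // placeholder thresholds
    }
    public static void main(String[] args) {
        List<ClassMetrics> system = List.of(
                new ClassMetrics("OrderManager", 55, 12, 15),
                new ClassMetrics("OrderData", 8, 9, 1));
        for (ClassMetrics m : system)
            System.out.println(m.name() + " god=" + isGodClass(m) + " data=" + isDataClass(m));
    }
}
```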