15 research outputs found

    Grand Challenges of Traceability: The Next Ten Years

    Full text link
    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers representing current work in the community, organized across four process axes of traceability practice. The sessions covered topics including Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and Benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which envisions traceability that is always present, built into the engineering process, and has "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of software and systems traceability researchers as we move forward into the next decade of research.

    Exploring annotations for deductive verification

    Get PDF

    Quality of process modeling using BPMN: a model-driven approach

    Get PDF
    Dissertation for obtaining the degree of Doctor in Informatics Engineering. Context: The BPMN 2.0 specification contains the rules regarding the correct usage of the language's constructs. Practitioners have also proposed best practices for producing better BPMN models. However, those rules are expressed in natural language, sometimes yielding ambiguous interpretations and, therefore, flaws in the produced BPMN models. Objective: Ensuring the correctness of BPMN models is critical for the automation of processes. Hence, errors in BPMN model specifications should be detected and corrected at design time, since faults detected at later stages of process development are more costly and harder to correct. So, we need to assess the quality of BPMN models in a rigorous and systematic way. Method: We follow a model-driven approach for the formalization and empirical validation of BPMN well-formedness rules and BPMN measures for enhancing the quality of BPMN models. Results: Rule mining of the BPMN specification, as well as of recently published BPMN works, allowed the gathering of more than a hundred BPMN well-formedness and best-practice rules. Furthermore, we derived a set of BPMN measures aiming to provide information to process modelers regarding the correctness of BPMN models. Both the BPMN rules and the BPMN measures were empirically validated on samples of BPMN models. Limitations: This work does not cover control-flow formal properties of BPMN models, since they have been extensively discussed in other process modeling research. Conclusion: We intend to contribute to improving BPMN modeling tools through the formalization of well-formedness rules and BPMN measures to be incorporated in those tools, in order to enhance the quality of process modeling outcomes.
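
The abstract names the formalized rules but does not show them. As an illustration only, the sketch below checks one commonly cited BPMN well-formedness rule (start events have no incoming sequence flows, end events have no outgoing ones) over a hypothetical, minimal in-memory representation of a BPMN model; the thesis itself formalizes its rules in a model-driven setting, not with this structure.

```python
# Hypothetical, minimal BPMN representation used only for illustration.
from dataclasses import dataclass, field

@dataclass
class FlowNode:
    id: str
    kind: str                                        # "startEvent", "endEvent", "task", ...
    incoming: list = field(default_factory=list)     # ids of incoming sequence flows
    outgoing: list = field(default_factory=list)     # ids of outgoing sequence flows

def check_start_end_events(nodes):
    """Return violations of one well-formedness rule:
    start events must have no incoming flows, end events no outgoing flows."""
    violations = []
    for n in nodes:
        if n.kind == "startEvent" and n.incoming:
            violations.append(f"{n.id}: start event must not have incoming sequence flows")
        if n.kind == "endEvent" and n.outgoing:
            violations.append(f"{n.id}: end event must not have outgoing sequence flows")
    return violations

# Example usage with an intentionally flawed model
model = [
    FlowNode("start1", "startEvent", incoming=["f0"]),
    FlowNode("task1", "task", incoming=["f1"], outgoing=["f2"]),
    FlowNode("end1", "endEvent", outgoing=["f3"]),
]
for v in check_start_end_events(model):
    print(v)
```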

    Conservative and traceable executions of heterogeneous model management workflows

    Get PDF
    One challenge of developing large-scale systems is knowing how artefacts are interrelated across tools and languages, especially when traceability is mandated, e.g., by certifying authorities. Another challenge is the interoperability of all required tools, so that the software can be built, tested, and deployed efficiently as it evolves. Build systems have grown in popularity because they facilitate these activities. To cope with the complexities of the development process, engineers can adopt model-driven practices that allow them to raise the system's abstraction level by modelling its domain, thereby reducing the accidental complexity that comes from, e.g., writing boilerplate code. However, model-driven practices come with their own challenges, such as integrating heterogeneous model management tasks (e.g., validation) and modelling technologies (e.g., Simulink, a proprietary modelling environment for dynamic systems). While there are tools that support the execution of model-driven workflows, some support only specific modelling technologies, lack the generation of traceability information, or do not offer cutting-edge build-system features such as conservative execution, i.e., executing only the tasks affected by changes to resources. In this work we propose ModelFlow, a workflow language and interpreter able to specify and execute model management workflows conservatively and to produce traceability information as a side product. In addition, ModelFlow reduces the overhead of model loading and disposal operations by allowing model management tasks to share already loaded models during workflow execution. Our evaluation shows that ModelFlow can perform conservative executions, which can improve execution times in some scenarios. ModelFlow is designed to support the execution of model management tasks targeting various modelling frameworks and can be used with models from heterogeneous technologies. In addition to EMF models, ModelFlow can also handle Simulink models through a driver developed in the context of this thesis, which was used to support one case study.
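
The abstract describes conservative execution but not ModelFlow's language or interpreter. Purely as an illustrative sketch under assumed file-based resources, the snippet below re-runs a task only when the hashes of its inputs have changed and records task-to-resource trace links as a side product; the API and state file are hypothetical, not ModelFlow's.

```python
# Illustrative sketch of conservative task execution; hypothetical API,
# not the actual ModelFlow language or interpreter.
import hashlib
import json
import os

STATE_FILE = ".workflow_state.json"

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def load_state():
    return json.load(open(STATE_FILE)) if os.path.exists(STATE_FILE) else {}

def save_state(state):
    json.dump(state, open(STATE_FILE, "w"), indent=2)

def run(tasks):
    """tasks: list of (name, input_paths, action) tuples.
    A task is re-executed only if any of its inputs changed since the last run."""
    state, trace = load_state(), []
    for name, inputs, action in tasks:
        hashes = {p: file_hash(p) for p in inputs}
        if state.get(name) == hashes:
            print(f"skip {name} (inputs unchanged)")
            continue
        action()
        state[name] = hashes
        trace.extend((name, p) for p in inputs)   # traceability as a side product
    save_state(state)
    return trace
```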

    Consistency Modeling in a Multi-Model Architecture : Integrate and Celebrate Diversity

    Get PDF
    Central to Model-Driven Engineering (MDE) is seeing models as objects that can be handled and organized into metamodel stacks and multi-model architectures. This work contributes a unique way of doing consistency modeling in which the involved models are explicitly organized in a multi-model architecture; a general model for creating multi-model architectures that allows semantics to be attached is defined and applied; and explicit attachment of semantics is demonstrated by attaching Java classes that implement different instantiation semantics in order to realize the consistency modeling and the automatic generation of consistency data. The kind of consistency addressed concerns relations between data residing in legacy databases defined by different schemas. The consistency modeling is meant to expose inconsistencies by relating the data, and it combines visual modeling and logic (OCL) in a practical way. The approach is not limited to exposing inconsistencies; it may also be used to derive more general information from one or more data sets. The consistency is modeled by defining a consistency model that relates elements of two given legacy models. The consistency model is expressed in a language specially designed for consistency modeling. The language allows the definition of classes, associations, and invariants expressed in OCL. The interpretation of the language is special: given one conforming data set for each of the legacy models, the consistency model can be automatically instantiated to consistency data that tells whether the data sets are consistent. The invariants are used to decide which instances to generate when building the consistency data. The amount of consistency data to create is finite and bounded by the given data sets: the consistency model is instantiated until no more elements can be added without breaking some invariant or multiplicity. The consistency data is presented as a model that can be investigated by the user.
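
The thesis expresses its invariants in OCL over a consistency model; as a loose illustration only, the sketch below shows the underlying idea of instantiating "consistency data" that relates two legacy data sets and exposes inconsistencies, using a hypothetical Python predicate in place of an OCL invariant.

```python
# Illustration only: hypothetical data and predicate, not the thesis's
# consistency modeling language or OCL invariants.

def build_consistency_data(left, right, invariant):
    """Instantiate one link per (l, r) pair that satisfies the invariant.
    The result is finite and bounded by the given data sets."""
    return [(l, r) for l in left for r in right if invariant(l, r)]

# Hypothetical legacy data: customers in two databases keyed by email
db_a = [{"email": "ann@x.org", "name": "Ann"}, {"email": "bob@x.org", "name": "Bob"}]
db_b = [{"email": "ann@x.org", "full_name": "Ann Smith"}]

same_customer = lambda l, r: l["email"] == r["email"]
links = build_consistency_data(db_a, db_b, same_customer)

# Elements with no counterpart in the other data set indicate an inconsistency
unmatched = [l for l in db_a if not any(l is a for a, _ in links)]
print("links:", links)
print("unmatched in db_a:", unmatched)
```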

    Towards Prescriptive Analytics in Cyber-Physical Systems

    Get PDF
    More and more of our physical world today is being monitored and controlled by so-called cyber-physical systems (CPSs). These are compositions of networked autonomous cyber and physical agents such as sensors, actuators, computational elements, and humans in the loop. Today, CPSs are still relatively small-scale and very limited compared to the CPSs to be witnessed in the future. Future CPSs are expected to be far more complex, large-scale, widespread, and mission-critical, and found in a variety of domains such as transportation, medicine, manufacturing, and energy, where they will bring many advantages such as increased efficiency, sustainability, reliability, and security. To unleash their full potential, CPSs need to be equipped with, among other features, support for automated planning and control, where computing agents collaboratively and continuously plan and control their actions in an intelligent and well-coordinated manner to secure and optimize a physical process, e.g., electricity flow in the power grid. In today's CPSs, the control is typically automated, but the planning is performed solely by humans. Unfortunately, it is intractable and infeasible for humans to plan every action in a future CPS due to the complexity, scale, and volatility of the physical process. Because of these properties, control and planning have to be continuous and automated in future CPSs. Humans may only analyse and tweak the system's operation using a set of prescriptive analytics tools that allow them (1) to make predictions, (2) to get suggestions for the most promising set of actions (decisions) to take, and (3) to analyse the implications as if such actions were taken. This thesis considers planning and control in the context of a large-scale multi-agent CPS. Based on a smart-grid use case, it presents a so-called PrescriptiveCPS, which is (the conceptual model of) a multi-agent, multi-role, and multi-level CPS that automatically and continuously takes and realizes decisions in near real-time and provides (human) users with prescriptive analytics tools to analyse and manage the performance of the underlying physical system (or process). Acknowledging the complexity of CPSs, this thesis provides contributions at the following three levels of scale: (1) the level of a (full) PrescriptiveCPS, (2) the level of a single PrescriptiveCPS agent, and (3) the level of a component of a CPS agent software system. At the CPS level, the contributions include the definition of PrescriptiveCPS, according to which it is a system of interacting physical and cyber (sub-)systems. Here, the cyber system consists of hierarchically organized, interconnected agents that collectively manage instances of so-called flexibility, decision, and prescription models, which are short-lived, focus on the future, and represent a capability, a (user's) intention, and actions to change the behaviour (state) of a physical system, respectively. At the agent level, the contributions include a three-layer architecture of an agent software system, integrating a number of components specially designed or enhanced to support the functionality of PrescriptiveCPS. Most of the thesis contributions are at the component level.
The contributions include the description, design, and experimental evaluation of (1) a unified multi-dimensional schema for storing flexibility and prescription models (and related data), (2) techniques to incrementally aggregate flexibility model instances and disaggregate prescription model instances, (3) a database management system (DBMS) with built-in optimization problem solving capabilities that allows optimization problems to be formulated using SQL-like queries and solved "inside the database", (4) a real-time data management architecture for processing instances of flexibility and prescription models under (soft or hard) timing constraints, and (5) a graphical user interface (GUI) for visually analysing flexibility and prescription model instances. Additionally, the thesis discusses and exemplifies (but provides no evaluations of) (1) domain-specific and in-DBMS generic forecasting techniques for forecasting instances of flexibility models based on historical data, and (2) powerful ways to analyse the past, present, and future based on so-called hypothetical what-if scenarios and the flexibility and prescription model instances stored in a database. Most of the contributions at this level are based on the smart-grid use case. In summary, the thesis provides (1) the model of a CPS with planning capabilities, (2) the design and experimental evaluation of prescriptive analytics techniques that effectively forecast, aggregate, disaggregate, visualize, and analyse complex models of the physical world, and (3) a use case from the energy domain showing how the introduced concepts are applicable in the real world. We believe that these contributions make a significant step towards developing planning-capable CPSs in the future.
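
The aggregation and disaggregation techniques are only named in the abstract, not specified. Purely as an illustrative sketch, and under the simplifying assumption that a flexibility model instance can be reduced to per-time-slot minimum/maximum power bounds, aggregation and a proportional disaggregation policy could look like the following (hypothetical representation, not the thesis's multi-dimensional schema or algorithms).

```python
# Hypothetical simplification: a flexibility model instance as per-time-slot
# (min_kw, max_kw) bounds. The thesis uses a richer schema; this only
# illustrates aggregation/disaggregation at a high level.

def aggregate(flex_instances):
    """Sum per-slot bounds of many flexibility instances into one aggregate."""
    slots = len(flex_instances[0])
    return [
        (sum(f[t][0] for f in flex_instances), sum(f[t][1] for f in flex_instances))
        for t in range(slots)
    ]

def disaggregate(prescription, flex_instances):
    """Split an aggregate prescription (kW per slot) proportionally to each
    instance's available range - one simple, assumed disaggregation policy."""
    agg = aggregate(flex_instances)
    plans = []
    for f in flex_instances:
        plan = []
        for t, target in enumerate(prescription):
            lo, hi = agg[t]
            share = (f[t][1] - f[t][0]) / (hi - lo) if hi > lo else 0.0
            plan.append(f[t][0] + share * (target - lo))
        plans.append(plan)
    return plans

households = [[(0.0, 2.0), (0.0, 1.0)], [(0.5, 1.5), (0.0, 3.0)]]
print(aggregate(households))                  # [(0.5, 3.5), (0.0, 4.0)]
print(disaggregate([2.0, 2.0], households))   # per-household plans summing to 2.0 kW per slot
```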

    Constraint-based validation of e-learning courseware

    Get PDF

    HybridMDSD: Multi-Domain Engineering with Model-Driven Software Development using Ontological Foundations

    Get PDF
    Software development is a complex task. Executable applications comprise a multitude of diverse components that are developed with various frameworks, libraries, or communication platforms. The technical complexity of development ties up resources, hampers efficient problem solving, and thus increases the overall cost of software production. Another significant challenge in market-driven software engineering is the variety of customer needs, which necessitates a maximum of flexibility in software implementations to facilitate the deployment of different products based on one single core. To reduce technical complexity, the paradigm of Model-Driven Software Development (MDSD) facilitates the abstract specification of software based on modeling languages. The corresponding models are used to generate actual programming code without the need for creating manually written, error-prone assets. Modeling languages that are tailored towards a particular domain are called domain-specific languages (DSLs). Domain-specific modeling (DSM) brings technical solutions closer to intentional problems and fosters the unfolding of specialized expertise. To cope with feature diversity in applications, the Software Product Line Engineering (SPLE) community provides means for the management of variability in software products, such as feature models and appropriate tools for mapping features to implementation assets. Model-driven development, domain-specific modeling, and the dedicated management of variability in SPLE are vital for the success of software enterprises. Yet, these paradigms exist in isolation and need to be integrated in order to exploit the advantages of each approach. In this thesis, we propose a way to do so. We introduce the paradigm of Multi-Domain Engineering (MDE), which means model-driven development with multiple domain-specific languages in variability-intensive scenarios. MDE strongly emphasizes the advantages of MDSD with multiple DSLs as a necessity for efficiency in software development and treats the paradigm of SPLE as an indispensable means to achieve a maximum degree of reuse and flexibility. We present HybridMDSD as our solution approach to implement the MDE paradigm. The core idea of HybridMDSD is to capture the semantics of particular DSLs based on properly defined semantics for software models contained in a central upper ontology. The resulting semantic foundation can then be used to establish references between arbitrary domain-specific models (DSMs), and sophisticated instance-level reasoning ensures integrity and allows particular change-adaptation scenarios to be handled. Moreover, we present an approach to automatically generate composition code that integrates generated assets from separate DSLs. All necessary development tasks are arranged in a comprehensive development process. Finally, we validate the introduced approach with a comprehensive prototypical implementation and an industrial-scale case study.
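
The abstract does not detail the upper ontology or the reasoning machinery. As a toy illustration only, the sketch below shows, with a hypothetical ontology and two hypothetical DSL models, how model elements could be grounded in shared ontology concepts and how a cross-model reference could be checked for integrity; it is not HybridMDSD's actual ontology, languages, or reasoner.

```python
# Toy illustration, not HybridMDSD's upper ontology or reasoner:
# elements of two domain-specific models are grounded in shared ontology
# concepts, and cross-model references are checked for integrity.

# Hypothetical upper ontology: concept -> parent concept
ontology = {"Entity": None, "Service": "Entity", "DataType": "Entity"}

def is_a(concept, ancestor):
    """Walk up the concept hierarchy to test subsumption."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ontology[concept]
    return False

# Elements of two DSL models, each grounded in an ontology concept
data_model = {"Customer": "DataType", "Order": "DataType"}
service_model = {"OrderService": "Service"}

# Cross-model references: (service element, referenced data element, required concept)
references = [("OrderService", "Order", "DataType"),
              ("OrderService", "Invoice", "DataType")]   # 'Invoice' does not exist

for src, target, required in references:
    concept = data_model.get(target)
    if concept is None or not is_a(concept, required):
        print(f"integrity violation: {src} -> {target}")
```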