12 research outputs found

    Applying model-based systems engineering in search of quality by design

    Model-Based Systems Engineering (MBSE) and Model-Based Engineering (MBE) techniques have been successfully introduced into the design processes of many different types of systems. These techniques are applied in the modeling of requirements, functions, behavior, and many other aspects. The resulting models provide a digital representation of a system together with the supporting development data architecture and the functional requirements associated with that architecture. MBSE environment tools can represent the system and the corresponding data architecture at various levels of fidelity; typically, the level of fidelity is driven by key systems engineering constraints such as cost, schedule, performance, and quality. Systems engineering uses many methods to develop system and data architectures that meet cost and schedule targets with sufficient quality while satisfying the customer's performance needs. The most complex and elusive of these constraints is quality: given a set of system-level requirements, the likelihood that those requirements will be correctly and accurately realized in the final system design.
    This research focuses on the Department of Defense Architecture Framework (DoDAF) to establish and then assess the relationship between the system, the data architecture, and the requirements in terms of Quality by Design (QbD). The term QbD was coined in 1992 in Quality by Design: The New Steps for Planning Quality into Goods and Services [1]. Developed in the early 2000s, DoDAF is still in use today, and its system description methodologies continue to influence subsequent system description approaches [2]. This research investigates and proposes a means to contextualize high-level quality terms within the MBSE functional area, outlines a conceptual but functional quality framework for MBSE with DoDAF, provides tailored quality metrics with improved definitions, and then tests the improved quality framework by evaluating two corresponding case studies within the MBSE functional area to interrogate model architectures and assess the quality of system designs.
    Two case studies were analyzed to demonstrate the proposed QbD evaluation of DoDAF CONOP architecture quality. The first addresses the DoDAF CONOP of the National Aeronautics and Space Administration (NASA) Joint Polar Satellite System (JPSS) ground system for the National Oceanic and Atmospheric Administration (NOAA) satellite system, with particular focus on the Stored Mission Data (SMD) mission thread. The second addresses the DoDAF CONOP of a Search and Rescue (SAR) naval rescue operation network System of Systems (SoS), with particular focus on the Command and Control signaling mission thread. The case studies demonstrate a new DoDAF Quality Conceptual Framework (DQCF) as a means to investigate the quality of DoDAF architectures in depth, covering the application of the DoDAF standard and the UML/SysML standards, the instantiation of requirements in the architecture, and modularity as a measure of architecture reusability and complexity.
    By renewing the focus on a quality-based systems engineering process when applying DoDAF, greater trust can be placed in the system and data architecture of the completed models. The results of the case study analyses show how a quality-focused systems engineering process can be used during development to produce a design that better meets the customer's intent and ultimately offers the potential for the highest-quality product.
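    As a rough illustration of the kind of tailored metric such a quality framework can apply (the element names, module groupings and the metric itself below are hypothetical assumptions for illustration, not taken from the DQCF), the following Python sketch scores the modularity of an architecture view by counting how many dependencies stay inside their module:

        # A toy architecture view: elements grouped into modules, plus the
        # dependencies (e.g. interfaces or item flows) between elements.
        # All names and the metric are illustrative assumptions.
        modules = {
            "GroundSegment": {"AntennaCtrl", "DataIngest"},
            "MissionData":   {"SMDStore", "SMDRouter"},
            "Dissemination": {"ProductGen", "UserPortal"},
        }

        dependencies = [
            ("AntennaCtrl", "DataIngest"),
            ("DataIngest", "SMDStore"),
            ("SMDStore", "SMDRouter"),
            ("SMDRouter", "ProductGen"),
            ("ProductGen", "UserPortal"),
            ("DataIngest", "SMDRouter"),
        ]

        def module_of(element):
            # Find the module that owns a given element.
            for name, members in modules.items():
                if element in members:
                    return name
            raise KeyError(element)

        # Share of dependencies that cross module boundaries: a lower ratio
        # suggests a more modular, more reusable architecture view.
        internal = sum(1 for a, b in dependencies if module_of(a) == module_of(b))
        coupling_ratio = 1 - internal / len(dependencies)
        print(f"internal: {internal}/{len(dependencies)}, cross-module ratio: {coupling_ratio:.2f}")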

    Engineering adaptive web applications

    [no abstract]

    Model driven software modernisation

    Constant innovation in information technology and ever-changing market requirements relegate more and more existing software to legacy status. Generating software by reusing legacy systems has been a primary solution, and software re-engineering has the potential to improve software productivity and quality across the entire software life cycle. Classical re-engineering starts at the level of program source code, which is often the most, or the only, reliable information about a legacy system. The program specification derived from the legacy source code then facilitates migration of the legacy system in subsequent forward engineering steps. A recent research trend in the re-engineering area carries this idea further and moves to a model-driven perspective in which the specification is presented as models.
    This thesis focuses on engaging model technology to modernise legacy systems. A unified approach, REMOST (Re-Engineering through MOdel conStruction and Transformation), is proposed in the context of Model Driven Architecture (MDA). The theoretical foundation is the construction of a WSL-based Modelling Language, known as WML, which is an extension of WSL (Wide Spectrum Language). WML is defined to provide a spectrum of models for system re-engineering, including a Common Modelling Language (CML), an Architecture Description Language (ADL) and a Domain Specific Modelling Language (DSML). 9rtetaWML is designed for model transformation, providing query facilities, action primitives and metrics functions. A set of transformation rules is defined in 9rtetaWML to conduct system abstraction and refactoring. A model transformation unifying WML and UML is also provided, which bridges legacy systems to MDA. The architecture and workflow of the REMOST approach are presented, and a prototype tool environment is developed to test the approach. A number of case studies are used to experiment with the approach and the prototype tool, showing that the proposed approach is feasible and promising in its domain. Conclusions are drawn from the analysis, and further research directions are discussed.
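    The abstract does not show the WML transformation syntax; as a loose illustration only, the Python sketch below mimics what an abstraction rule of this kind does, folding a recurring low-level statement pattern into a single, higher-level action (the statements, the pattern and the rule are assumptions made for illustration):

        # A toy "program" as a flat list of statements; the rule replaces every
        # occurrence of a known statement pattern by one abstract action,
        # which is the essence of raising source code toward a model level.
        def abstract_sequence(stmts, pattern, name):
            out, i = [], 0
            while i < len(stmts):
                if stmts[i:i + len(pattern)] == pattern:
                    out.append(name)          # fold the run into one action
                    i += len(pattern)
                else:
                    out.append(stmts[i])
                    i += 1
            return out

        legacy = ["open(f)", "read(f)", "close(f)",
                  "compute()",
                  "open(f)", "read(f)", "close(f)"]
        model = abstract_sequence(legacy, ["open(f)", "read(f)", "close(f)"], "LoadFile")
        print(model)   # ['LoadFile', 'compute()', 'LoadFile']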

    Architecture design in global and model-centric software development

    This doctoral dissertation describes a series of empirical investigations into the representation, dissemination and coordination of software architecture design in the context of global software development. A particular focus is placed on model-centric and model-driven software development.

    A Model-Driven Methodology for Critical Systems Engineering

    Model-Driven Engineering (MDE) promises to enhance system development by reducing development time and increasing productivity and quality. MDE is gaining popularity in several industry sectors and is also attractive for critical systems, where it can reduce the effort and cost of verification and validation (V&V) and ease certification. This thesis proposes a novel model-driven life cycle tailored to the development of critical railway systems. It also integrates an original approach to model-driven system validation, based on a new model called the Computation Independent Test (CIT) model. Moreover, the process supports Failure Modes and Effects Analysis (FMEA) through a novel approach to model-driven FMEA based on a custom SysML diagram, the FMEA Diagram, and Prolog. The approaches have been applied in multiple real-world case studies from the railway and automotive domains.
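    To illustrate the kind of artifact a model-driven FMEA produces (this is a generic sketch, not the thesis's SysML FMEA Diagram or its Prolog rules; the components and ratings are hypothetical), the following Python snippet derives a classic FMEA table from annotated components and ranks failure modes by Risk Priority Number, RPN = severity × occurrence × detection:

        # Generic FMEA bookkeeping: each failure mode carries the standard
        # severity/occurrence/detection ratings and is ranked by its RPN.
        from dataclasses import dataclass

        @dataclass
        class FailureMode:
            component: str
            mode: str
            effect: str
            severity: int      # 1-10
            occurrence: int    # 1-10
            detection: int     # 1-10 (10 = hardest to detect)

            @property
            def rpn(self) -> int:
                return self.severity * self.occurrence * self.detection

        # Hypothetical entries for a railway signalling subsystem.
        modes = [
            FailureMode("TrackCircuit", "stuck occupied", "line blocked", 4, 3, 2),
            FailureMode("TrackCircuit", "stuck free", "missed train detection", 9, 2, 6),
            FailureMode("Interlocking", "late route release", "throughput loss", 3, 4, 3),
        ]

        for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
            print(f"{fm.component:13s} {fm.mode:22s} RPN={fm.rpn}")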

    Statistical analysis and simulation of design models evolution

    Tools, algorithms and methods in the context of Model-Driven Engineering (MDE) have to be assessed, evaluated and tested with regard to different aspects such as correctness, quality, scalability and efficiency. Unfortunately, appropriate test models are scarce, and those that are accessible often lack the desired properties. Therefore, in practice one needs to resort to artificially generated test models. Many services and features of model versioning systems are motivated by the collaborative development paradigm. Testing such services requires not single models but pairs of models, one derived from the other by applying a known sequence of edit steps. The edit operations used to modify the models should be the same as in usual development environments, e.g. adding, deleting and changing model elements in visual model editors. Existing model generators are motivated by the testing of model transformation engines; they do not consider the true nature of evolution, in which models evolve through iterative editing steps. They provide little or no control over the generation process, and they can generate only single models rather than model histories. Moreover, existing approaches do not support generating models with prescribed stochastic and other properties of interest. Furthermore, blindly generating models through random application of edit operations does not yield useful models, since the generated models are not (stochastically) realistic and do not reflect the true properties of evolution in real software systems. Unfortunately, little is known about how models of real software systems evolve over time, what the properties and characteristics of that evolution are, and how the evolution can be mathematically formulated and simulated.
    To address these problems, we introduce a new general approach that facilitates generating (stochastically) realistic test models for model differencing tools and for tools that analyze model histories. We propose a model generator that addresses the above deficiencies and generates or modifies models by applying proper edit operations. Fine-grained control mechanisms for the generation process are devised, and the generator supports stochastic and other properties of interest in the generated models. It can also generate histories, i.e. related sequences of models. Moreover, we provide a methodological framework for capturing, mathematically representing and simulating the evolution of real design models. The proposed framework captures the evolution in terms of edit operations applied between revisions. Mathematically, the evolution is represented using different statistical distributions as well as different time series models. Forecasting, simulation and generation of stochastically realistic test models are discussed in detail.
    As an application, the framework is applied to the evolution of design models obtained from a set of carefully selected Java systems. To study the evolution of design models, we analyzed nine major Java projects, each with at least 100 revisions. We reverse-engineered the design models from the Java source code and compared consecutive revisions of these models. The observed changes were expressed in terms of two sets of edit operations. The first set consists of 75 low-level graph edit operations, e.g. adding and deleting nodes and edges of the models' abstract syntax graphs.
    The second set consists of 188 high-level (user-level) edit operations, which are more meaningful from a developer's point of view and are frequently found in visual model editors. A high-level operation typically comprises several low-level operations and is treated as a single user action. We mathematically formulated the pairwise evolution, i.e. the changes between each pair of subsequent revisions, using statistical models (distributions). We initially considered many distributions that could be promising for modeling the frequencies of the observed low-level and high-level changes; six distributions proved very successful in modeling the changes. To simulate the pairwise evolution, we studied random variate generation algorithms for the successful distributions in detail. For four of the distributions, for which no tailored algorithms existed, we generated the random variates indirectly. The chronological (historical) evolution of design models was modeled using three kinds of time series models, namely ARMA, GARCH and mixed ARMA-GARCH. The comparative ability of the time series models to capture the dynamics of evolution, as well as the accuracy of their forecasts, was studied in depth. Roughly speaking, our studies show that mixed ARMA-GARCH models are superior to the other models. Moreover, we discuss the simulation aspects of the proposed time series models in detail. The knowledge gained through the statistical analysis of the evolution was then used in our test model generator to generate more realistic test models for model differencing, model versioning and history analysis tools.
    In the context of model-driven engineering, tools, algorithms and methods have to be assessed, evaluated and tested, and aspects such as correctness, quality, scalability and efficiency play a major role. The problem is that suitable test models are only sparsely available, and the models that are available often lack the properties desired for evaluation purposes. For this reason, artificially generated test models must be used in practice. Many of the features of model versioning systems are motivated by the paradigms of collaborative (software) development. Testing such features requires not single models but pairs of models, in which the second model is produced from the first by applying a known sequence of edit steps. The edit operations used should be the same as those applied in typical development environments, for example adding, deleting or changing model elements in visual editors. Currently existing model generators are motivated by the testing of model transformation environments. They do not take into account the true nature of (software) evolution, in which models are changed iteratively through the controlled application of individual edit steps. They offer only little control over the generation process and can produce only single models, but no model histories. Furthermore, desired properties, for example stochastically controlled generation, are not supported by existing approaches.
    Because edit operations are applied (blindly) at random, no usable, (stochastically) realistic models are generated, and the generated models therefore do not represent the properties of evolution in real systems. Unfortunately, there is little scientific knowledge about how models in real systems evolve, what the properties and characteristics of such evolution are, and how this evolution can be mathematically formulated and simulated. To address these problems, we present a general approach for the (stochastic) generation of realistic test models for use in differencing tools and history analyses. Our generator creates or modifies models by suitably applying edit operations. It is distinguished both by fine-grained control mechanisms for the generation process and by support for stochastic and other properties of interest in the generated models. In addition, it can generate histories of models, i.e. dependent, related sequences of changes. Our approach provides a methodological environment for recording, mathematically representing and simulating the evolution of real models. The proposed environment can capture evolution in the form of edit operations applied between revisions. The mathematical representation of the evolution is based both on various stochastic distributions and on different time series models. Forecasting, simulating and generating stochastically realistic test models are discussed in detail. As a practical application, we use our environment to study the model evolution of carefully selected Java systems. In this work, the evolution of design models was analyzed on the basis of nine open-source Java projects. At least 100 revisions were available for each project, and design models were reconstructed from their source code. The observed changes could be described using two different sets of edit operations. The first set consists of 75 simple graph operations, for example adding or deleting individual nodes and edges in the abstract syntax graph of the models. The second set contains 188 complex edit operations. Complex edit operations are more meaningful to developers, since they sit at the developer's usual level of abstraction and are often found in visual model editors. A complex edit operation typically consists of several simple operations, and the execution of a complex operation is always regarded as a single action. To analyze the stepwise evolution, i.e. the changes between consecutive revisions, we considered various statistical models (distributions). Of all the distributions considered, six proved very successful in describing the observed changes and the evolution of the models in terms of simple and complex edit operations. To simulate the evolution further, we considered algorithms for generating random variates of the successful distributions. For four of the distributions, for which no such algorithms were available, the random variates were derived indirectly.
    The chronological (historical) evolution of the models was reproduced using three kinds of time series models, namely ARMA, GARCH and mixed ARMA-GARCH. Both their ability to capture the dynamics of the evolution and the accuracy of their forecasts were analyzed and compared in detail. Roughly speaking, our results show that ARMA-GARCH models are better suited than the others. In addition, we discuss in detail the simulation possibilities of all the presented time series models. We then used the results of our statistical analyses of the evolution in our test model generator, which allowed us to generate realistic test models that can be used for model differencing, versioning and history analysis tools, among others.
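    As a small illustration of the pairwise-evolution idea (the abstract does not name the six successful distributions, so the choice of a negative binomial, the example counts and the method-of-moments fit below are assumptions), the Python sketch fits a distribution to per-revision counts of one edit operation and samples a synthetic history of the same length:

        # Fit a negative binomial to per-revision edit-operation counts by the
        # method of moments, then sample a synthetic revision history that has
        # similar stochastic properties; all numbers here are made up.
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical observed counts of one edit operation per revision.
        observed = np.array([0, 2, 5, 1, 0, 3, 7, 2, 4, 1, 0, 6, 2, 3, 1])

        m, v = observed.mean(), observed.var(ddof=1)
        assert v > m, "negative binomial needs over-dispersion (variance > mean)"
        p = m / v                      # success probability
        n = m * p / (1 - p)            # dispersion parameter

        synthetic = rng.negative_binomial(n, p, size=len(observed))
        print("fitted n=%.2f p=%.2f" % (n, p))
        print("synthetic per-revision counts:", synthetic)

    A generator built on such fits can then replay the sampled counts as concrete edit operations on a seed model to produce realistic model pairs and histories.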