26 research outputs found

    Multi-paradigm modelling for cyber–physical systems: a descriptive framework

    The complexity of cyber–physical systems (CPSs) is commonly addressed through complex workflows, involving models in a plethora of different formalisms, each with their own methods, techniques, and tools. Some workflow patterns, combined with particular types of formalisms and operations on models in these formalisms, are used successfully in engineering practice. To identify and reuse them, we refer to these combinations of workflow and formalism patterns as modelling paradigms. This paper proposes a unifying (Descriptive) Framework to describe these paradigms, as well as their combinations. This work is set in the context of Multi-Paradigm Modelling (MPM), which is based on the principle of modelling every part and aspect of a system explicitly, at the most appropriate level(s) of abstraction, using the most appropriate modelling formalism(s) and workflows. The purpose of the Descriptive Framework presented in this paper is to serve as a basis to reason about these formalisms, workflows, and their combinations. One crucial part of the framework is the ability to capture the structural essence of a paradigm through the concept of a paradigmatic structure. This is illustrated informally by means of two example paradigms commonly used in CPS: Discrete Event Dynamic Systems and Synchronous Data Flow. The presented framework also identifies the need to establish whether a paradigm candidate follows, or qualifies as, a (given) paradigm. To illustrate the ability of the framework to support combining paradigms, the paper shows examples of both workflow and formalism combinations. The presented framework is intended as a basis for characterisation and classification of paradigms, as a starting point for a rigorous formalisation of the framework (allowing formal analyses), and as a foundation for MPM tool development.
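
    Synchronous Data Flow, one of the two example paradigms named above, lends itself to a compact structural illustration. The following Python sketch is not taken from the paper; the graph, names, and numbers are invented. It shows the structural essence of an SDF model (actors connected by channels with fixed token production and consumption rates) and computes the repetition vector that witnesses the existence of a periodic schedule.

        # Minimal, hypothetical Synchronous Data Flow (SDF) graph. Each channel
        # (src, dst, prod, cons) states how many tokens src produces and dst
        # consumes per firing. A consistent graph has a repetition vector q with
        # q[src] * prod == q[dst] * cons for every channel, i.e. it admits a
        # periodic schedule. The graph below is invented for illustration.
        from fractions import Fraction
        from math import gcd

        actors = {"A", "B", "C"}
        channels = [("A", "B", 2, 3), ("B", "C", 1, 2)]

        def repetition_vector(actors, channels):
            """Solve the SDF balance equations (assumes a connected graph)."""
            rates = {sorted(actors)[0]: Fraction(1)}
            changed = True
            while changed:
                changed = False
                for src, dst, prod, cons in channels:
                    if src in rates and dst not in rates:
                        rates[dst] = rates[src] * prod / cons
                        changed = True
                    elif dst in rates and src not in rates:
                        rates[src] = rates[dst] * cons / prod
                        changed = True
                    elif src in rates and dst in rates:
                        assert rates[src] * prod == rates[dst] * cons, "inconsistent SDF graph"
            # Scale the rational firing rates to the smallest integer vector.
            scale = 1
            for r in rates.values():
                scale = scale * r.denominator // gcd(scale, r.denominator)
            return {a: int(r * scale) for a, r in rates.items()}

        print(repetition_vector(actors, channels))  # {'A': 3, 'B': 2, 'C': 1}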

    Enhancing System Realisation in Formal Model Development

    Software for mission-critical systems is sometimes analysed using formal specification to increase the chances of the system behaving as intended. When sufficient insights into the system have been obtained from the formal analysis, the formal specification is realised in the form of a software implementation. One way to realise the system's software is by automatically generating it from the formal specification -- a technique referred to as code generation. However, in general it is difficult to make guarantees about the correctness of the generated code -- especially while requiring automation of the steps involved in realising the formal specification. This PhD dissertation investigates ways to improve the automation of the steps involved in realising and validating a system based on a formal specification. The approach aims to develop properly designed software tools which support the integration of formal methods tools into the software development life cycle, and which leverage the formal specification in the subsequent validation of the system. The tools developed use a new code generation infrastructure that has been built as part of this PhD project and implemented in the Overture tool -- a formal methods tool that supports the Vienna Development Method. The development of the code generation infrastructure has involved the re-design of the software architecture of Overture. The new architecture improves the reuse and extensibility of Overture, taking into account the needs and requirements of software extensions targeting Overture. The tools developed in this PhD project have successfully supported three case studies from externally funded projects. The feedback received from the case study work has further helped improve the code generation infrastructure and the tools built using it.
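
    To make the code-generation idea above concrete, here is a purely illustrative Python sketch that walks a tiny, invented specification AST and emits target-language text. It is not the Overture/VDM code generator; the node types, names, and the single expression form are assumptions made for the example, whereas a real backend covers the full abstract syntax and has to preserve the specification's semantics.

        # Purely illustrative sketch of the code-generation idea: walk a tiny,
        # hypothetical specification AST and emit target-language text. A real
        # generator (such as the one built on Overture for VDM) covers the full
        # abstract syntax and must preserve the specification's semantics.
        from dataclasses import dataclass
        from typing import Union

        @dataclass
        class Num:
            value: int

        @dataclass
        class Var:
            name: str

        @dataclass
        class BinOp:
            op: str              # e.g. "+", "*"
            left: "Expr"
            right: "Expr"

        Expr = Union[Num, Var, BinOp]

        def generate(expr: Expr) -> str:
            """Recursively emit an expression in the target language."""
            if isinstance(expr, Num):
                return str(expr.value)
            if isinstance(expr, Var):
                return expr.name
            if isinstance(expr, BinOp):
                return f"({generate(expr.left)} {expr.op} {generate(expr.right)})"
            raise TypeError(f"unknown AST node: {expr!r}")

        # A specification expression like x * (y + 1) ...
        ast = BinOp("*", Var("x"), BinOp("+", Var("y"), Num(1)))
        print(generate(ast))  # ... is emitted as: (x * (y + 1))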

    Automated Validation of State-Based Client-Centric Isolation with TLA+

    Clear consistency guarantees on data are paramount for the design and implementation of distributed systems. When implementing distributed applications, developers require approaches to verify the data consistency guarantees of an implementation choice. Crooks et al. define a state-based and client-centric model of database isolation. This paper formalizes this state-based model in TLA+, reproduces their examples, and shows how to model check runtime traces and algorithms with this formalization. The formalized model in TLA+ enables semi-automatic model checking for different implementation alternatives for transactional operations and allows checking of conformance to isolation levels. We reproduce examples of the original paper and confirm the isolation guarantees of the combination of the well-known 2-phase locking and 2-phase commit algorithms. Using model checking, this formalization can also help find bugs in incorrect specifications. This improves the feasibility of automated checking of isolation guarantees in synthesized synchronization implementations, and it provides an environment for experimenting with new designs.
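
    To give a flavour of the state-based, client-centric view being formalised, the informal Python sketch below (not the TLA+ specification from the paper, and a deliberate simplification of Crooks et al.'s model) treats an execution as the sequence of key-value states produced by committed transactions and asks whether a transaction's reads can be explained by a single state in that sequence; the transactions and values are invented.

        # Informal sketch of the state-based, client-centric view of isolation:
        # an execution is the sequence of database states produced by committed
        # transactions, and a transaction's reads should be explainable by (here)
        # a single state in that sequence. This simplifies Crooks et al.'s model
        # and is not the TLA+ specification described in the abstract.

        def apply(state, writes):
            """Return the new database state after applying a transaction's writes."""
            new_state = dict(state)
            new_state.update(writes)
            return new_state

        def states(initial, committed):
            """All states along the execution: the initial state plus one per commit."""
            seq = [dict(initial)]
            for txn in committed:
                seq.append(apply(seq[-1], txn["writes"]))
            return seq

        def reads_from_single_state(txn, seq):
            """True if some state in the sequence satisfies every read of txn."""
            return any(all(s.get(k) == v for k, v in txn["reads"].items()) for s in seq)

        # Hypothetical trace: two committed transactions and two transactions under test.
        committed = [
            {"writes": {"x": 1, "y": 1}},
            {"writes": {"x": 2}},
        ]
        seq = states({"x": 0, "y": 0}, committed)
        txn_ok = {"reads": {"x": 1, "y": 1}}           # consistent with the middle state
        txn_bad = {"reads": {"x": 2, "y": 0}}          # mixes two different states
        print(reads_from_single_state(txn_ok, seq))    # True
        print(reads_from_single_state(txn_bad, seq))   # False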

    Fuzzy approach to construction activity estimation

    Past experience has shown that variations in production rate values for the same work item are attributed to a wide range of factors. The relationships between these factors and the production rates are often very complex. It is impossible to describe an exact mathematical causal relationship between the qualitative factors (QF) and production rates. Various subjective approaches have been attempted to quantify the uncertainties contained in these causal relationships. This thesis presents one such approach by adopting fuzzy set theory in conjunction with a fuzzy rule-based system that could improve the quantification of the qualitative factors in estimating construction activity durations and costs. A method to generate a Standard Activity Unit Rate (SAUR) is presented. A construction activity can be defined by combining the Design Breakdown Structure, Trade Breakdown Structure and Work Section Breakdown Structure. By establishing the data structure of an activity, it is possible to synthesise the SAUR from published estimating sources in a systematic way. After the SAUR is defined, it is then used as a standard value from which an appropriate Activity Unit Rate (AUR) can be determined. A prototype fuzzy rule-based system called 'Fuzzy Activity Unit Rate Analyser' (FAURA) was developed to formalise a systematic framework for the QF quantification process in determining the most likely activity duration/cost. The compatibility measurement method proposed by Nafarieh and Keller has been applied as the inference strategy for FAURA. A computer program was developed to implement FAURA using Turbo Prolog. FAURA was tested and analysed using a hypothetical bricklayer's activity in conjunction with five major QF as the input variables. The results produced by FAURA show that it can be applied usefully to overcome many of the problems encountered in the QF quantification process. In addition, the analysis shows that a fuzzy rule-based approach provides the means to model and study the variability of AUR. Although the domain problem of this research was the estimation of activity duration/cost, the principles and system presented in this study are not limited to this specific area, and can be applied to a wide range of other disciplines involving uncertainty quantification problems. Further, this research highlights how the existing subjective methods in activity duration/cost estimation can be enhanced by utilising fuzzy set theory and fuzzy logic.
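
    As a rough illustration of how a fuzzy rule-based system of this kind can turn a qualitative factor into an adjustment of a unit rate, consider the Python sketch below. The membership functions, the 'crew skill' factor, the rules, and all numbers are invented for the example and are not FAURA's actual rule base or inference strategy.

        # Illustrative fuzzy rule-based adjustment of an activity unit rate.
        # Membership functions, rules, and values are hypothetical, not FAURA's.

        def tri(x, a, b, c):
            """Triangular membership function peaking at b over the interval [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def crew_skill_memberships(score):
            """Fuzzify a 0-10 'crew skill' qualitative factor score."""
            return {
                "low": tri(score, -1, 0, 5),
                "medium": tri(score, 2, 5, 8),
                "high": tri(score, 5, 10, 11),
            }

        # Rules: each fuzzy label implies a multiplier on the standard unit rate (SAUR).
        rules = {"low": 1.3, "medium": 1.0, "high": 0.8}

        def adjusted_unit_rate(saur, skill_score):
            """Weighted-average (centroid-style) defuzzification over the rule outputs."""
            mu = crew_skill_memberships(skill_score)
            total = sum(mu.values())
            multiplier = sum(mu[label] * rules[label] for label in rules) / total
            return saur * multiplier

        print(round(adjusted_unit_rate(saur=2.0, skill_score=7.0), 2))  # about 1.78 here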

    The use of systems engineering principles for the integration of existing models and simulations

    With the rise in computational power, the prospect of simulating a complex engineering system with a high degree of accuracy and in a meaningful way is becoming a real possibility. Modelling and simulation have become ubiquitous throughout the engineering life cycle; as a consequence, there are many thousands of existing models and simulations that are potential candidates for integration. This work is concerned with ascertaining whether systems engineering principles are of use in the support of virtual testing, from the desire to test, through designing experiments, specifying simulations, selecting models and simulations, and integrating component parts, to verifying that the work is as specified and validating that any outcomes are meaningful. A novel representation of a systems engineering framework is proposed and forms the basis for the methods that were developed. It takes the core systems engineering principles and expresses them in a way that can be implemented in a variety of ways. An end-to-end process for virtual testing with the potential to use existing models and simulations is proposed; it provides structure and order to the testing task. A key part of the proposed process is the recognition that the requirements of models and simulations are different from those of the system being designed, and hence a modelling- and simulation-specific requirements writing guide is produced. The automation of any engineering task has the potential to reduce the time to market of the final product; for this reason, the potential of natural language processing (NLP) technology to hasten the proposed processes was investigated. Two case studies were selected to test and demonstrate the potential of the novel approach, the first being an investigation into material selection for a squash ball, and the second being automotive in nature, concerned with combining steering and braking systems. The processes and methods indicated their potential value, especially in the automotive case study, where inconsistencies were identified that could otherwise have affected the successful integration. This capability, combined with the verification stages, improves confidence in any model and simulation integration. The NLP proof-of-concept software also demonstrated that such technology has value in the automation of integration. With further testing and development, there is the possibility to create a software package to guide engineers through the difficult task of virtual testing. Such a tool would have the potential to drastically reduce the time to market of complex products.

    A Reference Structure for Modular Model-based Analyses

    Context: In this work, we investigated the evolvability, understandability, and reusability of model-based analyses. To this end, we examined the interrelations between models and analyses, in particular the structure and dependencies of artefacts and the decomposition and composition of model-based analyses. Challenges: Software developers use models of software systems to determine the evolvability and reusability of an architectural design. These models make it possible to analyse the software architecture before the first line of code is written. Due to evolutionary changes, however, model-based analyses are also prone to a deterioration of evolvability, understandability, and reusability. These problems can be traced back to the co-evolution of modelling language and analysis. The purpose of an analysis is the systematic examination of certain properties of a system under study. Suppose, for example, that software developers want to analyse new properties of a software system. In that case, they have to adapt features of the modelling language and the corresponding model-based analyses before they can analyse the new properties. Features of a model-based analysis are, for example, an analysis technique that analyses such a quality property. Such changes lead to increased complexity of the model-based analyses and thus to model-based analyses that are hard to maintain. This growing complexity reduces the understandability of the model-based analyses. As a result, development cycles become longer, and software developers need more time to adapt the software system to changed requirements. State of the art: Current approaches enable the coupling of analyses on a single system or across distributed systems. These approaches provide the technical structure for coupling simulations, but not a structure describing how components can be decomposed and composed. A further challenge when composing analyses is the behavioural aspect, which manifests itself in how the analysis components influence each other. Because every participating simulation has to be synchronised, modularising simulations increases the need for communication. Current approaches make it possible to reduce this communication overhead; however, they leave decomposition and composition to the user. Contributions: The goal of this work is to improve the evolvability, understandability, and reusability of model-based analyses. To this end, the reference architecture for domain-specific modelling languages is taken as a basis, and the transferability of its structure to model-based analyses is investigated. The layered reference architecture captures the dependencies of analysis functions and analysis components by assigning them to specific layers. We developed three processes for applying the reference architecture: (i) refactoring an existing model-based analysis, (ii) designing a new model-based analysis, and (iii) extending an existing model-based analysis.
In addition to the reference architecture for model-based analyses, we identified recurring structures that lead to problems with evolvability, understandability, and reusability; in the literature, such recurring structures are also referred to as bad smells. We examined established model-based analyses and identified and specified thirteen bad smells. Besides specifying the bad smells, we provide a process for automatically identifying them and strategies for refactoring them, so that developers can avoid or fix these bad smells. In this work, we also developed a modelling language for specifying the structure and behaviour of simulation components. Simulations are analyses used to study a system when experimenting with the existing system would be too time-consuming, too expensive, too dangerous, or simply impossible because the system does not (yet) exist. Developers can use the specification to compare simulation components and thereby identify identical components. Validation: We evaluated the reference architecture for model-based analyses by migrating four model-based analyses to the reference architecture. We chose a scenario-based evaluation that derives historical change scenarios from the repositories of the model-based analyses. The evaluation shows that evolvability and understandability improve, as determined by measuring complexity, coupling, and cohesion. The metrics we used originate from information theory, but have previously been used to evaluate the reference architecture for DSMLs. We evaluated the bad smells that arise from the co-dependence of model-based analyses and their corresponding DSMLs by searching four model-based analyses for occurrences of our bad smells and then fixing the bad smells we found. Here, too, we chose a scenario-based evaluation that derives historical change scenarios from the repositories of the model-based analyses. We show that the bad smells negatively affect evolvability and understandability by determining complexity, coupling, and cohesion before and after refactoring. We evaluated the approach for specifying and finding components of model-based analyses by specifying components of two model-based analyses and using our search algorithm to find similar analysis components. The evaluation results show that we are able to find similar analysis components and that our approach enables the search for analysis components with similar structure and behaviour, and thus the reuse of such components. Benefits: The contributions of this work support architects and developers in their daily work of developing maintainable and reusable model-based analyses. To this end, we provide a reference architecture that aligns the model-based analysis with the domain-specific modelling language and thereby facilitates their co-evolution. In addition to the reference architecture, we also provide refactoring operations that enable architects and developers to adapt an existing model-based analysis to the reference architecture.
In addition to this technical aspect, we identified three processes that enable architects and developers to develop a new model-based analysis, to modularise an existing model-based analysis, and to extend an existing model-based analysis, in such a way that the results conform to the reference architecture. Furthermore, our specification enables developers to compare existing simulation components and to reuse them where appropriate, which saves developers from re-implementing components.
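
    As a very rough illustration of the idea of specifying simulation components so that structurally similar ones can be found and reused, the Python sketch below compares components by a signature of typed input and output ports. The component model, the port names, and the similarity threshold are invented for the example; they are not the specification language developed in the dissertation.

        # Illustrative sketch: compare simulation/analysis components by a structural
        # signature (typed input and output ports). The component model is invented
        # for illustration; it is not the dissertation's specification language.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Component:
            name: str
            inputs: frozenset    # e.g. frozenset({("usage_profile", "Model")})
            outputs: frozenset

        def similarity(a: Component, b: Component) -> float:
            """Jaccard similarity over the combined port signatures of two components."""
            ports_a = a.inputs | a.outputs
            ports_b = b.inputs | b.outputs
            if not ports_a and not ports_b:
                return 1.0
            return len(ports_a & ports_b) / len(ports_a | ports_b)

        def find_similar(query: Component, library: list, threshold: float = 0.5):
            """Return library components whose signature is close to the query's."""
            return [c for c in library if similarity(query, c) >= threshold]

        library = [
            Component("ReliabilityAnalysis",
                      frozenset({("usage_profile", "Model"), ("architecture", "Model")}),
                      frozenset({("failure_rate", "float")})),
            Component("PerformanceSimulation",
                      frozenset({("usage_profile", "Model"), ("architecture", "Model")}),
                      frozenset({("response_time", "float")})),
        ]
        query = Component("NewPerformanceAnalysis",
                          frozenset({("usage_profile", "Model"), ("architecture", "Model")}),
                          frozenset({("response_time", "float")}))
        for match in find_similar(query, library):
            print(match.name, round(similarity(query, match), 2))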

    Improving design coordination in computer supported environments in SMEs: implementation of a tool for capturing and analysing collaboration between actors

    To remain competitive in a context of multi-partner projects, companies are increasingly concerned with the coordination of design projects. Information systems such as PLM or CSCW are implemented to support the coordination of product information flows. Project managers are nevertheless finding it increasingly difficult to manage projects effectively. The impact of collaboration aspects on the design process is especially difficult for them to evaluate. Indeed, failing to integrate collaboration aspects into coordination can account for a great deal of design mistakes, and finding a solution could lead to improved design coordination. The main objective of this research is then to help project managers improve coordination in design processes through a detailed analysis of collaboration between actors. A model of coordination and an associated model of collaboration have been devised, together with a tool ('CoCa') to be used by researchers, consultants or project managers in the analysis of collaboration. This analysis can lead to the understanding of collaboration aspects and identification of the problems caused. Consequently, guidelines can be defined to prevent the re-emergence of the identified design problems in new projects. These guidelines are recommendations to introduce collaborative aspects, flexibility in the design process and elements for decision making when defining future design situations. Finally, a study of a specific application implementing PLM tools demonstrates that they are not able to manage, firstly, design projects and human resources whilst taking into account collaborative aspects or, secondly, the necessary synchronisation between human design activities and document workflow tasks. It is thus evident that these two factors are needed in PLM tools in order to apply the proposed model of coordination. An industrial partnership with an SME led to the study of its information system, an experiment with the CoCa tool, practical design process improvements, and implementation of a PLM prototype.

    The future of banking in South Africa towards 2055: disruptive innovation scenarios

    The research effort developed four possible scenarios for the future of banking in South Africa towards 2055. The scenarios sought to stimulate thought on the possible, probable, plausible and preferred effects of disruptive innovation and regulation in the South African banking sector. The scenarios were developed in strict accordance with the 5 stages and 9 steps of the scenario-based planning process of futures studies. A conceptual futures studies model for banking in South Africa was developed to guide and clarify the way in which the research on South African banking can be integrated into the body of existing futures studies theory. The research study began with a comprehensive environmental scan, in which various megatrends and driving forces were identified. A PESTEL analysis provided a deeper understanding of the driving forces. A Real-Time Delphi study was conducted in order to validate and prioritise the megatrends and driving forces that emerged. As a result, the research study was able to present four plausible scenarios that provide a better understanding of the future of banking in South Africa over the decades to come. The research presents banking as a complex, multi-faceted sector that is heavily influenced by advances in technology. The Real-Time Delphi research allowed the aggregation of expert knowledge, which was used as a guide to assist decision-makers and industry leaders in the adoption of appropriate business models and strategies towards a preferred future state. The research defined the Integrated Vision as the preferred future state for the South African banking sector towards 2055. The study closes a research gap where current strategies deviate from the proposed strategies that drive the achievement of the Integrated Vision by 2055. Finally, contextually aligned practical recommendations are provided to assist decision-makers, industry leaders and change agents to work towards a preferable future state. The proposed recommendations are placed into the broad categories of innovation, financial inclusion and collaborative regulatory relationships. The research makes a meaningful contribution to the South African banking sector by introducing a forward-looking, systems-thinking approach to disruptive innovation and regulation in the South African context.

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering, FASE 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 23 full papers, 1 tool paper and 6 testing competition papers presented in this volume were carefully reviewed and selected from 81 submissions. The papers cover topics such as requirements engineering, software architectures, specification, software quality, validation, verification of functional and non-functional properties, model-driven development and model transformation, software processes, security and software evolution.