239 research outputs found

    Cyber-Physical Systems: a multi-criteria assessment for Internet-of-Things (IoT) systems

    Get PDF
    This research work was partially supported by funds provided by the European Commission in the scope of FoF/H2020-636909 C2NET, FoF/H2020-723710 vf-OS and ICT/H2020-825631 ZDMP. This article addresses a multi-criteria decision problem: selecting the most suitable device (system) to perform a task in a cyber-physical system. With new embedded systems appearing every day, this decision has become very difficult for engineers. Components are proposed to formally describe solutions, criteria, constraints and priorities, taking users' specific aspects into account. To materialise these formal descriptions, a model-driven approach is followed, allowing the design of enablers for interoperability with standards and the use of different software languages and decision methods. The proposed framework enables better Internet-of-Things system selection, so that stakeholders can produce a more suitable design of their cyber-physical enterprise systems.
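
    A minimal sketch of the kind of weighted multi-criteria scoring such a framework formalises, under the assumption of a simple weighted-sum model with one hard constraint; the candidate devices, criteria, weights and thresholds below are illustrative only and are not taken from the article.

    # Hypothetical weighted-sum selection over candidate IoT devices (Python).
    # Candidates, criteria, weights and the RAM constraint are illustrative only.
    candidates = {
        "device_A": {"latency_ms": 12, "power_mw": 300, "cost_eur": 25, "ram_kb": 256},
        "device_B": {"latency_ms": 8,  "power_mw": 550, "cost_eur": 40, "ram_kb": 512},
        "device_C": {"latency_ms": 20, "power_mw": 150, "cost_eur": 15, "ram_kb": 128},
    }
    # User-specific priorities: higher weight means more important.
    weights = {"latency_ms": 0.4, "power_mw": 0.3, "cost_eur": 0.2, "ram_kb": 0.1}
    lower_is_better = {"latency_ms", "power_mw", "cost_eur"}
    # Hard constraint: the task needs at least 200 kB of RAM.
    feasible = {name: c for name, c in candidates.items() if c["ram_kb"] >= 200}

    def score(c):
        total = 0.0
        for crit, w in weights.items():
            values = [d[crit] for d in candidates.values()]
            lo, hi = min(values), max(values)
            norm = (c[crit] - lo) / (hi - lo) if hi > lo else 0.0
            # Invert "lower is better" criteria so that 1.0 is always the best value.
            total += w * ((1.0 - norm) if crit in lower_is_better else norm)
        return total

    best = max(feasible, key=lambda name: score(feasible[name]))
    print(best, round(score(feasible[best]), 3))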

    Finalised dependability framework and evaluation results

    Get PDF
    The ambitious aim of CONNECT is to achieve universal interoperability between heterogeneous Networked Systems by means of on-the-fly synthesis of the CONNECTors through which they communicate. The goal of WP5 within CONNECT is to ensure that the non-functional properties required at each side of the connection to be established are fulfilled, including dependability, performance, security and trust, or, in one overarching term, CONNECTability. To model such properties, we have introduced the CPMM meta-model, which establishes the relevant concepts and their relations and also includes a Complex Event language to express the behaviour associated with the specified properties. Over the four years of the project, we have developed approaches for assuring CONNECTability both at synthesis time and at run-time. Within the CONNECT architecture, these approaches are supported by the following enablers: the Dependability and Performance Analysis Enabler, implemented in a modular architecture supporting stochastic verification and state-based analysis, which also relies on incremental verification to adjust CONNECTor parameters at run-time; the Security Enabler, which implements a Security-by-Contract-with-Trust framework to guarantee the expected security policies and enforce them according to the level of trust; and the Trust Manager, which implements a model-based approach to mediate between different trust models and ensure interoperable trust management. The enablers have been integrated within the CONNECT architecture and, in particular, can interact with the CONNECT event-based monitoring enabler (the GLIMPSE Enabler released within WP4) for run-time analysis and verification. To support a model-driven approach in the interaction with the monitor, we have developed a CPMM editor and a translator from CPMM to the GLIMPSE native language (Drools). In this document, which is the final deliverable of WP5, we first present the latest advances of the fourth year concerning CPMM, Dependability & Performance Analysis, Incremental Verification and Security. Then we give an overall summary of the main achievements over the whole project lifecycle. In the appendix we also include some relevant articles specifically focussing on CONNECTability that have been prepared in the last period.
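
    As a rough illustration of the kind of run-time property such monitoring and analysis enablers check, the sketch below evaluates a simple response-latency property over an event stream in Python; the event format, the 200 ms threshold and the function names are hypothetical and do not reflect CPMM or GLIMPSE (Drools) notation.

    # Hypothetical run-time check of a latency property over request/response events.
    events = [
        {"type": "request",  "id": 1, "t": 0.010},
        {"type": "response", "id": 1, "t": 0.120},
        {"type": "request",  "id": 2, "t": 0.200},
        {"type": "response", "id": 2, "t": 0.450},
    ]
    THRESHOLD_S = 0.200  # required maximum response time (assumed)

    def check_latency(stream, threshold):
        pending, violations = {}, []
        for e in stream:
            if e["type"] == "request":
                pending[e["id"]] = e["t"]
            elif e["type"] == "response" and e["id"] in pending:
                latency = e["t"] - pending.pop(e["id"])
                if latency > threshold:
                    violations.append((e["id"], round(latency, 3)))
        return violations

    print(check_latency(events, THRESHOLD_S))  # -> [(2, 0.25)]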

    Project Final Report Use and Dissemination of Foreground

    Get PDF
    This document is the final report on use and dissemination of foreground, part of the CONNECT final report. The document provides the lists of publications, dissemination activities, and exploitable foreground.

    SAM-SoS: A stochastic software architecture modeling and verification approach for complex System-of-Systems

    Get PDF
    A System-of-Systems (SoS) is a complex, dynamic system whose Constituent Systems (CSs) are not known precisely at design time, and the environment in which they operate is uncertain. SoS behavior is unpredictable due to underlying architectural characteristics such as autonomy and independence. Although the stochastic composition of CSs is vital to achieving SoS missions, their unknown behaviors and impact on system properties are unavoidable. Moreover, unknown conditions and volatility have significant effects on crucial Quality Attributes (QAs) such as performance, reliability and security. Hence, the structure and behavior of a SoS must be modeled and validated quantitatively to foresee any potential impact on the properties critical for achieving the missions. Current modeling approaches lack the essential syntax and semantics required to model and verify SoS behaviors at design time and cannot offer alternative design choices for better design decisions. Therefore, the majority of existing techniques fail to provide qualitative and quantitative verification of SoS architecture models. Consequently, we have proposed an approach to model and verify Non-Deterministic (ND) SoS in advance by extending the current algebraic notations for the formal models into a hybrid stochastic formalism to specify and reason about architectural elements with the required semantics. A formal stochastic model is developed using a hybrid approach for architectural descriptions of SoS with behavioral constraints. Through a model-driven approach, stochastic models are then translated into PRISM using formal verification rules. The effectiveness of the approach has been tested with an end-to-end case study design of an emergency response SoS for dealing with a fire situation. Architectural analysis is conducted on the stochastic model, using various qualitative and quantitative measures for SoS missions. Experimental results reveal critical aspects of the SoS architecture model that, using the proposed approach, facilitate better achievement of missions and QAs through improved design.
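
    To give a flavour of the quantitative, mission-level analysis described above, the sketch below estimates a reachability-style probability for a toy fire-response SoS by Monte Carlo simulation in Python; the paper performs such verification in PRISM, and the constituent systems and probabilities here are invented purely for illustration.

    import random

    # Toy emergency-response SoS: the mission succeeds if a sensor detects the fire,
    # a drone confirms it, and at least one of two fire trucks reaches the scene.
    # All probabilities are illustrative; they are not taken from the case study.
    P_DETECT, P_CONFIRM, P_TRUCK = 0.95, 0.85, 0.70

    def mission_succeeds(rng):
        if rng.random() > P_DETECT:    # sensing CS fails to detect the fire
            return False
        if rng.random() > P_CONFIRM:   # autonomous drone CS fails to confirm
            return False
        return any(rng.random() < P_TRUCK for _ in range(2))  # at least one truck arrives

    def estimate(n=100_000, seed=42):
        rng = random.Random(seed)
        return sum(mission_succeeds(rng) for _ in range(n)) / n

    print("Estimated P(mission success) ~", round(estimate(), 3))
    # Analytically: 0.95 * 0.85 * (1 - 0.30**2) = 0.735 (approximately)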

    Interoperability of Enterprise Software and Applications

    Get PDF

    TTSS'11 - 5th International Workshop on Harnessing Theories for Tool Support in Software

    Get PDF
    The aim of the workshop is to bring together practitioners and researchers from academia, industry and government to present and discuss ideas about:
    • How to deal with the complexity of software projects by multi-view modeling and separation of concerns about the design of functionality, interaction, concurrency, scheduling, and nonfunctional requirements;
    • How to ensure correctness and dependability of software by integrating formal methods and tools for modeling, design, verification and validation into design and development processes and environments;
    • Case studies and experience reports about harnessing static analysis tools such as model checking, theorem proving, testing, as well as runtime monitoring.

    Adding Executable Context to Executable Architectures: Enabling an Executable Context Simulation Framework (ECSF)

    Get PDF
    A system that does not stand alone is a complex combination of components that interact with each other to execute a function. In today's interconnected world, systems integrate with other systems - called a system-of-systems infrastructure: a network of interrelated systems that can often exhibit both predictable and unpredictable behavior. The current state-of-the-art evaluation of these systems-of-systems by their community of practitioners in academia is limited to static methods focused on defining who is doing what and where. However, to answer the questions of why and how a system operates within complex system-of-systems interrelationships, a system's architecture and context must be observed over time - its executable architecture - to discern effective predictable and unpredictable behavior. The objective of this research is to determine a method for evaluating a system's executable architecture and to assess the contribution and efficiency of the specified system before it is built. This research led to the development of concrete steps that synthesize the observation of the executable architecture, the assessment recommendations provided by the North Atlantic Treaty Organization (NATO) Code of Best Practice for Command and Control (C2) Assessment, and the metrics for operational efficiency provided by the Military Missions and Means Framework. Based on the research herein, this synthesis is designed to evaluate and assess system-of-systems architectures in their operational context to provide quantitative results.

    Workshop proceedings of the 1st workshop on quality in modeling

    Get PDF
    Quality assessment and assurance constitute an important part of software engineering. The issues of software quality management are widely researched and approached from multiple perspectives and viewpoints. The introduction of a new paradigm in software development – namely Model Driven Development (MDD) and its variations (e.g., MDA [Model Driven Architecture], MDE [Model Driven Engineering], MBD [Model Based Development], MIC [Model Integrated Computing]) – raises new challenges in software quality management and as such should be given special attention. In particular, the issues of early quality assessment, based on models at a high abstraction level, and of building (or customizing existing) prediction models for software quality based on model metrics are of central importance for the software engineering community. The workshop is a continuation of a series of workshops on consistency that have taken place during the subsequent annual UML conferences and, recently, MDA-FA. The idea behind this workshop is to extend the scope of interests and address a wide spectrum of problems related to MDD. It is also in line with the overall initiative of the shift from UML to MoDELS. The goal of this workshop is to gather researchers and practitioners interested in the emerging issues of quality in the context of MDD. The workshop is intended to provide a premier forum for discussions related to software quality and MDD. The aims of the workshop are: presenting ongoing research related to quality in modeling in the context of MDD, and defining and organizing issues related to quality in MDD. The format of the workshop consists of two parts: presentation and discussion. The presentation part is aimed at reporting research results related to quality aspects in modeling. Seven papers were selected for presentation out of 16 submissions; the selected papers are included in these proceedings. The discussion part is intended to be a forum for exchange of ideas related to understanding quality and approaching it in a systematic way.

    D7.5 FIRST consolidated project results

    Get PDF
    The FIRST project commenced in January 2017 and concluded in December 2022, including a 24-month suspension period due to the COVID-19 pandemic. Throughout the project, we successfully delivered seven technical reports, conducted three workshops on Key Enabling Technologies for Digital Factories in conjunction with CAiSE (in 2019, 2020, and 2022), produced a number of PhD theses, and published over 56 papers (plus a number of submitted journal articles). The purpose of this deliverable is to provide an updated account of the findings from our previous deliverables and publications. It compiles the original deliverables with the revisions necessary to accurately reflect the final scientific outcomes of the project.