
    Uncertainty representation in software models: a survey

    This paper provides a comprehensive overview and analysis of research on how uncertainty is currently represented in software models. The survey presents the definitions and current research status of different proposals for uncertainty modeling and introduces a classification framework that makes it possible to compare and classify existing proposals, analyze their current status, and identify new trends. In addition, we discuss possible future research directions, opportunities, and challenges. This work is partially supported by the European Commission (FEDER) and the Spanish Government under projects APOLO (US1264651), HORATIO (RTI2018-101204-B-C21), EKIPMENT-PLUS (P18-FR-2895) and COSCA (PGC2018-094905-B-I00)

    Towards development of fuzzy spatial datacubes: fundamental concepts with example for multidimensional coastal erosion risk assessment and representation

    Current Geospatial Business Intelligence (GeoBI) systems typically do not take into account the uncertainty related to the vagueness and fuzziness of objects; they assume that objects have well-defined, exact semantics, geometry, and temporality. Representing fuzzy zones as polygons with well-defined boundaries is an example of such approximation. This thesis uses an application in Coastal Erosion Risk Assessment (CERA) to illustrate the problem. CERA polygons are created by aggregating a set of spatial units defined by either stakeholders' interests or national census divisions. Despite the spatiotemporal variation of the multiple criteria involved in estimating the extent of coastal erosion risk, each polygon typically has a single risk value attributed homogeneously across its spatial extent. In reality, the risk value changes gradually within polygons and from one polygon to the next, so the transition between zones is not properly represented by crisp object models. The main objective of this thesis is to develop a new approach combining the GeoBI paradigm with fuzzy concepts to account for spatial uncertainty in the representation of risk zones; ultimately, we assume this should improve coastal erosion risk assessment. To do so, a comprehensive GeoBI-based conceptual framework is developed with an application to CERA. Then, a fuzzy risk-representation approach is developed to handle the inherent spatial uncertainty related to the vagueness and fuzziness of objects. Fuzzy membership functions are defined from an expert-based vulnerability index. Instead of drawing well-defined boundaries between risk zones, the proposed approach permits a smooth transition from one zone to another. The membership values of multiple indicators (e.g., slope and elevation of the region under study, infrastructure, houses, the hydrology network, and so on) are then aggregated based on the risk formula and fuzzy IF-THEN rules to represent risk zones. The key elements of a fuzzy spatial datacube are also formally defined by combining fuzzy set theory with the GeoBI paradigm, and some fuzzy spatial aggregation operators are formally defined as well. The main contribution of this study is the combination of fuzzy set theory and GeoBI, which makes spatial knowledge discovery more compatible with human reasoning and perception. Hence, an analytical conceptual framework based on the GeoBI paradigm is proposed to develop a fuzzy spatial datacube within a Spatial Online Analytical Processing (SOLAP) system to assess coastal erosion risk. This requires developing a framework for designing the conceptual model from the risk parameters, implementing fuzzy spatial objects in a spatial multidimensional database, and aggregating fuzzy spatial objects to support multi-scale representation of risk zones. To validate the proposed approach, it is applied to the Percé region (Eastern Quebec, Canada) as a case study.
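
    As a rough illustration of the fuzzy representation the thesis builds on, the sketch below encodes trapezoidal membership functions and a Mamdani-style IF-THEN aggregation in Python; the indicators, thresholds, and rules are hypothetical stand-ins, not the thesis's actual vulnerability model.

```python
# Minimal sketch of fuzzy risk-zone membership (illustrative values, not from
# the thesis). Each indicator gets a membership degree in [0, 1]; fuzzy IF-THEN
# rules are then aggregated Mamdani-style (min for AND, max across rules).

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, rising to 1 on [b, c], falling to 0 at d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical membership functions for two risk indicators.
def high_vulnerability(v):   # expert-based vulnerability index in [0, 100]
    return trapezoid(v, 40, 60, 100, 101)

def steep_slope(deg):        # terrain slope in degrees
    return trapezoid(deg, 5, 15, 90, 91)

def high_risk(vuln, slope_deg):
    # Rule 1: IF vulnerability is high AND slope is steep THEN risk is high.
    rule1 = min(high_vulnerability(vuln), steep_slope(slope_deg))
    # Rule 2: IF vulnerability is high (alone) THEN risk is at least moderate.
    rule2 = 0.5 * high_vulnerability(vuln)
    return max(rule1, rule2)

print(high_risk(vuln=70, slope_deg=10))  # graded membership, no crisp boundary
```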

    Many-Objective Optimization of Non-Functional Attributes based on Refactoring of Software Models

    Software quality estimation is a challenging and time-consuming activity, and models are crucial to face the complexity of such an activity on modern software applications. In this context, software refactoring is a crucial activity within development life-cycles where requirements and functionalities evolve rapidly. One main challenge is that improving distinct quality attributes may require contrasting refactoring actions on the software, as in the trade-off between performance and reliability (or other non-functional attributes). In such cases, multi-objective optimization can provide the designer with a wider view of these trade-offs and, consequently, can help identify suitable refactoring actions that take into account independent or even competing objectives. In this paper, we present an approach that exploits NSGA-II as the genetic algorithm to search optimal Pareto frontiers for software refactoring while considering many objectives. We consider the performance and reliability variation of a model alternative with respect to an initial model, the number of performance antipatterns detected on the model alternative, and the architectural distance, which quantifies the effort needed to obtain a model alternative from the initial one. We applied our approach to two case studies: a Train Ticket Booking Service and CoCoME. We observed that our approach is able to improve performance (by up to 42%) while preserving or even improving the reliability (by up to 32%) of generated model alternatives. We also observed that there exists an order of preference among refactoring actions across model alternatives. We can state that performance antipatterns confirmed their ability to improve the performance of a subject model in the context of many-objective optimization. In addition, the metric that we adopted for the architectural distance seems suitable for estimating the refactoring effort. Comment: Accepted for publication in Information and Software Technology. arXiv admin note: substantial text overlap with arXiv:2107.0612
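
    The selection step at the heart of such many-objective search can be illustrated with a small Pareto-dominance filter over the four objectives the paper names; this is a self-contained sketch with made-up alternatives, not the authors' NSGA-II implementation.

```python
# Sketch of the Pareto-front selection at the core of many-objective
# refactoring search (not the authors' implementation). Each candidate model
# alternative is scored on four objectives: performance and reliability are
# maximized, antipattern count and architectural distance are minimized.

from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    perf_q: float        # performance variation vs. initial model (higher is better)
    reliability: float   # reliability of the alternative (higher is better)
    antipatterns: int    # detected performance antipatterns (lower is better)
    distance: float      # refactoring effort from the initial model (lower is better)

def dominates(a: Alternative, b: Alternative) -> bool:
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    ge = (a.perf_q >= b.perf_q and a.reliability >= b.reliability
          and a.antipatterns <= b.antipatterns and a.distance <= b.distance)
    gt = (a.perf_q > b.perf_q or a.reliability > b.reliability
          or a.antipatterns < b.antipatterns or a.distance < b.distance)
    return ge and gt

def pareto_front(pop):
    return [a for a in pop if not any(dominates(b, a) for b in pop if b is not a)]

pop = [Alternative("A", 0.42, 0.90, 1, 3.0),
       Alternative("B", 0.30, 0.95, 0, 2.0),
       Alternative("C", 0.25, 0.85, 2, 4.0)]  # dominated by B
print([a.name for a in pareto_front(pop)])    # ['A', 'B']
```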

    Toward a decision support system for the clinical pathways assessment

    This paper presents a decision support system for hospital management tasks based on clinical pathways. We propose a very simple graphical modeling language, based on a small number of primitive elements, through which medical doctors can specify a clinical pathway for a specific disease. Three essential aspects of a clinical pathway can be specified in this language: (1) patient flow; (2) resource utilization; and (3) information interchange. This high-level language is a domain-specific modeling language called Healthcare System Specification (HSS), defined as a Unified Modeling Language (UML) profile. A model-to-model transformation is also proposed in order to obtain, from the HSS specification of the pathways, a Stochastic Well-formed Net (SWN) model that enables formal analysis of the modeled system and, if needed, the application of synthesis methods enforcing specified requirements. The transformation is based on the application of local rules. The clinical pathway for hip fracture from the "Lozano Blesa" University Hospital in Zaragoza is taken as an example
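
    The flavor of such a rule-based model-to-model transformation can be sketched as follows; the mapping (one place plus one transition per HSS activity, wired along the patient flow) and all names are illustrative assumptions, not the paper's actual transformation rules.

```python
# Toy sketch of a local model-to-model rule in the spirit of the HSS-to-SWN
# transformation (node and rule names are illustrative, not from the paper).
# Each HSS "activity" becomes a place (patient waiting/being treated) plus a
# transition (the activity completing), connected along the patient flow.

def hss_to_swn(activities):
    """activities: ordered list of HSS activity names along a patient flow."""
    places, transitions, arcs = [], [], []
    for i, act in enumerate(activities):
        p, t = f"P_{act}", f"T_{act}_done"
        places.append(p)
        transitions.append(t)
        arcs.append((p, t))                           # activity enables its completion
        if i + 1 < len(activities):
            arcs.append((t, f"P_{activities[i + 1]}"))  # flow to the next activity
    return {"places": places, "transitions": transitions, "arcs": arcs}

net = hss_to_swn(["admission", "surgery", "rehabilitation"])
print(net["arcs"])
```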

    Software framework for the development of context-aware reconfigurable systems

    In this project we propose a new software framework for the development of context-aware, secure control software for distributed reconfigurable systems. Context-awareness is a key feature allowing the adaptation of system behaviour to a changing environment. We introduce a new definition of the term "context" for reconfigurable systems and then define a new context modelling and reasoning approach. Afterwards, we define a meta-model of context-aware reconfigurable applications that paves the way for the proposed framework. The framework has a three-layer architecture: a reconfiguration layer, a context-control layer, and a services layer, each with a well-defined role. We also define a new secure conversation protocol between distributed trustless parties based on blockchain technology and elliptic-curve cryptography. To obtain better correctness and deployment guarantees for application models in early development stages, we propose a new UML profile called GR-UML that adds semantics for modelling probabilistic scenarios running under memory and energy constraints, and we propose a methodology using transformations between GR-UML, the GR-TNCES Petri-net formalism, and IEC 61499 function blocks. A software tool implementing the methodology's concepts has been developed. To show the suitability of these contributions, two case studies (a baggage handling system and microgrids) are considered
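
    The signature primitive underlying such a trustless conversation protocol can be sketched with the pyca/cryptography package; this shows only a generic ECDSA sign/verify exchange under assumed parameters (curve, message), not the framework's actual blockchain-based protocol.

```python
# Minimal sketch of the elliptic-curve signature step such a conversation
# protocol can build on (using the pyca/cryptography package); this is not the
# framework's protocol, only the sign/verify exchange between two parties.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # sender's key pair
public_key = private_key.public_key()                  # shared with the receiver

message = b"reconfiguration request: switch microgrid to island mode"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid: message accepted")
except InvalidSignature:
    print("signature invalid: message rejected")
```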

    ARPHA: an FDIR architecture for Autonomous Spacecrafts based on Dynamic Probabilistic Graphical Models

    This paper introduces a formal architecture for on-board diagnosis, prognosis, and recovery called ARPHA. ARPHA is designed as part of the ESA/ESTEC study called VERIFIM (Verification of Failure Impact by Model checking). The goal is to allow the design of an innovative on-board FDIR process for autonomous systems, able to deal with uncertain system/environment interactions, uncertain dynamic system evolution, partial observability, and the detection of recovery actions that take imminent failures into account. We show how the model needed by ARPHA can be built through a standard fault-analysis phase, finally producing an extended version of a fault tree called EDFT; we discuss how EDFT can be adopted as a formal language to represent the needed FDIR knowledge, which can be compiled into a corresponding Dynamic Decision Network to be used for the analysis. We also discuss the software architecture we are implementing following this approach, where on-board FDIR is implemented by exploiting on-line inference based on the junction-tree approach typical of probabilistic graphical models
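
    The probabilistic flavor of such on-board diagnosis can be shown with a toy two-variable Bayesian update; the numbers and variables are invented for illustration and stand in for the far richer Dynamic Decision Network inference ARPHA performs.

```python
# Toy Bayesian update in the spirit of probabilistic on-board diagnosis
# (a two-node example, not ARPHA's junction-tree inference over a Dynamic
# Decision Network). Numbers are illustrative. We compute
# P(failure | sensor anomaly) from a prior and two likelihoods.

p_failure = 0.01              # prior probability of the fault
p_anom_given_failure = 0.95   # anomaly likelihood when the fault is present
p_anom_given_ok = 0.05        # false-alarm rate when the system is healthy

# Bayes' rule: P(F | A) = P(A | F) * P(F) / P(A)
p_anom = (p_anom_given_failure * p_failure
          + p_anom_given_ok * (1 - p_failure))
p_failure_given_anom = p_anom_given_failure * p_failure / p_anom

print(f"posterior fault probability: {p_failure_given_anom:.3f}")  # ~0.161
```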

    Uncertainty in coupled models of cyber-physical systems

    The development of cyber-physical systems typically involves multiple coupled models that capture different aspects of the system and of the environment in which it operates. Because the environment is dynamic, unexpected conditions and uncertainty may impact the system. In this work, we tackle this problem and propose a taxonomy for characterizing uncertainty in coupled models. Our taxonomy extends existing proposals to cope with the particularities of coupled models in cyber-physical systems. In addition, it covers the notion of uncertainty propagation to other parts of the system, which allows studying and (in some cases) quantifying the effects of uncertainty on other models in a system even at design time. We show the applicability of our uncertainty taxonomy in real use cases motivated by our envisioned scenario of automotive development
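
    One way such propagation can be studied at design time is Monte Carlo sampling across model boundaries; the two toy models below are assumptions for illustration, not taken from the paper.

```python
# Sketch of uncertainty propagation between two coupled models via Monte Carlo
# sampling (illustrative models, not from the paper). Measurement uncertainty
# in a physical sensor model propagates into the downstream cyber model.

import random
import statistics

def sensor_model(true_speed):
    # physical model with measurement noise (hypothetical 2 m/s std. dev.)
    return random.gauss(true_speed, 2.0)

def controller_model(measured_speed, limit=30.0):
    # cyber model: braking command grows with the measured excess speed
    return max(0.0, measured_speed - limit)

samples = [controller_model(sensor_model(true_speed=31.0)) for _ in range(10_000)]
print(f"mean braking command: {statistics.mean(samples):.2f}")
print(f"std. dev. (propagated uncertainty): {statistics.stdev(samples):.2f}")
```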

    DAG-Based Attack and Defense Modeling: Don't Miss the Forest for the Attack Trees

    This paper presents the current state of the art on attack and defense modeling approaches based on directed acyclic graphs (DAGs). DAGs allow for a hierarchical decomposition of complex scenarios into simple, easily understandable, and quantifiable actions. Methods based on threat trees and Bayesian networks are two well-known approaches to security modeling. However, more than 30 DAG-based methodologies exist, each with different features and goals. The objective of this survey is to present a complete overview of graphical attack and defense modeling techniques based on DAGs. This consists of summarizing the existing methodologies, comparing their features, and proposing a taxonomy of the described formalisms. The article also supports the selection of an adequate modeling technique depending on user requirements
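
    One widely used quantification over such DAGs, bottom-up propagation of attack-success probabilities through AND/OR gates, can be sketched as follows; the tree and its probabilities are invented for illustration.

```python
# Sketch of one common DAG-based quantification: bottom-up evaluation of an
# attack tree with independent leaf success probabilities (illustrative tree,
# not from the survey). AND-nodes: prod(p); OR-nodes: 1 - prod(1 - p).

from math import prod

def evaluate(node):
    if "prob" in node:                      # leaf: basic attack step
        return node["prob"]
    child_ps = [evaluate(c) for c in node["children"]]
    if node["gate"] == "AND":               # all child attacks must succeed
        return prod(child_ps)
    return 1 - prod(1 - p for p in child_ps)  # OR: at least one succeeds

steal_data = {
    "gate": "OR",
    "children": [
        {"prob": 0.05},                     # exploit server vulnerability
        {"gate": "AND", "children": [       # phish admin AND bypass 2FA
            {"prob": 0.30},
            {"prob": 0.10},
        ]},
    ],
}
print(f"attack success probability: {evaluate(steal_data):.4f}")  # 0.0785
```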

    Search-based system architecture development using a holistic modeling approach

    This dissertation presents an innovative approach to system architecting in which search algorithms explore the design trade space for good architecture alternatives. This is achieved by integrating model construction, alternative generation, simulation, and assessment processes into a coherent, automated framework. The framework is facilitated by a holistic modeling approach that combines the capabilities of Object Process Methodology (OPM), Colored Petri Nets (CPN), and feature models. The resulting holistic model can not only capture the structural, behavioral, and dynamic aspects of a system, allowing simulation and strong analysis methods to be applied, but can also specify the architectural design space. Both object-oriented analysis and design (OOA/D) and domain engineering were exploited to capture design variables and their domains and to define architecture generation operations. A fully realized framework (with genetic algorithms as the search algorithm) was developed. Both the proposed framework and its suggested implementation, including the proposed holistic modeling approach and the architecture alternative generation operations, are generic: they target systems that can be specified using an object-oriented or process-oriented paradigm. The broad applicability of the proposed approach is demonstrated on two examples: the configuration of reconfigurable manufacturing systems (RMSs) under multi-objective optimization, and the architecture design of a manned lunar landing system for the Apollo program. The test results show that the proposed approach can cover a huge number of architecture alternatives and support the assessment of several performance measures, and a set of quality results was obtained after running the optimization algorithm following the proposed framework
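
    The search loop of such an approach can be sketched as a plain genetic algorithm over bitstring-encoded architecture options; the encoding, fitness function, and parameters below are illustrative assumptions, not the dissertation's OPM/CPN/feature-model machinery.

```python
# Sketch of the search loop in search-based architecting: a genetic algorithm
# over bitstring-encoded architecture options (encoding and fitness are
# illustrative stand-ins for the dissertation's simulation-based assessment).

import random

N_FEATURES = 12                       # selectable architecture features

def fitness(bits):
    # hypothetical stand-in for simulation-based assessment of an alternative
    return sum(b * w for b, w in zip(bits, range(1, N_FEATURES + 1)))

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(40)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                # truncation selection of best alternatives
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(30)]

print(max(map(fitness, pop)))         # best architecture score found
```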

    A Framework for the Verification and Validation of Artificial Intelligence Machine Learning Systems

    An effective verification and validation (V&V) process framework for the white-box and black-box testing of artificial intelligence (AI) machine learning (ML) systems is not readily available. This research uses grounded theory to develop a framework that leads to the most effective and informative white-box and black-box methods for the V&V of AI ML systems. Verification ensures that the system adheres to the requirements and specifications developed and given by the major stakeholders, while validation confirms that the system performs properly with representative users in the intended environment and does not behave in an unexpected manner. Beginning with definitions, descriptions, and examples of ML processes and systems, the research identifies a clear and general process to effectively test these systems, and the developed framework is designed to yield productive and accurate testing results. Formerly, and occasionally still, the system definition and requirements exist in scattered documents that make it difficult to integrate, trace, and test through V&V. Modern systems engineers, together with system developers and stakeholders, collaborate to produce a full system model using model-based systems engineering (MBSE). MBSE employs a Unified Modeling Language (UML) or Systems Modeling Language (SysML) representation of the system and its requirements that readily passes among stakeholders for system information and additional input. The comprehensive and detailed MBSE model allows for direct traceability to the system requirements. To thoroughly test an ML system, one performs white-box testing, black-box testing, or both. Black-box testing is a method in which the internal model structure, design, and implementation of the system under test are unknown to the test engineer; testers and analysts simply look at the performance of the system given inputs and outputs. White-box testing is a method in which the internal model structure, design, and implementation of the system under test are known to the test engineer. When possible, test engineers and analysts perform both black-box and white-box testing; however, testers sometimes lack authorization to access the internal structure of the system, and the framework captures this decision. No two ML systems are exactly alike, and the testing of each system must therefore be customized to some degree. Even with this customization, an effective common process exists. This research includes some specialized methods, based on grounded theory, for testing internal structure and performance. Through the study and organization of proven methods, this research develops an effective ML V&V framework that systems engineers and analysts can apply directly in various white-box and black-box V&V testing circumstances
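
    As one concrete black-box technique such a framework might prescribe, the sketch below runs a metamorphic test that exercises a model purely through its predictions; the toy model and the metamorphic relation (duplicating the entries of an input vector leaves its mean, and hence the prediction, unchanged) are invented for illustration, not taken from the thesis.

```python
# Sketch of one black-box V&V technique: a metamorphic test that needs no
# access to model internals (model and relation are illustrative). Violations
# of the relation signal a defect without inspecting the model's structure.

def predict(model, x):
    """Black-box access only: we may call the model, not inspect it."""
    return model(x)

def metamorphic_consistency(model, inputs, transform):
    """Fraction of inputs whose prediction is unchanged under the transform."""
    same = sum(predict(model, x) == predict(model, transform(x)) for x in inputs)
    return same / len(inputs)

# Toy model under test: thresholds the mean of its input vector.
toy_model = lambda x: int(sum(x) / len(x) > 0.5)
duplicate_entries = lambda x: [v for v in x for _ in range(2)]  # mean-preserving

inputs = [[0.2, 0.9], [0.6, 0.7], [0.1, 0.1]]
print(metamorphic_consistency(toy_model, inputs, duplicate_entries))  # expect 1.0
```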