
    Taming Uncertainty in the Assurance Process of Self-Adaptive Systems: a Goal-Oriented Approach

    Goals are first-class entities in a self-adaptive system (SAS) as they guide the self-adaptation. A SAS often operates in dynamic and partially unknown environments, which cause uncertainty that the SAS has to address to achieve its goals. Beyond the environment, other classes of uncertainty have been identified; however, these classes and their sources are not systematically addressed by current approaches throughout the life cycle of the SAS. Uncertainty typically makes it infeasible to provide assurances for SAS goals exclusively at design time, which calls for an assurance process that spans the whole life cycle of the SAS. In this work, we propose a goal-oriented assurance process that supports taming different sources (within different classes) of uncertainty, from defining the goals at design time to performing self-adaptation at runtime. Based on a goal model augmented with uncertainty annotations, we automatically generate parametric symbolic formulae with parameterized uncertainties at design time using symbolic model checking. These formulae and the goal model guide engineers in synthesizing adaptation policies. At runtime, the generated formulae are evaluated to resolve the uncertainty and to steer the self-adaptation using the policies. In this paper, we focus on reliability and cost properties, for which we evaluate our approach on the Body Sensor Network (BSN) implemented in OpenDaVINCI. The results of the validation are promising: our approach systematically tames multiple classes of uncertainty and is effective and efficient in providing assurances for the goals of self-adaptive systems.
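    A minimal sketch of the runtime step this abstract describes, assuming Python with sympy; the parameter names, the simple series-composition formula, and the 0.90 threshold are illustrative placeholders, not the formulae the approach actually generates for the BSN:

        import sympy as sp

        # Hypothetical success probabilities of three BSN tasks; in the approach,
        # such parameters come from uncertainty annotations in the goal model.
        p_collect, p_filter, p_send = sp.symbols("p_collect p_filter p_send")

        # Stand-in for a parametric formula generated once at design time by
        # symbolic model checking (real formulae are rational functions over
        # the parameters, not a plain product).
        reliability = p_collect * p_filter * p_send

        def evaluate(formula, monitored):
            # Resolve the uncertainty by substituting runtime-monitored values.
            return float(formula.subs(monitored))

        # Runtime: plug in fresh monitoring data and check the result against
        # the goal's required reliability (threshold invented for illustration).
        monitored = {p_collect: 0.99, p_filter: 0.97, p_send: 0.95}
        if evaluate(reliability, monitored) < 0.90:
            print("reliability goal at risk -> apply adaptation policy")

    Because the formula is generated only once, the runtime cost of each adaptation cycle reduces to a cheap substitution rather than a full model-checking run.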

    Requirements Conflict Detection and Resolution in AREM Using Intelligence System Approach

    Requirements engineering (RE) is the process of defining user requirements that serve as the main reference in the system development process. The quality of RE results is measured by the consistency and completeness of the requirements. Collecting requirements from multiple stakeholders can cause requirements conflicts, leading to inconsistency and incompleteness in the resulting requirements model. In this study, a method for automatic conflict detection and resolution in the Automatic Requirements Engineering Model (AREM) was developed. AREM is a model that automates the elicitation, analysis, validation, and specification of requirements. The conflict detection method combines an intelligent agent approach with a Weighted Product approach, while conflict resolution is performed automatically using a rule-based model and a clustering method. The method's ability to detect and resolve conflicting requirements was tested on five requirement data sets from five system development projects. The test results show that the method identifies the sets of conflicting objects in the requirements data. For conflict resolution, experiments were conducted with five resolution scenarios; the method resolves conflicts while producing the highest completeness value, although the resolution also yields a number of soft goals. By detecting and resolving conflicts, the method overcomes inconsistency and incompleteness in the requirements model.
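    As a rough Python illustration of the Weighted Product step mentioned above: the criteria, weights, and requirement scores below are invented for illustration and are not AREM's actual criteria or data.

        # Weighted Product Model: an alternative's preference score is the
        # product of its criterion values raised to the criterion weights.
        def weighted_product(values, weights):
            score = 1.0
            for v, w in zip(values, weights):
                score *= v ** w
            return score

        # Hypothetical criteria: stakeholder priority, request frequency,
        # and dependency count (weights sum to 1).
        weights = [0.5, 0.3, 0.2]
        conflicting = {
            "REQ-12": [4, 3, 2],
            "REQ-27": [3, 5, 1],
        }

        # Rank the conflicting requirements; a rule-based resolver could then
        # keep the highest-ranked one and demote or merge the others.
        ranked = sorted(conflicting.items(),
                        key=lambda kv: weighted_product(kv[1], weights),
                        reverse=True)
        print(ranked)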

    Runtime Monitoring and Resolution of Probabilistic Obstacles to System Goals

    Software systems are deployed in environments that keep changing over time. They should therefore adapt to changing conditions in order to meet their requirements. The satisfaction rate of these requirements depends on the rate at which adverse conditions prevent their satisfaction. Obstacle analysis is a goal-oriented form of risk analysis for requirements engineering (RE) whereby obstacles to system goals are identified, assessed, and resolved through countermeasures yielding new requirements. The selection of appropriate countermeasures relies on the assessed likelihood and criticality of obstacles together with environmental assumptions. These factors are estimated at RE time; they may, however, evolve during software development and at system runtime. To meet the system's goals under changing conditions, the paper proposes to defer obstacle resolution to system runtime. Following Monitor–Analyze–Plan–Execute cycles, techniques are presented for monitoring goal/obstacle satisfaction rates; deciding when adaptation should be triggered; and adapting the system on the fly to countermeasures that are more appropriate under the monitored conditions. The approach relies on a model where goals and obstacles are refined and specified in a probabilistic linear temporal logic. The proposed techniques allow for (a) monitoring the satisfaction rate of probabilistic leaf obstacles; (b) determining the severity of their consequences by up-propagating satisfaction rates through refinement trees from leaf obstacles to high-level probabilistic goals; and (c) dynamically shifting to alternative countermeasures that better meet the required satisfaction rate of the system's high-level goals under imposed cost constraints. Our approach is evaluated on fragments of an ambulance dispatching system.
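    A minimal Python sketch of the up-propagation idea in step (b); the AND-node rule (product of child rates, i.e., assumed independence), the OR-node rule, the toy tree, and the 0.95 threshold are simplifying assumptions, not the paper's probabilistic linear temporal logic semantics:

        # Refinement-tree nodes: leaves carry monitored satisfaction rates;
        # AND-nodes need all children satisfied, OR-nodes at least one.
        def satisfaction(node):
            kind = node["kind"]
            if kind == "leaf":
                return node["rate"]
            rates = [satisfaction(c) for c in node["children"]]
            if kind == "and":  # assumes the children are independent
                p = 1.0
                for r in rates:
                    p *= r
                return p
            # "or": satisfied unless every child fails (independence assumed)
            q = 1.0
            for r in rates:
                q *= 1.0 - r
            return 1.0 - q

        # Toy goal refined into a monitored subgoal and an OR of a nominal
        # dispatching path and a countermeasure path (rates are invented).
        goal = {"kind": "and", "children": [
            {"kind": "leaf", "rate": 0.98},
            {"kind": "or", "children": [
                {"kind": "leaf", "rate": 0.80},
                {"kind": "leaf", "rate": 0.70},
            ]},
        ]}

        if satisfaction(goal) < 0.95:  # required rate from the goal model
            print("shift to a more effective countermeasure")

    With the invented rates above, the goal's computed rate is 0.98 * (1 - 0.2 * 0.3) = 0.9212, below the required 0.95, so adaptation would be triggered.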
