163 research outputs found

    Synthesis of behavioral models from scenarios

    No full text

    Monitoring and control in scenario-based requirements analysis

    Get PDF
    Scenarios are an effective means for eliciting, validating and documenting requirements. At the requirements level, scenarios describe sequences of interactions between the software-to-be and agents in the environment. Interactions correspond to the occurrence of an event that is controlled by one agent and monitored by another.This paper presents a technique to analyse requirements-level scenarios for unforeseen, potentially harmful, consequences. Our aim is to perform analysis early in system development, where it is highly cost-effective. The approach recognises the importance of monitoring and control issues and extends existing work on implied scenarios accordingly. These so-called input-output implied scenarios expose problematic behaviours in scenario descriptions that cannot be detected using standard implied scenarios. Validation of these implied scenarios supports requirements elaboration. We demonstrate the relevance of input-output implied scenarios using a number of examples

    Using contexts to extract models from code

    No full text
    Behaviour models facilitate the understanding and analysis of software systems by providing an abstract view of their behaviours and also by enabling the use of validation and verification techniques to detect errors. However, depending on the size and complexity of these systems, constructing models may not be a trivial task, even for experienced developers. Model extraction techniques can automatically obtain models from existing code, thus reducing the effort and expertise required of engineers and helping avoid errors often present in manually constructed models. Existing approaches for model extraction often fail to produce faithful models, either because they only consider static information, which may include infeasible behaviours, or because they are based only on dynamic information, thus relying on observed executions, which usually results in incomplete models. This paper describes a model extraction approach based on the concept of contexts, which are abstractions of concrete states of a program, combining static and dynamic information. Contexts merge some of the advantages of using either type of information and, by their combination, can overcome some of their problems. The approach is partially implemented by a tool called LTS Extractor, which translates information collected from execution traces produced by instrumented Java code to labelled transition systems (LTS), which can be analysed in an existing verification tool. Results from case studies are presented and discussed, showing that, considering a certain level of abstraction and a set of execution traces, the produced models are correct descriptions of the programs from which they were extracted. Thus, they can be used for a variety of analyses, such as program understanding, validation, verification, and evolution.
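The core idea of trace-based extraction under a state abstraction can be sketched as follows (a hedged illustration of the general technique, not the LTS Extractor tool itself; the counter example and its abstraction are hypothetical): concrete states observed in execution traces are mapped to abstract "context" states, and observed events become labelled transitions between them.

```python
# Sketch: build a labelled transition system (LTS) from execution traces,
# merging concrete states that share the same abstract context.
from collections import defaultdict

def build_lts(traces, abstract):
    """traces: lists of (src_state, event, dst_state) triples observed at
    runtime; abstract: maps a concrete state to its context."""
    transitions = defaultdict(set)
    for trace in traces:
        for src, event, dst in trace:
            transitions[abstract(src)].add((event, abstract(dst)))
    return dict(transitions)

# Hypothetical example: abstract a counter's concrete value to zero/positive.
abstract = lambda n: "zero" if n == 0 else "pos"
traces = [[(0, "inc", 1), (1, "inc", 2), (2, "dec", 1), (1, "dec", 0)]]
lts = build_lts(traces, abstract)
# e.g. lts["pos"] contains ("inc", "pos"), ("dec", "pos") and ("dec", "zero")
```

Because the abstraction merges concrete states, the resulting model stays small, but, as the abstract notes, its faithfulness depends on the chosen level of abstraction and on the coverage of the observed traces.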

    An approach to improve accuracy in probabilistic models using state refinement

    Get PDF
    Probabilistic models are useful in the analysis of system behaviour and non-functional properties. Reliable estimates and measurements of probabilities are needed to annotate behaviour models in order to generate accurate predictions. However, this may not be sufficient, and may still lead to inaccurate results when the system model does not properly reflect the probabilistic choices made by the environment. Thus, not only must the probabilities accurately reflect reality, but so must the model being used. In this paper we propose state refinement as a technique to mitigate this problem, showing that it is guaranteed to preserve or increase the accuracy of the initial model. We present a framework for iteratively improving the accuracy of a probabilistically annotated behaviour model with respect to a set of benchmark properties through iterative state refinements.

    Adapting specifications for reactive controllers

    Get PDF
    For systems to respond to scenarios that were unforeseen at design time, they must be capable of safely adapting, at runtime, the assumptions they make about the environment, the goals they are expected to achieve, and the strategy that guarantees the goals are fulfilled if the assumptions hold. Such adaptation often involves the system degrading its functionality, by weakening its environment assumptions and/or the goals it aims to meet, ideally in a graceful manner. However, finding, in a systematic and safe way, weaker assumptions that account for the unanticipated behaviour and goals that are achievable in the new environment remains an open challenge. In this paper, we propose a novel framework that supports assumption and, if necessary, goal degradation to allow systems to cope with runtime assumption violations. The framework, which integrates into the MORPH reference architecture, combines symbolic learning and reactive synthesis to compute implementable controllers that may be deployed safely. We describe and implement an algorithm that illustrates the working of this framework. We further demonstrate in our evaluation its effectiveness and applicability to a series of benchmarks from the literature. The results show that the algorithm successfully learns realizable specifications that accommodate previously violating environment behaviour in almost all cases. Exceptions are discussed in the evaluation.

    Risk-driven revision of requirements models

    No full text
    © 2016 ACM. Requirements incompleteness is often the result of unanticipated adverse conditions which prevent the software and its environment from behaving as expected. These conditions represent risks that can cause severe software failures. The identification and resolution of such risks is therefore a crucial step towards requirements completeness. Obstacle analysis is a goal-driven form of risk analysis that aims at detecting missing conditions that can obstruct goals from being satisfied in a given domain, and resolving them. This paper proposes an approach for automatically revising goals that may be under-specified or (partially) wrong to resolve obstructions in a given domain. The approach deploys a learning-based revision methodology in which obstructed goals in a goal model are iteratively revised from traces exemplifying obstruction and non-obstruction occurrences. Our revision methodology computes domain-consistent, obstruction-free revisions that are automatically propagated to other goals in the model in order to preserve the correctness of goal models whilst guaranteeing minimal change to the original model. We present the formal foundations of our learning-based approach, and show that it preserves the properties of our formal framework. We validate it against the benchmarking case study of the London Ambulance Service.

    Wearable HD-DOT for investigating functional connectivity in the adult brain: A single subject, multi-session study

    Get PDF
    We applied a wearable 24-module high-density diffuse optical tomography (HD-DOT) system in a resting state (RS) paradigm repeatedly in one subject. Seed-based correlation maps show large field-of-view RS functional connectivity.

    Reliability and similarity of resting state functional connectivity networks imaged using wearable, high-density diffuse optical tomography in the home setting

    Get PDF
    Background: When characterizing the brain's resting state functional connectivity (RSFC) networks, demonstrating networks' similarity across sessions and reliability across different scan durations is essential for validating results and possibly minimizing the scanning time needed to obtain stable measures of RSFC. Recent advances in optical functional neuroimaging technologies have resulted in fully wearable devices that may serve as a complementary tool to functional magnetic resonance imaging (fMRI) and allow for investigations of RSFC networks repeatedly and easily in non-traditional scanning environments. Methods: Resting-state cortical hemodynamic activity was repeatedly measured in a single individual in the home environment during COVID-19 lockdown conditions using the first ever application of a 24-module (72 sources, 96 detectors) wearable high-density diffuse optical tomography (HD-DOT) system. Twelve-minute recordings of resting-state data were acquired over the pre-frontal and occipital regions in fourteen experimental sessions over three weeks. As an initial validation of the data, spatial independent component analysis was used to identify RSFC networks. Reliability and similarity scores were computed using metrics adapted from the fMRI literature. Results: We observed RSFC networks over visual regions (visual peripheral, visual central networks) and higher-order association regions (control, salience and default mode network), consistent with previous fMRI literature. High similarity was observed across testing sessions and across chromophores (oxygenated and deoxygenated haemoglobin, HbO and HbR) for all functional networks, and for each network considered separately. Stable reliability values (described here as a <10% change between time windows) were obtained for HbO and HbR with differences in required scanning time observed on a network-by-network basis.
    Discussion: Using RSFC data from a highly sampled individual, the present work demonstrates that wearable HD-DOT can be used to obtain RSFC measurements with high similarity across imaging sessions and reliability across recording durations in the home environment. Wearable HD-DOT may serve as a complementary tool to fMRI for studying RSFC networks outside of the traditional scanning environment and in vulnerable populations for whom fMRI is not feasible.

    Fluent temporal logic for discrete-time event-based models

    Get PDF
    Fluent model checking is an automated technique for verifying that an event-based operational model satisfies some state-based declarative properties. The link between the event-based and state-based formalisms is defined through fluents, which are state predicates whose values are determined by the occurrences of initiating and terminating events that make the fluent become true or false, respectively. The existing fluent temporal logic is convenient for reasoning about untimed event-based models but difficult to use for timed models. The paper extends fluent temporal logic with temporal operators for modelling timed properties of discrete-time event-based models. It presents two approaches that differ on whether the properties model the system state after the occurrence of each event or at a fixed time rate. Model checking of timed properties is made possible by translating them into the existing untimed framework. Copyright 2005 ACM.
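The fluent definition above is mechanical enough to sketch directly (a minimal illustration, not the paper's model checker; the door example and event names are made up): a fluent is given by a set of initiating events, a set of terminating events, and an initial truth value, and its value is updated as each event in a trace occurs.

```python
# Sketch: evaluating a fluent over an event trace. Initiating events make
# the fluent true; terminating events make it false; other events leave it
# unchanged.

def fluent_values(trace, initiating, terminating, initially=False):
    """Return the fluent's truth value after each event in the trace."""
    value = initially
    values = []
    for event in trace:
        if event in initiating:
            value = True
        elif event in terminating:
            value = False
        values.append(value)
    return values

# Hypothetical 'DoorOpen' fluent: initiated by 'open', terminated by 'close'.
trace = ["open", "lock", "close", "open"]
print(fluent_values(trace, {"open"}, {"close"}))  # [True, True, False, True]
```

State-based temporal properties can then be checked over the event model by evaluating such fluents along each execution, which is the bridge between the two formalisms that the abstract describes.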

    Assured and Correct Dynamic Update of Controllers

    Get PDF
    We present a general approach to specifying correctness criteria for dynamic update and a technique for automatically computing a controller that handles the transition from the old to the new specification, assuring that the system will reach a state in which such a transition can correctly occur. Indeed, using controller synthesis we show how to automatically build a controller that guarantees both progress towards update and safe update.
    Sociedad Argentina de Informática e Investigación Operativa (SADIO)