    Stochastic modeling, analysis and verification of mission-critical systems and processes

    Software and business processes used in mission-critical defence applications are often characterised by stochastic behaviour. The causes for this behaviour range from unanticipated environmental changes and built-in random delays to component and communication protocol unreliability. This paper overviews the use of a stochastic modelling and analysis technique called quantitative verification to establish whether mission-critical software and business processes meet their reliability, performance and other quality-of-service requirements.
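
    As a concrete illustration of the kind of question quantitative verification answers (not the paper's own tooling, which typically relies on probabilistic model checkers), the following Python sketch computes a reliability property of a small, assumed discrete-time Markov chain: the probability of eventually reaching a failure state.

    import numpy as np

    # Hypothetical states: 0 = operating, 1 = degraded, 2 = failed (absorbing),
    # 3 = recovered (absorbing); transition probabilities are made up for the example.
    P = np.array([
        [0.90, 0.08, 0.02, 0.00],
        [0.10, 0.70, 0.15, 0.05],
        [0.00, 0.00, 1.00, 0.00],
        [0.00, 0.00, 0.00, 1.00],
    ])
    transient = [0, 1]     # non-absorbing states
    fail = 2               # target state of the reliability query

    # Absorption probabilities solve (I - A) x = b, where A is P restricted to the
    # transient states and b holds the one-step probabilities of moving to the fail state.
    A = P[np.ix_(transient, transient)]
    b = P[np.ix_(transient, [fail])]
    x = np.linalg.solve(np.eye(len(transient)) - A, b)

    print(f"P(eventually fail | start operating) = {x[0, 0]:.4f}")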

    Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1

    This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance-overview and issues, Resilience and Safety Requirements, Open Systems Perspective and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing, Defence in Depth and Diversity, Security-Informed Safety Analysis, and Standards and Guidelines.

    Lightweight adaptive filtering for efficient learning and updating of probabilistic models

    Adaptive software systems are designed to cope with unpredictable and evolving usage behaviors and environmental conditions. For these systems, reasoning mechanisms are needed to drive evolution; such mechanisms are usually based on models capturing relevant aspects of the running software. The continuous update of these models in evolving environments requires efficient learning procedures that have low overhead and are robust to changes. Most of the available approaches achieve one of these goals at the price of the other. In this paper we propose a lightweight adaptive filter to accurately learn the time-varying transition probabilities of discrete-time Markov models, which provides robustness to noise and fast adaptation to changes with very low overhead. A formal assessment of the stability, unbiasedness and consistency of the learning approach is provided, as well as an experimental comparison with state-of-the-art alternatives.
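
    The sketch below illustrates one lightweight way to track time-varying transition probabilities, using a fixed exponential forgetting factor; it is an assumption-laden stand-in for the paper's adaptive filter, which additionally tunes its gain to balance noise robustness against speed of reaction.

    from collections import defaultdict

    class ForgettingEstimator:
        """Estimate time-varying transition probabilities with exponential forgetting."""

        def __init__(self, lam=0.98):
            self.lam = lam                                    # forgetting factor in (0, 1]
            self.counts = defaultdict(lambda: defaultdict(float))

        def observe(self, s, s_next):
            # Discount old evidence for state s, then count the new transition.
            row = self.counts[s]
            for t in row:
                row[t] *= self.lam
            row[s_next] += 1.0

        def probability(self, s, s_next):
            row = self.counts[s]
            total = sum(row.values())
            return row[s_next] / total if total else 0.0

    est = ForgettingEstimator(lam=0.95)
    trace = [("idle", "busy"), ("busy", "idle"), ("idle", "busy"), ("idle", "fail")]
    for s, s_next in trace:
        est.observe(s, s_next)
    print(est.probability("idle", "busy"))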

    Advanced Probabilistic Couplings for Differential Privacy

    Differential privacy is a promising formal approach to data privacy, which provides a quantitative bound on the privacy cost of an algorithm that operates on sensitive information. Several tools have been developed for the formal verification of differentially private algorithms, including program logics and type systems. However, these tools do not capture fundamental techniques that have emerged in recent years, and cannot be used for reasoning about cutting-edge differentially private algorithms. Existing techniques fail to handle three broad classes of algorithms: 1) algorithms where privacy depends on accuracy guarantees, 2) algorithms that are analyzed with the advanced composition theorem, which shows slower growth in the privacy cost, and 3) algorithms that interactively accept adaptive inputs. We address these limitations with a new formalism extending apRHL, a relational program logic that has been used for proving differential privacy of non-interactive algorithms, and incorporating aHL, a (non-relational) program logic for accuracy properties. We illustrate our approach through a single running example, which exemplifies the three classes of algorithms and explores new variants of the Sparse Vector technique, a well-studied algorithm from the privacy literature. We implement our logic in EasyCrypt and formally verify privacy. We also introduce a novel coupling technique called optimal subset coupling that may be of independent interest.
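
    For orientation, here is a plain Python sketch of the standard Above Threshold instance of the Sparse Vector technique for sensitivity-1 queries; the new variants and the apRHL/EasyCrypt proofs described in the abstract are not reproduced here.

    import numpy as np

    def above_threshold(queries, database, threshold, eps, rng=None):
        """Return the index of the first query whose noisy answer exceeds the noisy
        threshold, or None. Satisfies eps-differential privacy for sensitivity-1 queries."""
        rng = rng or np.random.default_rng()
        noisy_T = threshold + rng.laplace(scale=2.0 / eps)
        for i, q in enumerate(queries):
            if q(database) + rng.laplace(scale=4.0 / eps) >= noisy_T:
                return i          # report and halt after the first "above" answer
        return None

    # Toy example: counting queries (sensitivity 1) over a 0/1 database.
    db = [0, 1, 1, 0, 1, 1, 1]
    queries = [lambda d: sum(d[:3]), lambda d: sum(d[2:5]), lambda d: sum(d)]
    print(above_threshold(queries, db, threshold=4, eps=0.5))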

    Investigating biocomplexity through the agent-based paradigm.

    Capturing the dynamism that pervades biological systems requires a computational approach that can accommodate both the continuous features of the system environment and the flexible, heterogeneous nature of component interactions. This presents a serious challenge for the more traditional mathematical approaches, which assume component homogeneity to relate system observables using mathematical equations. While the homogeneity condition does not lead to loss of accuracy when simulating various continua, it fails to offer detailed solutions when applied to systems with dynamically interacting heterogeneous components. As the functionality and architecture of most biological systems are a product of multi-faceted individual interactions at the sub-system level, continuum models rarely offer much beyond qualitative similarity. Agent-based modelling is a class of algorithmic computational approaches that rely on interactions between Turing-complete finite-state machines, or agents, to simulate macroscopic properties of a system from the bottom up. In recognizing the heterogeneity condition, they offer suitable ontologies for the system components being modelled, thereby succeeding where their continuum counterparts tend to struggle. Furthermore, being inherently hierarchical, they are quite amenable to coupling with other computational paradigms. The integration of an agent-based framework with continuum models is arguably the most elegant and precise way of representing biological systems. Although still in its nascence, agent-based modelling has been used to model biological complexity across a broad range of biological scales (from cells to societies). In this article, we explore the reasons that make agent-based modelling the most precise approach for modelling biological systems that tend to be non-linear and complex.
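
    A toy illustration of the agent-based idea, with hypothetical per-agent susceptibilities and a simple infection/recovery rule, showing how a macroscopic observable (prevalence) emerges from heterogeneous local interactions; it is not a model from the article itself.

    import random

    class Agent:
        def __init__(self, susceptibility):
            self.infected = False
            self.susceptibility = susceptibility   # per-agent heterogeneity

    def step(agents, recovery_p=0.05):
        random.shuffle(agents)
        for a, b in zip(agents[::2], agents[1::2]):       # random pairwise contacts
            if a.infected and not b.infected and random.random() < b.susceptibility:
                b.infected = True
            if b.infected and not a.infected and random.random() < a.susceptibility:
                a.infected = True
        for a in agents:                                   # individual recovery
            if a.infected and random.random() < recovery_p:
                a.infected = False

    agents = [Agent(random.uniform(0.01, 0.3)) for _ in range(500)]
    agents[0].infected = True
    for t in range(100):
        step(agents)
    print("prevalence:", sum(a.infected for a in agents) / len(agents))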

    Online Model-Based Testing under Uncertainty

    Modern software systems are required to operate in a highly uncertain and changing environment. They have to control the satisfaction of their requirements at run-time, and possibly adapt to and cope with situations that have not been completely addressed at design-time. Software engineering methods and techniques are, more than ever, forced to deal with change and uncertainty (lack of knowledge) explicitly. To tackle the challenge posed by uncertainty in delivering more reliable systems, this paper proposes a novel online Model-based Testing technique that complements classic test case generation based on pseudo-random sampling strategies with an uncertainty-aware sampling strategy. To deal with system uncertainty during testing, the proposed strategy builds on an Inverse Uncertainty Quantification approach that relates the data measured at run-time (while the system executes) to a Markov Decision Process model describing the behavior of the system under test. To this purpose, a conformance game approach is adopted in which tests feed a Bayesian inference calibrator that continuously learns from test data to tune the system model and the system itself. A comparative evaluation between the proposed uncertainty-aware sampling policy and classical pseudo-random sampling policies is also presented using the Tele Assistance System running example, showing the differences in achieved accuracy and efficiency.
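
    A minimal sketch of the testing loop described above, under simplifying assumptions: a Beta posterior per testable action stands in for the Bayesian inference calibrator, and test selection favours the most uncertain action instead of pseudo-random sampling. The paper's Inverse Uncertainty Quantification and conformance-game machinery is much richer than this.

    import random

    actions = ["search", "book", "cancel"]                 # hypothetical testable actions
    posterior = {a: [1.0, 1.0] for a in actions}           # Beta(alpha, beta) per action

    def variance(alpha, beta):
        s = alpha + beta
        return alpha * beta / (s * s * (s + 1.0))

    def pick_uncertain():
        # Uncertainty-aware sampling: test the action we know least about.
        return max(actions, key=lambda a: variance(*posterior[a]))

    def run_test(action):
        # Stand-in for executing the system under test; its "true" behaviour is unknown.
        true_p = {"search": 0.95, "book": 0.70, "cancel": 0.99}[action]
        return random.random() < true_p

    for _ in range(200):
        a = pick_uncertain()
        alpha, beta = posterior[a]
        posterior[a] = [alpha + 1, beta] if run_test(a) else [alpha, beta + 1]

    for a in actions:
        alpha, beta = posterior[a]
        print(a, round(alpha / (alpha + beta), 3))         # calibrated success estimates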

    Online Markov Chain Learning for Quality of Service Engineering in Adaptive Computer Systems

    Computer systems are increasingly used in applications where the consequences of failure vary from financial loss to loss of human life. As a result, significant research has focused on the model-based analysis and verification of the compliance of business-critical and security-critical computer systems with their requirements. Many of the formalisms proposed by this research target the analysis of quality-of-service (QoS) computer system properties such as reliability, performance and cost. However, the effectiveness of such analysis or verification depends on the accuracy of the QoS models they rely upon. Building accurate mathematical models for critical computer systems is a great challenge, particularly for systems used in applications affected by frequent changes in workload, requirements and environment. In these scenarios, QoS models become obsolete unless they are continually updated to reflect the evolving behaviour of the analysed systems. This thesis introduces new techniques for learning the parameters and the structure of discrete-time Markov chains, a class of models that is widely used to establish key reliability, performance and other QoS properties of real-world systems. The new learning techniques take as input run-time observations of system events associated with costs/rewards and with transitions between the states of a model. When the model structure is known, they continually update its state transition probabilities and costs/rewards in line with the observed variations in the behaviour of the system. In scenarios where the model structure is unknown, a Markov chain is synthesised from sequences of such observations. The two categories of learning techniques underpin the operation of a new toolset for the engineering of self-adaptive service-based systems, which was developed as part of this research. The thesis introduces this software engineering toolset and demonstrates its effectiveness in a case study that involves the development of a prototype telehealth service-based system capable of continual self-verification.
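
    A minimal sketch of the parameter-learning idea, assuming traces of (state, next state, cost) observations: transition probabilities are estimated from counts and costs/rewards as running averages. The thesis techniques additionally handle continual adaptation and structure synthesis, which this toy version omits.

    from collections import defaultdict

    counts = defaultdict(lambda: defaultdict(int))
    costs = defaultdict(lambda: defaultdict(float))

    def observe(trace):
        """trace: list of (state, next_state, cost) tuples from the running system."""
        for s, s_next, c in trace:
            counts[s][s_next] += 1
            # Running average of the cost/reward attached to this transition.
            n = counts[s][s_next]
            costs[s][s_next] += (c - costs[s][s_next]) / n

    def transition_probabilities(s):
        total = sum(counts[s].values())
        return {t: n / total for t, n in counts[s].items()} if total else {}

    observe([("invoke", "ok", 1.2), ("invoke", "timeout", 5.0), ("invoke", "ok", 1.1)])
    print(transition_probabilities("invoke"))   # -> {'ok': 0.666..., 'timeout': 0.333...}
    print(dict(costs["invoke"]))                # -> average observed cost per transition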