Efficient Synthesis of Robust Models for Stochastic Systems
We describe a tool-supported method for the efficient synthesis of parametric continuous-time Markov chains (pCTMCs) that correspond to robust designs of a system under development. The pCTMCs generated by our RObust DEsign Synthesis (RODES) method are resilient to changes in the system's operational profile; satisfy strict reliability, performance and other quality constraints; and are Pareto-optimal or nearly Pareto-optimal with respect to a set of quality optimisation criteria. By integrating sensitivity analysis at designer-specified tolerance levels with Pareto optimality, RODES produces designs that are potentially slightly suboptimal in return for lower sensitivity, an acceptable trade-off in engineering practice. We demonstrate the effectiveness of our method and the efficiency of its GPU-accelerated tool support across multiple application domains by using RODES to design a producer-consumer system, a replicated file system and a workstation cluster system.
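The "nearly Pareto-optimal within a tolerance" idea can be illustrated with a minimal sketch. This is not the RODES implementation: the two minimised objectives and all numeric values below are hypothetical, and real designs would carry many more quality attributes.

```python
# Sketch of tolerance-based near-Pareto filtering (hypothetical data, not RODES).

def dominates(a, b):
    """a dominates b if a is no worse in every (minimised) objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nearly_pareto(designs, tol=0.0):
    """Keep a design unless some competitor still dominates it even after
    the competitor is worsened by `tol` in every objective.
    tol=0 yields the exact Pareto front; tol>0 admits near-optimal designs."""
    return [d for d in designs
            if not any(dominates(tuple(x + tol for x in other), d)
                       for other in designs if other is not d)]

# Each design: (cost, sensitivity), both minimised -- hypothetical values.
designs = [(1.0, 0.9), (1.1, 0.2), (1.2, 0.25), (2.0, 0.8)]
print(nearly_pareto(designs))           # exact Pareto front
print(nearly_pareto(designs, tol=0.1))  # tolerance admits (1.2, 0.25) as well
```

With `tol=0.1` the slightly costlier design `(1.2, 0.25)` survives the filter, which is the kind of "slightly suboptimal but acceptable" candidate the abstract describes.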
Shepherding Hordes of Markov Chains
This paper considers large families of Markov chains (MCs) that are defined over a set of parameters with finite discrete domains. Such families occur in software product lines, planning under partial observability, and sketching of probabilistic programs. Simple questions, like 'does at least one family member satisfy a property?', are NP-hard. We tackle two problems: distinguishing family members that satisfy a given quantitative property from those that do not, and determining a family member that satisfies the property optimally, i.e., with the highest probability or reward. We show that combining two well-known techniques, MDP model checking and abstraction refinement, mitigates the computational complexity. Experiments on a broad set of benchmarks show that in many situations, our approach is able to handle families of millions of MCs, providing superior scalability compared to existing solutions. (Full version of the TACAS'19 submission.)
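To make the "determine an optimal family member" problem concrete, here is a sketch of the naive baseline that such abstraction-based methods improve on: enumerate every member of a finite family and pick the one maximising a reachability probability. The 3-state chain and the parameter domains are hypothetical, chosen so the reachability probability has a simple closed form; this is not the paper's tool.

```python
# Brute-force enumeration over a hypothetical finite family of Markov chains.
from itertools import product

def reach_goal(p, q):
    """Probability of eventually reaching `goal` from s0 in the chain
       s0 -p-> s1, s0 -(1-p)-> fail, s1 -q-> goal, s1 -(1-q)-> s0.
       Solving x0 = p*x1 and x1 = q + (1-q)*x0 gives the closed form below."""
    return p * q / (1 - p * (1 - q))

# Finite discrete domains for the parameters (p, q): 9 family members.
family = list(product([0.5, 0.7, 0.9], repeat=2))
best = max(family, key=lambda pq: reach_goal(*pq))
print(best, round(reach_goal(*best), 4))  # (0.9, 0.9) 0.8901
```

Enumeration is exponential in the number of parameters, which is why the abstract's combination of MDP model checking and abstraction refinement matters for families with millions of members.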
ExTrA: Explaining architectural design tradeoff spaces via dimensionality reduction
In software design, guaranteeing the correctness of run-time system behavior while achieving an acceptable balance among multiple quality attributes remains a challenging problem. Moreover, providing guarantees about the satisfaction of those requirements when systems are subject to uncertain environments is even more challenging. While recent developments in architectural analysis techniques can assist architects in exploring the satisfaction of quantitative guarantees across the design space, existing approaches are still limited because they do not explicitly link design decisions to the satisfaction of quality requirements. Furthermore, the amount of information they yield can be overwhelming to a human designer, making it difficult to see the forest for the trees. In this paper we present ExTrA (Explaining Tradeoffs of software Architecture design spaces), an approach to analyzing architectural design spaces that addresses these limitations and provides a basis for explaining design tradeoffs. Our approach applies dimensionality reduction and related techniques from machine learning pipelines, namely Principal Component Analysis (PCA) and Decision Tree Learning (DTL), to enable architects to understand how design decisions contribute to the satisfaction of extra-functional properties across the design space. Our results show the feasibility of the approach in two case studies and provide evidence that combining complementary techniques like PCA and DTL is a viable way to facilitate comprehension of tradeoffs in poorly understood design spaces.
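The decision-tree side of this idea, linking a design decision to requirement satisfaction, can be sketched in a few lines: find the single feature and threshold that best separate satisfying from non-satisfying designs by information gain. This is a hedged illustration, not ExTrA itself; the feature names (`replicas`, `cache`) and the sample data are hypothetical.

```python
# One-split decision-tree learning over hypothetical design-space samples.
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

def best_split(samples, labels):
    """samples: list of feature dicts. Returns the (feature, threshold)
    pair with maximum information gain for the split feature <= threshold."""
    base, best = entropy(labels), (None, None, 0.0)
    for f in samples[0]:
        for t in sorted({s[f] for s in samples}):
            left = [l for s, l in zip(samples, labels) if s[f] <= t]
            right = [l for s, l in zip(samples, labels) if s[f] > t]
            if not left or not right:
                continue
            gain = base - (len(left) * entropy(left)
                           + len(right) * entropy(right)) / len(labels)
            if gain > best[2]:
                best = (f, t, gain)
    return best[:2]

# Does each candidate design satisfy a (hypothetical) latency requirement?
samples = [{"replicas": 1, "cache": 0}, {"replicas": 2, "cache": 0},
           {"replicas": 3, "cache": 1}, {"replicas": 4, "cache": 1}]
labels = [False, False, True, True]
print(best_split(samples, labels))  # -> ('replicas', 2)
```

The learned split ("designs with more than 2 replicas satisfy the requirement") is exactly the kind of human-readable link between a design decision and a quality requirement that the abstract argues for.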
Interval Change-Point Detection for Runtime Probabilistic Model Checking
Recent probabilistic model checking techniques can verify reliability and performance properties of software systems affected by parametric uncertainty. This involves modelling the system behaviour using interval Markov chains, i.e., Markov models with transition probabilities or rates specified as intervals. These intervals can be updated continually using Bayesian estimators with imprecise priors, enabling the verification of the system properties of interest at runtime. However, Bayesian estimators are slow to react to sudden changes in the actual value of the estimated parameters, yielding inaccurate intervals and leading to poor verification results after such changes. To address this limitation, we introduce an efficient interval change-point detection method, and we integrate it with a state-of-the-art Bayesian estimator with imprecise priors. Our experimental results show that the resulting end-to-end Bayesian approach to change-point detection and estimation of interval Markov chain parameters effectively handles a wide range of sudden changes in parameter values, and supports runtime probabilistic model checking under parametric uncertainty.
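The problem being solved can be illustrated with a much simpler change detector than the paper's method: compare the empirical mean of a Bernoulli transition parameter in two adjacent sliding windows and flag a change when they diverge. Window size, threshold, and the synthetic observation stream are all hypothetical; a detector like this would merely trigger a reset of the slow-reacting Bayesian estimator's priors.

```python
# Two-window mean-shift detector for a Bernoulli parameter (illustrative only).

def detect_change(obs, window=20, threshold=0.3):
    """Return the first index at which the mean of the last `window`
    observations differs from the mean of the preceding `window`
    by more than `threshold`; None if no change is detected."""
    for i in range(2 * window, len(obs) + 1):
        old = sum(obs[i - 2 * window:i - window]) / window
        new = sum(obs[i - window:i]) / window
        if abs(new - old) > threshold:
            return i  # detection time, shortly after the true change point
    return None

# Synthetic stream: the success rate jumps from 0.2 to 0.8 at step 50
# (a deterministic periodic stand-in for Bernoulli samples).
obs = [1 if i % 5 == 0 else 0 for i in range(50)] + \
      [0 if i % 5 == 0 else 1 for i in range(50)]
print(detect_change(obs))  # fires shortly after step 50
```

A plain Bayesian posterior over the whole history would average the pre- and post-change regimes for a long time; detecting the change and restarting estimation is what keeps the verified intervals accurate.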
Socio-Cyber-Physical Systems: Models, Opportunities, Open Challenges
Almost without exception, cyber-physical systems operate alongside, for the benefit of, and supported by humans. Unsurprisingly, disregarding their social aspects during development and operation renders these systems ineffective. In this paper, we explore approaches to modelling and reasoning about the human involvement in socio-cyber-physical systems (SCPS). To provide an unbiased perspective, we describe both the opportunities afforded by the presence of human agents, and the challenges associated with ensuring that their modelling is sufficiently accurate to support decision making during SCPS development and, if applicable, at run-time. Using SCPS examples from emergency management and assisted living, we illustrate how recent advances in stochastic modelling, analysis and synthesis can be used to exploit human observations about the impact of natural and man-made disasters, and to support the efficient provision of assistive care.