Parameterized Synthesis
We study the synthesis problem for distributed architectures with a
parametric number of finite-state components. Parameterized specifications
arise naturally in a synthesis setting, but thus far it was unclear how to
detect realizability and how to perform synthesis in a parameterized setting.
Using a classical result from verification, we show that for a class of
specifications in indexed LTL\X, parameterized synthesis in token ring networks
is equivalent to distributed synthesis in a network consisting of a few copies
of a single process. Adapting a well-known result from distributed synthesis,
we show that the latter problem is undecidable. We describe a semi-decision
procedure for the parameterized synthesis problem in token rings, based on
bounded synthesis. We extend the approach to parameterized synthesis in
token-passing networks with arbitrary topologies, and show applicability on a
simple case study. Finally, we sketch a general framework for parameterized
synthesis based on cutoffs and other parameterized verification techniques.
Comment: Extended version of TACAS 2012 paper, 29 pages
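The semi-decision procedure based on bounded synthesis can be illustrated in miniature. The sketch below is a toy, not the paper's SMT-based implementation: it brute-forces Mealy machines of increasing size against a simple trace specification (the output must repeat the previous input), stopping at the smallest bound at which the specification becomes realizable. The specification and all names are illustrative assumptions.

```python
from itertools import product

def satisfies(delta, lam, word):
    """Illustrative spec: from step 1 on, the output must equal the
    previous input (so the machine needs one bit of memory)."""
    s, prev = 0, None
    for i in word:
        out = lam[(s, i)]
        if prev is not None and out != prev:
            return False
        s, prev = delta[(s, i)], i
    return True

def bounded_synthesis(max_states=3, max_len=4):
    """Search Mealy machines of increasing size (the bound) for one that
    meets the spec on all binary input words up to length max_len."""
    words = [w for L in range(1, max_len + 1) for w in product((0, 1), repeat=L)]
    for n in range(1, max_states + 1):
        keys = [(s, i) for s in range(n) for i in (0, 1)]
        for d_vals in product(range(n), repeat=len(keys)):
            for l_vals in product((0, 1), repeat=len(keys)):
                delta = dict(zip(keys, d_vals))
                lam = dict(zip(keys, l_vals))
                if all(satisfies(delta, lam, w) for w in words):
                    return n, delta, lam
    return None  # no machine found up to the bound (semi-decision procedure)

n, delta, lam = bounded_synthesis()
print(n)  # smallest bound at which the spec is realizable: 2
```

Increasing the bound until a solution appears is what makes this a semi-decision procedure: it terminates on realizable specifications but may run forever on unrealizable ones.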
Ranking and Repulsing Supermartingales for Reachability in Probabilistic Programs
Computing reachability probabilities is a fundamental problem in the analysis
of probabilistic programs. This paper aims at a comprehensive and comparative
account of various martingale-based methods for over- and under-approximating
reachability probabilities. Building on existing work that stretches across
different communities (formal verification, control theory, etc.), we offer a
unifying account. In particular, we emphasize the role of order-theoretic fixed
points---a classic topic in computer science---in the analysis of probabilistic
programs. This perspective also leads us to two new martingale-based
techniques. We give rigorous proofs of their soundness and completeness, and we
make an experimental comparison using our implementation of template-based
synthesis algorithms for these martingales.
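As a concrete toy instance of such certificates (a sketch under simplifying assumptions, not the paper's template-based implementation): for the biased random walk `while x > 0: x -= 1 with probability 2/3, else x += 1`, the function V(x) = 3x is a ranking supermartingale, certifying that the loop reaches x <= 0 with probability 1. The check below verifies the defining inequality on a finite sample of states, using exact rational arithmetic to avoid floating-point artifacts.

```python
from fractions import Fraction

# Toy probabilistic loop:  while x > 0:  x -= 1 w.p. 2/3, else x += 1.
P_DOWN = Fraction(2, 3)

def expected_next(V, x):
    """E[V(x')] after one loop iteration from state x."""
    return P_DOWN * V(x - 1) + (1 - P_DOWN) * V(x + 1)

def is_ranking_supermartingale(V, states, eps=1):
    """Check V >= 0 and E[V(x')] <= V(x) - eps on all sampled states
    satisfying the loop guard x > 0."""
    return all(V(x) >= 0 and expected_next(V, x) <= V(x) - eps
               for x in states if x > 0)

print(is_ranking_supermartingale(lambda x: 3 * x, range(100)))  # True
print(is_ranking_supermartingale(lambda x: x, range(100)))      # False: decreases too slowly
```

Template-based synthesis, as in the paper's experiments, would instead fix a parametric shape such as V(x) = a*x + b and solve for coefficients making the inequality hold on all guard states, rather than checking a candidate on samples.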
Quantitative reactive modeling and verification
Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.
Conservative Safety Monitors of Stochastic Dynamical Systems
Generating accurate runtime safety estimates for autonomous systems is vital
to ensuring their continued proliferation. However, exhaustive reasoning about
future behaviors is generally too complex to do at runtime. To provide scalable
and formal safety estimates, we propose a method for leveraging design-time
model checking results at runtime. Specifically, we model the system as a
probabilistic automaton (PA) and compute bounded-time reachability
probabilities over the states of the PA at design time. At runtime, we combine
distributions of state estimates with the model checking results to produce a
bounded time safety estimate. We argue that our approach produces
well-calibrated safety probabilities, assuming the estimated state
distributions are well-calibrated. We evaluate our approach on simulated water
tanks.
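The runtime combination step described above reduces to a weighted sum over the state estimate. A minimal sketch, assuming a hypothetical table of design-time model-checking results (the state names and probabilities are invented for illustration):

```python
# Design-time results (illustrative): P(no unsafe state is reached within
# the time bound | the system is currently in PA state s).
safe_prob = {"low": 0.99, "nominal": 0.999, "high": 0.90}

def runtime_safety_estimate(belief, safe_prob):
    """Combine a distribution over PA states (the state estimate) with
    precomputed bounded-time safety probabilities into one estimate."""
    assert abs(sum(belief.values()) - 1.0) < 1e-9, "belief must be a distribution"
    return sum(p * safe_prob[s] for s, p in belief.items())

# Runtime state estimate (illustrative) and the resulting safety estimate.
belief = {"low": 0.1, "nominal": 0.85, "high": 0.05}
print(runtime_safety_estimate(belief, safe_prob))
```

The law of total probability is what makes the combination well-calibrated whenever the belief itself is: each design-time probability is conditioned on a state, and the belief supplies the weights.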
LNCS
We present an algorithmic method for the quantitative, performance-aware synthesis of concurrent programs. The input consists of a nondeterministic partial program and of a parametric performance model. The nondeterminism allows the programmer to omit which (if any) synchronization construct is used at a particular program location. The performance model, specified as a weighted automaton, can capture system architectures by assigning different costs to actions such as locking, context switching, and memory and cache accesses. The quantitative synthesis problem is to automatically resolve the nondeterminism of the partial program so that correctness is guaranteed and performance is optimal. As is standard for shared-memory concurrency, correctness is formalized "specification free", in particular as race freedom or deadlock freedom. For worst-case (average-case) performance, we show that the problem can be reduced to 2-player graph games (with probabilistic transitions) with quantitative objectives. While we show, using game-theoretic methods, that the synthesis problem is NEXP-complete, we present an algorithmic method and an implementation that work efficiently for concurrent programs and performance models of practical interest. We have implemented a prototype tool and used it to synthesize finite-state concurrent programs that exhibit different programming patterns, for several performance models representing different architectures.
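The reduction to 2-player graph games can be sketched with a small value iteration; this is an illustrative toy computing worst-case total cost to a target (the synthesizer minimizes, the environment maximizes), not the paper's tool or its completeness construction, and the game below is invented.

```python
INF = float("inf")

def solve_game(states, owner, edges, target, iters=100):
    """Fixed-point iteration for worst-case cost-to-target in a 2-player
    weighted graph game. owner[s] is 0 (minimizer) or 1 (maximizer);
    edges[s] is a list of (cost, successor) pairs."""
    val = {s: (0 if s in target else INF) for s in states}
    for _ in range(iters):
        pick = {0: min, 1: max}
        new = {s: (0 if s in target else
                   pick[owner[s]](c + val[t] for c, t in edges[s]))
               for s in states}
        if new == val:       # fixed point reached
            break
        val = new
    return val

states = ["s0", "s1", "s2", "goal"]
owner = {"s0": 0, "s1": 1, "s2": 0, "goal": 0}
edges = {"s0": [(1, "s1"), (5, "goal")],   # synthesizer: via s1, or pay 5 directly
         "s1": [(1, "s2"), (3, "goal")],   # environment picks the worse option
         "s2": [(1, "goal")],
         "goal": [(0, "goal")]}
print(solve_game(states, owner, edges, {"goal"}))
```

From s0 the optimal worst-case cost is 4: moving to s1 costs 1, after which the adversary can force at most 3 more, beating the direct cost of 5. In the paper's setting, minimizer choices correspond to resolving the partial program's nondeterminism and costs come from the weighted-automaton performance model.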
Virtual Node - To Achieve Temporal Isolation and Predictable Integration of Real-Time Components
We present a two-level deployment process for component models used in distributed real-time embedded systems, aiming at predictable integration of real-time components. Our main emphasis is on the new concept of a virtual node, used together with a hierarchical scheduling technique. Virtual nodes serve as a means to achieve predictable integration of software components with real-time requirements, while the hierarchical scheduling framework provides temporal isolation between components (or sets of components). Our approach permits detailed analysis of virtual nodes, e.g., with respect to timing, and this analysis remains valid when a virtual node is reused. Hence, virtual nodes preserve real-time properties across reuse and integration in different contexts.
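One way to make the temporal-isolation claim concrete is a compositional schedulability check. The abstract does not prescribe this exact formulation; the sketch below uses Shin and Lee's periodic resource model as a standard instantiation of hierarchical scheduling analysis: a virtual node guaranteed theta units of CPU every pi time units can host an EDF task set iff demand never exceeds worst-case supply. The task set is illustrative.

```python
def sbf(t, pi, theta):
    """Worst-case CPU supply of a periodic resource (pi, theta) over any
    interval of length t (Shin & Lee's periodic resource model)."""
    if t < pi - theta:
        return 0
    k = (t - (pi - theta)) // pi
    return k * theta + max(0, t - 2 * (pi - theta) - k * pi)

def dbf(t, tasks):
    """EDF demand bound of implicit-deadline periodic tasks (C, T)."""
    return sum((t // T) * C for C, T in tasks)

def fits_virtual_node(tasks, pi, theta, horizon):
    """Schedulability inside the virtual node: demand <= supply everywhere."""
    return all(dbf(t, tasks) <= sbf(t, pi, theta) for t in range(1, horizon + 1))

tasks = [(1, 5), (2, 10)]                  # (WCET, period) pairs, illustrative
print(fits_virtual_node(tasks, 5, 3, 20))  # True: 3 of every 5 time units suffice
print(fits_virtual_node(tasks, 5, 1, 20))  # False: 1 of every 5 does not
```

Because the analysis depends only on the virtual node's interface (pi, theta) and its own task set, it carries over unchanged when the virtual node is reused on a different physical node, which is the reusability property the abstract emphasizes.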
Deductive Controller Synthesis for Probabilistic Hyperproperties
Probabilistic hyperproperties specify quantitative relations between the
probabilities of reaching different target sets of states from different
initial sets of states. This class of behavioral properties is suitable for
capturing important security, privacy, and system-level requirements. We
propose a new approach to solve the controller synthesis problem for Markov
decision processes (MDPs) and probabilistic hyperproperties. Our specification
language builds on top of the logic HyperPCTL and enhances it with structural
constraints over the synthesized controllers. Our approach starts from a family
of controllers represented symbolically and defined over the same copy of an
MDP. We then introduce an abstraction refinement strategy that can relate
multiple computation trees and that we employ to prune the search space
deductively. The experimental evaluation demonstrates that the proposed
approach considerably outperforms HyperProb, a state-of-the-art SMT-based model
checking tool for HyperPCTL. Moreover, our approach is the first that can
effectively combine probabilistic hyperproperties with additional
intra-controller constraints (e.g., partial observability) as well as
inter-controller constraints (e.g., agreements on a common action).
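The shape of the synthesis problem can be sketched by brute force over a tiny controller family; the paper's contribution is precisely to avoid such enumeration via deductive pruning, and the MDP, family, and hyperproperty below are invented for illustration.

```python
def reach_prob(chain, target, iters=200):
    """Fixed-point iteration for reachability probabilities in a Markov
    chain given as chain[s] = {successor: probability}."""
    v = {s: (1.0 if s in target else 0.0) for s in chain}
    for _ in range(iters):
        v = {s: (1.0 if s in target else
                 sum(p * v[t] for t, p in chain[s].items())) for s in chain}
    return v

def induced_chain(choice):
    """Markov chain induced by one member of a (hypothetical, two-element)
    controller family; the controller only picks the action taken in s0."""
    act = {"a": {"goal": 0.5, "sink": 0.5}, "b": {"goal": 0.9, "sink": 0.1}}
    return {"s0": act[choice], "s1": {"goal": 0.6, "sink": 0.4},
            "goal": {"goal": 1.0}, "sink": {"sink": 1.0}}

# Hyperproperty: the reachability probability from initial state s0 must be
# at least the one from s1 -- a relation between two computation trees.
for choice in ("a", "b"):
    v = reach_prob(induced_chain(choice), {"goal"})
    if v["s0"] >= v["s1"]:
        print("controller:", choice)  # → controller: b
```

Both probabilities are computed under the same controller on the same MDP, which is what distinguishes a hyperproperty from two independent PCTL queries; the deductive approach prunes whole sub-families of such controllers at once instead of evaluating them one by one.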