Model checking polygonal differential inclusions using invariance kernels
Polygonal hybrid systems are a subclass of planar hybrid automata which can be represented by piecewise constant differential inclusions. Here we identify and compute an important object of such systems' phase portrait, namely invariance kernels. An invariant set is a set of initial points whose trajectories keep rotating in a cycle forever, and the invariance kernel is the largest such set. We show that this kernel is a non-convex polygon, and we give a non-iterative algorithm for computing the coordinates of its vertices and edges. Moreover, we present a breadth-first search algorithm for solving the reachability problem for such systems, in which invariance kernels play an important role.
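For concreteness, the usual viability-theoretic reading of the kernel (our formulation and notation, not a definition quoted from the abstract) is: given a constraint set K and a differential inclusion x' in F(x), the invariance kernel is the largest subset of K from which no trajectory can ever leave K.

```latex
% Hedged sketch of the standard definition (our notation):
% Inv(K) collects the points of K all of whose trajectories remain in K forever.
\[
  \mathrm{Inv}(K) \;=\; \bigl\{\, x_0 \in K \;\mid\; \text{every solution } x(\cdot)
  \text{ of } \dot{x} \in F(x),\; x(0) = x_0,
  \text{ satisfies } x(t) \in K \text{ for all } t \ge 0 \,\bigr\}
\]
```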
Quantifying Information Leaks Using Reliability Analysis
We report on our work in progress on the use of reliability analysis to quantify information leaks. In recent work we proposed a software reliability analysis technique that uses symbolic execution and model counting to quantify the probability of reaching designated program states, e.g. assertion violations, under uncertainty in the environment. The technique has many applications beyond reliability analysis, ranging from program understanding and debugging to the analysis of cyber-physical systems. In this paper we report on a novel application of the technique, namely Quantitative Information Flow (QIF) analysis. The goal of QIF is to measure the information leakage of a program using information-theoretic metrics such as Shannon entropy or Rényi entropy. We exploit the model counting engine of the reliability analyzer over symbolic program paths to compute an upper bound on the maximum leakage over all possible distributions of the confidential data. We have implemented our approach in a prototype tool, called QILURA, and explore its effectiveness on a number of case studies.
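To illustrate the kind of counting-based bound this alludes to (our notation and derivation, not taken from the paper): for a deterministic program, if model counting shows that the symbolic paths can produce at most k observationally distinct outputs, then the Shannon leakage under any prior on the secret S is at most log2 of k.

```latex
% Leakage as mutual information between secret S and observable output O.
% For a deterministic program, O is a function of S, so H(O | S) = 0, and the
% entropy of a variable with at most k possible values is at most log2 k.
\[
  \mathcal{L} \;=\; I(S;O) \;=\; H(O) - H(O \mid S) \;=\; H(O) \;\le\; \log_2 k
\]
```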
Model checking usage policies
We study usage automata, a formal model for specifying policies on the usage of resources. Usage automata extend finite state automata with additional features, parameters and guards, that improve their expressivity. We show that usage automata are expressive enough to model policies of real-world applications. We discuss their expressive power, and we prove that the problem of deciding whether a computation complies with a usage policy is decidable. The main contribution of this paper is a model checking technique for usage automata. The model is that of usages, i.e. basic processes that describe the possible patterns of resource access and creation. Although the model has infinitely many states, because of recursion and resource creation, we devise a polynomial-time model checking technique for deciding when a usage complies with a usage policy.
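As a rough illustration of checking a parametric usage policy over a trace of resource accesses (our simplification, not the paper's formalism; the policy, event names, and helper functions below are illustrative):

```python
# A minimal sketch of a parametric policy check: the policy automaton is run
# separately over each resource's projected sub-trace, with guarded transitions.

def complies(trace, policy_step, initial_state):
    """Return True iff no resource's sub-trace drives the policy to 'violation'."""
    states = {}  # current policy state per resource
    for action, resource in trace:
        state = states.get(resource, initial_state)
        state = policy_step(state, action)
        if state == "violation":
            return False
        states[resource] = state
    return True

def open_before_read(state, action):
    # Guard: reading a resource that was never opened is a violation.
    if state == "closed" and action == "read":
        return "violation"
    if action == "open":
        return "open"
    if action == "close":
        return "closed"
    return state

trace = [("open", "f1"), ("read", "f1"), ("read", "f2")]  # f2 is read before open
print(complies(trace, open_before_read, "closed"))        # -> False
```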
Model Checking Parse Trees
Parse trees are fundamental syntactic structures in both computational linguistics and compiler construction. We argue in this paper that, in both fields, there are good incentives for model-checking sets of parse trees for some word according to a context-free grammar. We put forward the adequacy of propositional dynamic logic (PDL) on trees in these applications, and study as a sanity check the complexity of the corresponding model-checking problem: although complete for exponential time in the general case, we find natural restrictions on grammars for our applications and establish complexities ranging from nondeterministic polynomial time to polynomial space in the relevant cases.
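As a rough illustration of the kind of property one can state with PDL on trees (our example, written with the common child-step modality, not a formula from the paper): a parse tree satisfies the formula below at its root iff some node reachable by child steps (possibly the root itself) is labelled NP and has a child labelled PP.

```latex
% <down*> navigates to any node reachable by repeated child steps;
% <down> takes a single child step.
\[
  \langle \mathord{\downarrow}^{*} \rangle
  \left( \mathit{NP} \,\wedge\, \langle \mathord{\downarrow} \rangle\, \mathit{PP} \right)
\]
```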
Model checking embedded system designs
We survey the basic principles behind the application of model checking to controller verification and synthesis. A promising development is the area of guided model checking, in which the state-space search strategy of the model checking algorithm can be influenced to visit more interesting sets of states first. In particular, we discuss how model checking can be combined with heuristic cost functions to guide search strategies. Finally, we list a number of current research developments, especially in the area of reachability analysis for optimal control and related issues.
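A minimal sketch of guided model checking as heuristic best-first search (our illustration of combining a cost function with state exploration; `successors`, `is_error`, and `heuristic` are assumed, problem-specific hooks, not an API from the paper):

```python
import heapq

def guided_search(initial_state, successors, is_error, heuristic):
    """Explore states in order of estimated distance to an error state."""
    frontier = [(heuristic(initial_state), 0, initial_state)]
    visited = {initial_state}
    tie = 0  # tie-breaker so heapq never compares states directly
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_error(state):
            return state          # counterexample state found
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                tie += 1
                heapq.heappush(frontier, (heuristic(nxt), tie, nxt))
    return None                   # no error state reachable
```

With a constant heuristic this degenerates to plain breadth-first-style exploration; a good cost function steers the search toward the "interesting" states first, which is the point of the guided approach.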
Model Checking Linear Logic Specifications
The overall goal of this paper is to investigate the theoretical foundations of algorithmic verification techniques for first-order linear logic specifications. The fragment of linear logic we consider in this paper is based on the linear logic programming language called LO enriched with universally quantified goal formulas. Although LO was originally introduced as a theoretical foundation for extensions of logic programming languages, it can also be viewed as a very general language for specifying a wide range of infinite-state concurrent systems.
Our approach is based on the relation between backward reachability and provability highlighted in our previous work on propositional LO programs. Following this line of research, we define here a general framework for the bottom-up evaluation of first-order linear logic specifications. The evaluation procedure is based on an effective fixpoint operator working on a symbolic representation of infinite collections of first-order linear logic formulas. The theory of well quasi-orderings can be used to provide sufficient conditions for the termination of the evaluation of non-trivial fragments of first-order linear logic.
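A minimal sketch of the general backward-reachability scheme as a symbolic fixpoint (our illustration of the generic saturation loop, not the paper's LO-specific operator; `pre` and `entails` are assumed hooks: `pre` maps a set of symbolic constraints to constraints denoting their predecessors, and `entails(c, d)` checks whether c is subsumed by d):

```python
def backward_reachability(unsafe, pre, entails):
    """Saturate the set of constraints whose denotations can reach `unsafe`."""
    frontier = list(unsafe)
    reached = list(unsafe)
    while frontier:
        new = []
        for c in pre(frontier):
            # keep c only if it is not already subsumed by a known constraint
            if not any(entails(c, d) for d in reached):
                new.append(c)
        reached.extend(new)
        frontier = new  # if subsumption is a well quasi-ordering, this empties eventually
    return reached
```

The termination comment reflects the standard argument: when the subsumption ordering on symbolic constraints is a well quasi-ordering, only finitely many non-subsumed constraints can ever be added, so the saturation loop stops.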
