On Relaxing Metric Information in Linear Temporal Logic
Metric LTL formulas rely on the next operator to encode time distances,
whereas qualitative LTL formulas use only the until operator. This paper shows
how to transform any metric LTL formula M into a qualitative formula Q, such
that Q is satisfiable if and only if M is satisfiable over words with
variability bounded with respect to the largest distances used in M (i.e.,
occurrences of next), but the size of Q is independent of such distances.
Besides the theoretical interest, this result can help simplify the
verification of systems with time-granularity heterogeneity, where large
distances are required to express the coarse-grain dynamics in terms of
fine-grain time units.
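The contrast between metric and qualitative formulas can be made concrete with a toy evaluator over finite words. This is a hedged illustration of standard LTL semantics only, not the paper's transformation; the function names are our own:

```python
# Toy finite-word LTL checks: "next" chains pin down exact distances,
# while "until" only asks for eventual satisfaction, qualitatively.

def holds_next(word, i, k, p):
    """Metric style: p holds exactly k steps after position i (X^k p)."""
    return i + k < len(word) and p(word[i + k])

def holds_until(word, i, p, q):
    """Qualitative style: p U q -- q eventually holds, p holds until then."""
    for j in range(i, len(word)):
        if q(word[j]):
            return True
        if not p(word[j]):
            return False
    return False

word = ["a", "a", "a", "b"]
print(holds_next(word, 0, 3, lambda x: x == "b"))   # b at distance exactly 3
print(holds_until(word, 0, lambda x: x == "a", lambda x: x == "b"))
```

Note how the metric check hard-codes the distance 3, which is exactly the kind of information the paper's transformation relaxes.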
Robust Linear Temporal Logic
Although it is widely accepted that every system should be robust, in the
sense that "small" violations of environment assumptions should lead to "small"
violations of system guarantees, it is less clear how to make this intuitive
notion of robustness mathematically precise. In this paper, we address this
problem by developing a robust version of Linear Temporal Logic (LTL), which we
call robust LTL and denote by rLTL. Formulas in rLTL are syntactically
identical to LTL formulas but are endowed with a many-valued semantics that
encodes robustness. In particular, the semantics of an rLTL formula is such that a "small" violation of the environment
assumption is guaranteed to only produce a "small" violation of the
system guarantee. In addition to introducing rLTL, we study the
verification and synthesis problems for this logic: similarly to LTL, we show
that both problems are decidable, that the verification problem can be solved
in time exponential in the number of subformulas of the rLTL formula at hand,
and that the synthesis problem can be solved in doubly exponential time.
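The many-valued semantics can be sketched for a single case, the robust reading of "always p" on an ultimately periodic word. The five truth degrees below follow the standard rLTL convention of 4-bit values; the lasso-word representation and function name are our own illustration, not the paper's definitions:

```python
def robust_always(prefix, loop, p):
    """Degree to which 'always p' holds on the lasso word prefix . loop^omega.
    Returns one of the five rLTL truth values as a 4-bit string, ordered
    from full satisfaction down to complete violation."""
    if all(p(x) for x in prefix) and all(p(x) for x in loop):
        return "1111"   # G p: p holds everywhere
    if all(p(x) for x in loop):
        return "0111"   # F G p: p fails only finitely often (in the prefix)
    if any(p(x) for x in loop):
        return "0011"   # G F p: p still holds infinitely often
    if any(p(x) for x in prefix):
        return "0001"   # F p: p holds at least once
    return "0000"       # p never holds

is_p = lambda x: x == "p"
print(robust_always(["q"], ["p"], is_p))  # prefix violation only: "0111"
```

The graded return values are what lets a small assumption violation degrade a guarantee gradually rather than collapsing it to false.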
Improving explicit model checking for Petri nets
Model checking is an automated verification technique that systematically checks whether a given behavioral property holds for a given model of a system. We use Petri nets and temporal logic as formalisms to describe a system and its behavior in a mathematically precise and unambiguous manner. The contributions of this thesis are concerned with the improvement of model checking efficiency both in theory and in practice. We present two new reduction techniques and several supplementary strength reduction techniques. The thesis also enhances partial order reduction for certain temporal logic classes.
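As background for readers unfamiliar with the formalism, the firing rule of a place/transition net can be sketched in a few lines. This is a minimal generic illustration, not code from the thesis:

```python
def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire an enabled transition: consume input tokens, produce output tokens."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Example: transition t moves one token from place p1 to place p2.
m0 = {"p1": 1, "p2": 0}
pre, post = {"p1": 1}, {"p2": 1}
if enabled(m0, pre):
    m1 = fire(m0, pre, post)
```

Explicit model checking explores the reachable markings generated by repeatedly applying this rule, which is why the reduction techniques above matter for efficiency.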
Being correct is not enough: efficient verification using robust linear temporal logic
While most approaches in formal methods address system correctness, ensuring
robustness has remained a challenge. In this paper we present and study the
logic rLTL which provides a means to formally reason about both correctness and
robustness in system design. Furthermore, we identify a large fragment of rLTL
for which the verification problem can be solved efficiently: verification can be
done using an automaton that recognizes the behaviors described by the rLTL
formula and whose size improves upon the previously known bound for rLTL
verification, bringing it closer to the classical LTL bound. The usefulness of
this fragment is demonstrated by a number of case studies showing its practical
significance in terms of expressiveness, the ability to describe robustness,
and the fine-grained information that rLTL brings to the process of system
verification. Moreover, these advantages come at a low computational overhead
with respect to LTL verification. Accepted to appear in ACM Transactions on Computational Logic.
Formal Methods for Autonomous Systems
Formal methods refer to rigorous, mathematical approaches to system
development and have played a key role in establishing the correctness of
safety-critical systems. The main building blocks of formal methods are models
and specifications, which are analogous to behaviors and requirements in system
design and give us the means to verify and synthesize system behaviors with
formal guarantees.
This monograph provides a survey of the current state of the art on
applications of formal methods in the autonomous systems domain. We consider
correct-by-construction synthesis under various formulations, including closed
systems, reactive, and probabilistic settings. Beyond synthesizing systems in
known environments, we address the concept of uncertainty and bound the
behavior of systems that employ learning using formal methods. Further, we
examine the synthesis of systems with monitoring, a mitigation technique for
ensuring that once a system deviates from expected behavior, it knows a way of
returning to normalcy. We also show how to overcome some limitations of formal
methods themselves with learning. We conclude with future directions for formal
methods in reinforcement learning, uncertainty, privacy, explainability of
formal methods, and regulation and certification.
Abstractions and optimisations for model-checking software-defined networks
Software-Defined Networking introduces a new programmatic abstraction layer by shifting distributed network functions (NFs) from silicon chips (ASICs) to a logically centralized controller program. Yet controller programs are a common source of bugs that can cause performance degradation, security exploits and poor reliability in networks. Assuring that a controller program satisfies its specification is therefore highly desirable, yet the size of the network and the complexity of the controller make this a challenging effort.
This thesis presents a highly expressive, optimised SDN model (code-named MoCS) that can be reasoned about and verified formally in an acceptable timeframe. In it, we introduce reusable abstractions that (i) come with a rich semantics, capturing subtle real-world bugs that are hard to track down, and (ii) are formally proved correct. In addition, MoCS deals with timeouts of flow table entries, thus supporting automatic state refresh (soft state) in the network. The optimisations are achieved by (1) contextually analysing the model for possible partial order reductions in view of the concrete control program, network topology and specification property in question, (2) pre-computing packet equivalence classes, and (3) indexing packets and rules that exist in the model and bit-packing (compressing) them.
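Optimisation (2) can be illustrated with a hedged sketch: packets that match exactly the same set of rules behave identically in the model, so one representative per class suffices. The grouping below is our own illustration, not MoCS's actual implementation:

```python
from collections import defaultdict

def equivalence_classes(packets, rules):
    """Group packets by which rules match them; model checking need only
    explore one representative packet per class."""
    classes = defaultdict(list)
    for pkt in packets:
        signature = tuple(bool(rule(pkt)) for rule in rules)
        classes[signature].append(pkt)
    return list(classes.values())

# Example: two rules over (src, dst) pairs; hosts h1..h4 are hypothetical.
rules = [lambda p: p[0] == "h1", lambda p: p[1] == "h2"]
packets = [("h1", "h2"), ("h1", "h3"), ("h4", "h2"), ("h1", "h2")]
classes = equivalence_classes(packets, rules)  # three classes, not four
```

Collapsing the packet space this way is what keeps the state space tractable as topologies grow.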
Each of these developments is demonstrated by a set of real-world controller programs that have been implemented in network topologies of varying size, and publicly released under an open-source license.
Towards the Correctness of Software Behavior in UML: A Model Checking Approach Based on Slicing
Embedded systems are systems which have ongoing interactions with their environments, accepting requests and producing responses. Such systems are increasingly used in applications where failure is unacceptable: traffic control systems, avionics, automobiles, etc. Correct and highly dependable construction of such systems is particularly important and challenging. A very promising and increasingly attractive method for achieving this goal is formal verification. A formal verification method consists of three major components: a model for describing the behavior of the system, a specification language to embody correctness requirements, and an analysis method to verify the behavior against the correctness requirements. This Ph.D. thesis addresses the correctness of the behavioral design of embedded systems, using model checking as the verification technology. More precisely, we present a UML-based verification method that checks whether the conditions on the evolution of the embedded system are met by the model. Unfortunately, model checking is limited to medium-sized systems because of its high space requirements. To overcome this problem, this thesis proposes integrating a slicing (reduction) technique.
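The slicing idea can be sketched generically: starting from the model elements that the property mentions, keep only the elements they transitively depend on and prune everything else before model checking. The dependency-graph representation and names below are assumed for illustration, not taken from the thesis:

```python
def backward_slice(deps, criterion):
    """Return the set of nodes the slicing criterion transitively depends on.
    deps maps each node to the nodes it directly depends on."""
    keep, stack = set(), list(criterion)
    while stack:
        n = stack.pop()
        if n not in keep:
            keep.add(n)
            stack.extend(deps.get(n, ()))
    return keep

# Example: the checked property refers only to state "s3".
deps = {"s3": ["s1"], "s1": [], "s2": ["s1"], "s4": ["s2"]}
sliced = backward_slice(deps, ["s3"])  # s2 and s4 are pruned
```

Pruning elements irrelevant to the property shrinks the state space, which is exactly how slicing mitigates model checking's space requirements.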