
    A new approach for diagnosability analysis of Petri nets using Verifier Nets

    In this paper, we analyze the diagnosability properties of labeled Petri nets. We consider the standard notion of diagnosability of languages, requiring that every occurrence of an unobservable fault event eventually be detected, as well as the stronger notion of diagnosability in K steps, where the detection must occur within a fixed bound of K event occurrences after the fault. We give necessary and sufficient conditions for these two notions of diagnosability for both bounded and unbounded Petri nets and then present an algorithmic technique for testing the conditions based on linear programming. Our approach is novel and is based on the analysis of the reachability/coverability graph of a special Petri net, called the Verifier Net, that is built from the Petri net model of the given system. In the case of systems that are diagnosable in K steps, we give a procedure to compute the bound K. To the best of our knowledge, this is the first time that necessary and sufficient conditions for diagnosability and diagnosability in K steps of labeled unbounded Petri nets are presented.
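
    The verifier idea can be illustrated on ordinary finite automata before being lifted to Petri nets: two copies of the model are synchronized on their observable labels, and diagnosability fails exactly when an ambiguous cycle (one copy faulty, one fault-free) is reachable. The sketch below is a minimal illustration of that twin-plant construction under the usual liveness assumptions; it is not the paper's Verifier Net construction, and the transition-table encoding and all names are assumptions made here for illustration.

```python
from collections import deque
from itertools import product

def build_verifier(trans, init, obs, fault):
    """Synchronize two copies of the model on observable events only.
    A verifier state is ((x1, f1), (x2, f2)), where fi records whether
    copy i has already executed the fault.
    trans: dict state -> list of (event, next_state)."""
    start = ((init, False), (init, False))
    edges, queue, seen = {}, deque([start]), {start}
    while queue:
        (x1, f1), (x2, f2) = v = queue.popleft()
        succs = []
        # each copy moves alone on its unobservable events
        for e, y in trans.get(x1, []):
            if e not in obs:
                succs.append(((y, f1 or e == fault), (x2, f2)))
        for e, y in trans.get(x2, []):
            if e not in obs:
                succs.append(((x1, f1), (y, f2 or e == fault)))
        # both copies move together on a common observable event
        for (e1, y1), (e2, y2) in product(trans.get(x1, []), trans.get(x2, [])):
            if e1 in obs and e1 == e2:
                succs.append(((y1, f1), (y2, f2)))
        edges[v] = succs
        for s in succs:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return edges

def is_diagnosable(trans, init, obs, fault):
    """Not diagnosable iff the verifier has a reachable cycle through
    'ambiguous' states, i.e. states where exactly one copy has seen the fault."""
    edges = build_verifier(trans, init, obs, fault)
    ambiguous = {v for v in edges if v[0][1] != v[1][1]}
    sub = {v: [s for s in edges[v] if s in ambiguous] for v in ambiguous}
    color = {}
    def has_cycle(v):
        color[v] = "grey"
        for s in sub[v]:
            if color.get(s) == "grey" or (s not in color and has_cycle(s)):
                return True
        color[v] = "black"
        return False
    return not any(v not in color and has_cycle(v) for v in ambiguous)

# toy model: after the unobservable fault "f", the system keeps producing the
# same observation "a" as the fault-free behaviour, so the fault is undetectable
trans = {0: [("f", 1), ("a", 0)], 1: [("a", 1)]}
print(is_diagnosable(trans, 0, obs={"a"}, fault="f"))  # -> False
```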

    A survey on efficient diagnosability tests for automata and bounded Petri nets

    This paper presents a survey and evaluation of the efficiency of polynomial diagnosability algorithms for systems modeled by Petri nets and automata. A modified verification algorithm that reduces the state space by exploiting symmetry and abstracting unobservable transitions is also proposed. We show the impact of minimal explanations on the performance of diagnosability verifiers. Different verifiers are compared in terms of state space and elapsed time. It is shown that the minimal explanation notion involved in the modified basis reachability graph, a graph presented by Cabasino et al. [3] for diagnosability analysis of Petri nets, also has a great impact on automata-based diagnosability methods. The evaluation often shows computation times improved by a factor of 1000 or more when the concept of minimal explanation is included in the computation.
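
    To give an idea of what a minimal explanation is: for a marking M and an observable transition t, it is a firing vector of unobservable transitions that is just sufficient to enable t, with no smaller vector doing so. The sketch below enumerates such vectors by a bounded breadth-first search; the Petri net encoding (pre/post conditions as place-to-weight dictionaries) and the depth bound are assumptions made here for illustration, not the constructions evaluated in the paper. In the basis-reachability literature the unobservable subnet is typically assumed acyclic, which makes an artificial bound unnecessary.

```python
from collections import Counter, deque

def enabled(marking, pre):
    """Check whether a transition with precondition `pre` is enabled."""
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume the precondition, produce the postcondition."""
    m = dict(marking)
    for p, w in pre.items():
        m[p] = m.get(p, 0) - w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

def minimal_explanations(marking, t_obs, unobs, depth=6):
    """Pareto-minimal firing-count vectors of unobservable transitions that
    enable the observable transition t_obs from `marking`.
    unobs: dict name -> (pre, post); t_obs: (pre, post).
    The depth bound only keeps this toy search finite."""
    found = []
    queue = deque([(marking, Counter())])
    while queue:
        m, count = queue.popleft()
        if enabled(m, t_obs[0]):
            # discard vectors dominated by an explanation already found
            if not any(all(c[k] <= count[k] for k in c) for c in found):
                found.append(count)
            continue
        if sum(count.values()) >= depth:
            continue
        for name, (pre, post) in unobs.items():
            if enabled(m, pre):
                queue.append((fire(m, pre, post), count + Counter({name: 1})))
    return found

# toy net: unobservable u moves the token from p1 to p2, and the observable
# transition t needs a token in p2, so its only minimal explanation is {u: 1}
unobs = {"u": ({"p1": 1}, {"p2": 1})}
t = ({"p2": 1}, {"p3": 1})
print(minimal_explanations({"p1": 1}, t, unobs))  # -> [Counter({'u': 1})]
```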

    Incremental fault diagnosability and security/privacy verification

    Dynamical systems can be classified into two groups. One group is continuous-time systems, which describe physical system behavior and are therefore typically modeled by differential equations. The other group is discrete event systems (DESs), which represent the sequential and logical behavior of a system and are therefore modeled by discrete state/event models.

    DESs are widely used for formal verification and enforcement of desired behaviors in embedded systems. Such systems are naturally prone to faults, and knowledge about each single fault is crucial from a safety and economic point of view. Fault diagnosability verification, i.e. the ability to deduce the occurrence of all failures, is one of the problems investigated in this thesis. Another verification problem addressed in this thesis is security/privacy. The two notions that lie within this category, current-state opacity and current-state anonymity, have attracted great attention in recent years due to the progress of communication networks and mobile devices.

    Usually, DESs are modular and consist of interacting subsystems. The interaction is achieved by means of synchronous composition of these components. This synchronization results in large monolithic models of the total DES. In addition, the complex computations related to each specific verification problem add even more computational complexity, resulting in the well-known state-space explosion problem.

    To circumvent the state-space explosion problem, one efficient approach is to exploit the modular structure of systems and apply incremental abstraction. In this thesis, a unified abstraction method that preserves temporal logic properties and possible silent loops is presented. The abstraction method is applied incrementally to the local subsystems, and it is proved that this abstraction preserves the main characteristics of the system that need to be verified.

    The existence of shared unobservable events means that ordinary incremental abstraction does not work for security/privacy verification of modular DESs. To solve this problem, a combined incremental abstraction and observer generation is proposed and analyzed. Evaluations show the great impact of the proposed incremental abstraction on diagnosability and security/privacy verification, as well as on verification of generic safety and liveness properties. Thus, this incremental strategy makes formal verification of large complex systems feasible.
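
    Two of the basic constructions the thesis builds on can be sketched compactly: synchronous composition of modular components (shared events synchronize, private events interleave) and observer generation (a subset construction over the observable events), which underlies current-state opacity and anonymity checks. The encoding below, an automaton as an initial state plus a transition table, and all helper names are assumptions made here for illustration; the thesis itself works with incremental abstractions of such models rather than the monolithic constructions shown.

```python
from collections import deque
from itertools import product

def compose(a, b, shared):
    """Synchronous composition: shared events fire jointly, private events
    interleave. An automaton is (initial_state, {state: [(event, next_state)]})."""
    (ia, ta), (ib, tb) = a, b
    init = (ia, ib)
    trans, queue, seen = {}, deque([init]), {init}
    while queue:
        x, y = s = queue.popleft()
        succs = []
        for e, x2 in ta.get(x, []):
            if e not in shared:
                succs.append((e, (x2, y)))
        for e, y2 in tb.get(y, []):
            if e not in shared:
                succs.append((e, (x, y2)))
        for (e1, x2), (e2, y2) in product(ta.get(x, []), tb.get(y, [])):
            if e1 in shared and e1 == e2:
                succs.append((e1, (x2, y2)))
        trans[s] = succs
        for _, s2 in succs:
            if s2 not in seen:
                seen.add(s2)
                queue.append(s2)
    return init, trans

def observer(a, observable):
    """Subset construction: the estimator state is the set of states the
    system could be in after each observed event."""
    init_state, trans = a

    def closure(states):
        # states reachable via unobservable events only
        stack, seen = list(states), set(states)
        while stack:
            x = stack.pop()
            for e, y in trans.get(x, []):
                if e not in observable and y not in seen:
                    seen.add(y)
                    stack.append(y)
        return frozenset(seen)

    init = closure({init_state})
    obs_trans, queue, seen = {}, deque([init]), {init}
    while queue:
        s = queue.popleft()
        succs = {}
        for x in s:
            for e, y in trans.get(x, []):
                if e in observable:
                    succs.setdefault(e, set()).add(y)
        obs_trans[s] = {e: closure(ys) for e, ys in succs.items()}
        for s2 in obs_trans[s].values():
            if s2 not in seen:
                seen.add(s2)
                queue.append(s2)
    return init, obs_trans
```

    Current-state opacity can then be checked over the observer: the system is opaque if no reachable observer state is contained entirely in the set of secret states.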

    Verification of diagnosability based on compositional branching bisimulation

    This paper presents an efficient diagnosability verification technique based on a general abstraction approach. We exploit branching bisimulation with explicit divergence (BBED), which preserves the temporal logic property used to verify diagnosability. Furthermore, using compositional abstraction for modular diagnosability verification offers additional state-space reduction in comparison to state-of-the-art techniques.
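
    As a rough intuition for the kind of reduction involved, the sketch below implements naive partition refinement for strong bisimulation; branching bisimulation with explicit divergence additionally abstracts silent (tau) steps while preserving divergences (silent loops), which requires the more involved Groote-Vaandrager-style algorithms used in practice. The labeled-transition-system encoding and all names are assumptions made here for illustration.

```python
def bisimulation_quotient(states, trans):
    """trans: dict state -> list of (event, next_state).
    Returns a partition of `states` into strong-bisimulation classes."""
    # start with one block containing all states, then split until stable
    partition = [set(states)]
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        # a state's signature: which blocks it can reach, per event
        def signature(s):
            return frozenset((e, block_of[t]) for e, t in trans.get(s, []))
        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            if len(groups) > 1:
                changed = True
            new_partition.extend(groups.values())
        partition = new_partition
    return partition

# three states that can only loop on 'a' forever are all bisimilar
trans = {0: [("a", 1)], 1: [("a", 0)], 2: [("a", 2)]}
print(bisimulation_quotient([0, 1, 2], trans))  # -> [{0, 1, 2}]
```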