    INCREMENTAL FAULT DIAGNOSABILITY AND SECURITY/PRIVACY VERIFICATION

    Dynamical systems can be classified into two groups. One group is continuous-time systems, which describe the physical system behavior and are therefore typically modeled by differential equations. The other group is discrete event systems (DESs), which represent the sequential and logical behavior of a system and are therefore modeled by discrete state/event models.

    DESs are widely used for formal verification and enforcement of desired behaviors in embedded systems. Such systems are naturally prone to faults, and knowledge about each single fault is crucial from a safety and economic point of view. Fault diagnosability verification, i.e., the ability to deduce the occurrence of all failures, is one of the problems investigated in this thesis. Another verification problem addressed in this thesis is security/privacy. Two notions in this category, current-state opacity and current-state anonymity, have attracted great attention in recent years due to the progress of communication networks and mobile devices.

    Usually, DESs are modular and consist of interacting subsystems. The interaction is achieved by means of synchronous composition of these components. This synchronization results in large monolithic models of the total DES. Moreover, the complex computations related to each specific verification problem add even more computational complexity, resulting in the well-known state-space explosion problem.

    To circumvent the state-space explosion problem, one efficient approach is to exploit the modular structure of systems and apply incremental abstraction. In this thesis, a unified abstraction method that preserves temporal logic properties and possible silent loops is presented. The abstraction method is applied incrementally on the local subsystems, and it is proved that this abstraction preserves the main characteristics of the system that need to be verified. The existence of shared unobservable events means that ordinary incremental abstraction does not work for security/privacy verification of modular DESs. To solve this problem, a combined incremental abstraction and observer generation is proposed and analyzed. Evaluations show the great impact of the proposed incremental abstraction on diagnosability and security/privacy verification, as well as on verification of generic safety and liveness properties. Thus, this incremental strategy makes formal verification of large complex systems feasible.
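    As a rough illustration of the modular setting described above, the sketch below composes two DES components synchronously and then builds an observer automaton over the observable events via subset construction, the basic building blocks behind diagnosability and opacity verification. The data representation (dictionaries with "states", "events", "trans", "init" keys) and the function names `sync` and `observer` are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch of synchronous composition and observer construction for
# deterministic DES components. All names and the encoding are assumptions.

def sync(A, B):
    """Synchronous composition: shared events move jointly, private events interleave."""
    shared = A["events"] & B["events"]
    states, trans = set(), {}
    init = (A["init"], B["init"])
    stack = [init]
    while stack:
        q = stack.pop()
        if q in states:
            continue
        states.add(q)
        qa, qb = q
        for e in A["events"] | B["events"]:
            ta = A["trans"].get((qa, e))
            tb = B["trans"].get((qb, e))
            if e in shared:
                nxt = (ta, tb) if ta is not None and tb is not None else None
            elif e in A["events"]:
                nxt = (ta, qb) if ta is not None else None
            else:
                nxt = (qa, tb) if tb is not None else None
            if nxt is not None:
                trans[(q, e)] = nxt
                stack.append(nxt)
    return {"states": states, "events": A["events"] | B["events"],
            "trans": trans, "init": init}

def observer(G, observable):
    """Subset construction over observable events; unobservable moves are absorbed."""
    def ureach(S):  # states reachable via unobservable events only
        S, stack = set(S), list(S)
        while stack:
            q = stack.pop()
            for e in G["events"] - observable:
                nxt = G["trans"].get((q, e))
                if nxt is not None and nxt not in S:
                    S.add(nxt)
                    stack.append(nxt)
        return frozenset(S)

    init = ureach({G["init"]})
    states, trans, stack = {init}, {}, [init]
    while stack:
        S = stack.pop()
        for e in observable:
            T = {G["trans"][(q, e)] for q in S if (q, e) in G["trans"]}
            if T:
                T = ureach(T)
                trans[(S, e)] = T
                if T not in states:
                    states.add(T)
                    stack.append(T)
    return {"states": states, "trans": trans, "init": init}
```

    The observer's states are estimates of the set of states the system may currently be in; current-state opacity, for instance, fails exactly when some observer state contains only secret states.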

    Probabilistic Opacity for Markov Decision Processes

    Opacity is a generic security property that has been defined on (non-probabilistic) transition systems and later on Markov chains with labels. For a secret predicate, given as a subset of runs, and a function describing the view of an external observer, the value of interest for opacity is a measure of the set of runs disclosing the secret. We extend this definition to the richer framework of Markov decision processes, where nondeterministic choice is combined with probabilistic transitions, and we study related decidability problems with partial or complete observation hypotheses for the schedulers. We prove that all questions are decidable with complete observation and ω-regular secrets. With partial observation, we prove that all quantitative questions are undecidable, but the question whether a system is almost surely non-opaque becomes decidable for a restricted class of ω-regular secrets, as well as for all ω-regular secrets under finite-memory schedulers.
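    As a bounded-horizon illustration of this opacity measure, the sketch below enumerates the runs of a toy labeled Markov chain (a chain rather than a full MDP, to keep it short), groups them by observation, and sums the probability of the runs whose entire observation class satisfies the secret. The chain, labels, horizon, and secret predicate are invented toy data, not taken from the paper.

```python
# Toy computation of the disclosure measure for a labeled Markov chain
# over a fixed finite horizon. All data below are illustrative assumptions.
from collections import defaultdict

P = {"s0": [("s1", 0.5), ("s2", 0.5)],   # transition probabilities
     "s1": [("s1", 1.0)],
     "s2": [("s2", 1.0)]}
obs = {"s0": "a", "s1": "b", "s2": "c"}  # the observer sees only these labels
secret = lambda run: "s1" in run          # secret predicate on runs
HORIZON = 3

def runs_with_prob(state, depth):
    """Enumerate all runs of the given length together with their probabilities."""
    if depth == 0:
        yield (state,), 1.0
        return
    for nxt, p in P[state]:
        for tail, q in runs_with_prob(nxt, depth - 1):
            yield (state,) + tail, p * q

# A run discloses the secret iff every run with the same observation is secret.
classes = defaultdict(list)
for run, p in runs_with_prob("s0", HORIZON):
    classes[tuple(obs[s] for s in run)].append((run, p))

disclosure = sum(p for cls in classes.values()
                 if all(secret(run) for run, _ in cls)
                 for run, p in cls)
print(f"probability of disclosing runs: {disclosure}")
```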

    Verifying Opacity Properties in Security Systems

    We delineate a methodology for the specification and verification of flow security properties expressible in the opacity framework. We propose a logic, opacTL, for straightforwardly expressing such properties in systems that can be modelled as partially observable labelled transition systems. We develop verification techniques for analysing property opacity with respect to observation notions. Adding a probabilistic operator to the specification language enables quantitative analysis and verification. This analysis is implemented as an extension to the PRISM model checker and illustrated via a number of examples. Finally, an alternative approach to quantifying the opacity property based on entropy is sketched.
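    To give a flavor of the entropy-based alternative mentioned at the end, the sketch below computes the conditional Shannon entropy of a secret given the observations for a toy joint distribution; the less this residual uncertainty drops relative to the prior, the more opaque the system. The distribution and names are illustrative assumptions and do not reproduce the paper's opacTL or PRISM machinery.

```python
# Entropy-style quantification on an invented joint distribution over
# (secret, observation) pairs. Purely illustrative toy data.
from math import log2
from collections import defaultdict

joint = {(True, "o1"): 0.25, (False, "o1"): 0.25,
         (True, "o2"): 0.10, (False, "o2"): 0.40}

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Marginals over the secret and over the observations.
p_secret, p_obs = defaultdict(float), defaultdict(float)
for (s, o), p in joint.items():
    p_secret[s] += p
    p_obs[o] += p

# Conditional entropy H(Secret | Observation): expected posterior uncertainty.
h_cond = 0.0
for o, po in p_obs.items():
    posterior = {s: joint[(s, o)] / po for s in p_secret if (s, o) in joint}
    h_cond += po * entropy(posterior)

print(f"H(Secret)       = {entropy(p_secret):.3f} bits")
print(f"H(Secret | Obs) = {h_cond:.3f} bits  # what the observer still does not know")
```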
