Transforming opacity verification to nonblocking verification in modular systems
We consider the verification of current-state and K-step opacity for systems
modeled as interacting non-deterministic finite-state automata. We describe a
new methodology for compositional opacity verification that employs
abstraction, in the form of a notion called opaque observation equivalence, and
that leverages existing compositional nonblocking verification algorithms. The
compositional approach is based on a transformation of the system, where the
transformed system is nonblocking if and only if the original one is
current-state opaque. Furthermore, we prove that K-step opacity can also be
inferred if the transformed system is nonblocking. We provide experimental
results where current-state opacity is verified efficiently for a large
scaled-up system.
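The compositional transformation itself is not reproduced here. For context, the baseline that compositional approaches aim to outperform is the monolithic observer (subset-construction) check of current-state opacity. A minimal sketch, with hypothetical names and a toy encoding of a nondeterministic automaton:

```python
from collections import deque

def observer_opaque(events, delta, init, secret, observable):
    """Current-state opacity via the classic observer (subset construction):
    the system is opaque iff no reachable observer state (state estimate)
    consists solely of secret states.
    `delta` maps (state, event) -> set of successor states."""
    unobservable = events - observable

    def ureach(S):
        # Closure of S under unobservable transitions.
        stack, seen = list(S), set(S)
        while stack:
            q = stack.pop()
            for e in unobservable:
                for q2 in delta.get((q, e), ()):
                    if q2 not in seen:
                        seen.add(q2)
                        stack.append(q2)
        return frozenset(seen)

    start = ureach({init})
    frontier, visited = deque([start]), {start}
    while frontier:
        S = frontier.popleft()
        if S <= secret:  # the observer's estimate reveals the secret
            return False
        for e in observable:
            T = set()
            for q in S:
                T |= delta.get((q, e), set())
            if T:
                T = ureach(T)
                if T not in visited:
                    visited.add(T)
                    frontier.append(T)
    return True
```

The observer is exponential in the number of states in the worst case, which is exactly the blow-up that abstraction-based compositional methods try to sidestep.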
Checking and Enforcing Security through Opacity in Healthcare Applications
The Internet of Things (IoT) is a paradigm that can tremendously
revolutionize health care, benefiting hospitals, doctors, and patients alike.
In this context, protecting the IoT in health care against interference,
including service attacks and malwares, is challenging. Opacity is a
confidentiality property capturing a system's ability to keep a subset of its
behavior hidden from passive observers. In this work, we introduce an
IoT-based heart-attack detection system that could be life-saving for patients
without compromising their privacy, through the verification and enforcement
of opacity. Our main contributions are the use of a tool to verify opacity in
three of its forms, so as to detect privacy leaks in our system, and the
development of an efficient Symbolic Observation Graph (SOG)-based algorithm
for enforcing opacity.
Abstraction-Based Verification of Approximate Pre-Opacity for Control Systems
In this paper, we consider the problem of verifying pre-opacity for
discrete-time control systems. Pre-opacity is an important information-flow
security property that secures the intention of a system to execute some secret
behaviors in the future. Existing works on pre-opacity only consider non-metric
discrete systems, where it is assumed that intruders can distinguish different
output behaviors precisely. However, for continuous-space control systems whose
output sets are equipped with metrics (which is the case for most real-world
applications), it is too restrictive to assume precise measurements from
outside observers. In this paper, we first introduce a concept of approximate
pre-opacity by capturing the security level of control systems with respect to
the measurement precision of the intruder. Based on this new notion of
pre-opacity, we propose a verification approach for continuous-space control
systems by leveraging abstraction-based techniques. In particular, a new
concept of approximate pre-opacity preserving simulation relation is introduced
to characterize the distance between two systems in terms of preserving
pre-opacity. This new system relation allows us to verify pre-opacity of
complex continuous-space control systems using their finite abstractions. We
also present a method to construct pre-opacity preserving finite abstractions
for a class of discrete-time control systems under certain stability
assumptions.
Comment: Discrete Event Systems, Opacity, Formal Abstraction
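The abstraction-based machinery of the paper is not reproduced here, but the core idea of approximate pre-opacity, namely that an intruder with measurement precision delta cannot distinguish delta-close output trajectories, can be illustrated with a toy trace-level sketch (hypothetical names; finite trace sets assumed):

```python
def indistinguishable(y1, y2, delta):
    """An intruder with measurement precision `delta` cannot tell two
    output trajectories apart if they stay pointwise delta-close."""
    return len(y1) == len(y2) and all(abs(a - b) <= delta
                                      for a, b in zip(y1, y2))

def approx_opaque(secret_runs, public_runs, delta):
    """Toy trace-level check: every secret output trajectory must be
    delta-close to at least one non-secret trajectory, so the intruder
    can never be sure the secret intention was present."""
    return all(any(indistinguishable(s, p, delta) for p in public_runs)
               for s in secret_runs)
```

Shrinking delta models a more precise intruder: a system opaque at one precision may cease to be opaque at a finer one, which is the "security level with respect to measurement precision" the abstract refers to.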
Incremental Fault Diagnosability and Security/Privacy Verification
Dynamical systems can be classified into two groups. One group is continuous-time systems, which describe physical system behavior and are therefore typically modeled by differential equations. The other group is discrete event systems (DESs), which represent the sequential and logical behavior of a system and are therefore modeled by discrete state/event models.

DESs are widely used for formal verification and enforcement of desired behaviors in embedded systems. Such systems are naturally prone to faults, and knowledge about each single fault is crucial from a safety and economic point of view. Fault diagnosability verification, which is the ability to deduce the occurrence of every fault, is one of the problems investigated in this thesis. Another verification problem addressed in this thesis is security/privacy. Two notions in this category, current-state opacity and current-state anonymity, have attracted great attention in recent years due to the progress of communication networks and mobile devices.

Usually, DESs are modular and consist of interacting subsystems. The interaction is achieved by means of synchronous composition of these components. This synchronization results in large monolithic models of the total DES. Moreover, the complex computations related to each specific verification problem add even more computational complexity, resulting in the well-known state-space explosion problem.

To circumvent the state-space explosion problem, one efficient approach is to exploit the modular structure of systems and apply incremental abstraction. In this thesis, a unified abstraction method that preserves temporal logic properties and possible silent loops is presented. The abstraction method is applied incrementally on the local subsystems, and it is proved that this abstraction preserves the main characteristics of the system that need to be verified. The existence of shared unobservable events means that ordinary incremental abstraction does not work for security/privacy verification of modular DESs. To solve this problem, a combined incremental abstraction and observer generation is proposed and analyzed. Evaluations show the great impact of the proposed incremental abstraction on diagnosability and security/privacy verification, as well as on verification of generic safety and liveness properties. Thus, this incremental strategy makes formal verification of large complex systems feasible.
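The synchronous composition underlying modular DESs can be sketched as follows. This is a minimal deterministic toy (hypothetical names), not the thesis's incremental abstraction algorithm: shared events must occur jointly in both components, while private events interleave freely.

```python
def sync_compose(init1, delta1, sigma1, init2, delta2, sigma2):
    """Synchronous composition of two deterministic automata.
    `delta1`/`delta2` map (state, event) -> successor state;
    `sigma1`/`sigma2` are the component alphabets.  Returns the initial
    state, transition map, and reachable state set of the composition."""
    shared = sigma1 & sigma2
    init = (init1, init2)
    delta, frontier, seen = {}, [init], {init}
    while frontier:
        q1, q2 = frontier.pop()
        for e in sigma1 | sigma2:
            if e in shared:  # both components must move together
                t1, t2 = delta1.get((q1, e)), delta2.get((q2, e))
                nxt = (t1, t2) if t1 is not None and t2 is not None else None
            elif e in sigma1:  # private to component 1
                t1 = delta1.get((q1, e))
                nxt = (t1, q2) if t1 is not None else None
            else:  # private to component 2
                t2 = delta2.get((q2, e))
                nxt = (q1, t2) if t2 is not None else None
            if nxt is not None:
                delta[((q1, q2), e)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return init, delta, seen
```

The reachable state set grows multiplicatively with each composed component, which is the state-space explosion that motivates abstracting each subsystem before (rather than after) composing.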
Verification and Enforcement of Strong State-Based Opacity for Discrete-Event Systems
In this paper, we investigate the verification and enforcement of strong
state-based opacity (SBO) in discrete-event systems modeled as
partially-observed (nondeterministic) finite-state automata, including strong
K-step opacity (K-SSO), strong current-state opacity (SCSO), strong
initial-state opacity (SISO), and strong infinite-step opacity (Inf-SSO). They
are stronger versions of four widely-studied standard opacity notions,
respectively. We first propose a new notion of K-SSO, and then we construct a
concurrent-composition structure that is a variant of our previously-proposed
one to verify it. Based on this structure, a verification algorithm for the
proposed notion of K-SSO is designed. Also, an upper bound on K in the proposed
K-SSO is derived. Second, we propose a distinctive opacity-enforcement
mechanism that has better scalability than the existing ones (such as
supervisory control). The basic philosophy of this new mechanism is to choose
a subset of controllable transitions to disable before the original system
starts to run, in order to cut off all its runs that violate a notion of
strong SBO of
interest. Accordingly, the algorithms for enforcing the above-mentioned four
notions of strong SBO are designed using the two proposed
concurrent-composition structures. In particular, the designed algorithm for
enforcing Inf-SSO has lower time complexity than the existing one in the
literature, and does not depend on any assumption. Finally, we illustrate the
applications of the designed algorithms using examples.
Comment: 30 pages, 20 figures, partial results in Section 3 were presented at
IEEE Conference on Decision and Control, 2022. arXiv admin note: text overlap
with arXiv:2204.0469
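The paper's concurrent-composition structures are not reproduced here, but the disablement philosophy, i.e. picking controllable transitions to disable offline so that every violating run is cut off, can be illustrated with a toy deterministic sketch (hypothetical names; `bad` abstracts the set of opacity-violating states):

```python
def enforce_by_disabling(delta, init, bad, controllable):
    """Toy sketch of pre-runtime enforcement by transition disablement:
    disable controllable transitions so that no run of the pruned system
    reaches a 'bad' (property-violating) state.
    `delta` maps (state, event) -> successor state (deterministic toy).
    Returns the pruned transition map, or None if uncontrollable
    behaviour alone already reaches a bad state."""
    # Backward fixed point: a state is unsafe if it is bad, or if some
    # uncontrollable transition leads from it into an unsafe state.
    unsafe = set(bad)
    changed = True
    while changed:
        changed = False
        for (q, e), t in delta.items():
            if t in unsafe and e not in controllable and q not in unsafe:
                unsafe.add(q)
                changed = True
    if init in unsafe:
        return None  # disablement alone cannot enforce the property
    # Disable every controllable transition entering the unsafe region.
    return {(q, e): t for (q, e), t in delta.items()
            if not (t in unsafe and e in controllable)}
```

Because the disablement decision is made once, before the system runs, no runtime supervisor is needed, which is the scalability advantage over supervisory control that the abstract points to.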