2,232 research outputs found

    Checking and Enforcing Security through Opacity in Healthcare Applications

    Full text link
    The Internet of Things (IoT) is a paradigm that can tremendously revolutionize health care, benefiting hospitals, doctors, and patients alike. In this context, protecting the IoT in health care against interference, including service attacks and malware, is challenging. Opacity is a confidentiality property capturing a system's ability to keep a subset of its behavior hidden from passive observers. In this work, we introduce an IoT-based heart attack detection system that could be life-saving for patients without putting their privacy at risk, through the verification and enforcement of opacity. Our main contributions are the use of a tool to verify opacity in three of its forms, so as to detect privacy leaks in our system, and the development of an efficient Symbolic Observation Graph (SOG)-based algorithm for enforcing opacity.
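
    As background for the verification step described above, here is a minimal sketch of how current-state opacity is commonly checked on a finite-state model: build the observer (subset construction) over the observable events and make sure no reachable state estimate lies entirely inside the secret set. The model, event names, and secret states below are illustrative placeholders, not the paper's heart attack detection system; the SOG mentioned in the abstract plays, roughly, the role of this observer but is built symbolically, and the enforcement part is not shown.

        from itertools import chain

        # Illustrative nondeterministic model (all names are made up):
        # (state, event) -> set of successor states; "u" is unobservable.
        observable = {"a", "b"}
        secret = {"s2"}                       # states the intruder should never be certain of
        transitions = {
            ("s0", "u"): {"s1", "s2"},
            ("s1", "a"): {"s3"},
            ("s2", "a"): {"s3"},
            ("s3", "b"): {"s0"},
        }

        def unobservable_reach(state_set):
            """Close a set of states under unobservable transitions."""
            reach, frontier = set(state_set), set(state_set)
            while frontier:
                s = frontier.pop()
                for (q, e), succs in transitions.items():
                    if q == s and e not in observable:
                        new = succs - reach
                        reach |= new
                        frontier |= new
            return frozenset(reach)

        def is_current_state_opaque(initial):
            """Opaque iff no reachable observer estimate is contained in the secret set."""
            start = unobservable_reach({initial})
            seen, todo = {start}, [start]
            while todo:
                estimate = todo.pop()
                if estimate <= secret:        # intruder is certain the current state is secret
                    return False
                for e in observable:
                    succ = set(chain.from_iterable(transitions.get((s, e), set()) for s in estimate))
                    if succ:
                        nxt = unobservable_reach(succ)
                        if nxt not in seen:
                            seen.add(nxt)
                            todo.append(nxt)
            return True

        print(is_current_state_opaque("s0"))  # True for this toy model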

    Verification of Information Flow Properties under Rational Observation

    Get PDF
    Information flow properties express the capability for an agent to infer information about secret behaviours of a partially observable system. In a language-theoretic setting, where the system behaviour is described by a language, we define the class of rational information flow properties (RIFP), where observers are modeled by finite transducers acting on languages in a given family L. This leads to a general decidability criterion for the verification problem of RIFPs on L, implying PSPACE-completeness for this problem on regular languages. We show that most trace-based information flow properties studied up to now are RIFPs, including those related to selective declassification and conditional anonymity. As a consequence, we retrieve several existing decidability results that were obtained by ad hoc proofs. Comment: 19 pages, 7 figures, version extended from AVOCS'201
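
    For orientation, one common language-theoretic formalization of the kind of property discussed above (the notation is generic, not necessarily the paper's): a secret S ⊆ L is opaque under an observation function π when

        \[
          \forall\, w \in S \;\; \exists\, w' \in L \setminus S \;:\; \pi(w') = \pi(w),
        \]

    that is, every secret behaviour is observationally indistinguishable from some non-secret behaviour. Rational information flow properties generalize this pattern by letting the observer π be realized by a finite transducer.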

    Opacity with Orwellian Observers and Intransitive Non-interference

    Full text link
    Opacity is a general behavioural security scheme flexible enough to account for several specific properties. A secret set of behaviours of a system is opaque if a passive attacker can never tell whether the observed behaviour is a secret one or not. Instead of considering static observability, where the set of observable events is fixed offline, or dynamic observability, where the set of observable events changes over time depending on the history of the trace, we consider Orwellian partial observability, where unobservable events are not revealed unless a downgrading event occurs in the future of the trace. We show how to verify that a regular secret is opaque for a regular language L w.r.t. an Orwellian projection, whereas the problem has been proved undecidable, even for a regular language L, w.r.t. a general Orwellian observation function. We finally illustrate the relevance of our results by proving the equivalence between the opacity of regular secrets w.r.t. Orwellian projections and the intransitive non-interference property.
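
    A rough way to read the Orwellian projection described above (a sketch consistent with the informal description, not necessarily the paper's exact definition): writing a trace as w = u d v, where d is the last downgrading event of w and v contains no downgrading event,

        \[
          \pi_{\mathrm{Orw}}(u\,d\,v) \;=\; u\,d\,\pi(v),
        \]

    i.e. everything up to and including the last downgrading event is eventually revealed in full, while after it only the statically observable events of v are visible, π being the usual static projection; traces without downgrading events are simply projected by π.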

    Verifying Weak and Strong k-Step Opacity in Discrete-Event Systems

    Full text link
    Opacity is an important system-theoretic property expressing whether a system may reveal its secret to a passive observer (an intruder) who knows the structure of the system but has only limited observations of its behavior. Several notions of opacity have been discussed in the literature, including current-state opacity, k-step opacity, and infinite-step opacity. We investigate weak and strong k-step opacity, notions that generalize both current-state opacity and infinite-step opacity and that ask whether the intruder is unable to decide, at any instant, when, respectively whether, the system was in a secret state during the last k observable steps. We design a new algorithm verifying weak k-step opacity whose complexity is lower than that of existing algorithms and does not depend on the parameter k, and show how to use it to verify strong k-step opacity by reducing strong k-step opacity to weak k-step opacity. The complexity of the resulting algorithm is again better than that of existing algorithms and does not depend on the parameter k.
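
    For reference, the automata-based formulation of weak k-step opacity usually takes roughly the following shape (P is the projection onto observable events, Q_S the secret states, q_0 the initial state, δ the possibly nondeterministic transition function; details vary across papers):

        \[
          \forall\, st \in L(G),\ |P(t)| \le k,\ \delta(q_0, s) \cap Q_S \neq \emptyset
          \;\Longrightarrow\;
          \exists\, s't' \in L(G):\ P(s') = P(s),\ P(t') = P(t),\ \delta(q_0, s') \setminus Q_S \neq \emptyset,
        \]

    i.e. whenever the system may have been in a secret state within the last k observable steps, the same observation is also consistent with a run that was not in a secret state at that point. Roughly, the strong variant additionally requires the witness run to avoid secret states during those steps, which is what the reduction mentioned above has to account for.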

    Probabilistic Opacity for Markov Decision Processes

    Full text link
    Opacity is a generic security property that has been defined on (non-probabilistic) transition systems and later on Markov chains with labels. For a secret predicate, given as a subset of runs, and a function describing the view of an external observer, the value of interest for opacity is a measure of the set of runs disclosing the secret. We extend this definition to the richer framework of Markov decision processes, where nondeterministic choice is combined with probabilistic transitions, and we study related decidability problems with partial or complete observation hypotheses for the schedulers. We prove that all questions are decidable with complete observation and ω-regular secrets. With partial observation, we prove that all quantitative questions are undecidable, but the question whether a system is almost surely non-opaque becomes decidable for a restricted class of ω-regular secrets, as well as for all ω-regular secrets under finite-memory schedulers.
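
    The "value of interest" mentioned above can be written generically as the probability mass of the disclosing runs (O denotes the observation function; the notation is schematic, not the paper's):

        \[
          \mathrm{Disc}(S) \;=\; \mathbb{P}\bigl(\{\, \rho \;\mid\; \mathcal{O}^{-1}(\mathcal{O}(\rho)) \subseteq S \,\}\bigr),
        \]

    the measure of those runs whose observation is compatible only with secret runs. In a Markov decision process this quantity depends on the scheduler resolving the nondeterminism, which is why the decidability questions are parameterized by the scheduler's observation power (partial or complete) and memory.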

    Verification and Enforcement of Strong State-Based Opacity for Discrete-Event Systems

    Full text link
    In this paper, we investigate the verification and enforcement of strong state-based opacity (SBO) in discrete-event systems modeled as partially-observed (nondeterministic) finite-state automata, including strong K-step opacity (K-SSO), strong current-state opacity (SCSO), strong initial-state opacity (SISO), and strong infinite-step opacity (Inf-SSO). These are stronger versions of four widely-studied standard opacity notions, respectively. We first propose a new notion of K-SSO, and then construct a concurrent-composition structure, a variant of our previously-proposed one, to verify it. Based on this structure, a verification algorithm for the proposed notion of K-SSO is designed. Also, an upper bound on K in the proposed K-SSO is derived. Second, we propose a distinctive opacity-enforcement mechanism that has better scalability than existing ones (such as supervisory control). The basic philosophy of this new mechanism is to choose a subset of controllable transitions to disable before the original system starts to run, so as to cut off all runs that violate the notion of strong SBO of interest. Accordingly, algorithms for enforcing the above-mentioned four notions of strong SBO are designed using the proposed two concurrent-composition structures. In particular, the designed algorithm for enforcing Inf-SSO has lower time complexity than the existing one in the literature and does not depend on any assumption. Finally, we illustrate the applications of the designed algorithms using examples. Comment: 30 pages, 20 figures, partial results in Section 3 were presented at the IEEE Conference on Decision and Control, 2022. arXiv admin note: text overlap with arXiv:2204.0469
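
    A toy illustration of the disable-before-run philosophy described above (not the paper's concurrent-composition-based algorithms): brute-force over subsets of controllable transitions and keep the smallest subset whose removal makes the remaining behaviour satisfy a supplied opacity check. The names and the callback interface are assumptions made only for this sketch.

        from itertools import combinations

        def enforce_by_disabling(transitions, controllable, is_opaque):
            """transitions: collection of (src, event, dst) triples;
            controllable: the transitions we are allowed to disable;
            is_opaque: callback deciding the chosen strong opacity notion
            on a pruned transition set."""
            controllable = list(controllable)
            for k in range(len(controllable) + 1):            # try smaller disabling sets first
                for disabled in combinations(controllable, k):
                    pruned = [t for t in transitions if t not in set(disabled)]
                    if is_opaque(pruned):
                        return set(disabled)                  # minimal set of transitions to disable
            return None                                       # opacity cannot be enforced this way

    This exhaustive search is exponential in the number of controllable transitions; the point of the concurrent-composition structures in the paper is precisely to avoid such enumeration.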

    Incremental Fault Diagnosability and Security/Privacy Verification

    Get PDF
    Dynamical systems can be classified into two groups. One group is continuous-time systems, which describe physical system behavior and are therefore typically modeled by differential equations. The other group is discrete event systems (DESs), which represent the sequential and logical behavior of a system and are therefore modeled by discrete state/event models. DESs are widely used for formal verification and enforcement of desired behaviors in embedded systems. Such systems are naturally prone to faults, and knowledge about each single fault is crucial from a safety and economic point of view. Fault diagnosability verification, which concerns the ability to deduce the occurrence of all failures, is one of the problems investigated in this thesis. Another verification problem addressed in this thesis is security/privacy. The two notions current-state opacity and current-state anonymity, which lie within this category, have attracted great attention in recent years due to the progress of communication networks and mobile devices. Usually, DESs are modular and consist of interacting subsystems. The interaction is achieved by means of synchronous composition of these components. This synchronization results in large monolithic models of the total DES. Also, the complex computations related to each specific verification problem add even more computational complexity, resulting in the well-known state-space explosion problem. To circumvent the state-space explosion problem, one efficient approach is to exploit the modular structure of systems and apply incremental abstraction. In this thesis, a unified abstraction method that preserves temporal logic properties and possible silent loops is presented. The abstraction method is incrementally applied to the local subsystems, and it is proved that this abstraction preserves the main characteristics of the system that need to be verified. The existence of shared unobservable events means that ordinary incremental abstraction does not work for security/privacy verification of modular DESs. To solve this problem, a combined incremental abstraction and observer generation is proposed and analyzed. Evaluations show the great impact of the proposed incremental abstraction on diagnosability and security/privacy verification, as well as on verification of generic safety and liveness properties. Thus, this incremental strategy makes formal verification of large complex systems feasible.
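
    The synchronous composition referred to above is what blows up the monolithic model: the composed state space is, in the worst case, the product of the component state spaces. A minimal sketch for two deterministic components (the representation and names are illustrative, not the thesis's tooling):

        def sync_compose(A, B):
            """A, B: (initial_state, alphabet, {(state, event): next_state}).
            Shared events must occur jointly; private events interleave."""
            (a0, Ea, Ta), (b0, Eb, Tb) = A, B
            shared = Ea & Eb
            init = (a0, b0)
            trans, todo, seen = {}, [init], {init}
            while todo:
                a, b = todo.pop()
                for e in Ea | Eb:
                    if e in shared:
                        if (a, e) not in Ta or (b, e) not in Tb:
                            continue
                        nxt = (Ta[(a, e)], Tb[(b, e)])
                    elif e in Ea:
                        if (a, e) not in Ta:
                            continue
                        nxt = (Ta[(a, e)], b)
                    else:
                        if (b, e) not in Tb:
                            continue
                        nxt = (a, Tb[(b, e)])
                    trans[((a, b), e)] = nxt
                    if nxt not in seen:
                        seen.add(nxt)
                        todo.append(nxt)
            return init, Ea | Eb, trans

    Incremental abstraction works against this growth by abstracting each component before composing it with the next, rather than building the full product first.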

    Enforcing current-state opacity through shuffle in event observations

    Get PDF
    Opacity is a property that ensures that a secret behavior of the system is kept hidden from an Intruder. In this work, we deal with current-state opacity and propose an Opacity-Enforcer that is able to appropriately change the order in which event occurrences in the system are observed, so as to mislead the Intruder into always estimating, erroneously, at least one non-secret state. A necessary and sufficient condition for the feasibility of the Opacity-Enforcer synthesis is presented, along with two algorithms to build the automaton that realizes such an enforcement.
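
    For reference, the shuffle operation named in the title is the set of all interleavings of two event sequences; a tiny sketch follows (event names are placeholders, and how exactly the paper's Opacity-Enforcer exploits the reordering goes beyond it):

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def shuffle(u, v):
            """All interleavings of the observation sequences u and v."""
            if not u:
                return frozenset({v})
            if not v:
                return frozenset({u})
            return frozenset({u[0] + w for w in shuffle(u[1:], v)} |
                             {v[0] + w for w in shuffle(u, v[1:])})

        print(sorted(shuffle("ab", "c")))   # ['abc', 'acb', 'cab']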