58 research outputs found

    Opacity with Orwellian Observers and Intransitive Non-interference

    Opacity is a general behavioural security scheme flexible enough to account for several specific properties. A secret set of behaviours of a system is opaque if a passive attacker can never tell whether the observed behaviour is a secret one or not. Instead of considering static observability, where the set of observable events is fixed offline, or dynamic observability, where the set of observable events changes over time depending on the history of the trace, we consider Orwellian partial observability, where unobservable events are not revealed unless a downgrading event occurs in the future of the trace. We show how to verify that some regular secret is opaque for a regular language L w.r.t. an Orwellian projection, whereas the problem has been proved undecidable for a regular language L w.r.t. a general Orwellian observation function. We finally illustrate the relevance of our results by proving the equivalence between the opacity of regular secrets w.r.t. Orwellian projections and the intransitive non-interference property.
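
    For reference, the standard language-based formulation of opacity underlying this line of work can be sketched as follows; this is a minimal sketch in assumed notation (system language L, secret S, observation function pi), not a restatement of the paper's exact definitions.

        % Language-based opacity, in its usual formulation:
        % L is the system language, S a secret set of traces, and
        % \pi an observation function mapping traces to observations.
        \[
          S \text{ is opaque in } L \text{ w.r.t. } \pi
          \iff
          \forall w \in S \cap L.\ \exists w' \in L \setminus S.\ \pi(w) = \pi(w')
          \iff
          \pi(S \cap L) \subseteq \pi(L \setminus S).
        \]
        % Static observability takes \pi to be the projection onto a fixed
        % observable sub-alphabet; an Orwellian projection additionally reveals,
        % in retrospect, the events preceding a downgrading event once that
        % downgrading event occurs later in the trace.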

    Probabilistic Opacity for Markov Decision Processes

    Opacity is a generic security property that has been defined on (non-probabilistic) transition systems and later on Markov chains with labels. For a secret predicate, given as a subset of runs, and a function describing the view of an external observer, the value of interest for opacity is a measure of the set of runs disclosing the secret. We extend this definition to the richer framework of Markov decision processes, where nondeterministic choice is combined with probabilistic transitions, and we study the related decidability problems under partial or complete observation hypotheses for the schedulers. We prove that all questions are decidable with complete observation and ω-regular secrets. With partial observation, we prove that all quantitative questions are undecidable, but the question whether a system is almost surely non-opaque becomes decidable for a restricted class of ω-regular secrets, as well as for all ω-regular secrets under finite-memory schedulers.
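
    For concreteness, one common way to phrase the disclosure value mentioned above is sketched below; the notation (secret set of runs S, observer's view O, scheduler sigma and its induced probability measure) is assumed here, and the exact quantifiers in the paper's variants may differ.

        % Sketch of the disclosure value for a labelled MDP M under a scheduler \sigma:
        % a run \rho discloses the secret when its observation is compatible
        % only with secret runs.
        \[
          \mathrm{Disc}^{\sigma}(S) \;=\;
          \mathbb{P}_{M}^{\sigma}\bigl(\{\rho \mid O^{-1}(O(\rho)) \subseteq S\}\bigr),
          \qquad
          \mathrm{Disc}(S) \;=\; \sup_{\sigma}\, \mathrm{Disc}^{\sigma}(S).
        \]
        % Qualitative questions (e.g. almost-sure non-opacity) compare the
        % disclosure with 0 or 1; quantitative questions ask to compute or
        % bound it, which is where the undecidability results apply.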

    Quantitative Analysis of Opacity in Cloud Computing Systems

    Federated cloud systems increase the reliability and reduce the cost of computational support. The resulting combination of secure private clouds and less secure public clouds, together with the fact that resources need to be located within different clouds, strongly affects the information flow security of the entire system. In this paper, the clouds as well as the entities of a federated cloud system are assigned security levels, and a probabilistic flow-sensitive security model for a federated cloud system is proposed. Then the notion of opacity, a notion capturing the security of information flow, is introduced for cloud computing systems, and different variants of quantitative analysis of opacity are presented. As a result, one can track the information flow in a cloud system and analyze the impact of different resource allocation strategies by quantifying the corresponding opacity characteristics.
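
    As a toy illustration of what quantifying the opacity characteristics of a resource allocation strategy can look like, the sketch below computes a disclosure probability for a hypothetical two-task, two-cloud allocation; the tasks, clouds, probabilities, and observer are invented for the example and are not the paper's model.

        # Toy illustration only (invented model, not the paper's): quantify how much
        # a resource-allocation strategy discloses a secret to an observer of the
        # public cloud, via the probability of allocations that are unambiguously secret.
        from collections import defaultdict
        from itertools import product

        # Hypothetical strategy: each task is placed on the "private" or "public"
        # cloud with the given probabilities.
        strategy = {"billing": {"private": 0.7, "public": 0.3},
                    "logging": {"private": 0.2, "public": 0.8}}

        def runs(strategy):
            """Enumerate every allocation (run) together with its probability."""
            tasks = sorted(strategy)
            for choice in product(*(strategy[t].items() for t in tasks)):
                alloc = {t: cloud for t, (cloud, _) in zip(tasks, choice)}
                prob = 1.0
                for _, p in choice:
                    prob *= p
                yield alloc, prob

        def observe(alloc):
            """The observer only learns how many tasks run on the public cloud."""
            return sum(1 for cloud in alloc.values() if cloud == "public")

        def secret(alloc):
            """Secret predicate: the sensitive 'billing' task left the private cloud."""
            return alloc["billing"] == "public"

        # An allocation discloses the secret iff every allocation with the same
        # observation satisfies the secret predicate.
        by_obs = defaultdict(list)
        for alloc, prob in runs(strategy):
            by_obs[observe(alloc)].append((secret(alloc), prob))

        disclosure = sum(p for group in by_obs.values()
                         if all(s for s, _ in group)
                         for _, p in group)
        print(f"disclosure = {disclosure:.2f}, opacity degree = {1 - disclosure:.2f}")

    In this toy, an observer who only counts public-cloud tasks cannot always tell whether the sensitive task left the private cloud, so only part of the probability mass is disclosing.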

    Verification of Information Flow Properties under Rational Observation

    Information flow properties express the capability for an agent to infer information about the secret behaviours of a partially observable system. In a language-theoretic setting, where the system behaviour is described by a language, we define the class of rational information flow properties (RIFP), where observers are modeled by finite transducers acting on languages in a given family L. This leads to a general decidability criterion for the verification problem of RIFPs on L, implying PSPACE-completeness of this problem on regular languages. We show that most trace-based information flow properties studied up to now are RIFPs, including those related to selective declassification and conditional anonymity. As a consequence, we retrieve several existing decidability results that were obtained by ad hoc proofs.
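
    To connect this with the opacity notions above, here is a hedged sketch of how opacity instantiates a transducer-based information flow property; the notation (observer transducer T, system language L, secret S) is assumed, and the paper's definition of RIFPs may be stated more generally.

        % The observer is modeled by a finite transducer T; T(K) denotes the image
        % of a language K under the induced rational relation. Opacity of S in L
        % then amounts to the inclusion
        \[
          T(S \cap L) \;\subseteq\; T(L \setminus S),
        \]
        % which is decidable whenever the language family at hand is effectively
        % closed under the boolean operations and transductions involved and has
        % a decidable inclusion problem; for regular languages such inclusion
        % checks fit within polynomial space.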

    Understanding and Enforcing Opacity

    This paper puts a spotlight on the specification and enforcement of opacity, a security policy for protecting sensitive properties of system behavior. We illustrate the fine granularity of the opacity policy with location privacy and privacy-preserving aggregation scenarios. We present a framework for opacity and explore its key differences and formal connections with such well-known information-flow models as noninterference, knowledge-based security, and declassification. Our results are machine-checked and parameterized in the observational power of the attacker, including progress-insensitive, progress-sensitive, and timing-sensitive attackers. We present two approaches to enforcing opacity: a whitebox monitor and a blackbox sampling-based enforcement. We report on experiments with prototypes that utilize state-of-the-art Satisfiability Modulo Theories (SMT) solvers and the random testing tool QuickCheck to establish opacity for the location and aggregation-based scenarios.
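
    As an illustration of the sampling-based idea (a sketch only, not the paper's prototype; the system, events, and locations below are invented), a QuickCheck-style black-box test can sample runs and flag any observation that was produced exclusively by secret runs.

        # Illustrative sketch: black-box, sampling-based evidence for or against opacity.
        # The "system" is only accessible as a run generator; all names are hypothetical.
        import random
        from collections import defaultdict

        def sample_run(rng):
            """Hypothetical system: a user at some location emits observable events."""
            location = rng.choice(["home", "office", "clinic"])  # 'clinic' is sensitive
            trace = ["request"] * rng.randint(1, 3)
            if location != "home":
                trace.append("sync")
            return location, trace

        def is_secret(location):
            return location == "clinic"

        def observe(trace):
            """The attacker sees the observable events but not the location."""
            return tuple(trace)

        def test_opacity(samples=10_000, seed=0):
            rng = random.Random(seed)
            seen = defaultdict(lambda: {"secret": 0, "nonsecret": 0})
            for _ in range(samples):
                location, trace = sample_run(rng)
                seen[observe(trace)]["secret" if is_secret(location) else "nonsecret"] += 1
            # Observations seen only on secret runs are candidate opacity violations.
            return [obs for obs, c in seen.items() if c["secret"] and not c["nonsecret"]]

        if __name__ == "__main__":
            leaks = test_opacity()
            print("potentially disclosing observations:", leaks or "none found")

    As with any random-testing approach, an empty result is only statistical evidence of opacity; it complements rather than replaces the whitebox monitoring approach mentioned above.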