
    Compositionality Results for Quantitative Information Flow

    In the min-entropy approach to quantitative information flow, the leakage is defined in terms of a minimization problem which, in the case of large systems, can be computationally rather heavy. The same happens for the recently proposed generalization called g-vulnerability. In this paper we study the case in which the channel associated with the system can be decomposed into simpler channels, which typically happens when the observables consist of several components. Our main contribution is the derivation of bounds on the g-leakage of the whole system in terms of the g-leakage of its components.
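
    To make the quantities concrete, here is a minimal sketch in Python using the standard definitions of g-vulnerability and multiplicative g-leakage from the quantitative-information-flow literature; the prior, channel, and gain matrix are illustrative, and the paper's compositional bounds are not reproduced here:

        import math

        def prior_g_vulnerability(prior, gain):
            # V_g(pi) = max over guesses w of sum_x pi[x] * g(w, x)
            return max(sum(p * gain[w][x] for x, p in enumerate(prior))
                       for w in range(len(gain)))

        def posterior_g_vulnerability(prior, channel, gain):
            # V_g(pi, C) = sum_y max_w sum_x pi[x] * C[x][y] * g(w, x)
            n_outputs = len(channel[0])
            return sum(max(sum(prior[x] * channel[x][y] * gain[w][x]
                               for x in range(len(prior)))
                           for w in range(len(gain)))
                       for y in range(n_outputs))

        def g_leakage(prior, channel, gain):
            # multiplicative g-leakage, in bits
            return math.log2(posterior_g_vulnerability(prior, channel, gain)
                             / prior_g_vulnerability(prior, channel, gain))

        # With the identity gain function, g-leakage reduces to
        # min-entropy leakage.
        prior = [0.5, 0.5]
        channel = [[0.9, 0.1], [0.2, 0.8]]   # rows: secrets, columns: observables
        gain = [[1, 0], [0, 1]]              # guess w scores 1 iff w == x
        print(g_leakage(prior, channel, gain))   # ~0.766 bits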

    A Compositional Semantics for Stochastic Reo Connectors

    In this paper we present a compositional semantics for the channel-based coordination language Reo which enables the analysis of quality-of-service (QoS) properties of service compositions. For this purpose, we annotate Reo channels with stochastic delay rates and explicitly model data-arrival rates at the boundary of a connector, to capture its interaction with the services that comprise its environment. We propose Stochastic Reo automata as an extension of Reo automata, in order to compositionally derive a QoS-aware semantics for Reo. We further present a translation of Stochastic Reo automata to Continuous-Time Markov Chains (CTMCs). This translation enables us to use third-party CTMC verification tools to perform an end-to-end performance analysis of service compositions. (In Proceedings FOCLASA 2010, arXiv:1007.499.)
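
    The kind of CTMC that such a translation targets can be sketched very simply (this is a generic two-state CTMC for a single channel with an arrival rate and a delay rate, not the paper's actual translation; the rates and state names are illustrative):

        import numpy as np

        # States: 0 = channel buffer empty, 1 = buffer full.
        # 'arrive' models the data-arrival rate at the source end,
        # 'deliver' the channel's stochastic delay rate.
        arrive, deliver = 2.0, 3.0

        # CTMC generator matrix Q (rows sum to zero).
        Q = np.array([[-arrive, arrive],
                      [deliver, -deliver]])

        # Steady-state distribution pi: solve pi @ Q = 0 with sum(pi) = 1.
        A = np.vstack([Q.T, np.ones(2)])
        b = np.array([0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("long-run fraction of time the buffer is full:", pi[1])  # 0.4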

    Hidden-Markov Program Algebra with iteration

    We use Hidden Markov Models to motivate a quantitative compositional semantics for noninterference-based security with iteration, including a refinement (or "implements") relation that compares two programs with respect to their information leakage; and we propose a program algebra for source-level reasoning about such programs, in particular as a means of establishing that an "implementation" program leaks no more than its "specification" program. This joins two themes: we extend our earlier work, which had iteration but was only qualitative, by making it quantitative; and we extend our earlier quantitative work by including iteration. We advocate stepwise refinement and source-level program algebra, both as conceptual reasoning tools and as targets for automated assistance. A selection of algebraic laws is given to support this view in the case of quantitative noninterference, and the approach is demonstrated on a simple iterated password-guessing attack.
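
    The flavour of the iterated password-guessing example can be captured in a few lines; this is a toy calculation of expected Bayes vulnerability under an assumed uniform prior, not the paper's HMM semantics or program algebra:

        # N equally likely passwords; each guess's success or failure is
        # observable, so a failed guess eliminates one candidate.
        N = 8

        def expected_posterior_vulnerability(k):
            # With probability k/N one of the first k guesses already hit
            # (vulnerability 1); otherwise the secret is uniform over the
            # N - k surviving candidates.
            k = min(k, N)
            if k == N:
                return 1.0
            return k / N + ((N - k) / N) * (1 / (N - k))

        for k in range(4):
            print(f"after {k} guesses: {expected_posterior_vulnerability(k):.3f}")
        # Prints (k + 1)/N: each iteration leaks a little more.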

    Towards Trace Metrics via Functor Lifting

    We investigate the possibility of deriving metric trace semantics in a coalgebraic framework. First, we generalize a technique for systematically lifting functors from the category Set of sets to the category PMet of pseudometric spaces, showing under which conditions natural transformations, monads, and distributive laws can be lifted as well. By exploiting recent work on abstract determinization, these results enable the derivation of trace metrics starting from coalgebras in Set. More precisely, given a coalgebra in Set, we determinize it, obtaining a coalgebra in the Eilenberg-Moore category of a monad. When the monad can be lifted to PMet, we can equip the final coalgebra with a behavioral distance. The trace distance between two states of the original coalgebra is then the distance between their images in the determinized coalgebra under the unit of the monad. We show how our framework applies to nondeterministic automata and probabilistic automata.
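
    As a small illustration of the notion being generalized, the sketch below computes a trace distance directly over bounded-length traces of a labelled Markov chain, bypassing the paper's determinization and functor-lifting machinery; the chain itself is made up:

        # A labelled Markov chain: state -> list of (label, next state, prob).
        chain = {
            0: [("a", 0, 0.5), ("b", 1, 0.5)],
            1: [("a", 1, 0.6), ("b", 1, 0.4)],
            2: [("a", 2, 0.5), ("b", 1, 0.5)],   # behaves like state 0
        }

        def trace_dist(state, n):
            # Distribution over length-n label traces from `state`.
            if n == 0:
                return {(): 1.0}
            dist = {}
            for label, nxt, p in chain[state]:
                for trace, q in trace_dist(nxt, n - 1).items():
                    key = (label,) + trace
                    dist[key] = dist.get(key, 0.0) + p * q
            return dist

        def trace_distance(s, t, n):
            # Total variation distance between length-n trace distributions.
            ds, dt = trace_dist(s, n), trace_dist(t, n)
            traces = set(ds) | set(dt)
            return 0.5 * sum(abs(ds.get(x, 0.0) - dt.get(x, 0.0)) for x in traces)

        print(trace_distance(0, 1, 3))   # > 0: different trace behaviour
        print(trace_distance(0, 2, 3))   # 0.0: trace-equivalent states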

    Compositional closure for Bayes Risk in probabilistic noninterference

    We give a sequential model for noninterference security including probability (but not demonic choice), thus supporting reasoning about the likelihood that high-security values might be revealed by observations of low-security activity. Our novel methodological contribution is the definition of a refinement order and its use to compare security measures between specifications and (their supposed) implementations. This contrasts with the more common practice of evaluating the security of individual programs in isolation. The appropriateness of our model and order is supported by our showing that the refinement order is the greatest compositional relation (the compositional closure) with respect to our semantics and an "elementary" order based on Bayes Risk, a security measure already in widespread use. We also relate refinement to other measures such as Shannon Entropy. By applying the approach to a non-trivial example, the anonymous-majority Three-Judges protocol, we demonstrate that correctness arguments can be simplified by the sort of layered developments (through levels of increasing detail) that are allowed and encouraged by compositional semantics.
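
    The elementary measure underlying the order can be computed directly; this sketch uses the standard definition of Bayes Risk, with two invented channel matrices, and a real refinement check would have to compare risk over all priors rather than one:

        def bayes_risk(prior, channel):
            # 1 - sum_y max_x pi[x] * C[x][y]: the attacker's probability
            # of guessing the secret wrongly after seeing the observable.
            n_obs = len(channel[0])
            post_vuln = sum(max(prior[x] * channel[x][y]
                                for x in range(len(prior)))
                            for y in range(n_obs))
            return 1.0 - post_vuln

        prior = [0.5, 0.5]
        spec = [[0.5, 0.5], [0.5, 0.5]]   # leaks nothing about the secret
        impl = [[0.7, 0.3], [0.4, 0.6]]   # a hypothetical leakier candidate
        print(bayes_risk(prior, spec))    # 0.5: the attacker can only guess
        print(bayes_risk(prior, impl))    # 0.35: lower risk, so more leakage
        # impl's risk is below spec's, so impl does not refine spec.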

    BlenX-based compositional modeling of complex reaction mechanisms

    Molecular interactions are wired in a fascinating way, resulting in complex behavior of biological systems. Theoretical modeling provides a useful framework for understanding the dynamics and the function of such networks. The complexity of biological networks calls for conceptual tools that manage the combinatorial explosion of the set of possible interactions. A suitable conceptual tool for attacking complexity is compositionality, already used successfully in the process-algebra field to model computer systems. We rely on the BlenX programming language, which originated from the beta-binders process calculus, to specify and simulate high-level descriptions of biological circuits. The Gillespie-based stochastic framework of BlenX requires the decomposition of phenomenological functions into basic elementary reactions. Systematic unpacking of complex reaction mechanisms into BlenX templates is shown in this study. The estimation/derivation of missing parameters and the challenges emerging from compositional model building in stochastic process algebras are discussed. A biological example based on the circadian clock is presented as a case study of BlenX compositionality.
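
    The decomposition requirement can be seen in miniature below: a phenomenological Michaelis-Menten rate unpacked into three elementary reactions and simulated with a plain Gillespie SSA (ordinary Python rather than BlenX; the species counts and rate constants are illustrative):

        import random

        # Michaelis-Menten unpacked: E + S <-> ES -> E + P.
        state = {"E": 10, "S": 100, "ES": 0, "P": 0}
        reactions = [
            # (propensity function, state change)
            (lambda s: 0.01 * s["E"] * s["S"], {"E": -1, "S": -1, "ES": +1}),  # bind
            (lambda s: 0.1 * s["ES"],          {"E": +1, "S": +1, "ES": -1}),  # unbind
            (lambda s: 0.5 * s["ES"],          {"E": +1, "P": +1, "ES": -1}),  # catalyse
        ]

        t, t_end = 0.0, 50.0
        while t < t_end:
            propensities = [a(state) for a, _ in reactions]
            total = sum(propensities)
            if total == 0.0:
                break
            t += random.expovariate(total)   # exponential waiting time
            r = random.uniform(0.0, total)   # pick which reaction fires
            for p, (_, change) in zip(propensities, reactions):
                if r < p:
                    for species, delta in change.items():
                        state[species] += delta
                    break
                r -= p

        print(state)   # most substrate converted to product by t_end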

    Experimental Evaluation of Cache-Related Preemption Delay Aware Timing Analysis

    In the presence of caches, preemptive scheduling may incur a significant overhead referred to as cache-related preemption delay (CRPD). CRPD is caused by preempting tasks evicting cached memory blocks of preempted tasks, which have to be reloaded when the preempted tasks resume execution. In this paper we experimentally evaluate state-of-the-art techniques to account for CRPD during timing analysis. We find that purely synthetically generated task sets may yield misleading conclusions regarding the relative precision of different CRPD analysis techniques and the impact of CRPD on schedulability in general. Based on task characterizations obtained by static worst-case execution time (WCET) analysis, we shed new light on the state of the art.
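
    How CRPD enters schedulability analysis can be sketched with the classic fixed-point response-time recurrence extended by a preemption-cost term; this is a simplified sketch rather than one of the specific analyses evaluated in the paper, and the task parameters, block reload time, and evicted-UCB counts are invented:

        import math

        # Tasks as (WCET C, period T), sorted highest priority first;
        # deadlines are assumed equal to periods.
        tasks = [(1.0, 5.0), (2.0, 12.0), (3.0, 30.0)]
        BRT = 0.1                                         # block reload time
        # evicted_ucbs[(i, j)]: bound on useful cache blocks of task i
        # that a preemption by higher-priority task j may evict.
        evicted_ucbs = {(1, 0): 4, (2, 0): 6, (2, 1): 8}

        def response_time(i):
            # Fixed-point iteration on
            # R_i = C_i + sum_{j in hp(i)} ceil(R_i / T_j) * (C_j + gamma_ij)
            # where gamma_ij = evicted_ucbs[(i, j)] * BRT is the CRPD term.
            C_i, T_i = tasks[i]
            R = C_i
            while True:
                interference = sum(
                    math.ceil(R / tasks[j][1])
                    * (tasks[j][0] + evicted_ucbs.get((i, j), 0) * BRT)
                    for j in range(i))
                R_new = C_i + interference
                if R_new == R:
                    return R
                if R_new > T_i:
                    return None          # deadline missed: unschedulable
                R = R_new

        for i in range(len(tasks)):
            print(f"task {i}: R = {response_time(i)}")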