
    A synchronous program algebra: a basis for reasoning about shared-memory and event-based concurrency

    This research started with an algebra for reasoning about rely/guarantee concurrency for a shared memory model. The approach taken led to a more abstract algebra of atomic steps, in which atomic steps synchronise (rather than interleave) when composed in parallel. The algebra of rely/guarantee concurrency then becomes an instantiation of the more abstract algebra. Many of the core properties needed for rely/guarantee reasoning can be shown to hold in the abstract algebra, where their proofs are simpler and hence allow a higher degree of automation. The algebra has been encoded in Isabelle/HOL to provide a basis for tool support for program verification. In rely/guarantee concurrency, programs are specified to guarantee certain behaviours until assumptions about the behaviour of their environment are violated. When assumptions are violated, program behaviour is unconstrained (aborting), and guarantees need no longer hold. To support these guarantees, a second synchronous operator, weak conjunction, was introduced: both processes in a weak conjunction must agree to take each atomic step, unless one aborts, in which case the whole aborts. In developing the laws for parallel composition and weak conjunction we found that many properties were shared by the operators and that the proofs of many laws were essentially the same. This insight led to the idea of generalising synchronisation to an abstract operator with only the axioms that are shared by the parallel and weak conjunction operators, so that those two operators can be viewed as instantiations of the abstract synchronisation operator. The main difference between parallel and weak conjunction is how they combine individual atomic steps; that is left open in the axioms for the abstract operator. (Extended version of the Formal Methods 2016 paper "An algebra of synchronous atomic steps".)
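    As a rough illustration of the abstraction described in this abstract, the sketch below models a process as a finite sequence of atomic steps and defines a single synchronisation operator parameterised by how two atomic steps are combined; a parallel-like and a weak-conjunction-like composition then arise purely from different step-merge functions. The representation, names and merge functions are illustrative assumptions, not the paper's Isabelle/HOL encoding, and aborting behaviour is omitted.

```python
# Hypothetical sketch: processes as finite sequences of atomic steps, with one
# generic synchronisation operator. Parallel and weak conjunction differ only
# in how two atomic steps are merged; the shared structure is what the
# abstract synchronisation operator axiomatises. All names and representations
# here are illustrative, not the paper's Isabelle/HOL encoding.
from typing import Callable, List, Optional

Step = frozenset      # an atomic step, modelled here as a set of variable writes
Process = List[Step]  # a process as a finite sequence of atomic steps

def sync(p: Process, q: Process,
         merge: Callable[[Step, Step], Optional[Step]]) -> Optional[Process]:
    """Generic synchronisation: both processes take each atomic step together.
    Returns None if some pair of steps cannot be merged (infeasible composition)."""
    if len(p) != len(q):   # simplification: equal-length traces only
        return None
    out: Process = []
    for a, b in zip(p, q):
        ab = merge(a, b)
        if ab is None:
            return None
        out.append(ab)
    return out

def par_merge(a: Step, b: Step) -> Optional[Step]:
    """Parallel-like instantiation: the effects of the two steps are combined."""
    return a | b

def conj_merge(a: Step, b: Step) -> Optional[Step]:
    """Weak-conjunction-like instantiation: both processes must agree on the step."""
    return a if a == b else None

if __name__ == "__main__":
    p = [frozenset({"x"}), frozenset()]
    q = [frozenset({"y"}), frozenset()]
    print(sync(p, q, par_merge))    # [frozenset({'x', 'y'}), frozenset()]
    print(sync(p, q, conj_merge))   # None: the first steps do not agree
```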

    Algebraic Reasoning for Probabilistic Action Systems and While-Loops

    Back and von Wright have developed algebraic laws for reasoning about loops in the refinement calculus. We extend their work to reasoning about probabilistic loops in the probabilistic refinement calculus. We apply our algebraic reasoning to derive transformation rules for probabilistic action systems and probabilistic while-loops. In particular, we focus on developing data refinement rules for these two constructs. Our extension is interesting since some well-known transformation rules that are applicable to standard programs are not applicable to probabilistic ones: we identify some of these important differences and develop alternative rules where possible. In particular, our probabilistic action system and while-loop data refinement rules are new: they differ from the non-probabilistic rules.
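    For background, the kind of algebraic loop law being extended here can be seen in the standard unfolding of a while-loop in Back and von Wright's refinement calculus, which characterises the loop as a fixed point (quoted as general background, not as a rule from this paper):

    \[
    \mathbf{while}\ b\ \mathbf{do}\ S\ \mathbf{od} \;=\; \mathbf{if}\ b\ \mathbf{then}\ (S \mathbin{;} \mathbf{while}\ b\ \mathbf{do}\ S\ \mathbf{od})\ \mathbf{else}\ \mathbf{skip}\ \mathbf{fi}
    \]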

    Refinement algebra for probabilistic programs

    We identify a refinement algebra for reasoning about probabilistic program transformations in a total-correctness setting. The algebra is equipped with operators that determine whether a program is enabled or terminates, respectively. As well as developing the basic theory of the algebra, we demonstrate how it may be used to explain key differences and similarities between standard (i.e. non-probabilistic) and probabilistic programs, and verify important transformation theorems for probabilistic action systems.

    Deriving Laws for Developing Concurrent Programs in a Rely-Guarantee Style

    In this paper we present a theory for the refinement of shared-memory concurrent algorithms from specifications. Our approach avoids restrictive atomicity constraints. It provides a range of constructs for specifying concurrent programs and laws for refining these to code. We augment pre- and postcondition specifications with Jones' rely and guarantee conditions, which we encode as commands within a wide-spectrum language. Program components are specified using either partial or total correctness versions of end-to-end specifications. Operations on shared data structures and atomic machine operations (e.g. compare-and-swap) are specified using an atomic specification command. All the above constructs are defined in terms of a simple core language, based on four primitive commands and a handful of operators, for which we have developed an extensive algebraic theory in Isabelle/HOL. For shared-memory programs, expression evaluation is subject to fine-grained interference, and we have avoided atomicity restrictions other than for reads and writes of primitive types (words). Expression evaluation and assignment commands are also defined in terms of our core language primitives, allowing laws for reasoning about them to be proven in the theory. Control structures such as conditionals, recursion and loops are all defined in terms of the core language. In developing the laws for refining to such structures from specifications we have taken care to develop laws that are as general as possible; our laws are typically more general than those found in the literature. In developing our concurrent refinement theory we have focused on the algebraic properties of our commands and operators, which has allowed us to reuse algebraic theories, including well-known theories such as lattices and Boolean algebra, as well as programming-specific algebras, such as our synchronous algebra.
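    For orientation, a Jones-style rely/guarantee parallel-introduction law typically has the following schematic shape, where a component satisfies a specification with precondition p, rely r, guarantee g and postcondition q; side conditions are omitted, and the paper's own laws, stated over its wide-spectrum commands, are more general than this sketch:

    \[
    \frac{c_1\ \mathbf{sat}\ (p,\ r \lor g_2,\ g_1,\ q_1) \qquad c_2\ \mathbf{sat}\ (p,\ r \lor g_1,\ g_2,\ q_2)}
         {c_1 \parallel c_2\ \mathbf{sat}\ (p,\ r,\ g_1 \lor g_2,\ q_1 \land q_2)}
    \]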

    Trace models of concurrent valuation algebras

    This paper introduces Concurrent Valuation Algebras (CVAs), extending ordered valuation algebras (OVAs) by incorporating two combine operators representing parallel and sequential products that adhere to a weak exchange law. CVAs present significant theoretical and practical advantages for specifying and modelling concurrent and distributed systems. As a presheaf on a space of domains, a CVA facilitates localised specifications, promoting modularity, compositionality, and the capability to represent large and complex systems. Moreover, CVAs facilitate lattice-based refinement reasoning, and are compatible with standard methodologies such as Hoare and Rely-Guarantee logics. We demonstrate the flexibility of CVAs through three trace models that represent distinct paradigms of concurrent/distributed computing, and interrelate them via morphisms. We also discuss the potential for importing a powerful local computation framework from valuation algebras for the model checking of concurrent and distributed systems.
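    For reference, a weak exchange (interchange) law relating parallel and sequential products typically takes the shape familiar from concurrent Kleene algebra; the exact ordering and operators used for CVAs are as defined in the paper:

    \[
    (a \parallel b) \mathbin{;} (c \parallel d) \;\le\; (a \mathbin{;} c) \parallel (b \mathbin{;} d)
    \]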

    Word correlation matrices for protein sequence analysis and remote homology detection

    Background: Classification of protein sequences is a central problem in computational biology. Currently, among computational methods, discriminative kernel-based approaches provide the most accurate results. However, kernel-based methods often lack an interpretable model for analysis of discriminative sequence features, and predictions on new sequences usually are computationally expensive.
    Results: In this work we present a novel kernel for protein sequences based on average word similarity between two sequences. We show that this kernel gives rise to a feature space that allows analysis of discriminative features and fast classification of new sequences. We demonstrate the performance of our approach on a widely used benchmark setup for protein remote homology detection.
    Conclusion: Our word correlation approach provides highly competitive performance as compared with state-of-the-art methods for protein remote homology detection. The learned model is interpretable in terms of biologically meaningful features. In particular, analysis of discriminative words allows the identification of characteristic regions in biological sequences. Because of its high computational efficiency, our method can be applied to ranking of potential homologs in large databases.
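    To make the idea of an average-word-similarity kernel concrete, the sketch below computes a kernel value as the mean pairwise similarity of all fixed-length words (k-mers) from two sequences. The word length and the toy position-match similarity are illustrative assumptions; the paper's kernel and its similarity measure may differ (a substitution matrix such as BLOSUM62 would be a natural practical choice).

```python
# Hypothetical sketch of a word-correlation-style kernel: the kernel value of
# two protein sequences is the average pairwise similarity of their k-mers.
# Word length and the similarity function are illustrative, not necessarily
# those used in the paper.
from itertools import product

def words(seq: str, k: int = 3) -> list[str]:
    """All overlapping k-mers of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def word_similarity(w1: str, w2: str) -> float:
    """Toy similarity: fraction of matching positions."""
    return sum(a == b for a, b in zip(w1, w2)) / len(w1)

def word_correlation_kernel(s1: str, s2: str, k: int = 3) -> float:
    """Average similarity over all pairs of words drawn from the two sequences."""
    w1, w2 = words(s1, k), words(s2, k)
    if not w1 or not w2:
        return 0.0
    total = sum(word_similarity(a, b) for a, b in product(w1, w2))
    return total / (len(w1) * len(w2))

if __name__ == "__main__":
    print(word_correlation_kernel("MKTAYIAKQR", "MKTAYLAKQR"))
```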

    Joint Analysis of Multiple Metagenomic Samples

    The availability of metagenomic sequencing data, generated by sequencing DNA pooled from multiple microbes living jointly, has increased sharply in the last few years with developments in sequencing technology. Characterizing the contents of metagenomic samples is a challenging task, which has been extensively attempted by both supervised and unsupervised techniques, each with its own limitations. Common to practically all the methods is the processing of single samples only; when multiple samples are sequenced, each is analyzed separately and the results are combined. In this paper we propose to perform a combined analysis of a set of samples in order to obtain a better characterization of each of the samples, and we provide two applications of this principle. First, we use an unsupervised probabilistic mixture model to infer hidden components shared across metagenomic samples. We incorporate the model in a novel framework for studying the association of microbial sequence elements with phenotypes, analogous to the genome-wide association studies performed on human genomes: we demonstrate that stratification may result in false discoveries of such associations, and that the components inferred by the model can be used to correct for this stratification. Second, we propose a novel read clustering (also termed “binning”) algorithm which operates on multiple samples simultaneously, leveraging the assumption that the different samples contain the same microbial species, possibly in different proportions. We show that integrating information across multiple samples yields more precise binning of each of the samples. Moreover, for both applications we demonstrate that, given a fixed depth of coverage, the average per-sample performance generally increases with the number of sequenced samples as long as the per-sample coverage is high enough.
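    As a minimal sketch of the joint multi-sample binning idea, the code below represents each sequence element (a contig, say) by its coverage profile across all samples and clusters elements with similar profiles, relying on the assumption that the same species occur in the different samples in different proportions. The feature construction and the use of k-means are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of joint (multi-sample) binning: sequence elements are
# clustered by their normalised coverage profiles across all samples, so that
# information from every sample contributes to every bin assignment. Features
# and clustering method are illustrative, not the paper's algorithm.
import numpy as np
from sklearn.cluster import KMeans

def coverage_profiles(coverage: dict[str, list[float]]) -> tuple[list[str], np.ndarray]:
    """coverage maps element id -> per-sample coverage values.
    Returns the ids and an (elements x samples) matrix of row-normalised profiles."""
    ids = sorted(coverage)
    mat = np.array([coverage[c] for c in ids], dtype=float)
    row_sums = mat.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    return ids, mat / row_sums

def joint_binning(coverage: dict[str, list[float]], n_bins: int = 3) -> dict[str, int]:
    """Cluster sequence elements jointly over all samples, not one sample at a time."""
    ids, profiles = coverage_profiles(coverage)
    labels = KMeans(n_clusters=n_bins, n_init=10, random_state=0).fit_predict(profiles)
    return dict(zip(ids, labels.tolist()))

if __name__ == "__main__":
    toy = {"c1": [10, 1, 1], "c2": [9, 2, 1],   # similar profiles -> same bin
           "c3": [1, 8, 9], "c4": [2, 9, 8],
           "c5": [5, 5, 5], "c6": [4, 6, 5]}
    print(joint_binning(toy, n_bins=3))
```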