    Z2SAL: a translation-based model checker for Z

    Despite being widely known and accepted in industry, the Z formal specification language has not so far been well supported by automated verification tools, mostly because of the challenges in handling the abstraction of the language. In this paper we discuss a novel approach to building a model-checker for Z, which involves implementing a translation from Z into SAL, the input language for the Symbolic Analysis Laboratory, a toolset which includes a number of model-checkers and a simulator. The Z2SAL translation deals with a number of important issues, including: mapping unbounded, abstract specifications into bounded, finite models amenable to a BDD-based symbolic checker; converting a non-constructive and piecemeal style of functional specification into a deterministic, automaton-based style of specification; and supporting the rich set-based vocabulary of the Z mathematical toolkit. This paper discusses progress made towards implementing as complete and faithful a translation as possible, while highlighting certain assumptions, respecting certain limitations and making use of available optimisations. The translation is illustrated throughout with examples, and a complete working example is presented, together with performance data.
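The central idea the abstract describes, replacing an unbounded Z type with a finite subrange so a BDD-based checker can handle it, can be sketched as follows. This is a toy illustration only, not the Z2SAL tool: the schema encoding and the SAL-flavoured output are simplified assumptions.

```python
# Toy sketch of bounding an unbounded Z base type for a finite-state
# checker. Schema format and output syntax are invented for illustration.

def bound_type(z_type: str, max_nat: int = 3) -> str:
    """Map a Z base type onto a finite, SAL-style subrange."""
    if z_type == "NAT":           # Z's unbounded naturals...
        return f"[0..{max_nat}]"  # ...become a finite subrange
    if z_type == "BOOL":
        return "BOOLEAN"
    raise ValueError(f"unsupported type: {z_type}")

def translate(schema: dict, max_nat: int = 3) -> str:
    """Emit a SAL-flavoured variable block for a Z state schema."""
    lines = [f"  {name} : {bound_type(t, max_nat)}"
             for name, t in schema.items()]
    return "LOCAL\n" + "\n".join(lines)

print(translate({"count": "NAT", "full": "BOOL"}))
```

The point of the bound is exactly the trade-off the abstract names: the finite model is amenable to symbolic checking, at the price of only exploring states up to the chosen bound.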

    Refinement-based verification of sequential implementations of Stateflow charts

    Simulink/Stateflow charts are widely used in industry for the specification of control systems, which are often safety-critical. This suggests a need for a formal treatment of such models. In previous work, we have proposed a technique for automatic generation of formal models of Stateflow blocks to support refinement-based reasoning. In this article, we present a refinement strategy that supports the verification of automatically generated sequential C implementations of Stateflow charts. In particular, we discuss how this strategy can be specialised to take advantage of architectural features in order to allow a higher level of automation. Comment: In Proceedings Refine 2011, arXiv:1106.348
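The "sequential implementations" the abstract refers to are step functions that dispatch on the active state and then on the input event, one scheduler tick at a time. The sketch below is a hypothetical two-state chart (states and events invented here, written in Python rather than the generated C) showing the shape of such code:

```python
# Hypothetical sketch of the sequential step function a code generator
# might emit for a simple two-state chart. Not from the paper.

OFF, ON = 0, 1

def step(state: int, event: str) -> int:
    """One scheduler tick: dispatch on the active state, then the event."""
    if state == OFF:
        if event == "switch_on":
            return ON
    elif state == ON:
        if event == "switch_off":
            return OFF
    return state  # no enabled transition: remain in the current state

# Driving the chart with an event sequence, one tick at a time.
s = OFF
for e in ["switch_on", "tick", "switch_off"]:
    s = step(s, e)
```

Refinement-based verification would relate a formal model of the chart to code of this shape; the flat state-then-event dispatch is one of the architectural features such a strategy can exploit.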

    Robust Biomarkers: Methodologically Tracking Causal Processes in Alzheimer’s Measurement

    In biomedical measurement, biomarkers are used to achieve reliable prediction of, and useful causal information about, patient outcomes while minimizing complexity of measurement, resources, and invasiveness. A biomarker is an assayable metric that discloses the status of a biological process of interest, be it normative, pathophysiological, or in response to intervention. The greatest utility of biomarkers comes from their ability to help clinicians (and researchers) make and evaluate clinical decisions. In this paper we discuss a specific methodological use of clinical biomarkers in pharmacological measurement: some biomarkers, called ‘surrogate markers’, are used to substitute for a clinically meaningful endpoint corresponding to events and their penultimate risk factors. We confront the reliability of clinical biomarkers that are used to gather information about clinically meaningful endpoints. Our aim is to present a systematic methodology for assessing the reliability of multiple surrogate markers (and biomarkers in general). To do this we draw upon the robustness analysis literature in the philosophy of science and the empirical use of clinical biomarkers. After introducing robustness analysis, we present two problems with biomarkers in relation to reliability. Next, we propose an intervention-based robustness methodology for organizing the reliability of biomarkers in general. We propose three conditions for a robust methodology for biomarkers: (R1) Intervention-based demonstration of partial independence of modes: in biomarkers, partial independence can be demonstrated through exogenous interventions that modify a process some number of “steps” removed from each of the markers. (R2) Comparison of diverging and converging results across biomarkers: by systematically comparing partially independent biomarkers we can track under what conditions markers fail to converge in results, and under which conditions they successfully converge. (R3) Information within the context of theory: through a systematic cross-comparison of the markers we can make causal conclusions as well as eliminate competing theories. We apply our robust methodology to current Alzheimer’s research to show its usefulness for making causal conclusions.
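Conditions (R1) and (R2) can be given a deterministic toy rendering: two markers that read the same causal process through partially independent pathways should move together under an intervention on the shared process, and come apart when only one marker's private pathway is perturbed. All numbers and functions below are invented for illustration; nothing here is from the paper.

```python
# Toy, deterministic illustration of (R1)-(R2). Pathway gains/offsets
# are made up; real markers would of course be noisy and empirical.

def marker_a(process: float) -> float:
    return 2.0 * process             # pathway A: its own gain

def marker_b(process: float, private: float = 0.0) -> float:
    return process + 0.5 + private   # pathway B: its own offset

def shift(read, before: float, after: float) -> float:
    """Change in a marker's reading across an intervention."""
    return read(after) - read(before)

# (R1)/(R2): intervene on the shared process; both markers respond,
# so the results converge as evidence about that process.
da = shift(marker_a, 1.0, 2.0)   # A responds
db = shift(marker_b, 1.0, 2.0)   # B responds too

# Perturb only B's private pathway: A is silent while B moves,
# flagging the divergence that (R2) tells us to track.
da_private = marker_a(1.0) - marker_a(1.0)
db_private = marker_b(1.0, private=1.0) - marker_b(1.0)
```

The useful feature of the comparison is exactly the one the abstract names: convergence under shared-process interventions supports a causal reading, while divergence localises the fault to one marker's pathway.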

    Mutator Suppression and Escape from Replication Error–Induced Extinction in Yeast

    Cells rely on a network of conserved pathways to govern DNA replication fidelity. Loss of polymerase proofreading or mismatch repair elevates spontaneous mutation and facilitates cellular adaptation. However, double mutants are inviable, suggesting that extreme mutation rates exceed an error threshold. Here we combine alleles that affect DNA polymerase δ (Pol δ) proofreading and mismatch repair to define the maximal error rate in haploid yeast and to characterize genetic suppressors of mutator phenotypes. We show that populations tolerate mutation rates 1,000-fold above wild-type levels but collapse when the rate exceeds 10^-3 inactivating mutations per gene per cell division. Variants that escape this error-induced extinction (eex) rapidly emerge from mutator clones. One-third of the escape mutants result from second-site changes in Pol δ that suppress the proofreading-deficient phenotype, while two-thirds are extragenic. The structural locations of the Pol δ changes suggest multiple antimutator mechanisms. Our studies reveal the transient nature of eukaryotic mutators and show that mutator phenotypes are readily suppressed by genetic adaptation. This has implications for the role of mutator phenotypes in cancer.
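Why a threshold near 10^-3 is plausible can be seen with a deliberately simple growth model (the model and the essential-gene count are illustrative assumptions, not the paper's analysis): if each daughter cell survives only when none of its E essential genes is inactivated, a per-gene, per-division rate u gives a per-generation growth factor of 2(1 - u)^E, which drops below 1 once u approaches ln 2 / E.

```python
# Toy deterministic model of error-induced extinction. E = 1000
# essential genes is an assumed round number for illustration.

def growth_factor(u: float, essential_genes: int = 1000) -> float:
    """Expected per-generation growth: doubling, discounted by the
    probability that no essential gene is inactivated this division."""
    return 2 * (1 - u) ** essential_genes

# Well below the threshold, the population still expands:
#   growth_factor(1e-4) ~ 2 * e^-0.1 ~ 1.81
# Around 10^-3 inactivating mutations per gene per division, the
# rate reported in the abstract, growth collapses:
#   growth_factor(1e-3) ~ 2 * e^-1 ~ 0.74
```

In this toy model the critical rate is ln 2 / 1000 ≈ 7 × 10^-4, the same order of magnitude as the experimentally observed threshold.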

    Efficient Binary Transfer of Pointer Structures

    This paper presents a pair of algorithms for output and input of pointer structures in binary format. Both algorithms operate in linear space and time. They have been inspired by copying, compacting garbage collection algorithms, and make similar assumptions about the representations of pointer structures. In real programs, the transfer of entire pointer structures is often inappropriate. The algorithms are extended to lazily transfer partitions of a pointer structure: the receiver requests partitions when it needs them. A remote procedure call mechanism is presented that uses the binary transfer algorithms for communicating arguments and results. A use of this as an enabling mechanism in the implementation of a software engineering environment is discussed.
    1. The Problem. Many programs build data structures that consist of pieces of primary memory (often called records, structures or nodes) linked by memory addresses (often called pointers or references). Such linked data structures, ..
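The connection to copying garbage collection can be made concrete with a sketch: a Cheney-style breadth-first traversal keeps a forwarding table from node identity to output index, so each node is emitted once and every pointer is replaced by the index of its target, giving linear time and space even for cyclic structures. This is a simplified illustration of the idea, not the paper's binary format.

```python
# Sketch: flatten a (possibly cyclic) pointer structure into a linear
# list of (value, child-index) records, Cheney-style, in O(n).

from collections import deque

class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def serialize(root):
    """Emit one (value, child-index) record per reachable node."""
    index = {id(root): 0}       # forwarding table, as in a copying GC
    order, out = [root], []
    queue = deque([root])
    while queue:
        node = queue.popleft()
        refs = []
        for child in node.children:
            if id(child) not in index:   # first visit: assign next slot
                index[id(child)] = len(order)
                order.append(child)
                queue.append(child)
            refs.append(index[id(child)])
        out.append((node.value, refs))
    return out

def deserialize(records):
    """Rebuild an isomorphic structure from the flat records."""
    nodes = [Node(v) for v, _ in records]
    for node, (_, refs) in zip(nodes, records):
        node.children = [nodes[i] for i in refs]
    return nodes[0]

# A two-node cycle survives the round trip intact.
a, b = Node("a"), Node("b")
a.children = [b]; b.children = [a]
r = deserialize(serialize(a))
```

The lazy-partition extension the abstract describes would emit only a subgraph per request, with unresolved indices standing in for not-yet-transferred partitions.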