
    Compiling and securing cryptographic protocols

    Protocol narrations are widely used in security as semi-formal notations to specify conversations between roles. We define a translation from a protocol narration to the sequences of operations to be performed by each role. Unlike previous works, we reduce this compilation process to well-known decision problems in formal protocol analysis. This allows one to define a natural notion of prudent translation and to reuse many known results from the literature in order to cover more cryptographic primitives. In particular, this work is the first to show how to compile protocols parameterised by the properties of the available operations. Comment: A short version was submitted to IP
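    A minimal sketch of the projection step this abstract describes, assuming a toy narration syntax "Sender -> Receiver: message"; the paper's actual compiler additionally reduces the choice of a prudent translation to formal decision problems, which this sketch omits:

    ```python
    from collections import defaultdict

    def compile_narration(narration: str) -> dict:
        """Project a narration onto per-role send/receive action sequences."""
        actions = defaultdict(list)
        for step in narration.strip().splitlines():
            header, message = step.split(":", 1)
            sender, receiver = (p.strip() for p in header.split("->"))
            actions[sender].append(f"send {message.strip()} to {receiver}")
            actions[receiver].append(f"expect {message.strip()} from {sender}")
        return dict(actions)

    # Example: the first two steps of a Needham-Schroeder-style exchange.
    narration = """
    A -> B: {Na, A}pk(B)
    B -> A: {Na, Nb}pk(A)
    """
    for role, ops in compile_narration(narration).items():
        print(role, ops)
    ```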

    An Improved Algorithm for Generating Database Transactions from Relational Algebra Specifications

    Alloy is a lightweight modeling formalism based on relational algebra. In prior work with Fisler, Giannakopoulos, Krishnamurthi, and Yoo, we presented a tool, Alchemy, that compiles Alloy specifications into implementations that execute against persistent databases. The foundation of Alchemy is an algorithm for rewriting relational algebra formulas into code for database transactions. In this paper we report on recent progress in improving the robustness and efficiency of this transformation.
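    A hedged sketch of the underlying idea, using a hypothetical two-table schema and a single inclusion constraint; Alchemy's real algorithm rewrites full Alloy relational-algebra formulas rather than this hand-picked case:

    ```python
    import sqlite3

    def insert_employee(conn: sqlite3.Connection, name: str, dept: str) -> None:
        """Insert while repairing the inclusion constraint employee.dept <= dept.name."""
        with conn:  # 'with' wraps the statements in one atomic transaction
            conn.execute("INSERT OR IGNORE INTO dept(name) VALUES (?)", (dept,))
            conn.execute("INSERT INTO employee(name, dept) VALUES (?, ?)", (name, dept))

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE dept(name TEXT PRIMARY KEY)")
    conn.execute("CREATE TABLE employee(name TEXT, dept TEXT REFERENCES dept(name))")
    insert_employee(conn, "ada", "research")
    print(conn.execute("SELECT * FROM employee").fetchall())
    ```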

    Machine dependence and reproducibility for coupled climate simulations: the HadGEM3-GC3.1 CMIP Preindustrial simulation

    When the same weather or climate simulation is run on different high-performance computing (HPC) platforms, model outputs may not be identical for a given initial condition. While the role of HPC platforms in delivering better climate projections is to some extent discussed in the literature, attention is mainly focused on scalability and performance rather than on the impact of machine-dependent processes on the numerical solution. Here we investigate the behaviour of the Preindustrial (PI) simulation prepared by the UK Met Office for the forthcoming CMIP6 (Coupled Model Intercomparison Project Phase 6) under different computing environments. Discrepancies between the means of key climate variables were analysed at different timescales, from decadal to centennial. We found that for the two simulations to be statistically indistinguishable, a 200-year averaging period must be used for the analysis of the results. Thus, constant-forcing climate simulations using the HadGEM3-GC3.1 model are reproducible on different HPC platforms provided that a sufficiently long duration of simulation is used. In regions where El Niño–Southern Oscillation (ENSO) teleconnection patterns were detected, we found large sea surface temperature and sea ice concentration differences on centennial timescales. This indicates that a 100-year constant-forcing climate simulation may not be long enough to adequately capture the internal variability of the HadGEM3-GC3.1 model, despite this being the minimum simulation length recommended by CMIP6 protocols for many MIP (Model Intercomparison Project) experiments. On the basis of our findings, we recommend a minimum simulation length of 200 years whenever possible.
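    As a rough illustration of the kind of comparison involved (not the paper's actual analysis), one can test whether window-averaged means from two runs are statistically indistinguishable; the synthetic series and the use of Welch's t-test below are assumptions standing in for platform-dependent HadGEM3-GC3.1 output:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    years = 600
    # Two "platforms": identical forcing, different internal variability phases.
    run_a = 14.0 + rng.normal(0.0, 0.3, years)
    run_b = 14.0 + rng.normal(0.0, 0.3, years)

    for window in (50, 100, 200):
        # Means over non-overlapping averaging windows of the given length.
        means_a = run_a.reshape(-1, window).mean(axis=1)
        means_b = run_b.reshape(-1, window).mean(axis=1)
        _, p = stats.ttest_ind(means_a, means_b, equal_var=False)
        print(f"{window:3d}-year means: p = {p:.3f}")
    ```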

    Counterexample-Preserving Reduction for Symbolic Model Checking

    The cost of LTL model checking is highly sensitive to the length of the formula under verification. We observe that, under some specific conditions, the input LTL formula can be reduced to an easier-to-handle one before model checking. In our reduction, the two formulae need not be logically equivalent, but they share the same counterexample set w.r.t. the model. In the case that the model is symbolically represented, the condition enabling such a reduction can be detected with lightweight effort (e.g., by SAT solving). In this paper, we tentatively name this technique "Counterexample-Preserving Reduction" (CePRe for short), and we evaluate the proposed technique experimentally by adapting NuSMV.
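    A toy illustration of the idea under strong simplifying assumptions: for invariant properties on a small explicit-state model, a redundant conjunct can be dropped once an enabling condition is checked. Here the check is brute force over states, whereas the paper detects the condition by SAT solving on symbolic models, and real counterexamples are traces rather than single states:

    ```python
    # Hypothetical model: each state is a valuation of the propositions (p, q).
    states = [(True, False), (True, True), (False, False)]

    def everywhere(prop):
        return all(prop(s) for s in states)

    full    = lambda s: s[0] and ((not s[1]) or s[0])  # body of G(p & (q -> p))
    reduced = lambda s: s[0]                           # body of G p

    # Enabling condition: q -> p holds in every state, so the second
    # conjunct is redundant and both formulas flag the same bad states.
    if everywhere(lambda s: (not s[1]) or s[0]):
        cex_full    = {s for s in states if not full(s)}
        cex_reduced = {s for s in states if not reduced(s)}
        assert cex_full == cex_reduced
        print("same counterexample states:", cex_reduced)
    ```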

    Effectively Solving NP-SPEC Encodings by Translation to ASP

    NP-SPEC is a language for specifying problems in NP in a declarative way. Although the semantics of the language was given by reference to Datalog with circumscription, which is very close to ASP, so far the only existing implementations are by means of ECLiPSe Prolog and Boolean satisfiability solvers. In this paper, we present translations from NP-SPEC into ASP, and provide an experimental evaluation of existing implementations and the proposed translations to ASP using various ASP solvers. The results show that translating to ASP clearly has an edge over the existing translation into SAT, which involves an intrinsic grounding process. We also argue that it might be useful to incorporate certain language constructs of NP-SPEC into mainstream ASP.
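    A hedged sketch of one such translation rule: NP-SPEC's Subset metafact, which guesses a subset of a domain, maps naturally to an ASP choice rule. The predicate names and the subset-sum check below are invented for illustration; the paper covers the full language:

    ```python
    def subset_to_asp(domain_pred: str, guessed_pred: str) -> str:
        """Translate NP-SPEC's Subset(domain, S) into an ASP choice rule."""
        return f"{{ {guessed_pred}(X) : {domain_pred}(X) }}."

    program = "\n".join([
        "dom(1..5).",                       # facts: the domain
        subset_to_asp("dom", "in_s"),       # guess: any subset of dom
        ":- #sum{ X : in_s(X) } != 7.",     # check: the subset must sum to 7
    ])
    print(program)  # feed to an ASP solver such as clingo
    ```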

    Lazy Model Expansion: Interleaving Grounding with Search

    Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach for bounded model generation for rich knowledge representation languages, like ASP, FO(.) and Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory. An important bottleneck is the blowup of the size of the theory caused by the reduction phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(.) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split into a part to be grounded and a part that can be justified. The theoretical framework captures existing approaches for tackling the grounding bottleneck, such as lazy clause generation and grounding-on-the-fly, and presents a generalization of the 2-watched literal scheme. We present an algorithm for lazy model expansion and integrate it in a model generator for FO(ID), a language extending first-order logic with inductive definitions. The algorithm is implemented as part of the state-of-the-art FO(ID) Knowledge-Base System IDP. Experimental results illustrate the power and generality of the approach.
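    A minimal sketch of the lazy-grounding loop on a toy graph-colouring theory: ground instances of the constraint "adjacent nodes get different colours" are added only when a candidate model violates them, rather than all upfront. The paper's justification-based framework for FO(.) is far more general than this generate-and-check loop:

    ```python
    from itertools import product

    nodes = range(4)
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    colors = range(3)
    grounded = set()  # ground constraint instances discovered so far

    def find_model():
        """Search respecting only the constraints grounded so far."""
        for cand in product(colors, repeat=len(nodes)):
            if all(cand[a] != cand[b] for (a, b) in grounded):
                return cand
        return None

    def check(model):
        """Check the full theory; lazily ground the first violated instance."""
        for (a, b) in edges:
            if model[a] == model[b]:
                grounded.add((a, b))
                return False
        return True

    model = find_model()
    while model is not None and not check(model):
        model = find_model()
    print("model:", model, "| instances grounded:", len(grounded), "of", len(edges))
    ```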