
    A Tamper and Leakage Resilient von Neumann Architecture

    We present a universal framework for tamper and leakage resilient computation on a von Neumann Random Access Architecture (RAM for short). The RAM has one CPU that accesses a storage, which we call the disk. The disk is subject to leakage and tampering, and so is the bus connecting the CPU to the disk. We assume that the CPU is leakage- and tamper-free. For a fixed value of the security parameter, the CPU has constant size. Therefore the code of the program to be executed is stored on the disk, i.e., we consider a von Neumann architecture. The most prominent consequence of this is that the code of the program being executed is itself subject to tampering. We construct a compiler for this architecture which transforms any keyed primitive into a RAM program where the key is encoded and stored on the disk along with the program to evaluate the primitive on that key. Our compiler only assumes the existence of a so-called continuous non-malleable code, and it only needs black-box access to such a code; no further (cryptographic) assumptions are needed. This in particular means that, given an information-theoretic code, the overall construction is information-theoretically secure. Although it is required that the CPU is tamper- and leakage-proof, its design is independent of the actual primitive being computed, and its internal storage is non-persistent, i.e., all secret registers are reset between invocations. Hence, our result can be interpreted as reducing the problem of shielding arbitrarily complex computations to protecting a single, simple, yet universal component.
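
    As a purely illustrative sketch of this structure (and not the paper's actual construction), one can picture the compiled program as follows: the key lives on an untrusted disk under a black-box non-malleable encoding, and a small, stateless, tamper-free CPU decodes it afresh on every invocation. All names below (nmc_encode, nmc_decode, UntrustedDisk, compile_keyed_primitive, cpu_invoke) and their placeholder bodies are hypothetical.

        def nmc_encode(data):
            # Placeholder for a real (continuous) non-malleable encoding.
            return data

        def nmc_decode(codeword):
            # Placeholder; a real decoder returns None when tampering is detected.
            return codeword

        class UntrustedDisk:
            """Storage the adversary may read (leakage) or overwrite (tampering)."""
            def __init__(self):
                self.cells = {}
            def read(self, addr):
                return self.cells.get(addr)
            def write(self, addr, value):
                self.cells[addr] = value

        def compile_keyed_primitive(primitive, key, disk):
            # The "compiler": put the encoded key and the program itself on the disk.
            disk.write("key", nmc_encode(key))
            disk.write("program", primitive)

        def cpu_invoke(disk, x):
            # Constant-size, tamper-free CPU with no persistent secret registers.
            key = nmc_decode(disk.read("key"))
            program = disk.read("program")
            if key is None or program is None:
                return "self-destruct"        # abort if tampering was detected
            return program(key, x)

        disk = UntrustedDisk()
        compile_keyed_primitive(lambda k, x: len(k) + len(x), b"secret key", disk)
        print(cpu_invoke(disk, b"input"))     # 15

    The point of this shape is that only the fixed CPU loop needs to be trusted; everything specific to the primitive, including its code and its encoded key, sits on the tamperable disk.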

    The Mystro system: A comprehensive translator toolkit

    Mystro is a system that facilitates the construction of compilers, assemblers, code generators, query interpreters, and similar programs. It provides features to encourage the use of iterative enhancement. Mystro was developed in response to the needs of NASA Langley Research Center (LaRC) and enjoys a number of advantages over similar systems. Other available translator-building tools typically build parser tables and usually supply the source of a parser and parts of a lexical analyzer, but provide little or no aid for code generation; in general, only the front end of the compiler is addressed. Mystro, on the other hand, emphasizes tools for both ends of a compiler.
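
    For orientation only, the two "ends" of a translator referred to above can be pictured with a toy pipeline; this is a generic Python illustration, not Mystro's actual interface, and the grammar and function names are invented for the example.

        import re

        def lex(src):
            # Front end, step 1: tokenize integers and '+' for a toy grammar.
            return re.findall(r"\d+|\+", src)

        def parse(tokens):
            # Front end, step 2: build a small AST for left-associative addition.
            ast = ("num", int(tokens[0]))
            for i in range(1, len(tokens), 2):
                ast = ("add", ast, ("num", int(tokens[i + 1])))
            return ast

        def codegen(ast):
            # Back end: emit instructions for a toy stack machine.
            if ast[0] == "num":
                return [("PUSH", ast[1])]
            return codegen(ast[1]) + codegen(ast[2]) + [("ADD",)]

        print(codegen(parse(lex("1+2+3"))))
        # [('PUSH', 1), ('PUSH', 2), ('ADD',), ('PUSH', 3), ('ADD',)]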

    Continuously non-malleable codes with split-state refresh

    Non-malleable codes for the split-state model allow a message to be encoded into two parts such that arbitrary independent tampering on each part, followed by decoding of the modified codeword, yields either the original message or a completely unrelated value. Continuously non-malleable codes further tolerate an unbounded (polynomial) number of tampering attempts, until a decoding error happens. The drawback is that, after an error happens, the system must self-destruct and stop working, as otherwise generic attacks become possible. In this paper we propose a solution to this limitation by leveraging a split-state refreshing procedure. Namely, whenever a decoding error happens, the two parts of an encoding can be locally refreshed (i.e., without any interaction), which avoids the need for the self-destruct mechanism. An additional feature of our security model is that it directly captures security against continual leakage attacks. We give an abstract framework for building such codes in the common reference string model, and provide a concrete instantiation based on the external Diffie-Hellman assumption. Finally, we explore applications in which our notion turns out to be essential. The first application is a signature scheme tolerating an arbitrary polynomial number of split-state tampering attempts, without requiring a self-destruct capability, in a model where the memory is refreshed only after an invalid output is produced. This circumvents an impossibility result from a recent work by Fujisaki and Xagawa (Asiacrypt 2016). The second application is a compiler for tamper-resilient RAM programs. In comparison to other tamper-resilient compilers, ours has several advantages, among which the fact that, for the first time, it does not rely on the self-destruct feature.
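
    The interface this notion revolves around (encode, decode, and a local refresh of the two halves) can be sketched as below. This is only a shape illustration with invented names: the XOR sharing used here has no non-malleability at all, whereas the paper's concrete instantiation works in the common reference string model under the external Diffie-Hellman assumption.

        import os

        def encode(msg):
            # Toy split-state "encoding": XOR secret sharing into two halves.
            left = os.urandom(len(msg))
            right = bytes(a ^ b for a, b in zip(left, msg))
            return left, right

        def decode(left, right):
            if len(left) != len(right):
                return None                   # decoding error
            return bytes(a ^ b for a, b in zip(left, right))

        def refresh(left, right):
            # Local, non-interactive re-randomization of the two halves.
            r = os.urandom(len(left))
            return (bytes(a ^ b for a, b in zip(left, r)),
                    bytes(a ^ b for a, b in zip(right, r)))

        left, right = encode(b"key")
        left, right = refresh(left, right)    # refreshing preserves the message
        assert decode(left, right) == b"key"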

    A Swiss Pocket Knife for Computability

    This research is about operational- and complexity-oriented aspects of the classical foundations of computability theory. The approach is to re-examine some classical theorems and constructions, but with new criteria for success that are natural from a programming language perspective. Three cornerstones of computability theory are the S-m-n theorem, Turing's "universal machine", and Kleene's second recursion theorem. In today's programming language parlance these are, respectively, partial evaluation, self-interpretation, and reflection. In retrospect it is fascinating that Kleene's 1938 proof is constructive and in essence builds a self-reproducing program. Computability theory originated in the 1930s, long before the invention of computers and programs; its emphasis was on delimiting the boundaries of computability. Some milestones include 1936 (Turing), 1938 (Kleene), 1967 (isomorphism of programming languages), 1985 (partial evaluation), 1989 (theory implementation), 1993 (efficient self-interpretation) and 2006 (term register machines). The "Swiss pocket knife" of the title is a programming language that allows efficient computer implementation of all three computability cornerstones, emphasising the third: Kleene's second recursion theorem. We describe experiments with a tree-based computational model aiming for both fast program generation and fast execution of the generated programs.
    Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
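
    The self-reproducing program behind Kleene's second recursion theorem can be illustrated, outside the paper's tree-based model, by a standard quine; the Python snippet below is such an illustration, not the paper's construction.

        # A self-reproducing program (quine): the construction at the core of
        # Kleene's second recursion theorem.  Replacing `print` with another
        # function yields a program that computes on its own source text.
        s = 's = %r\nprint(s %% s)'
        print(s % s)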

    Action semantics in retrospect

    This paper is a themed account of the action semantics project, which Peter Mosses has led since the 1980s. It explains his motivations for developing action semantics, the inspirations behind its design, and the foundations of action semantics based on unified algebras. It goes on to outline some applications of action semantics to the description of real programming languages, and some efforts to implement programming languages using action-semantics-directed compiler generation. It concludes by outlining more recent developments and reflecting on the success of the action semantics project.