
    Elaborating Inductive Definitions

    We present an elaboration of inductive definitions down to a universe of datatypes. The universe of datatypes is an internal presentation of strictly positive families within type theory. By elaborating an inductive definition -- a syntactic artifact -- to its code -- its semantics -- we obtain an internalized account of inductives inside the type theory itself: we claim that reasoning about inductive definitions can be carried out in the type theory, rather than in the meta-theory as is usually the case. In addition, we give a formal specification of the elaboration process, so it is itself amenable to formal reasoning. We prove the soundness of our translation and hint at its correctness with respect to Coq's Inductive definitions. The practical benefits of this approach are numerous. For the type theorist, this is a small step toward bootstrapping, i.e. implementing the inductive fragment in the type theory itself. For the programmer, this means better support for generic programming: we present a lightweight deriving mechanism, entirely definable by the programmer and therefore not requiring any extension to the type theory. (Comment: 32 pages, technical report.)
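    As a rough illustration of what a universe of datatype codes looks like, the sketch below renders a drastically simplified, non-indexed version in Haskell; the paper's universe lives inside type theory and covers strictly positive families. All names (`Code`, `Interp`, `Mu`, `gsize`) are illustrative, not taken from the paper.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, ScopedTypeVariables #-}

-- A toy universe of codes for strictly positive, non-indexed datatypes:
-- 'End' closes a constructor, 'Rec' marks a recursive field, 'Sum' is a
-- choice between two constructors.
data Code = End | Rec Code | Sum Code Code

-- Interpretation of a code, with 'x' standing for recursive positions.
data Interp (c :: Code) x where
  IEnd :: Interp 'End x
  IRec :: x -> Interp c x -> Interp ('Rec c) x
  IL   :: Interp a x -> Interp ('Sum a b) x
  IR   :: Interp b x -> Interp ('Sum a b) x

-- Least fixpoint: the datatype denoted by a code.
newtype Mu (c :: Code) = In (Interp c (Mu c))

-- The code of the natural numbers: zero | suc (rec).
type NatCode = 'Sum 'End ('Rec 'End)

zeroN :: Mu NatCode
zeroN = In (IL IEnd)

sucN :: Mu NatCode -> Mu NatCode
sucN n = In (IR (IRec n IEnd))

-- A generic program over any code, the kind of thing a user-defined
-- deriving mechanism could produce: count the constructors of a value.
gsize :: forall c. Mu c -> Int
gsize (In t) = go t
  where
    go :: Interp a (Mu c) -> Int
    go IEnd       = 1
    go (IRec x r) = gsize x + go r
    go (IL l)     = go l
    go (IR r)     = go r
```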

    Transporting Functions across Ornaments

    Programming with dependent types is a blessing and a curse. It is a blessing to be able to bake invariants into the definition of data-types: we can finally write correct-by-construction software. However, this extreme accuracy is also a curse: a data-type is the combination of a structuring medium together with a special-purpose logic. These domain-specific logics hamper any effort at code reuse among similarly structured data. In this paper, we exorcise our data-types by adapting the notion of ornament to our universe of inductive families. We then show how code reuse can be achieved by ornamenting functions. Using these functional ornaments, we capture the relationship between functions such as the addition of natural numbers and the concatenation of lists. With this knowledge, we demonstrate how the implementation of the former informs the implementation of the latter: the user can ask for the definition of addition to be lifted to lists, and she will only be asked for the details necessary to carry on adding lists rather than numbers. Our presentation is formalised in a type theory with a universe of data-types, and all our constructions have been implemented as generic programs, requiring no extension to the type theory.
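    The connection the paper captures mechanically can be spelled out by hand in plain Haskell. The sketch below (illustrative names, not the paper's generic constructions) shows the shared recursion pattern between addition and concatenation that a functional ornament records.

```haskell
-- Natural numbers and lists share one recursive skeleton: a list is a
-- number whose successor nodes carry an extra element (the "ornament").
data Nat    = Zero | Suc Nat
data List a = Nil  | Cons a (List a)

-- Addition on the bare skeleton...
plus :: Nat -> Nat -> Nat
plus Zero    n = n
plus (Suc m) n = Suc (plus m n)

-- ...and its lifting to the ornamented type: append follows the same
-- recursion, and the only genuinely new decision is what to do with the
-- carried element (here: keep it). In the paper this lifting is derived
-- from 'plus' and the ornament, with the user supplying only that detail.
append :: List a -> List a -> List a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

-- Forgetting the ornament relates the two functions:
--   len (append xs ys) == plus (len xs) (len ys)
len :: List a -> Nat
len Nil         = Zero
len (Cons _ xs) = Suc (len xs)
```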

    Normalization by realizability also evaluates

    For those of us who generally live in the world of syntax, semantic proof methods such as realizability or logical relations / parametricity sometimes feel like magic. Why do they work? At which point in the proof is "the real work" done? Bernardy and Lasson express realizability and parametricity models as syntactic models -- but the abstraction/adequacy theorems are still explained as meta-level proofs. Hoping to better understand the proof technique, we look at those theorems as programs themselves. How does a normalization proof using realizability actually compute normal forms? This detective work is at an early stage, and we propose a first attempt in a simple setting. Instead of arbitrary Pure Type Systems, we use minimal negative propositional logic (arrows only). Instead of starting from the simply-typed lambda-calculus, we work on sequent-style terms in a simple subset of the Curien-Herbelin L calculus.
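    For a feel of how "a proof that computes" looks in the arrows-only setting, here is a standard normalization-by-evaluation sketch for closed terms in Haskell. It is only an analogy: the paper works with sequent-style terms of the Curien-Herbelin L calculus and extracts the computational content of a realizability proof, whereas the names and the direct-style presentation below are ours.

```haskell
-- Arrows-only terms with de Bruijn indices.
data Tm = Var Int | Lam Tm | App Tm Tm
  deriving Show

-- Semantic values: functions of the meta-language, or stuck neutrals.
data Val     = VLam (Val -> Val) | VNe Neutral
data Neutral = NVar Int | NApp Neutral Val

eval :: [Val] -> Tm -> Val
eval env (Var i)   = env !! i
eval env (Lam t)   = VLam (\v -> eval (v : env) t)
eval env (App t u) = apply (eval env t) (eval env u)

apply :: Val -> Val -> Val
apply (VLam f) v = f v
apply (VNe n)  v = VNe (NApp n v)

-- Reading a value back as a normal form; 'k' counts enclosing binders,
-- so fresh variables are de Bruijn levels converted back to indices.
quote :: Int -> Val -> Tm
quote k (VLam f) = Lam (quote (k + 1) (f (VNe (NVar k))))
quote k (VNe n)  = quoteNe k n

quoteNe :: Int -> Neutral -> Tm
quoteNe k (NVar l)   = Var (k - l - 1)
quoteNe k (NApp n v) = App (quoteNe k n) (quote k v)

-- For closed terms, the "proof read as a program" is just this pipeline.
normalize :: Tm -> Tm
normalize t = quote 0 (eval [] t)
```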

    From Sets to Bits in Coq

    Computer Science abounds in folktales about how — in the early days of computer programming — bit vectors were ingeniously used to encode and manipulate finite sets. Algorithms have thus been developed to minimize memory footprint and maximize efficiency by taking advantage of microarchitectural features. With the development of automated and interactive theorem provers, finite sets have also made their way into the libraries of formalized mathematics. Tailored to ease proving, these representations are designed for symbolic manipulation rather than computational efficiency. This paper aims to bridge this gap. In the Coq proof assistant, we implement a bitset library and prove its correctness with respect to a formalization of finite sets. Our library enables a seamless interaction between sets for computing — bitsets — and sets for proving — finite sets.
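    The flavour of the computational side can be sketched in a few lines of Haskell over a 64-bit word. The function names are ours; the actual library is written and verified in Coq and is not limited to this toy signature.

```haskell
import Data.Bits ((.&.), (.|.), complement, popCount, setBit, testBit)
import Data.Word (Word64)

-- A finite set over {0..63} as one machine word: element i is present
-- exactly when bit i is set.
type BitSet = Word64

empty :: BitSet
empty = 0

insert :: Int -> BitSet -> BitSet
insert i s = setBit s i

member :: Int -> BitSet -> Bool
member i s = testBit s i

union, intersection, difference :: BitSet -> BitSet -> BitSet
union          = (.|.)
intersection   = (.&.)
difference s t = s .&. complement t

cardinality :: BitSet -> Int
cardinality = popCount

-- The Coq development proves refinement statements of this flavour,
--   member i (union s t) == (member i s || member i t),
-- relating each bitset operation to its finite-set counterpart.
```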

    Vérification de la génération modulaire du code impératif pour Lustre (Verification of the modular generation of imperative code for Lustre)

    Synchronous languages are used to program control software for critical applications. The Scade language, used in industry for such applications, is based on the Lustre language introduced by Caspi and Halbwachs. Here we are interested in the formalization and proof, in the Coq proof assistant, of a key compilation step: the translation of Lustre programs into programs of an imperative language. The challenge is to go from a synchronous dataflow semantics, where a program manipulates streams, to an imperative semantics, where a program manipulates memory sequentially. We specify and verify a simple code generator that handles the main features of Lustre: sampling, nodes, and delays. The proof uses an intermediate semantic model that mixes dataflow and imperative features and makes it possible to define an essential inductive invariant. We use the proposed formalization to verify a classic optimization that fuses conditional structures in the generated imperative code.
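    A minimal sketch of the semantic gap in question, in Haskell rather than in the paper's Coq development: the same counter is first written as a stream (the dataflow view) and then as a step function over an explicit memory (the imperative view the generator produces). Names are illustrative.

```haskell
-- Dataflow view: the Lustre equation  cpt = 0 fby (cpt + 1)  read as a
-- stream definition (a toy rendition, not the compiler's semantics).
counterStream :: [Int]
counterStream = 0 : map (+ 1) counterStream

-- Imperative view after compilation: an explicit memory plus a step
-- function that computes one instant and updates that memory, which is
-- the shape of code the verified generator targets.
newtype Mem = Mem { cpt :: Int }

step :: Mem -> (Int, Mem)
step (Mem c) = (c, Mem (c + 1))

-- The correctness statement relates the two views instant by instant:
--   run n == take n counterStream
run :: Int -> [Int]
run n = take n (outputs (Mem 0))
  where outputs m = let (o, m') = step m in o : outputs m'
```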

    Fully Abstract Compilation to JavaScript

    Many tools allow programmers to develop applications in high-level languages and deploy them in web browsers via compilation to JavaScript (JS). While practical and widely used, these compilers are ad hoc: no guarantee is provided of their correctness for whole programs, nor of their security for programs executed within arbitrary JS contexts. This paper presents a compiler with such guarantees. We compile an ML-like language with higher-order functions and references down to JS, while preserving all source program properties. Relying on type-based invariants and applicative bisimilarity, we show full abstraction: two programs are equivalent in all source contexts if and only if their wrapped translations are equivalent in all JS contexts. We evaluate our compiler on sample programs, including a series of secure libraries.
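    The role of the wrappers can be pictured with a toy model of untyped target values. This is only a sketch of the idea of type-directed defensive wrapping, with made-up names, not the paper's actual translation.

```haskell
-- A toy model of untyped target-language values (think JavaScript).
data JSVal
  = JNum Double
  | JFun (JSVal -> JSVal)
  | JUndef

-- Exporting a typed source function naively would let an arbitrary
-- context probe it with ill-typed arguments and tell implementations
-- apart by how they misbehave. A type-directed wrapper answers all such
-- probes uniformly; the paper's construction inserts (more elaborate)
-- wrappers of this kind and proves full abstraction via bisimilarity.
exportNumToNum :: (Double -> Double) -> JSVal
exportNumToNum f = JFun check
  where
    check (JNum n) = JNum (f n)
    check _        = JUndef   -- uniform response to ill-typed calls
```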

    Prediction of alternative isoforms from exon expression levels in RNA-Seq experiments

    Alternative splicing, polyadenylation of pre-messenger RNA molecules and differential promoter usage can produce a variety of transcript isoforms whose respective expression levels are regulated in time and space, thus contributing to specific biological functions. However, the repertoire of mammalian alternative transcripts and their regulation are still poorly understood. Second-generation sequencing is now opening unprecedented routes to address the analysis of entire transcriptomes. Here, we developed methods that allow the prediction and quantification of alternative isoforms derived solely from exon expression levels in RNA-Seq data. These are based on an explicit statistical model and enable the prediction of alternative isoforms within or between conditions using any known gene annotation, as well as the relative quantification of known transcript structures. Applying these methods to a human RNA-Seq dataset, we validated a significant fraction of the predictions by RT-PCR. The data further showed that these predictions correlated well with information originating from junction reads. A direct comparison with exon arrays indicated improved performance of RNA-Seq over microarrays in the prediction of skipped exons. Altogether, the set of methods presented here comprehensively addresses multiple aspects of alternative isoform analysis. The software is available as an open-source R package called Solas at http://cmb.molgen.mpg.de/2ndGenerationSequencing/Solas/
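    To make the underlying idea concrete, here is a deliberately naive toy, not the paper's statistical model (which is implemented in the Solas R package): with a known annotation, each exon's expected coverage is a sum of the abundances of the isoforms containing it, so abundances fall out of a small linear system. All names and numbers are invented for illustration.

```haskell
-- Toy quantification: isoform A uses exons {1,2,3}, isoform B skips
-- exon 2 and uses {1,3}. Expected exon coverage is then
--   c1 = a + b,  c2 = a,  c3 = a + b,
-- so (a, b) can be read off directly. Real data needs the paper's
-- statistical model; this is only the back-of-the-envelope version.
estimateAbundances :: (Double, Double, Double) -> (Double, Double)
estimateAbundances (c1, c2, c3) = (a, b)
  where
    a = c2                                  -- only isoform A covers exon 2
    b = max 0 (((c1 - a) + (c3 - a)) / 2)   -- average residual on exons 1, 3

-- Example: exon coverages (100, 40, 96) give roughly a = 40.0, b = 58.0.
```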

    A Formally Verified Compiler for Lustre

    The correct compilation of block-diagram languages like Lustre, Scade, and a discrete subset of Simulink is important since they are used to program critical embedded control software. We describe the specification and verification, in an interactive theorem prover, of a compilation chain that treats the key aspects of Lustre: sampling, nodes, and delays. Building on CompCert, we show that repeated execution of the generated assembly code faithfully implements the dataflow semantics of source programs. We resolve two key technical challenges. The first is the change from a synchronous dataflow semantics, where programs manipulate streams of values, to an imperative one, where computations manipulate memory sequentially. The second is the verified compilation of an imperative language with encapsulated state to C code where the state is realized by nested records. We also treat a standard control optimization that eliminates unnecessary conditional statements.
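    Continuing the counter sketch from the Lustre code-generation entry above, the second challenge can be pictured as follows: when a node instantiates another node, the callee's memory is nested inside the caller's state record, just as the generated C code nests structs. This is illustrative Haskell with invented names, not the compiler's intermediate language.

```haskell
-- A leaf node and its step function over its own memory.
newtype CounterSt = CounterSt { count :: Int }

counterStep :: CounterSt -> (Int, CounterSt)
counterStep (CounterSt c) = (c, CounterSt (c + 1))

-- A node that instantiates one counter: its state embeds the counter's
-- state, mirroring the nested records produced by the verified back end.
newtype DoubleSt = DoubleSt { sub :: CounterSt }

doubleStep :: DoubleSt -> (Int, DoubleSt)
doubleStep (DoubleSt s) = let (o, s') = counterStep s
                          in (2 * o, DoubleSt s')
```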

    SKIVA: Flexible and Modular Side-channel and Fault Countermeasures

    We describe SKIVA, a customized 32-bit processor enabling the design of software countermeasures for a broad range of implementation attacks covering fault injection and side-channel analysis of timing-based and power-based leakage. We design the countermeasures as variants of bitslice programming. Our protection scheme is flexible and modular, allowing us to combine higher-order masking -- fending off side-channel analysis -- with complementary spatial and temporal redundancy -- protecting against fault injection. Multiple configurations of side-channel and fault protection enable the programmer to select the desired number of shares and the desired redundancy level for each slice. Recurring and security-sensitive operations are supported in hardware through a custom instruction-set extension. The new instructions support bitslicing, secret-share generation, redundant logic computation, and fault detection. We demonstrate and analyze multiple versions of AES from a side-channel analysis and a fault-injection perspective, in addition to providing a detailed performance evaluation of the protected designs.
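    The two ingredients named here, bitslicing and masking, can be sketched in a few lines of Haskell. This is a conceptual illustration with our own names, not SKIVA's instruction-level implementation.

```haskell
import Data.Bits (xor, (.&.))
import Data.Word (Word32)

-- Bitslicing: each Word32 holds the same bit position of 32 independent
-- blocks, so one bitwise instruction acts as 32 parallel logic gates.
andSliced :: Word32 -> Word32 -> Word32
andSliced = (.&.)

-- First-order Boolean masking on top of slicing: a secret x is split
-- into shares (x0, x1) with x == x0 `xor` x1, and a nonlinear gate needs
-- fresh randomness r so that no single intermediate value correlates
-- with the secret. This is the textbook ISW-style 2-share AND, not
-- SKIVA's exact hardware-assisted pipeline.
maskedAnd :: Word32                 -- fresh randomness r
          -> (Word32, Word32)       -- shares of a
          -> (Word32, Word32)       -- shares of b
          -> (Word32, Word32)       -- shares of a AND b
maskedAnd r (a0, a1) (b0, b1) =
  ( (a0 .&. b0) `xor` r
  , (a1 .&. b1) `xor` r `xor` (a0 .&. b1) `xor` (a1 .&. b0) )
```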

    Custom Instruction Support for Modular Defense against Side-channel and Fault Attacks

    The design of software countermeasures against active and passive adversaries is a challenging problem that has been addressed by many authors in recent years. The proposed solutions adopt a theoretical foundation (such as a leakage model) but often do not offer concrete reference implementations to validate the foundation. Contributing to the experimental dimension of this body of work, we propose a customized processor called SKIVA that supports experiments with the design of countermeasures against a broad range of implementation attacks. Based on bitslice programming and recent advances in the literature, SKIVA offers a flexible and modular combination of countermeasures against power-based and timing-based side-channel leakage and fault injection. Multiple configurations of side-channel protection and fault protection enable the programmer to select the desired number of shares and the desired redundancy level for each slice. Recurring and security-sensitive operations are supported in hardware through custom instruction-set extensions. The new instructions support bitslicing, secret-share generation, redundant logic computation, and fault detection. We demonstrate and analyze multiple versions of AES from a side-channel analysis and a fault-injection perspective, in addition to providing a detailed performance evaluation of the protected designs. To our knowledge, this is the first validated end-to-end implementation of a modular bitslice-oriented countermeasure.
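    Complementing the masking sketch above, the fault-protection side rests on redundant encodings whose invariants can be checked after computation. The sketch below shows complemented duplication within a single word; it is our own illustration of the general idea, not SKIVA's exact redundant encoding.

```haskell
import Data.Bits ((.&.), (.|.), complement, shiftL, shiftR)
import Data.Word (Word16, Word32)

-- Spatial redundancy by complemented duplication: a 16-bit slice is
-- stored next to its bitwise complement in one 32-bit word, so a fault
-- flipping bits in either copy breaks the invariant  hi == complement lo
-- and can be detected before the result is used.
encodeRedundant :: Word16 -> Word32
encodeRedundant x =
  (fromIntegral (complement x) `shiftL` 16) .|. fromIntegral x

checkRedundant :: Word32 -> Maybe Word16
checkRedundant w
  | hi == complement lo = Just lo
  | otherwise           = Nothing          -- fault detected
  where
    lo = fromIntegral (w .&. 0xFFFF)  :: Word16
    hi = fromIntegral (w `shiftR` 16) :: Word16
```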