72 research outputs found

    Encoding Types in ML-like Languages (Preliminary Version)

    A Hindley-Milner type system such as ML's seems to prohibit type-indexed values, i.e., functions that map a family of types to a family of values. Such functions generally perform case analysis on the input types and return values of possibly different types. The goal of our work is to demonstrate how to program with type-indexed values within a Hindley-Milner type system. Our first approach is to interpret an input type as its corresponding value, recursively. This solution is type-safe, in the sense that the ML type system statically prevents any mismatch between the input type and function arguments that depend on this type. Such specific type interpretations, however, prevent us from combining different type-indexed values that share the same type. To meet this objection, we focus on finding a value-independent type encoding that can be shared by different functions. We propose and compare two solutions. One requires first-class and higher-order polymorphism, and, thus, is not implementable in the core language of ML, but it can be programmed using higher-order functors in Standard ML of New Jersey. Its usage, however, is clumsy. The other approach uses embedding/projection functions. It appears to be more practical. We demonstrate the usefulness of type-indexed values through examples including type-directed partial evaluation, C printf-like formatting, and subtype coercions. Finally, we discuss the tradeoffs between our approach and some other solutions based on more expressive typing disciplines.
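    As a rough illustration of the embedding/projection approach the abstract favours, here is a minimal sketch in OCaml rather than Standard ML; the names univ, rep, embed, project, arrow, and show are illustrative, not the paper's. Each type is represented by a pair of coercions into and out of a universal datatype, and a type-indexed value such as a generic show is then case analysis on that datatype:

        (* Sketch: a universal type and embedding/projection pairs. *)
        type univ =
          | UInt of int
          | UStr of string
          | UFun of (univ -> univ)

        (* A representation of type ['a]: coercions to and from [univ]. *)
        type 'a rep = { embed : 'a -> univ; project : univ -> 'a }

        let int_rep : int rep =
          { embed = (fun n -> UInt n);
            project = (function UInt n -> n | _ -> failwith "type mismatch") }

        let str_rep : string rep =
          { embed = (fun s -> UStr s);
            project = (function UStr s -> s | _ -> failwith "type mismatch") }

        (* Function types wrap argument/result coercions around the function. *)
        let arrow (a : 'a rep) (b : 'b rep) : ('a -> 'b) rep =
          { embed = (fun f -> UFun (fun x -> b.embed (f (a.project x))));
            project = (function
              | UFun f -> (fun x -> b.project (f (a.embed x)))
              | _ -> failwith "type mismatch") }

        (* A type-indexed value: generic printing by cases on [univ]. *)
        let show (r : 'a rep) (x : 'a) : string =
          match r.embed x with
          | UInt n -> string_of_int n
          | UStr s -> "\"" ^ s ^ "\""
          | UFun _ -> "<fun>"

        let () =
          print_endline (show int_rep 42);       (* 42 *)
          print_endline (show str_rep "hello")   (* "hello" *)

    Because the representation is value-independent, the same rep value can drive several type-indexed functions (printing, equality, formatting), which is exactly the sharing property the abstract says the first, value-specific interpretation lacks.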

    Structural foundations for differentiable programming

    This dissertation supports the broader thesis that categorical semantics is a powerful tool to study and design programming languages. It focuses on the foundational aspects of differentiable programming in a simply typed functional setting. Although most of the category theory used can be boiled down to a more elementary presentation, its influence was certainly key in obtaining the results presented in this dissertation: the conciseness of certain proofs and the compactness of certain definitions and insights were made easier by my background in category theory.

    Backpropagation is the key algorithm that allows fast learning on neural networks, and it enabled some of the impressive recent advances in machine learning. With models of increasing complexity, equally complex data structures are required, which calls for the ability to go beyond standard differentiability. This emerging generalization has been coined differentiable programming. The idea is to allow users to write expressive programs representing (a generalization of) differentiable functions, whose gradient computation can be automated using automatic differentiation (AD). In this dissertation, I lay some foundations for differentiable programming. This is done in three ways.

    Firstly, I present a simple higher-order functional language and define automatic differentiation as a structure-preserving program transformation. The language is given a denotational semantics using diffeological spaces, and the transformation is shown to be correct, i.e. AD produces programs that do compute gradients of the original programs, via a logical relations argument.

    Secondly, I extend this language with new expressive program constructs such as conditionals and recursion. In such a setting, even first-order programs may represent functions that need not be differentiable. I introduce better-fitted denotational semantics for such a language, show how to extend AD to this setting, and establish which guarantees about AD still hold. This extended language models the more realistic needs in expressiveness found in the literature, e.g. in modern probabilistic programming languages.

    Thirdly, I present detailed applications of the developed theory. I first give a general recipe for extending AD to non-trivial new types and new primitives. I then show how the guarantees about AD are sufficient for certain applications, such as the change of variable formula of stochastic gradient descent, but may not be sufficient, for instance, for simple gradient descent. Finally, more applications in the specific context of probabilistic programming are explored. First, a denotational proof is given that the trace semantics of a probabilistic program is almost everywhere differentiable. Second, a characterization of the posterior distributions of probabilistic programs valued in Euclidean spaces is obtained: they have densities with respect to some sum-of-Hausdorff measure on a countable union of smooth manifolds.

    Overall, these contributions give us better insights into differentiable programming. They form a foundational setting to study the differentiability-like properties of realistic complex programs, beyond usual settings such as differentiability or convexity, and they give general recipes to prove properties of such programs and to modularly extend automatic differentiation to richer contexts with new types and primitives.
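    To make the first contribution concrete: forward-mode AD can be pictured as reinterpreting each arithmetic primitive so that it propagates a derivative alongside its value. The OCaml sketch below uses dual numbers for this; it is only an operational analogy for the structure-preserving source transformation studied in the dissertation, and all names here are illustrative.

        (* Sketch: forward-mode AD with dual numbers.  Each number carries
           its value [v] and its derivative [d] w.r.t. a chosen input. *)
        type dual = { v : float; d : float }

        let const c = { v = c; d = 0.0 }
        let var x   = { v = x; d = 1.0 }   (* seed: dx/dx = 1 *)

        (* Each primitive is reinterpreted to also propagate derivatives. *)
        let add a b = { v = a.v +. b.v; d = a.d +. b.d }
        let mul a b = { v = a.v *. b.v; d = (a.d *. b.v) +. (a.v *. b.d) }
        let sin_d a = { v = sin a.v; d = a.d *. cos a.v }

        (* d/dx (x * sin x) = sin x + x * cos x, evaluated at x = 2.0 *)
        let () =
          let x = var 2.0 in
          let y = mul x (sin_d x) in
          Printf.printf "value = %.6f  derivative = %.6f\n" y.v y.d

    A conditional such as if x > 0.0 then x else 0.0 already shows why the extended language of the second contribution needs different semantics and weaker guarantees: the denoted function may fail to be differentiable at the branch point.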

    Combining behavioural types with security analysis

    Today's software systems are highly distributed and interconnected, and they increasingly rely on communication to achieve their goals; given their societal importance, security and trustworthiness are crucial to the correctness of these systems. Behavioural types, which extend data types by also describing the structured behaviour of programs, are a widely studied approach to enforcing correctness properties in communicating systems. This paper offers a unified overview of proposals based on behavioural types that are aimed at the analysis of security properties.
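    As a small illustration of the idea (a hypothetical OCaml sketch, not taken from the paper), a behavioural type can thread a protocol state through a channel's type, so that the type checker rejects communications that violate the intended order, e.g. querying before authenticating:

        (* Sketch: protocol states as phantom types.  The module seal makes
           [idle chan] and [authed chan] incompatible types, so the compiler
           enforces the login-before-query discipline. *)
        type idle
        type authed
        type closed

        module Session : sig
          type 'state chan
          val connect : unit -> idle chan
          val login   : string -> idle chan -> authed chan
          val query   : string -> authed chan -> string * authed chan
          val close   : authed chan -> closed chan
        end = struct
          type 'state chan = unit          (* dummy carrier for the sketch *)
          let connect () = ()
          let login _user () = ()
          let query q () = ("result for " ^ q, ())
          let close () = ()
        end

        (* Well-typed: follows the protocol connect; login; query; close. *)
        let () =
          let c = Session.connect () in
          let c = Session.login "alice" c in
          let r, c = Session.query "status" c in
          print_endline r;
          ignore (Session.close c)

        (* Rejected by the type checker (query on an [idle chan]):
           let c = Session.connect () in
           Session.query "status" c *)

    Security disciplines such as authentication-before-access fit naturally into this style of typing, which is the kind of connection between behavioural types and security analysis that the overview surveys.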