70 research outputs found

    Towards a software product line of trie-based collections

    Collection data structures in standard libraries of programming languages are designed to excel for the average case by carefully balancing memory footprint and runtime performance. These implicit design decisions and hard-coded trade-offs constrain users from using an optimal variant for a given problem. Although a wide range of specialized collections is available for the Java Virtual Machine (JVM), they introduce yet another dependency and complicate user adoption by requiring specific Application Programming Interfaces (APIs) incompatible with the standard library. A product line for collection data structures would relieve library designers from optimizing for the general case. Furthermore, a product line allows evolving the potentially large code base of a collection family efficiently. The challenge is to find a small core framework for collection data structures that covers all variations without exhaustively listing them, while supporting good performance at the same time. We claim that the concept of Array Mapped Tries (AMTs) embodies a high degree of commonality in the sub-domain of immutable collection data structures. AMTs are flexible enough to cover most of the variability, while minimizing code bloat in the generator and the generated code. We implemented a Data Structure Code Generator (DSCG) that emits immutable collections based on an AMT skeleton foundation. The generated data structures outperform competitive hand-optimized implementations, and the generator still allows for customization towards specific workloads.
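
    To make the Array Mapped Trie idea concrete, the sketch below shows the node shape and bitmap-indexed lookup such tries rely on; the names are illustrative, not the DSCG's actual API, and real implementations use compact arrays rather than lists. Each node keeps a 32-bit bitmap plus a dense sequence of children, and 5-bit slices of the hash select a slot at each level.

        import Data.Bits (popCount, shiftL, shiftR, testBit, (.&.))
        import Data.Word (Word32)

        -- Illustrative AMT node: a bitmap records which of the 32 logical
        -- slots are occupied; children are stored densely in slot order.
        data AMT v
          = Empty
          | Leaf Int v            -- full hash and associated value
          | Node Word32 [AMT v]

        lookupAMT :: Int -> AMT v -> Maybe v
        lookupAMT h = go 0
          where
            go _ Empty = Nothing
            go _ (Leaf h' v)
              | h == h'   = Just v
              | otherwise = Nothing
            go sh (Node bitmap children)
              | testBit bitmap slot = go (sh + 5) (children !! idx)
              | otherwise           = Nothing
              where
                slot = (h `shiftR` sh) .&. 0x1F          -- next 5 hash bits
                -- dense index: number of occupied slots below this one
                idx  = popCount (bitmap .&. ((1 `shiftL` slot) - 1))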

    A principled approach to programming with nested types in Haskell

    Initial algebra semantics is one of the cornerstones of the theory of modern functional programming languages. For each inductive data type, it provides a Church encoding for that type, a build combinator which constructs data of that type, a fold combinator which encapsulates structured recursion over data of that type, and a fold/build rule which optimises modular programs by eliminating from them data constructed using the build combinator and immediately consumed using the fold combinator for that type. It has long been thought that initial algebra semantics is not expressive enough to provide a similar foundation for programming with nested types in Haskell. Specifically, the standard folds derived from initial algebra semantics have been considered too weak to capture commonly occurring patterns of recursion over data of nested types in Haskell, and no build combinators or fold/build rules have until now been defined for nested types. This paper shows that standard folds are, in fact, sufficiently expressive for programming with nested types in Haskell. It also defines build combinators and fold/build fusion rules for nested types. It thus shows how initial algebra semantics provides a principled, expressive, and elegant foundation for programming with nested types in Haskell.
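
    As a generic illustration of the kind of nested type and standard fold the abstract refers to (this example is not taken from the paper itself), consider the classic type of perfect trees: the recursive occurrence is at the changed index (a, a), so the fold takes natural-transformation arguments.

        {-# LANGUAGE RankNTypes #-}

        -- A nested type: recursion happens at Perfect (a, a), not Perfect a.
        data Perfect a = Zero a | Succ (Perfect (a, a))

        -- The standard fold; its arguments are polymorphic in b because the
        -- recursive call returns a result at the pair type.
        foldPerfect :: (forall b. b -> f b)
                    -> (forall b. f (b, b) -> f b)
                    -> Perfect a
                    -> f a
        foldPerfect z _ (Zero a) = z a
        foldPerfect z s (Succ p) = s (foldPerfect z s p)

        -- Example use: count the elements via a constant functor.
        newtype K x a = K { unK :: x }

        size :: Perfect a -> Int
        size = unK . foldPerfect (\_ -> K 1) (\(K n) -> K (2 * n))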

    Worker/wrapper/makes it/faster

    Much research in program optimisation has focused on formal approaches to correctness: proving that the meaning of programs is preserved by the optimisation. Paradoxically, there has been comparatively little work on formal approaches to efficiency: proving that the performance of optimised programs is actually improved. This paper addresses this problem for a general-purpose optimisation technique, the worker/wrapper transformation. In particular, we use the call-by-need variant of improvement theory to establish conditions under which the worker/wrapper transformation is formally guaranteed to preserve or improve the time performance of programs in lazy languages such as Haskell.
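
    A textbook instance of the worker/wrapper transformation (illustrative only, not an example from the paper) factors a function into a thin wrapper at the original type and a worker over a more efficient type; the improvement theory described above is what lets one argue formally that the factored program is no slower under call-by-need.

        -- Original, quadratic definition.
        rev :: [a] -> [a]
        rev []     = []
        rev (x:xs) = rev xs ++ [x]

        -- Wrapper: keeps the original interface and delegates to the worker.
        rev' :: [a] -> [a]
        rev' xs = work xs []

        -- Worker: the same computation against an accumulator, now linear.
        work :: [a] -> [a] -> [a]
        work []     acc = acc
        work (x:xs) acc = work xs (x:acc)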

    Extended Call-by-Push-Value: Reasoning About Effectful Programs and Evaluation Order

    Traditionally, reasoning about programs under varying evaluation regimes (call-by-value, call-by-name, etc.) was done at the meta-level, treating them as term rewriting systems. Levy’s call-by-push-value (CBPV) calculus provides a more powerful approach for reasoning, by treating CBPV terms as a common intermediate language which captures both call-by-value and call-by-name, and by allowing equational reasoning about changes to evaluation order between or within programs. We extend CBPV to additionally deal with call-by-need, which is non-trivial because of shared reductions. This allows the equational reasoning to also support call-by-need. As an example, we then prove that call-by-need and call-by-name are equivalent if nontermination is the only side-effect in the source language. We then show how to incorporate an effect system. This enables us to exploit static knowledge of the potential effects of a given expression to augment equational reasoning; thus a program fragment might be invariant under change of evaluation regime only because of knowledge of its effects.
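
    The distinction CBPV is built on can be sketched as two syntactic categories, value terms and computation terms. The datatype below is a rough illustration under that reading; the constructor names are ours, and the call-by-need extension described in the abstract is not modelled.

        -- Values are inert data; computations are the things that run.
        data Val
          = Var String
          | Thunk Comp              -- suspend a computation as a value

        data Comp
          = Return Val              -- produce a value
          | Bind Comp String Comp   -- sequence: run M, bind its value, run N
          | Force Val               -- run a suspended computation
          | Lam String Comp         -- functions take values, yield computations
          | App Comp Val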

    Realizability Interpretation and Normalization of Typed Call-by-Need λ-calculus With Control

    We define a variant of realizability where realizers are pairs of a term and a substitution. This variant allows us to prove the normalization of a simply-typed call-by-need λ-calculus with control due to Ariola et al. Indeed, in such a call-by-need calculus, substitutions have to be delayed until it is known whether an argument is really needed. In a second step, we extend the proof to a call-by-need λ-calculus equipped with a type system equivalent to classical second-order predicate logic, representing one step towards proving the normalization of the call-by-need classical second-order arithmetic introduced by the second author to provide a proof-as-program interpretation of the axiom of dependent choice.
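
    The central device, a realizer that pairs a term with a delayed substitution, is reminiscent of a closure. The sketch below is only a loose illustration of that pairing and does not model the control operators or typing of the calculus of Ariola et al.

        data Term
          = TVar String
          | TLam String Term
          | TApp Term Term

        -- A "realizer" in this sketch: a term together with a substitution
        -- whose entries are only consulted once a variable is actually needed.
        data Closure = Closure Term Subst
        type Subst   = [(String, Closure)]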

    Automated Amortised Resource Analysis for Term Rewrite Systems

    Based on earlier work on amortised resource analysis, we establish a novel automated amortised resource analysis for term rewrite systems. The method is presented in an inference system akin to a type system and gives rise to polynomial bounds on the innermost runtime complexity of the analysed term rewrite system. Our analysis does not restrict the input rewrite system in any way. This facilitates integration in a general framework for resource analysis of programs. In particular, we have implemented the method and integrated it into our tool TCT.
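
    For a sense of what innermost runtime complexity measures, here is a toy term rewrite system for addition (purely illustrative, not drawn from the paper), shown as rewrite rules and as the corresponding Haskell definition; an amortised analysis in this style assigns potential to constructors to derive such polynomial bounds automatically.

        data Nat = Z | S Nat

        -- Rewrite rules of the TRS:
        --   plus(Z, y)    -> y
        --   plus(S(x), y) -> S(plus(x, y))
        -- An innermost reduction of plus(S^n(Z), y) takes n + 1 rewrite steps,
        -- so the innermost runtime complexity of this system is linear.
        plus :: Nat -> Nat -> Nat
        plus Z     y = y
        plus (S x) y = S (plus x y)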

    Vitamin D and its role in psoriasis: An overview of the dermatologist and nutritionist

