95,135 research outputs found

    Why functional programming really matters

    The significance of functional programming lies in the feasible approach to language extensibility that it enables, an approach that applies to programming in general and beyond. The essence of functional programming is its enablement of programmer-defined, function-valued functions. The feasibility of language extension by ordinary programmers depends upon the exclusion of interpretation in favour of direct definition, and higher-order functional programming is the key to enabling definitional rather than interpretational extensions. Functional programming thus offers the opportunity to exclude the interpretation that otherwise pervades programming in general, and may apply beyond programming to analog computing and to systems in general. Nevertheless, specific technical challenges need to be met before "totally functional programming" can realise its promises.
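
    The point about definitional rather than interpretational extension can be illustrated with a short, hedged sketch (not taken from the paper): in a higher-order functional language, a new control construct can be introduced as an ordinary function-valued definition, so its meaning is the definition itself rather than a case in some interpreter.

        -- Illustrative sketch only: a "language extension" given by direct
        -- definition as a higher-order function, with no interpreter involved.
        module DefinitionalExtension where

        -- A new iteration construct: its meaning is this definition itself.
        while :: (s -> Bool) -> (s -> s) -> s -> s
        while p step s
          | p s       = while p step (step s)
          | otherwise = s

        -- The extension is used exactly as if it were built into the language.
        sumTo :: Int -> Int
        sumTo n = snd (while (\(i, _) -> i <= n)
                             (\(i, acc) -> (i + 1, acc + i))
                             (1, 0))

        main :: IO ()
        main = print (sumTo 10)   -- 55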

    Just do it: simple monadic equational reasoning

    One of the appeals of pure functional programming is that it is so amenable to equational reasoning. One of the problems of pure functional programming is that it rules out computational effects. Moggi and Wadler showed how to get round this problem by using monads to encapsulate the effects, leading in essence to a phase distinction: a pure functional evaluation yielding an impure imperative computation. Still, it has not been clear how to reconcile that phase distinction with the continuing appeal of functional programming; does the impure imperative part become inaccessible to equational reasoning? We think not, and to back that up we present a simple axiomatic approach to reasoning about programs with computational effects.
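
    As a hedged illustration of the idea (a hand-rolled state monad, not the paper's axiomatisation), two standard state laws, put s >> put s' = put s' and put s >> get = put s >> return s, already let one prove two syntactically different effectful programs equal:

        -- Illustrative sketch only: equational reasoning about stateful programs.
        module MonadicReasoning where

        newtype State s a = State { runState :: s -> (a, s) }

        instance Functor (State s) where
          fmap f (State m) = State (\s -> let (a, s') = m s in (f a, s'))

        instance Applicative (State s) where
          pure a = State (\s -> (a, s))
          State mf <*> State ma =
            State (\s -> let (f, s1) = mf s; (a, s2) = ma s1 in (f a, s2))

        instance Monad (State s) where
          State m >>= k = State (\s -> let (a, s') = m s in runState (k a) s')

        get :: State s s
        get = State (\s -> (s, s))

        put :: s -> State s ()
        put s = State (\_ -> ((), s))

        -- prog1 rewrites to prog2 by the laws:
        --   put 1 >> put 2  =  put 2              (the later put wins)
        --   put 2 >> get    =  put 2 >> return 2
        prog1, prog2 :: State Int Int
        prog1 = put 1 >> put 2 >> get
        prog2 = put 2 >> return 2

        main :: IO ()
        main = print (runState prog1 0, runState prog2 0)   -- both (2,2)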

    Fully Abstract Translations Between Functional Languages

    We examine the problem of finding fully abstract translations between programming languages, i.e., translations that preserve code equivalence and nonequivalence. We present three examples of fully abstract translations: one from call-by-value to lazy PCF, one from call-by-name to call-by-value PCF, and one from lazy to call-by-value PCF. The translations yield upper and lower bounds on decision procedures for proving equivalences of code. Finally, we define a notion of functional translation that captures the essence of the proofs of full abstraction, and show that some languages cannot be translated into others.
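
    One ingredient behind such translations is the familiar thunking idea; the sketch below is a hedged illustration of that idea only, not the paper's construction (and Haskell, being lazy already, only mimics the call-by-value target):

        -- Illustrative sketch only: representing a call-by-name argument in a
        -- call-by-value target as a suspended computation, forced at use sites.
        module Thunking where

        type Thunk a = () -> a

        delay :: a -> Thunk a
        delay x = \() -> x

        force :: Thunk a -> a
        force t = t ()

        -- Source (call-by-name):  k x y = x   -- y is never evaluated.
        -- Target (call-by-value): both arguments arrive as thunks.
        kCBV :: Thunk a -> Thunk b -> a
        kCBV x _ = force x

        main :: IO ()
        main = print (kCBV (delay (42 :: Int)) (delay (error "never forced")))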

    Dependently typed array programs don't go wrong

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming imposes non-trivial structural constraints on the ranks, shapes, and element values of arrays. A prominent example where such constraints are violated is the out-of-bounds array access. Usually, such constraints are enforced by means of run-time checks. Both the run-time overhead inflicted by dynamic constraint checking and the uncertainty of proper program evaluation are undesirable. We propose a novel type system for array programs based on dependent types. Our type system makes dynamic constraint checks obsolete and guarantees orderly evaluation of well-typed programs. We employ integer vectors of statically unknown length to index array types. We also show how constraints on these vectors are resolved using a suitable reduction to integer scalars. Our presentation is based on a functional array calculus that captures the essence of the paradigm without the legacy and obfuscation of a fully-fledged array programming language.
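
    The flavour of such static guarantees can be sketched, in a hedged way, with length-indexed vectors in Haskell (the paper defines its own dependently typed array calculus, not this encoding): indexing is total because the index type rules out out-of-bounds positions.

        {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
        -- Illustrative sketch only: lengths in types make bounds checks obsolete.
        module ShapedVectors where

        data Nat = Z | S Nat

        -- A vector whose length is part of its type.
        data Vec (n :: Nat) a where
          Nil  :: Vec 'Z a
          Cons :: a -> Vec n a -> Vec ('S n) a

        -- An index that is provably smaller than n.
        data Fin (n :: Nat) where
          FZ :: Fin ('S n)
          FS :: Fin n -> Fin ('S n)

        -- Total, check-free indexing: no bounds test, no run-time failure.
        index :: Vec n a -> Fin n -> a
        index (Cons x _)  FZ     = x
        index (Cons _ xs) (FS i) = index xs i

        example :: Int
        example = index (Cons 10 (Cons 20 Nil)) (FS FZ)      -- 20
        -- index (Cons 10 (Cons 20 Nil)) (FS (FS FZ))        -- rejected at compile time

        main :: IO ()
        main = print example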

    The essence of component-based design and coordination

    Is there a characteristic of coordination languages that makes them qualitatively different from general programming languages and deserves special academic attention? This report proposes a nuanced answer in three parts. The first part highlights that coordination languages are the means by which composite software applications can be specified using components that are only available separately, or later in time, via standard interfacing mechanisms. The second part highlights that most currently used languages provide mechanisms to use externally provided components, and thus exhibit some elements of coordination. However, not all do, and the availability of an external interface thus forms an objective and qualitative criterion that distinguishes coordination. The third part argues that despite the qualitative difference, the segregation of academic attention away from general language design and implementation has non-obvious cost trade-offs. Comment: 8 pages, 2 figures, 3 tables

    A Swiss Pocket Knife for Computability

    This research is about operational- and complexity-oriented aspects of classical foundations of computability theory. The approach is to re-examine some classical theorems and constructions, but with new criteria for success that are natural from a programming language perspective. Three cornerstones of computability theory are the S-m-n theorem; Turing's "universal machine"; and Kleene's second recursion theorem. In today's programming language parlance these are, respectively, partial evaluation, self-interpretation, and reflection. In retrospect it is fascinating that Kleene's 1938 proof is constructive and in essence builds a self-reproducing program. Computability theory originated in the 1930s, long before the invention of computers and programs. Its emphasis was on delimiting the boundaries of computability. Some milestones include 1936 (Turing), 1938 (Kleene), 1967 (isomorphism of programming languages), 1985 (partial evaluation), 1989 (theory implementation), 1993 (efficient self-interpretation) and 2006 (term register machines). The "Swiss pocket knife" of the title is a programming language that allows efficient computer implementation of all three computability cornerstones, emphasising the third: Kleene's second recursion theorem. We describe experiments with a tree-based computational model aiming for both fast program generation and fast execution of the generated programs. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
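
    The programming-language reading of the first two cornerstones can be hinted at with a small, hedged sketch (nothing here is the paper's tree-based language; a faithful second-recursion-theorem demonstration would also need program text as data, i.e. quoting, which is omitted):

        -- Illustrative sketch only, reading two cornerstones through a
        -- programming lens:
        --   universal machine ~ an interpreter for a toy expression language
        --   S-m-n theorem     ~ specialising a two-argument program to one argument
        module PocketKnife where

        data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

        -- "Universal machine": programs of the toy language as data, interpreted.
        eval :: Expr -> Int
        eval (Lit n)   = n
        eval (Add a b) = eval a + eval b
        eval (Mul a b) = eval a * eval b

        -- "S-m-n": fixing the first argument yields a residual one-argument program.
        smn :: ((a, b) -> c) -> a -> (b -> c)
        smn p x = \y -> p (x, y)

        main :: IO ()
        main = do
          print (eval (Add (Lit 1) (Mul (Lit 2) (Lit 3))))   -- 7
          let area = \(w, h) -> w * h :: Int
          print (smn area 3 7)   -- 'area' specialised to 3, then applied to 7: 21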