
    How functional programming mattered

    In 1989, when functional programming was still considered a niche topic, Hughes wrote a visionary paper arguing convincingly ‘why functional programming matters’. More than two decades have passed. Has functional programming really mattered? Our answer is a resounding ‘Yes!’. Functional programming is now at the forefront of a new generation of programming technologies, enjoying increasing popularity and influence. In this paper, we review the impact of functional programming, focusing on how it has changed the way we may construct programs, the way we may verify programs, and, fundamentally, the way we may think about programs.

    The C++0x "Concepts" Effort

    C++0x is the working title for the revision of the ISO standard of the C++ programming language that was originally planned for release in 2009 but was delayed to 2011. The largest language extension in C++0x was "concepts", that is, a collection of features for constraining template parameters. In September of 2008, the C++ standards committee voted the concepts extension into C++0x, but then in July of 2009, the committee voted the concepts extension back out of C++0x. This article is my account of the technical challenges and debates within the "concepts" effort in the years 2003 to 2009. To provide some background, the article also describes the design space for constrained parametric polymorphism, or what is colloquially known as constrained generics. While this article is meant to be generally accessible, the writing is aimed toward readers with a background in functional programming and programming language theory. This article grew out of a lecture at the Spring School on Generic and Indexed Programming at the University of Oxford, March 2010.
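Concepts play roughly the role for C++ templates that type classes play for Haskell polymorphism, and the design space of constrained generics the article surveys can be sketched in type-class form. The following is an illustrative sketch only, not code from the article; the primed names are invented to avoid clashing with the Prelude.

```haskell
-- A type class acting as a "concept": it constrains which types a
-- generic definition may be instantiated at.
class Monoid' a where
  mempty'  :: a
  mappend' :: a -> a -> a

-- Lists satisfy the constraint for any element type.
instance Monoid' [b] where
  mempty'  = []
  mappend' = (++)

-- A constrained generic function: the context 'Monoid' a =>' is the
-- analogue of a requires-clause on a template parameter.
mconcat' :: Monoid' a => [a] -> a
mconcat' = foldr mappend' mempty'
```

Calling mconcat' at a type with no instance is rejected at compile time, which is the kind of early, modular checking of template constraints that the concepts effort sought for C++.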

    Liveness-Based Garbage Collection for Lazy Languages

    We consider the problem of reducing the memory required to run lazy first-order functional programs. Our approach is to analyze programs for liveness of heap-allocated data. The result of the analysis is used to preserve only live data---a subset of reachable data---during garbage collection. The result is an increase in the garbage reclaimed and a reduction in the peak memory requirement of programs. While this technique has already been shown to yield benefits for eager first-order languages, the lack of a statically determinable execution order and the presence of closures pose new challenges for lazy languages. These require changes both in the liveness analysis itself and in the design of the garbage collector. To show the effectiveness of our method, we implemented a copying collector that uses the results of the liveness analysis to preserve live objects, both evaluated objects (i.e., in WHNF) and closures. Our experiments confirm that for programs running with a liveness-based garbage collector, there is a significant decrease in peak memory requirements. In addition, a sizable reduction in the number of collections ensures that, in spite of using a more complex garbage collector, the execution times of programs running with liveness- and reachability-based collectors remain comparable.
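As an illustrative Haskell sketch (not from the paper) of the gap between live and reachable data: a value can stay reachable from a result while provably never being demanded, and it is exactly such data that a liveness-based collector may reclaim early.

```haskell
-- The pair stays reachable until the projection is forced, but only the
-- first component is live; a liveness-based collector may reclaim the
-- second component before the pair itself becomes garbage.
firstOf :: (a, b) -> a
firstOf (x, _) = x

-- Laziness makes liveness non-obvious: only the first n cells of the
-- conceptually infinite list are ever live in this computation.
boundedSum :: Int -> Int
boundedSum n = sum (take n [1 ..])
```

Note that firstOf (1, undefined) evaluates to 1: the second component is reachable from the pair but never demanded.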

    Synbit: Synthesizing Bidirectional Programs using Unidirectional Sketches


    Fuzzy model predictive control. Complexity reduction by functional principal component analysis

    In Model-based Predictive Control, the controller runs a real-time optimisation to obtain the best solution for the control action: an optimisation problem is solved to identify the control action that minimises a cost function related to the process predictions. Due to the computational load of the algorithms, predictive control subject to constraints is not suitable to run on just any hardware platform. Predictive control techniques have been well known in the process industry for decades, and the application of advanced model-based control techniques is becoming increasingly attractive in other fields such as building automation, smartphones, wireless sensor networks, etc., where the hardware platforms have never been known for high computing power. The main purpose of this thesis is to establish a methodology to reduce the computational complexity of applying nonlinear model-based predictive control subject to constraints, using hardware platforms with low computational power, allowing a realistic implementation based on industry standards. The methodology is based on applying functional principal component analysis, which provides a mathematically elegant approach to reducing the complexity of rule-based systems, such as fuzzy and piecewise affine systems, thereby reducing the computational load of model-based predictive control, whether or not subject to constraints.
The idea of using fuzzy inference systems, besides allowing the modelling of nonlinear or complex systems, provides a formal structure that enables the implementation of the aforementioned complexity-reduction technique. In addition to its theoretical contributions, this thesis describes work done with real plants on which fuzzy modelling and control tasks have been carried out. One of the objectives covered during the research and development of the thesis has been experimentation with fuzzy systems and their simplification and application to industrial systems. The thesis provides a practical knowledge framework based on experience.

    Functional Programming for Embedded Systems

    Embedded-systems application development has traditionally been carried out in low-level, machine-oriented programming languages like C or assembler, which can result in unsafe, error-prone and difficult-to-maintain code. Functional programming, with features such as higher-order functions, algebraic data types, polymorphism, strong static typing and automatic memory management, appears to be an ideal candidate to address the issues plaguing embedded systems written in low-level languages. However, embedded systems usually run on heavily memory-constrained devices, with memory on the order of hundreds of kilobytes, and applications running on such devices embody the general characteristics of being (i) I/O-bound, (ii) concurrent and (iii) timing-aware. Popular functional language compilers and runtimes either do not fare well with such scarce memory resources or do not provide high-level abstractions that address all three listed characteristics. This work attempts to address this gap by investigating and proposing high-level abstractions specialised for I/O-bound, concurrent and timing-aware embedded-systems programs. We implement the proposed abstractions on eagerly-evaluated, statically-typed functional languages running natively on microcontrollers. Our contributions are divided into two parts. Part 1 presents a functional reactive programming language, Hailstorm, that tracks side effects like I/O in its type system using a feature called resource types. Hailstorm’s programming model is illustrated on the GRiSP microcontroller board. Part 2 comprises two papers that describe the design and implementation of Synchron, a runtime API that provides a uniform message-passing framework for the handling of software messages as well as hardware interrupts. Additionally, the Synchron API supports a novel timing operator to capture the notion of time, common in embedded applications.
The Synchron API is implemented as a virtual machine, SynchronVM, which runs on the NRF52 and STM32 microcontroller boards. We present programming examples that illustrate the concurrency, I/O and timing capabilities of the VM, and provide various benchmarks on the response time, memory and power usage of SynchronVM.
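The abstract does not show the Synchron API itself. As a hedged sketch of the style of primitive it describes, a receive-with-timeout over a uniform event channel can be modelled in ordinary Haskell; the names Event and recvOrTimeout are invented for illustration and are not the actual SynchronVM API.

```haskell
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import System.Timeout (timeout)

-- A software message or hardware interrupt, represented uniformly.
data Event = ButtonPress deriving (Eq, Show)

-- Wait at most t microseconds for a message; Nothing models the timeout
-- branch of a timing operator like the one described above.
recvOrTimeout :: Chan a -> Int -> IO (Maybe a)
recvOrTimeout ch t = timeout t (readChan ch)
```

A process can then loop on recvOrTimeout, reacting to messages when they arrive and taking a periodic action when the timeout branch fires.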

    A simple semantics for Haskell overloading

    As originally proposed, type classes provide overloading and ad-hoc definition, but can still be understood (and implemented) in terms of strictly parametric calculi. This is not true of subsequent extensions of type classes. Functional dependencies and equality constraints allow the satisfiability of predicates to refine typing; this means that the interpretations of equivalent qualified types may not be interconvertible. Overlapping instances and instance chains allow predicates to be satisfied without determining the implementations of their associated class methods, introducing truly non-parametric behavior. We propose a new approach to the semantics of type classes, interpreting polymorphic expressions by the behavior of each of their ground instances, but without requiring that those behaviors be parametrically determined. We argue that this approach both matches the intuitive meanings of qualified types and accurately models the behavior of programs. Comment: Originally presented at Haskell 201
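A minimal standard-Haskell illustration of the overloading whose semantics is at issue (the example is ours, not the paper's): each ground instance of size denotes a different function, yet a single qualified type covers them all.

```haskell
-- Ad-hoc overloading via a type class: 'size' means something different
-- at each ground instance.
class Sized a where
  size :: a -> Int

instance Sized Bool where
  size _ = 1

-- The list instance is defined from the element instance, so the meaning
-- at a ground type like [Bool] is assembled from the meaning at Bool.
instance Sized a => Sized [a] where
  size = sum . map size

-- One qualified type, many ground interpretations.
total :: Sized a => [a] -> Int
total = size
```

Interpreting total by its behavior at each ground instance, as the paper proposes, gives a meaning even when extensions such as overlapping instances make the interpretation non-parametric.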