488 research outputs found

    An intensional implementation technique for functional languages

    The potential of functional programming languages has not yet been widely accepted; the reason lies in the difficulties associated with their implementation. In this dissertation we propose a new implementation technique for functional languages that compiles them into the 'Intensional Logic' of R. Montague and R. Carnap. Our technique is not limited to particular hardware or to a particular evaluation strategy; nevertheless, it lends itself directly to a demand-driven tagged dataflow architecture. Although our technique can handle conventional languages as well, our main interest is in functional languages in general and in Lucid-like dataflow languages in particular. We give a brief general account of intensional logic, introduce the concept of intensional algebras as structures (models) for intensional logic, and formally establish the computability requirements for such algebras. The target language of our compilation is the family of languages DE (definitional equations over intensional expressions). A program in DE is a linear (unstructured) set of unambiguous equations defining nullary variable symbols, one of which must be the symbol result. We introduce the compilation of Iswim (a first-order variant of Landin's ISWIM) as an example of compiling functions into intensional expressions, and give a compilation algorithm: Iswim(A), for any algebra of data types A, is compiled into DE(Flo(A)), where Flo(A) is a uniquely defined intensional algebra over the tree of function calls. The approach is then extended to compiling Luswim and Lucid. We describe the demand-driven tagged dataflow ('eduction') approach to evaluating the intensional family of target languages DE. Furthermore, for each intensional algebra we introduce a collection of rewrite rules, with a justification of their correctness; these rules are the basis for evaluating programs in the target DE by reduction. Finally, we discuss possible refinements of and extensions to our approach.
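    To make the 'eduction' evaluation strategy concrete, the following sketch (my illustration, not the dissertation's implementation; the example equations and the Lucid-style operator fby are assumptions) evaluates a small DE-like program demand-first: each demand is a (variable, time-tag) pair, and demands are memoised so repeated demands are answered from the store.

        # Demand-driven ("eduction") evaluation of definitional equations over
        # Lucid-like intensional expressions.  A demand is a (variable, tag) pair,
        # where the tag is the current time context; demands are memoised.

        from functools import lru_cache

        # Hypothetical DE program computing Fibonacci numbers; 'result' is required.
        #   fib      = 0 fby next_fib
        #   next_fib = 1 fby (fib + next_fib)
        #   result   = fib
        EQUATIONS = {
            "fib":      ("fby", ("const", 0), ("var", "next_fib")),
            "next_fib": ("fby", ("const", 1), ("plus", ("var", "fib"), ("var", "next_fib"))),
            "result":   ("var", "fib"),
        }

        @lru_cache(maxsize=None)
        def demand(var, t):
            """Answer the demand for variable `var` at time tag `t` (memoised)."""
            return evaluate(EQUATIONS[var], t)

        def evaluate(expr, t):
            op = expr[0]
            if op == "const":
                return expr[1]
            if op == "var":
                return demand(expr[1], t)
            if op == "fby":          # x fby y: value of x at tag 0, then y shifted by one
                return evaluate(expr[1], 0) if t == 0 else evaluate(expr[2], t - 1)
            if op == "plus":
                return evaluate(expr[1], t) + evaluate(expr[2], t)
            raise ValueError(f"unknown operator {op!r}")

        print([demand("result", t) for t in range(8)])   # -> [0, 1, 1, 2, 3, 5, 8, 13]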

    Fault tolerant architectures for integrated aircraft electronics systems, task 2

    The architectural basis for an advanced fault-tolerant on-board computer to succeed the current generation of fault-tolerant computers is examined. The network error-tolerant system architecture is studied, with particular attention to intercluster configurations and communication protocols and to refined reliability estimates. The diagnosis of faults, so that appropriate choices for reconfiguration can be made, is discussed; the analysis relates particularly to the recognition of transient faults in a system with tasks at many levels of priority. The demand-driven dataflow architecture, which appears to have possible applications in fault-tolerant systems, is described, and work investigating the feasibility of automatically generating aircraft flight control programs from abstract specifications is reported.

    KungFu: Making Training in Distributed Machine Learning Adaptive

    When using distributed machine learning (ML) systems to train models on a cluster of worker machines, users must configure a large number of parameters: hyper-parameters (e.g. the batch size and the learning rate) affect model convergence; system parameters (e.g. the number of workers and their communication topology) impact training performance. In current systems, adapting such parameters during training is ill-supported: users must set system parameters at deployment time and provide fixed adaptation schedules for hyper-parameters in the training program. We describe KungFu, a distributed ML library for TensorFlow that is designed to enable adaptive training. KungFu allows users to express high-level Adaptation Policies (APs) that describe how to change hyper- and system parameters during training. APs take real-time monitored metrics (e.g. signal-to-noise ratios and noise scale) as input and trigger control actions (e.g. cluster rescaling or synchronisation strategy updates). For execution, APs are translated into monitoring and control operators, which are embedded in the dataflow graph. APs exploit an efficient asynchronous collective communication layer, which ensures concurrency and consistency of monitoring and adaptation operations.
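    As a rough sketch of what an Adaptation Policy amounts to conceptually (the function, metric name, threshold and the SimulatedCluster stand-in below are hypothetical, not KungFu's actual API), an AP maps monitored metrics to control actions such as cluster rescaling:

        # Hypothetical Adaptation Policy: grow the cluster when the gradient noise
        # scale is high, shrink it when it is low.  SimulatedCluster is a stand-in
        # for the real system; the numbers are made up for illustration only.

        class SimulatedCluster:
            def __init__(self, num_workers):
                self.num_workers = num_workers

            def resize(self, n):
                print(f"rescaling cluster: {self.num_workers} -> {n} workers")
                self.num_workers = n

        NOISE_THRESHOLD = 0.1      # assumed threshold on the gradient noise scale
        MAX_WORKERS = 16

        def adaptation_policy(metrics, cluster):
            """Map real-time monitored metrics to a control action."""
            noise = metrics["gradient_noise_scale"]
            if noise > NOISE_THRESHOLD and cluster.num_workers < MAX_WORKERS:
                cluster.resize(min(MAX_WORKERS, cluster.num_workers * 2))
            elif noise < NOISE_THRESHOLD / 4 and cluster.num_workers > 1:
                cluster.resize(max(1, cluster.num_workers // 2))

        cluster = SimulatedCluster(num_workers=2)
        for noise in [0.02, 0.15, 0.20, 0.01]:           # illustrative monitoring trace
            adaptation_policy({"gradient_noise_scale": noise}, cluster)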

    Towards Implicit Parallel Programming for Systems

    Multi-core processors require a program to be decomposable into independent parts that can execute in parallel in order to scale performance with the number of cores. But parallel programming is hard, especially when the program requires state, which many system programs use for optimization, such as a cache to reduce disk I/O. Most prevalent parallel programming models have no notion of state and require the programmer to synchronize state access manually, i.e., outside the reach of an associated optimizing compiler. This prevents the compiler from introducing parallelism automatically and requires the programmer to optimize the program by hand. In this dissertation, we propose a programming language/compiler co-design that provides a new programming model for implicit parallel programming with state and a compiler that can optimize the program for parallel execution. We define the notion of a stateful function along with its composition and control structures. An example implementation of a highly scalable server shows that stateful functions integrate smoothly into existing programming language concepts, such as object-oriented programming and programming with structs. Our programming model is also highly practical and allows existing code bases to be adapted gradually. As a case study, we implemented a new data processing core for the Hadoop Map/Reduce system to overcome existing performance bottlenecks. Our lambda-calculus-based compiler automatically extracts parallelism without changing the program's semantics. We added further domain-specific, semantics-preserving transformations that reduce I/O calls for microservice programs. The runtime format of a program is a dataflow graph that can be executed in parallel, performs concurrent I/O and allows for non-blocking live updates.
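    As an informal illustration of the idea (not the dissertation's language or compiler; the Node and cached_fetch names are ad hoc), a stateful function can be pictured as a node that exclusively owns its state and is wired into a dataflow pipeline, so stages run in parallel without manual synchronization of state access:

        # Each stateful function owns its state and is the only code touching it,
        # so the stages of this pipeline run in parallel without explicit locking.
        import threading, queue

        SENTINEL = object()

        class Node(threading.Thread):
            """One stateful function in the dataflow graph: reads inbox, writes outbox."""
            def __init__(self, fn, state, inbox, outbox):
                super().__init__()
                self.fn, self.state, self.inbox, self.outbox = fn, state, inbox, outbox

            def run(self):
                for item in iter(self.inbox.get, SENTINEL):
                    self.outbox.put(self.fn(self.state, item))
                self.outbox.put(SENTINEL)

        def cached_fetch(cache, key):
            """Stateful stage: a cache that avoids repeating an expensive 'disk' read."""
            if key not in cache:
                cache[key] = f"data-for-{key}"        # stands in for disk I/O
            return cache[key]

        def to_upper(_state, value):                  # stateless stage, for contrast
            return value.upper()

        q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
        for node in (Node(cached_fetch, {}, q1, q2), Node(to_upper, None, q2, q3)):
            node.start()
        for key in ["a", "b", "a"]:
            q1.put(key)
        q1.put(SENTINEL)
        print([q3.get() for _ in range(3)])           # -> ['DATA-FOR-A', 'DATA-FOR-B', 'DATA-FOR-A']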

    Intensional semantics for purely functional first-order lazy data structures

    The purpose of this thesis is to propose an intensional semantics for purely functional first-order lazy data structures. On the theoretical side, our goal is to formally define such a semantics in the context of a first-order functional language with data structures, and to prove that it is equivalent to the usual extensional semantics, both to study the properties of lazy functional data structures and to provide a robust theoretical foundation and guide for possible implementations. On the practical side, we describe a non-trivial implementation of our semantics that is integrated with the intensional semantics for functions. Finally, we discuss how our semantics can be formally incorporated into the intensional semantics for functions to obtain a fully intensional semantics for first-order functional languages with data structures, and how it can be extended to support higher-order functions.
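    As a very rough caricature of the intensional reading (my illustration, not the thesis's construction), a first-order lazy list can be pictured as a map from a demand tag, here its index, to an element: elements are computed only when demanded and demands are memoised, which is why the reading agrees with the extensional semantics on everything actually observed.

        # Lazy lists read intensionally: a structure is a function from a tag (index)
        # to an element; nothing is computed until a tag is demanded.
        from functools import lru_cache

        @lru_cache(maxsize=None)
        def nats(i):
            """The (extensionally infinite) list 0, 1, 2, ... as a tag-indexed function."""
            return i

        @lru_cache(maxsize=None)
        def partial_sums(i):
            """Derived lazy structure: demanding tag i demands only tags 0..i of nats."""
            return nats(i) + (partial_sums(i - 1) if i > 0 else 0)

        def take(k, structure):
            """Observe a finite prefix; only tags 0..k-1 are ever demanded."""
            return [structure(i) for i in range(k)]

        print(take(6, partial_sums))     # -> [0, 1, 3, 6, 10, 15]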