52 research outputs found

    In-place Graph Rewriting with Interaction Nets

    An algorithm is in-place, or runs in-situ, when it needs no additional memory to execute beyond a small constant amount. Many algorithms are efficient precisely because of this property, which makes it an important aspect of algorithm design. In most programming languages it is not obvious when an algorithm can run in-place, and it is often unclear whether an implementation respects that idea. In this paper we study interaction nets as a formalism where we can see directly, visually, that an algorithm is in-place, and where the implementation will respect that it is in-place. Not all algorithms can run in-place, however. We can still use the same language for them, but we annotate the parts of the algorithm that do run in-place. We suggest an annotation for rules, and give an algorithm that infers it automatically through analysis of the interaction rules. Comment: In Proceedings TERMGRAPH 2016, arXiv:1609.0301
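
    To make the idea concrete, here is a minimal Haskell sketch of the kind of rule analysis the abstract describes, under the simplifying assumption that a rule can fire in place when its right-hand side needs no more agent cells than the two agents consumed on the left; the data types and the example rule are illustrative, not the paper's actual definitions.

```haskell
-- An agent has a name and an arity (number of auxiliary ports).
data Agent = Agent { agentName :: String, agentArity :: Int }

-- A rule rewrites a pair of interacting agents into a net containing
-- some number of fresh agents (port wiring omitted for brevity).
data Rule = Rule { lhs :: (Agent, Agent), rhsAgents :: [Agent] }

-- Assumed criterion: the two LHS agents free exactly two cells, so if
-- the RHS needs at most two cells the rewrite can reuse them in place.
inPlace :: Rule -> Bool
inPlace r = length (rhsAgents r) <= 2

-- Example: the rule  add(S(x), y) -> S(add(x, y))  consumes two agents
-- (add and S) and creates two (S and add), so it runs in place.
addS :: Rule
addS = Rule (Agent "add" 2, Agent "S" 1) [Agent "S" 1, Agent "add" 2]

main :: IO ()
main = print (inPlace addS)  -- True
```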

    Design and implementation of a low-level language for interaction nets

    Interaction nets are a graphical model of computation based on a restricted form of graph rewriting. A specific net can represent a program with a user-defined set of nodes, and computation is modelled by a user-defined set of rewrite rules. This very simple model has had great success in modelling sharing in computation (specifically in the lambda calculus), and it has the potential to provide a new theoretical foundation for parallel computation, since all computation steps are local and can therefore be implemented in parallel. This thesis is about the implementation of interaction nets. As its first contributions, we define a low-level language as an object language for the compilation of interaction nets. We study the efficiency and properties of different data structures, and focus on the management of the rewriting process, which is usually hidden in the graph rewriting system. We provide experimental data comparing the different choices of data structure and select one for further development. For the compilation of nets and rules into this language, we show an optimisation such that allocated memory for agents is reused, and thus we obtain optimal efficiency for the rewriting process. The second part of this thesis describes extensions of interaction nets so that they can be used as a programming language. Interaction nets in their pure form are quite restricted in expressive power. By extending the notions of agents and rules we can express computation more naturally, yet still preserve the good properties (such as strong confluence) of the rewriting system. We then implement a selection of algorithms using and extending the compilation techniques developed in the first part of the thesis. We also present experimental results on multi-core CPUs, using the POSIX threads library, thus realising some of the potential for parallel implementation mentioned above.
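
    The memory-reuse optimisation mentioned above can be sketched as a free-list allocator: the two cells freed by the interacting pair are handed straight to the agents allocated by the right-hand side. The following Haskell sketch is an illustration under that assumption, not the thesis's actual runtime.

```haskell
-- Allocator state: next fresh cell index plus a free list of recycled cells.
data Heap = Heap { next :: Int, freeList :: [Int] }

-- Allocate one cell, preferring a recycled one over growing the heap.
alloc :: Heap -> (Int, Heap)
alloc (Heap n (c : cs)) = (c, Heap n cs)        -- reuse a freed cell
alloc (Heap n [])       = (n, Heap (n + 1) [])  -- otherwise grow the heap

-- Applying a rule frees the two cells of the active pair, then allocates
-- for the right-hand side; with at most two RHS agents the heap never grows.
applyRule :: Int -> Int -> Int -> Heap -> ([Int], Heap)
applyRule a b nRhs h0 = go nRhs h0 { freeList = a : b : freeList h0 }
  where
    go 0 h = ([], h)
    go k h = let (c, h')   = alloc h
                 (cs, h'') = go (k - 1) h'
             in (c : cs, h'')

main :: IO ()
main = do
  let h0          = Heap 2 []           -- cells 0 and 1 hold the active pair
      (cells, h1) = applyRule 0 1 2 h0  -- the rule allocates two RHS agents
  print cells                           -- [0,1]: both reuse freed memory
  print (next h1)                       -- 2: the heap did not grow
```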

    XARK: an extensible framework for automatic recognition of computational kernels

    This is a post-peer-review, pre-copyedit version of an article published in ACM Transactions on Programming Languages and Systems. The final authenticated version is available online at: http://dx.doi.org/10.1145/1391956.1391959
    The recognition of program constructs that are frequently used by software developers is a powerful mechanism for optimizing and parallelizing compilers to improve the performance of the object code. The development of techniques for automatic recognition of computational kernels such as inductions, reductions and array recurrences was an intensive research area in compiler technology during the 1990s. This article presents a new compiler framework that, unlike previous techniques that focus on specific and isolated kernels, recognizes a comprehensive collection of computational kernels that appear frequently in full-scale, real applications. The XARK compiler operates on top of the Gated Single Assignment (GSA) form of a high-level intermediate representation (IR) of the source code. Recognition is carried out through a demand-driven analysis of this high-level IR at two different levels. First, the dependences between the statements that compose the strongly connected components (SCCs) of the data-dependence graph of the GSA form are analyzed. As a result of this intra-SCC analysis, the computational kernels corresponding to the execution of the statements of the SCCs are recognized. Second, the dependences between statements of different SCCs are examined in order to recognize more complex kernels that result from combining simpler kernels in the same code. Overall, the XARK compiler builds a hierarchical representation of the source code as kernels and dependence relationships between those kernels. This article describes in detail the collection of computational kernels recognized by the XARK compiler. In addition, the internals of the recognition algorithms are presented. The design of the algorithms makes it possible to extend the recognition capabilities of XARK to cope with new kernels, and provides an advanced symbolic analysis framework on which other compiler techniques can run on demand. Finally, extensive experiments showing the effectiveness of XARK on a collection of benchmarks from different application domains are presented. In particular, the SparsKit-II library for the manipulation of sparse matrices, the Perfect benchmarks, the SPEC CPU2000 collection and the PLTMG package for solving elliptic partial differential equations are analyzed in detail.
    Ministerio de Educación y Ciencia; TIN2004-07797-C02. Ministerio de Educación y Ciencia; TIN2007-67537-C03. Xunta de Galicia; PGIDIT05PXIC10504PN. Xunta de Galicia; PGIDIT06PXIB105228P
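
    As a toy illustration of the kind of classification XARK performs, the Haskell sketch below pattern-matches a drastically simplified IR statement and labels it as an induction or a reduction; the constructors and kernel names are made up for the example and are far simpler than XARK's GSA-based analysis.

```haskell
-- A statement assigns a variable an expression; a self-referencing
-- statement is classified by the shape of that self-reference.
data Expr = Var String | Const Int | Add Expr Expr | Mul Expr Expr
data Stmt = Assign String Expr

data Kernel = Induction | Reduction | Unknown deriving Show

-- i = i + c  is an induction (loop-counter style);
-- s = s + e  is a reduction when e does not mention s.
classify :: Stmt -> Kernel
classify (Assign v (Add (Var v') e))
  | v == v' && isConst e          = Induction
  | v == v' && not (mentions v e) = Reduction
classify _ = Unknown

isConst :: Expr -> Bool
isConst (Const _) = True
isConst _         = False

mentions :: String -> Expr -> Bool
mentions v (Var w)   = v == w
mentions _ (Const _) = False
mentions v (Add a b) = mentions v a || mentions v b
mentions v (Mul a b) = mentions v a || mentions v b

main :: IO ()
main = do
  print (classify (Assign "i" (Add (Var "i") (Const 1))))    -- Induction
  print (classify (Assign "s" (Add (Var "s") (Var "a_i"))))  -- Reduction
```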

    Rewriting System for Profile-Guided Data Layout Transformations on Binaries

    Careful data layout design is crucial for achieving high performance. However, exploring data layouts is time-consuming and error-prone, and assessing the impact of a layout transformation on performance is difficult without performing it. We propose to guide application programmers through data layout restructuring by providing a comprehensive multidimensional description of the initial layout, built from trace analysis, followed by a performance evaluation of the transformations tested and an expression of each transformed layout. The programmer can limit the exploration to layouts matching certain patterns. We apply this method to two multithreaded applications. The performance prediction for multiple transformations matches the performance of hand-transformed layout code to within 5%.
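
    A common instance of the kind of layout transformation explored here is the array-of-structures to structure-of-arrays rewrite, sketched below in Haskell; the record and field names are invented for the example and are not taken from the paper.

```haskell
data Particle = Particle { px :: Double, py :: Double }

-- Array-of-structures: one record per element (fields interleaved in memory).
type AoS = [Particle]

-- Structure-of-arrays: one array per field (each field contiguous, often
-- friendlier to caches and vector units when only some fields are hot).
data SoA = SoA { xs :: [Double], ys :: [Double] }

toSoA :: AoS -> SoA
toSoA ps = SoA (map px ps) (map py ps)

toAoS :: SoA -> AoS
toAoS (SoA as bs) = zipWith Particle as bs

main :: IO ()
main = do
  let aos = [Particle 0 1, Particle 2 3]
  print (xs (toSoA aos))  -- [0.0,2.0]: the x field is now contiguous
```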

    Hybrid eager and lazy evaluation for efficient compilation of Haskell

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 208-220). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
    The advantage of a non-strict, purely functional language such as Haskell lies in its clean equational semantics. However, lazy implementations of Haskell fall short: they cannot express tail recursion gracefully without annotation. We describe resource-bounded hybrid evaluation, a mixture of strict and lazy evaluation, and its realization in Eager Haskell. From the programmer's perspective, Eager Haskell is simply another implementation of Haskell with the same clean equational semantics. Iteration can be expressed using tail recursion, without the need to resort to program annotations. Under hybrid evaluation, computations are ordinarily executed in program order, just as in a strict functional language. When particular stack, heap, or time bounds are exceeded, suspensions are generated for all outstanding computations. These suspensions are restarted in a demand-driven fashion from the root. The Eager Haskell compiler translates Ac, the compiler's intermediate representation, to efficient C code. We use an equational semantics for Ac to develop simple correctness proofs for program transformations, and connect actions in the run-time system to steps in the hybrid evaluation strategy. The focus of compilation is efficiency in the common case of straight-line execution; the handling of non-strictness and suspension is left to the run-time system. Several additional contributions have resulted from the implementation of hybrid evaluation. Eager Haskell is the first eager compiler to use a call stack. Our generational garbage collector uses this stack as an additional predictor of object lifetime. Objects above a stack watermark are assumed to be likely to die; we avoid promoting them. Those below are likely to remain untouched and are therefore good candidates for promotion. To avoid eagerly evaluating error checks, they are compiled into special bottom thunks, which are treated specially by the run-time system. The compiler identifies error-handling code using a mixture of strictness and type information. This information is also used to avoid inlining error handlers, and to enable aggressive program transformation in the presence of error handling. By Jan-Willem Maessen. Ph.D.
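
    The point about tail recursion can be made concrete. Under lazy evaluation the unannotated loop below builds a chain of thunks in its accumulator and can exhaust memory, so programmers add a strictness annotation; under a hybrid or eager strategy the same unannotated loop runs in constant space. This is a standard textbook example, not code from the thesis.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Unannotated tail recursion: constant space under eager evaluation,
-- a growing chain of (+) thunks under purely lazy evaluation.
sumTo :: Int -> Int -> Int
sumTo acc 0 = acc
sumTo acc n = sumTo (acc + n) (n - 1)

-- The annotation a lazy implementation needs: force the accumulator.
sumTo' :: Int -> Int -> Int
sumTo' !acc 0 = acc
sumTo' !acc n = sumTo' (acc + n) (n - 1)

main :: IO ()
main = print (sumTo' 0 1000000)  -- 500000500000
```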

    The Attributed Pi Calculus with Priorities

    We present the attributed π-calculus for modeling concurrent systems with interaction constraints that depend on the values of attributes of processes. A language L of attribute values serves as a constraint language underlying the π-calculus. Interaction constraints subsume priorities, which can express global aspects of populations. We present a nondeterministic and a stochastic semantics for the attributed π-calculus. We show how to encode the π-calculus with priorities and the polyadic synchronization calculus π@, and thus dynamic compartments, as well as the stochastic π-calculus with concurrent objects SpiCO. We illustrate the usefulness of the attributed π-calculus for modeling biological systems with two examples: Euglena's spatial movement in phototaxis, and cooperative protein binding in the gene regulation of bacteriophage lambda. Furthermore, population-based modeling is supported alongside individual-based modeling. A stochastic simulation algorithm for the attributed π-calculus is derived from its stochastic semantics. We have implemented a simulator and present experimental results that confirm the practical relevance of our approach.
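
    A minimal Haskell sketch of attribute-dependent synchronization, assuming attributes are plain numbers and a constraint maps a sender/receiver attribute pair to either a rate or a blocked interaction; the paper's constraint language L is far richer than this illustration.

```haskell
type Attr = Double

-- A constraint either blocks an interaction (Nothing) or yields a rate.
type Constraint = Attr -> Attr -> Maybe Double

-- Enabled interactions among attributed processes, with their rates.
enabled :: Constraint -> [Attr] -> [Attr] -> [(Attr, Attr, Double)]
enabled c senders receivers =
  [ (s, r, rate) | s <- senders, r <- receivers, Just rate <- [c s r] ]

-- Example constraint: interaction is possible only between processes
-- whose attributes are close, at a rate decaying with their distance.
proximity :: Constraint
proximity s r
  | abs (s - r) < 1.0 = Just (1.0 / (0.1 + abs (s - r)))
  | otherwise         = Nothing

main :: IO ()
main = mapM_ print (enabled proximity [0.0, 5.0] [0.2, 0.8, 9.0])
```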

    Analysing the Complexity of Functional Programs: Higher-Order Meets First-Order

    We show how the complexity of higher-order functional programs can be analysed automatically by applying program transformations to defunctionalized versions of them, and feeding the result to existing tools for the complexity analysis of first-order term rewrite systems. This is done while carefully analysing complexity preservation and reflection of the employed transformations, so that the complexity of the obtained term rewrite system reflects the complexity of the initial program. Further, we describe suitable strategies for the application of the studied transformations and provide ample experimental data for assessing the viability of our method.
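
    Defunctionalization, the transformation this pipeline starts from, replaces a higher-order argument by a first-order data type plus an apply function, yielding a program a first-order term-rewriting tool can handle. The Haskell sketch below shows the classic construction on map; the concrete functions are illustrative.

```haskell
-- Each function that was passed to map becomes a constructor,
-- and `apply` dispatches on it.
data Fun = AddOne | Twice

apply :: Fun -> Int -> Int
apply AddOne n = n + 1
apply Twice  n = n * 2

-- First-order map: the functional argument is now plain data.
mapF :: Fun -> [Int] -> [Int]
mapF _ []       = []
mapF f (x : xs) = apply f x : mapF f xs

main :: IO ()
main = do
  print (mapF AddOne [1, 2, 3])  -- [2,3,4]
  print (mapF Twice  [1, 2, 3])  -- [2,4,6]
```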

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. A total of 60 regular papers presented in these volumes were carefully reviewed and selected from 155 submissions. The papers are organized in topical sections as follows. Part I: Program Verification; SAT and SMT; Timed and Dynamical Systems; Verifying Concurrent Systems; Probabilistic Systems; Model Checking and Reachability; and Timed and Probabilistic Systems. Part II: Bisimulation; Verification and Efficiency; Logic and Proof; Tools and Case Studies; Games and Automata; and SV-COMP 2020.

    The Attributed Pi Calculus

    The attributed pi calculus pi(L) is an extension of the pi calculus with attributed processes and attribute-dependent synchronization. To ensure flexibility, the calculus is parametrized with a language L that defines the possible values of attributes. pi(L) can express polyadic synchronization as in pi@, and thus diverse compartment organizations. A nondeterministic and a stochastic semantics, in which rates may depend on attribute values, are introduced; the stochastic semantics is based on continuous-time Markov chains. A simulation algorithm firmly rooted in this stochastic semantics is developed. Two examples, the movement processes in the phototaxis of Euglena and cooperative binding in the gene regulation of the lambda phage, underline the applicability of pi(L) to systems biology.
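
    The simulation step that a CTMC-based stochastic semantics gives rise to can be sketched as Gillespie's direct method: draw an exponential waiting time from the total rate and pick a reaction with probability proportional to its rate. The Haskell sketch below takes the two uniform samples as arguments to stay pure; it illustrates the general method, not the paper's specific algorithm.

```haskell
-- One SSA step: u1, u2 are uniform(0,1] samples; returns the time
-- advance and the index of the chosen reaction.
ssaStep :: [Double] -> Double -> Double -> (Double, Int)
ssaStep rates u1 u2 = (dt, pick 0 0)
  where
    total  = sum rates
    dt     = -log u1 / total  -- exponentially distributed waiting time
    target = u2 * total       -- point in the cumulative rate mass
    pick i acc
      | i == length rates - 1      = i  -- last reaction absorbs rounding
      | acc + rates !! i >= target = i
      | otherwise                  = pick (i + 1) (acc + rates !! i)

main :: IO ()
main = print (ssaStep [0.5, 1.5, 2.0] 0.37 0.62)
```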