47 research outputs found

    Scope Graphs: The Story so Far

    Static name binding (i.e., associating references with appropriate declarations) is an essential aspect of programming languages. However, it is usually treated in an unprincipled manner, often leaving a gap between formalization and implementation. The scope graph formalism mitigates these deficiencies by providing a well-defined, first-class, language-parametric representation of name binding. Scope graphs serve as a foundation for deriving type checkers from declarative type system specifications, reasoning about type soundness, and implementing editor services and refactorings. In this paper we present an overview of scope graphs and, using examples, show how the ideas and notation of the formalism have evolved. We also briefly discuss follow-up research beyond type checking, and evaluate the formalism.
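
    To make the idea concrete, below is a minimal, hypothetical sketch of a scope-graph-like structure in Java: scopes hold declarations and are connected by lexical-parent edges, and a reference is resolved by walking those edges to the nearest enclosing declaration. The class and method names (Scope, resolve) are illustrative assumptions, not the formalism's actual notation, which also supports richer labeled edges and visibility policies.

```java
import java.util.*;

// A minimal, illustrative model of a scope graph: scopes contain
// declarations and are connected by a lexical-parent edge. Resolution
// finds the nearest enclosing declaration for a reference.
public class ScopeGraphSketch {

    static final class Scope {
        final String name;
        final Scope parent;                       // lexical-parent edge (null for the root)
        final Set<String> declarations = new HashSet<>();

        Scope(String name, Scope parent) {
            this.name = name;
            this.parent = parent;
        }

        void declare(String id) { declarations.add(id); }

        // Resolve a reference by walking parent edges until a scope
        // declaring the same identifier is found.
        Optional<Scope> resolve(String id) {
            for (Scope s = this; s != null; s = s.parent) {
                if (s.declarations.contains(id)) return Optional.of(s);
            }
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        Scope global = new Scope("global", null);
        global.declare("x");
        Scope block = new Scope("block", global);
        block.declare("y");

        System.out.println(block.resolve("x").map(s -> s.name).orElse("unresolved")); // global
        System.out.println(block.resolve("z").map(s -> s.name).orElse("unresolved")); // unresolved
    }
}
```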

    Algorithmic Resource Verification

    Static estimation of resource utilisation of programs is a challenging and important problem with numerous applications. In this thesis, I present new algorithms that enable users to specify and verify their desired bounds on resource usage of functional programs. The resources considered are algorithmic resources such as the number of steps needed to evaluate a program (steps) and the number of objects allocated in the memory (alloc). These resources are agnostic to the runtimes on which the programs are executed yet provide a concrete estimate of the resource usage of an implementation. Our system is designed to handle sophisticated functional programs that use recursive functions, datatypes, closures, memoization and lazy evaluation. In our approach, users can specify in the contracts of functions an upper bound they expect to hold on the resource usage of the functions. The upper bounds can be expressed as templates with numerical holes. For example, a bound steps ≤ ? * size(inp) + ? denotes that the number of evaluation steps is linear in the size of the input. The templates can be seamlessly combined with correctness invariants or preconditions necessary for establishing the bounds. Furthermore, the resource templates and invariants are allowed to use recursive and first-class functions as well as other features of the language. Our approach for verifying such resource templates operates in two phases. It first reduces the problem of resource inference to invariant inference by synthesizing an instrumented first-order program that accurately models the resource usage of the program components, the higher-order control flow and the effects of memoization, using algebraic datatypes, sets and mutual recursion. The approach solves the synthesized first-order program by producing verification conditions of the form exists-forall using modular assume/guarantee reasoning. The verification conditions are solved using a novel counterexample-driven algorithm capable of discovering the strongest resource bounds belonging to the given template. I present the results of using our system to verify upper bounds on the usage of algorithmic resources that correspond to sequential and parallel execution times, as well as heap and stack memory usage. The system was evaluated on several benchmarks that include advanced functional data structures and algorithms such as balanced trees, meldable heaps, Okasaki's lazy data structures, dynamic programming algorithms, and also compiler phases like optimizers and parsers. The evaluations show that the system is able to infer hard, nonlinear resource bounds that are beyond the capability of existing approaches. Furthermore, the evaluations presented in this dissertation show that, when averaged over many benchmarks, the resource consumption measured at runtime is 80% of the value inferred by the system statically when estimating the number of evaluation steps, and 88% when estimating the number of heap allocations.
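
    The first phase described above reduces resource inference to ordinary invariant inference by instrumenting the program so that resource usage becomes an explicit value. Below is a rough sketch of that idea, written in Java rather than the functional language the thesis targets: a recursive function is paired with an instrumented version that returns its result together with a step count, and a candidate instantiation of the template steps ≤ c1 * size(inp) + c2 is checked. The one-step-per-call cost model and all names are simplifying assumptions, not the system's actual encoding.

```java
// Illustrative sketch of the instrumentation idea: a recursive function over
// a cons list is paired with a version that also returns the number of
// evaluation "steps" it took, so a template bound such as
//   steps <= c1 * size(inp) + c2
// can be checked as an ordinary invariant over first-order values.
public class ResourceInstrumentationSketch {

    // A cons list of integers; null represents the empty list.
    static final class Cons {
        final int head;
        final Cons tail;
        Cons(int head, Cons tail) { this.head = head; this.tail = tail; }
    }

    // Result of an instrumented call: the value plus the steps it consumed.
    static final class WithSteps {
        final int value;
        final int steps;
        WithSteps(int value, int steps) { this.value = value; this.steps = steps; }
    }

    // Original function: sum of a list.
    static int sum(Cons l) {
        return l == null ? 0 : l.head + sum(l.tail);
    }

    // Instrumented counterpart: charges one step per call (simplified cost model).
    static WithSteps sumInstrumented(Cons l) {
        if (l == null) return new WithSteps(0, 1);
        WithSteps rest = sumInstrumented(l.tail);
        return new WithSteps(l.head + rest.value, rest.steps + 1);
    }

    static int size(Cons l) {
        return l == null ? 0 : 1 + size(l.tail);
    }

    public static void main(String[] args) {
        Cons input = new Cons(1, new Cons(2, new Cons(3, null)));
        WithSteps r = sumInstrumented(input);
        // Candidate instantiation of the template holes: c1 = 1, c2 = 1.
        int c1 = 1, c2 = 1;
        boolean boundHolds = r.steps <= c1 * size(input) + c2;
        System.out.println("sum = " + r.value + ", steps = " + r.steps
                + ", bound holds: " + boundHolds);
    }
}
```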

    Profiling optimised Haskell - causal analysis and implementation

    At the present time, performance optimisation of real-life Haskell programs is a bit of a “black art”. Programmers who can do so reliably are highly esteemed, doubly so if they manage to do it without sacrificing the character of the language by falling back to an “imperative style”. The reason is that while programming at a high level does not need to result in slow performance, it must rely on a delicate mix of optimisations and transformations to work out just right. Predicting how all these cogs will turn is hard enough – but when something goes wrong, the various transformations will have mangled the program to the point where even finding the crucial locations in the code can become a game of cat-and-mouse. In this work we will lift the veil on the performance of heavily transformed Haskell programs: using a formal causality analysis we will track source code links from square one, and maintain the connection all the way to the final costs generated by the program. This will allow us to implement a profiling solution that can measure performance with high accuracy while explaining in detail how we got to the point in question. Furthermore, we will directly support the performance analysis process by developing an interactive profiling user interface that allows rapid theory forming and evaluation, as well as deep analysis where required.

    A Distributed Stream Library for Java 8

    An increasingly popular application of parallel computing is Big Data, which concerns the storage and analysis of very large datasets. Many of the prominent Big Data frameworks are written in Java or JVM-based languages. However, as a base language for Big Data systems, Java still lacks a number of important capabilities, such as processing very large datasets and distributing the computation over multiple machines. The introduction of Streams in Java 8 has provided a useful programming model for data-parallel computing, but it is limited to a single JVM and still does not address Big Data issues. This thesis contends that though the Java 8 Stream framework is inadequate to support the development of Big Data applications, it is possible to extend the framework to achieve performance comparable to or exceeding that of popular Big Data frameworks. It first reviews a number of Big Data programming models and gives an overview of the Java 8 Stream API. It then proposes a set of extensions to allow Java 8 Streams to be used in Big Data systems. It also shows how the extended API can be used to implement a range of standard Big Data paradigms. Finally, it compares the performance of such programs with that of Hadoop and Spark. Despite being a proof-of-concept implementation, experimental results indicate that it is a lightweight and efficient framework, comparable in performance to Hadoop and Spark.
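
    For reference, the sketch below uses only the standard, single-JVM java.util.stream API to express the classic word-count paradigm on which Big Data frameworks are commonly benchmarked; the distributed extensions proposed in the thesis are not shown, and the input data here is made up for illustration.

```java
import java.util.*;
import static java.util.stream.Collectors.*;

// Word count expressed with the standard, single-JVM Java 8 Stream API.
// This is only the baseline programming model that the thesis extends.
public class WordCount {
    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
                "to be or not to be",
                "that is the question");

        Map<String, Long> counts = lines.stream()
                .flatMap(line -> Arrays.stream(line.split("\\s+")))  // split each line into words
                .collect(groupingBy(w -> w, counting()));            // group identical words and count

        counts.forEach((word, n) -> System.out.println(word + " : " + n));
    }
}
```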

    12th International Workshop on Termination (WST 2012) : WST 2012, February 19–23, 2012, Obergurgl, Austria / ed. by Georg Moser

    This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), to be held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to be a venue for presentation and discussion of all topics in and around termination. In this way, the workshop tries to bridge the gaps between different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The 12th International Workshop on Termination welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination (for example in program transformation or theorem proving) were particularly welcome. We received 18 submissions, all of which were accepted. Each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosts three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.