36,454 research outputs found

    Linear-algebraic lambda-calculus

    With a view towards models of quantum computation and/or the interpretation of linear logic, we define a functional language where all functions are linear operators by construction. A small-step operational semantics (and hence an interpreter/simulator) is provided for this language in the form of a term rewrite system. The linear-algebraic lambda-calculus hereby constructed is linear in a different (yet related) sense to that of, say, the linear lambda-calculus. These various notions of linearity are discussed in the context of quantum programming languages.
    KEYWORDS: quantum lambda-calculus, linear lambda-calculus, λ-calculus, quantum logics.
    Comment: LaTeX, 23 pages, 10 figures and the LINEAL language interpreter/simulator file (see "other formats"). See the more recent arXiv:quant-ph/061219
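    To make the flavor concrete, here is a minimal Haskell sketch (mine, not the paper's LINEAL system; all names and the choice of Double scalars are illustrative assumptions): terms are formal linear combinations of base terms, and application distributes over sums, which is the sense in which every function is linear by construction.

        import Data.Map (Map)
        import qualified Data.Map as Map

        type Var = String

        -- Base terms of a toy lambda-calculus.
        data Base = V Var | Lam Var Term
          deriving (Eq, Ord, Show)

        -- A term is a formal linear combination of base terms
        -- (Double scalars for brevity; the calculus is parametric in the scalars).
        newtype Term = Term (Map Base Double)
          deriving (Eq, Ord, Show)

        zero :: Term
        zero = Term Map.empty

        single :: Base -> Term
        single b = Term (Map.singleton b 1)

        -- Scaling and addition give terms their vector-space structure.
        scale :: Double -> Term -> Term
        scale a (Term m) = Term (Map.map (a *) m)

        add :: Term -> Term -> Term
        add (Term m) (Term n) = Term (Map.unionWith (+) m n)

        -- Substitution, extended linearly through combinations
        -- (naive: no capture avoidance; fine for the closed example below).
        subst :: Var -> Term -> Term -> Term
        subst x u (Term m) =
          foldr add zero [ scale a (substBase x u b) | (b, a) <- Map.toList m ]

        substBase :: Var -> Term -> Base -> Term
        substBase x u (V y) = if x == y then u else single (V y)
        substBase x u (Lam y body)
          | x == y    = single (Lam y body)
          | otherwise = single (Lam y (subst x u body))

        -- Application is linear in the function position:
        -- (a.s + b.t) u rewrites to a.(s u) + b.(t u).
        apply :: Term -> Term -> Term
        apply (Term m) u =
          foldr add zero [ scale a (applyBase b u) | (b, a) <- Map.toList m ]

        applyBase :: Base -> Term -> Term
        applyBase (Lam x body) u = subst x u body   -- the beta step
        applyBase (V _)        _ = error "stuck application: not modeled in this sketch"

        -- Applying a superposition of functions distributes linearly:
        main :: IO ()
        main = do
          let idT    = single (Lam "x" (single (V "x")))
              constY = single (Lam "x" (single (V "y")))
          print (apply (add (scale 0.5 idT) (scale 0.5 constY)) (single (V "z")))
          -- prints the combination 0.5·(V "y") + 0.5·(V "z")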

    Transparency in Complex Computational Systems

    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s..

    Mechanistic Behavior of Single-Pass Instruction Sequences

    Earlier work on program and thread algebra detailed the functional, observable behavior of programs under execution. In this article we add the modeling of unobservable, mechanistic processing, in particular processing due to jump instructions. We model mechanistic processing preceding some further behavior as a delay of that behavior; we borrow a unary delay operator from discrete-time process algebra. We define a mechanistic improvement ordering on threads and observe that some threads do not have an optimal implementation.
    Comment: 12 pages
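    A minimal Haskell sketch of this setup (the names are mine, and real threads may be infinite, so this is only a finite approximation of the paper's thread algebra): a thread datatype with a unary delay operator for unobservable processing, an erasure map yielding observable behavior, and a crude improvement ordering that prefers fewer delays.

        -- Threads in the style of basic thread algebra, with a unary delay
        -- operator for unobservable mechanistic steps (e.g. executing a jump).
        data Thread a
          = Stop                               -- successful termination
          | Deadlock                           -- inaction
          | PostCond a (Thread a) (Thread a)   -- perform a, branch on the reply
          | Delay (Thread a)                   -- one unobservable processing step
          deriving (Eq, Show)

        -- Observable behavior: erase all delays.
        erase :: Thread a -> Thread a
        erase (Delay t)        = erase t
        erase (PostCond a p q) = PostCond a (erase p) (erase q)
        erase t                = t

        -- t `improves` u: same observable behavior, and never more delay
        -- before any observable step.
        improves :: Eq a => Thread a -> Thread a -> Bool
        improves t u = delays t <= delays u && go (strip t) (strip u)
          where
            delays (Delay x) = 1 + delays x
            delays _         = 0
            strip (Delay x)  = strip x
            strip x          = x
            go (PostCond a p q) (PostCond b p' q') =
              a == b && improves p p' && improves q q'
            go x y = x == y

        -- e.g. Delay (PostCond 'a' Stop Stop)
        --        `improves` Delay (Delay (PostCond 'a' Stop Stop))   ==> True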

    Offline Specialisation in Prolog Using a Hand-Written Compiler Generator

    The so-called "cogen approach" to program specialisation, writing a compiler generator instead of a specialiser, has been used with considerable success in partial evaluation of both functional and imperative languages. This paper demonstrates that the cogen approach is also applicable to the specialisation of logic programs (called partial deduction when applied to pure logic programs) and leads to effective specialisers. Moreover, using good binding-time annotations, the speed-ups of the specialised programs are comparable to the speed-ups obtained with online specialisers. The paper first develops a generic approach to offline partial deduction and then a specific offline partial deduction method, leading to the offline system LIX for pure logic programs. While this is a usable specialiser by itself, its specialisation strategy is used to develop the "cogen" system LOGEN. Given a program, a specification of which inputs will be static, and an annotation specifying which calls should be unfolded, LOGEN generates a specialised specialiser for the program at hand. Running this specialiser with particular values for the static inputs results in the specialised program. While this requires two steps instead of one, the efficiency of the specialisation process is improved in situations where the same program is specialised multiple times. The paper also presents and evaluates an automatic binding-time analysis that is able to derive the annotations. While the derived annotations are still suboptimal compared to hand-crafted ones, they enable non-expert users to use the LOGEN system in a fully automated way. Finally, LOGEN is extended to directly support a large part of Prolog's declarative and non-declarative features and to perform so-called mixline specialisations, in which some unfolding decisions depend on the outcome of tests performed at specialisation time instead of being hardwired into the specialiser.
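    The mechanics of offline specialisation are easy to illustrate. The following Haskell sketch handles a toy expression language (it is not LOGEN, which specialises Prolog programs; all names are illustrative): binding-time annotations mark each operation static or dynamic, and the specialiser executes static operations now while emitting residual code for dynamic ones. The cogen approach then goes one step further: rather than interpreting the annotations at specialisation time, it generates a dedicated specialiser from the annotated program.

        import qualified Data.Map as Map

        -- Binding-time annotations: known at specialisation time, or not.
        data BT = Static | Dynamic deriving (Eq, Show)

        data Exp
          = Lit Int
          | Var String
          | Add BT Exp Exp     -- addition annotated with a binding time
          deriving (Eq, Show)

        -- Offline specialisation: given values for the static variables,
        -- compute static operations now and residualise dynamic ones.
        specialise :: Map.Map String Int -> Exp -> Exp
        specialise env e = case e of
          Lit n -> Lit n
          Var x -> maybe (Var x) Lit (Map.lookup x env)
          Add Static a b ->
            case (specialise env a, specialise env b) of
              (Lit m, Lit n) -> Lit (m + n)         -- executed at spec. time
              (a', b')       -> Add Dynamic a' b'   -- annotation was too optimistic
          Add Dynamic a b ->
            Add Dynamic (specialise env a) (specialise env b)

        -- e.g. specialise (Map.fromList [("x", 3)])
        --        (Add Dynamic (Add Static (Var "x") (Lit 4)) (Var "y"))
        --      ==> Add Dynamic (Lit 7) (Var "y")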

    Inductive benchmarking for purely functional data structures

    Get PDF
    Every designer of a new data structure wants to know how well it performs in comparison with others. But finding, coding and testing applications as benchmarks can be tedious and time-consuming. Besides, how a benchmark uses a data structure may considerably affect its apparent efficiency, so the choice of applications may bias the results. We address these problems by developing a tool for inductive benchmarking. This tool, Auburn, can generate benchmarks across a wide distribution of uses. We precisely define 'the use of a data structure', upon which we build the core algorithms of Auburn: how to generate a benchmark from a description of use, and how to extract a description of use from an application. We then apply inductive classification techniques to obtain decision trees for the choice between competing data structures. We test Auburn by benchmarking several implementations of three common data structures: queues, random-access lists and heaps. These and other results show Auburn to be a useful and accurate tool, but they also reveal some limitations of the approach.
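    The core idea of generating benchmarks from a description of use, rather than hand-picking applications, can be sketched briefly. The Haskell below is my illustration, not Auburn's actual algorithms: a usage profile fixes the mix of operations, a pseudo-random trace is drawn from it, and the trace is run against two competing queue implementations; timing the runs then compares the structures under that profile.

        -- A queue operation; a "description of use" fixes the operation mix.
        data Op = Enqueue Int | Dequeue deriving Show

        -- Tiny deterministic LCG so the sketch needs only the base library.
        rands :: Int -> [Int]
        rands = tail . iterate step
          where step s = (s * 1103515245 + 12345) `mod` 2147483648

        -- A trace in which roughly pct% of the operations are enqueues.
        genTrace :: Int -> Int -> Int -> [Op]
        genTrace seed pct n =
          [ if r `mod` 100 < pct then Enqueue (r `mod` 1000) else Dequeue
          | r <- take n (rands seed) ]

        -- Competitor 1: naive list queue, O(n) enqueue.
        runNaive :: [Op] -> [Int]
        runNaive = go []
          where
            go _ []                  = []
            go q (Enqueue x : ops)   = go (q ++ [x]) ops
            go (y:q) (Dequeue : ops) = y : go q ops
            go []    (Dequeue : ops) = go [] ops        -- dequeue on empty: skip

        -- Competitor 2: classic two-list (batched) queue, O(1) amortised enqueue.
        runBatched :: [Op] -> [Int]
        runBatched = go [] []
          where
            go _ _ []                  = []
            go f b (Enqueue x : ops)   = go f (x : b) ops
            go (y:f) b (Dequeue : ops) = y : go f b ops
            go [] [] (Dequeue : ops)   = go [] [] ops   -- dequeue on empty: skip
            go [] b ops                = go (reverse b) [] ops

        -- e.g. a write-heavy profile:  let tr = genTrace 42 80 10000
        -- Both runs give the same answers; timing them compares the two
        -- structures under this particular description of use.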