4 research outputs found

    N queens on an FPGA: mathematics, programming, or both?

    This paper presents a design methodology for deriving an FPGA implementation directly from a mathematical specification, thus avoiding the switch in semantic perspective that occurs in widely applied methods which include an imperative implementation as an intermediate step. The first step in the method presented in this paper is to transform the mathematical specification into a Haskell program. The next step is to make repetition structures explicit by means of higher-order functions, and after that to rewrite the specification in the form of a Mealy machine. Finally, adaptations have to be made in order to comply with the fixed nature of hardware. The result is then given to CλaSH, a compiler which generates synthesizable VHDL from the resulting Haskell code. An advantage of the approach presented here is that in all phases of the process the design can be simulated directly by executing the defining code in a standard Haskell environment. To illustrate the design process, the N queens problem is chosen as a running example.
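    As an illustration of the Mealy-machine step, the sketch below (plain Haskell, not code from the paper; the names mealy and counter and the toy transfer function are assumptions for exposition) shows the shape such a specification takes: a transfer function from current state and input to next state and output, which can be simulated directly over a list of inputs. CλaSH provides a combinator of essentially this shape over hardware signals.

        -- Illustrative sketch only; not the paper's code.
        -- A Mealy machine is a transfer function from current state and
        -- input to next state and output; simulation is a fold over inputs.
        mealy :: (s -> i -> (s, o)) -> s -> [i] -> [o]
        mealy _ _ []     = []
        mealy f s (i:is) = let (s', o) = f s i in o : mealy f s' is

        -- Toy transfer function (an assumed example): a saturating counter
        -- that outputs whether the bound has been reached.
        counter :: Int -> Bool -> (Int, Bool)
        counter n enable = (n', n' >= 8)
          where n' = if enable && n < 8 then n + 1 else n

        main :: IO ()
        main = print (mealy counter 0 (replicate 10 True))
        -- prints [False,False,False,False,False,False,False,True,True,True]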

    Annotated Type Systems for Program Analysis

    In this Ph.D. thesis, we study four program analyses. Three of them are specified by annotated type systems and the last one by abstract interpretation.

    We present a combined strictness and totality analysis, specified as an annotated type system. The type system allows conjunctions of annotated types, but only at the top level. The analysis is somewhat more powerful than the strictness analysis by Kuo and Mishra, due to the conjunctions and because we also consider totality. The analysis is shown sound with respect to a natural-style operational semantics. The analysis is not immediately extendable to full conjunction.

    The second analysis is also a combined strictness and totality analysis, however with "full" conjunction. Soundness of the analysis is shown with respect to a denotational semantics. The analysis is more powerful than the strictness analyses by Jensen and Benton in that, in addition to strictness, it considers totality. So far we have only specified the analyses; for the analyses to be practically useful, however, we need an algorithm for inferring the annotated types. We construct an algorithm for the second analysis using the lazy type approach by Hankin and Le Métayer. The reason for choosing the second analysis from the thesis is that the approach is not applicable to the first analysis.

    The third analysis we study is a binding-time analysis. We take the analysis specified by Nielson and Nielson and construct a more efficient algorithm than the one they propose. The algorithm collects constraints in a structural manner, like the type inference algorithm by Damas. Afterwards the minimal solution to the set of constraints is found.

    The last analysis in the thesis is specified by abstract interpretation. Hunt shows that projection-based analyses are subsumed by PER (partial equivalence relation) based analyses using abstract interpretation. The PERs used by Hunt are strict, i.e. bottom is related to bottom. Here we lift this restriction by requiring the PERs to be uniform, in the sense that they treat all the integers equally. By allowing non-strict PERs we get three properties on the integers, corresponding to the three annotations used in the first and second analyses in the thesis.
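    To make the flavour of such annotated types concrete, the following is a minimal Haskell sketch, assuming illustrative annotation names (Bot, Total, Top) and a representation of top-level conjunction as a list of annotated types; it is an exposition aid, not the thesis's formal system.

        -- Illustrative sketch only. Three annotations on base types,
        -- echoing the three properties mentioned above: Bot ("is bottom"),
        -- Total ("terminates"), Top ("no information").
        data Ann = Bot | Total | Top
          deriving (Eq, Show)

        data AType = Base Ann | Arrow AType AType
          deriving (Eq, Show)

        -- Conjunction only at the top level: a property is a list of
        -- annotated types that must all hold of the same term.
        type Property = [AType]

        -- A function that is both strict (bottom to bottom) and
        -- total (terminating to terminating):
        strictAndTotal :: Property
        strictAndTotal =
          [ Arrow (Base Bot)   (Base Bot)
          , Arrow (Base Total) (Base Total) ]

        main :: IO ()
        main = print strictAndTotal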

    Projection-Based Program Analysis

    Projection-based program analysis techniques are remarkable for their ability to give highly detailed and useful information not obtainable by other methods. The first proposed projection-based analysis techniques were those of Wadler and Hughes for strictness analysis, and of Launchbury for binding-time analysis; both techniques are restricted to the analysis of first-order monomorphic languages. Hughes and Launchbury generalised the strictness analysis technique, and Launchbury the binding-time analysis technique, to handle polymorphic languages, again restricted to first order. Other than a general approach to higher-order analysis suggested by Hughes, and an ad hoc implementation of higher-order binding-time analysis by Mogensen, neither of which had any formal notion of correctness, there has been no successful generalisation to higher-order analysis.

    We present a complete redevelopment of monomorphic projection-based program analysis from first principles, starting by considering the analysis of functions (rather than programs) to establish bounds on the intrinsic power of projection-based analysis, and showing also that projection-based analysis can capture interesting termination properties. The development of program analysis proceeds in two distinct steps: first for first order, then for higher order. Throughout we maintain a rigorous notion of correctness and prove that our techniques satisfy their correctness conditions.

    Our higher-order strictness analysis technique is able to capture various so-called data-structure-strictness properties, such as head strictness: the fact that a function may be safely assumed to evaluate the head of every cons cell in a list for which it evaluates the cons cell. Our technique and Hunt's PER-based technique (originally proposed at about the same time as ours) are the first techniques of any kind to capture such properties at higher order. Both the first-order and higher-order techniques are the first projection-based techniques to capture joint strictness properties, for example the fact that a function may be safely assumed to evaluate at least one of several arguments. The first-order binding-time analysis technique is essentially the same as Launchbury's; the higher-order technique is the first such formally based higher-order generalisation. Ours are the first projection-based termination analysis techniques, and the first techniques of any kind able to detect termination properties such as head termination: the fact that termination of a cons cell implies termination of the head.

    A notable feature of the development is the method by which the first-order analysis semantics are generalised to higher order: except for the fixed-point constant, the higher-order semantics are all instances of a single higher-order semantics parameterised by the constants defining the various first-order semantics.
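    The head-strictness projection itself is small enough to write down; the sketch below is a minimal executable illustration (an assumption for exposition, not the dissertation's formal development), treating a projection as an idempotent function that approximates the identity.

        -- Illustrative sketch only. A projection is idempotent and below
        -- the identity; headStrict plays the role of the head-strictness
        -- projection H: it sends any cons cell with an undefined head to
        -- bottom, so a function f is head-strict exactly when
        -- f = f . headStrict.
        headStrict :: [a] -> [a]
        headStrict []     = []
        headStrict (x:xs) = x `seq` (x : headStrict xs)

        -- sum evaluates the head of every cons cell it evaluates, so
        -- precomposing the projection does not change its result.
        main :: IO ()
        main = print (sum (headStrict [1 .. 10 :: Int]))
        -- prints 55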

    Transformations on higher-order functions

    No full text