31 research outputs found

    A Verified Packrat Parser Interpreter for Parsing Expression Grammars

    Parsing expression grammars (PEGs) offer a natural opportunity for building verified parser interpreters based on higher-order parsing combinators. PEGs are expressive, unambiguous, and efficient to parse in a top-down recursive descent style. We use the rich type system of the PVS specification language and verification system to formalize the metatheory of PEGs and define a reference implementation of a recursive parser interpreter for PEGs. In order to ensure termination of parsing, we define a notion of a well-formed grammar. Rather than relying on an inductive definition of parsing, we use abstract syntax trees that represent the computational trace of the parser to provide an effective proof certificate for correct parsing and ensure that parsing properties including soundness and completeness are maintained. The correctness properties are embedded in the types of the operations so that the proofs can be easily constructed from local proof obligations. Building on the reference parser interpreter, we define a packrat parser interpreter as well as an extension that is capable of semantic interpretation. Both these parser interpreters are proved equivalent to the reference one. All of the parsers are executable. The proofs are formalized in mathematical terms so that similar parser interpreters can be defined in any specification language with a type system similar to PVS.
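
    The combination of PEG ordered choice and top-down recursive descent described above can be illustrated with a small combinator sketch. The following Haskell fragment is purely illustrative and is not the PVS development: it shows ordered choice committing to the first alternative that succeeds, the property that makes PEGs unambiguous; a packrat variant would additionally memoise results per input position.

        -- Illustrative only: a minimal PEG-style recursive descent parser in Haskell,
        -- not the verified PVS interpreter.
        import Data.Char (isDigit)

        -- A parser consumes a String and either fails or returns a result together
        -- with the remaining input.
        newtype P a = P { runP :: String -> Maybe (a, String) }

        -- Succeed without consuming input.
        unit :: a -> P a
        unit x = P $ \s -> Just (x, s)

        -- Match a single character satisfying a predicate.
        item :: (Char -> Bool) -> P Char
        item ok = P $ \s -> case s of
          (c:cs) | ok c -> Just (c, cs)
          _             -> Nothing

        -- PEG ordered (committed) choice: try the second parser only if the first fails.
        orElse :: P a -> P a -> P a
        orElse (P f) (P g) = P $ \s -> maybe (g s) Just (f s)

        -- Sequencing of parsers.
        andThen :: P a -> (a -> P b) -> P b
        andThen (P f) k = P $ \s -> f s >>= \(x, rest) -> runP (k x) rest

        -- Greedy zero-or-more repetition, as in PEGs.
        star :: P a -> P [a]
        star p = orElse (p `andThen` \x -> star p `andThen` \xs -> unit (x:xs)) (unit [])

        -- One or more digits.
        digits :: P String
        digits = item isDigit `andThen` \d -> star (item isDigit) `andThen` \ds -> unit (d:ds)

        main :: IO ()
        main = print (runP digits "123abc")   -- Just ("123","abc")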

    Evidence gathering in support of sustainable Scottish inshore fisheries: work package (4) final report: a pilot study to define the footprint and activities of Scottish inshore fisheries by identifying target fisheries, habitats and associated fish stocks

    [Extract from Executive Summary] This work was conducted under Work package 4 of the European Fisheries Fund-funded program “Evidence Gathering in Support of Sustainable Scottish Inshore Fisheries”. The overall aim of the program was to work in partnership with Marine Scotland Fisheries Policy and with the Scottish Inshore Fisheries Groups to help develop inshore fisheries management. Specifically, the program aims were to establish the location of fishing activities within inshore areas; to identify catch composition and associated fishery impacts; to define the environmental footprint and availability of stocks; to develop economic value within local fisheries; and to establish an information resource base to assist the development of inshore fisheries management provisions.

    Extracting total Amb programs from proofs

    We present a logical system CFP (Concurrent Fixed Point Logic) supporting the extraction of nondeterministic and concurrent programs that are provably total and correct. CFP is an intuitionistic first-order logic with inductive and coinductive definitions extended by two propositional operators: restriction, a strengthening of implication, and an operator for total concurrency. The extraction takes formal CFP proofs as its source; the target is a lambda calculus with constructors and recursion extended by a constructor Amb (for McCarthy's amb), which is interpreted operationally as globally angelic choice and is used to implement nondeterminism and concurrency. The correctness of extracted programs is proven via an intermediate domain-theoretic denotational semantics. We demonstrate the usefulness of our system by extracting a nondeterministic program that translates infinite Gray code into the signed digit representation. A noteworthy feature of CFP is the fact that the proof rules for restriction and concurrency involve variants of the classical law of excluded middle that would not be interpretable computationally without Amb.
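
    As a rough operational intuition for the Amb constructor described above, the following Haskell sketch races two computations and takes whichever answers first. It is an assumed illustration of globally angelic choice using only base-library concurrency primitives, not the extraction target or the semantics of CFP.

        -- Illustrative only: one operational reading of McCarthy's amb as a race.
        import Control.Concurrent (forkIO, killThread, threadDelay)
        import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

        -- Run both actions concurrently and return whichever result arrives first.
        amb :: IO a -> IO a -> IO a
        amb left right = do
          box <- newEmptyMVar
          t1  <- forkIO (left  >>= putMVar box)
          t2  <- forkIO (right >>= putMVar box)
          result <- takeMVar box      -- first value wins
          killThread t1
          killThread t2
          return result

        main :: IO ()
        main = do
          -- The slow branch stands in for a possibly diverging computation.
          answer <- amb (threadDelay 1000000 >> return "slow") (return "fast")
          putStrLn answer             -- almost always prints "fast"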

    Investigation, modelling and planning of stochastic concrete placing operations


    Projection-Based Program Analysis

    Projection-based program analysis techniques are remarkable for their ability to give highly detailed and useful information not obtainable by other methods. The first proposed projection-based analysis techniques were those of Wadler and Hughes for strictness analysis, and Launchbury for binding-time analysis; both techniques are restricted to analysis of first-order monomorphic languages. Hughes and Launchbury generalised the strictness analysis technique, and Launchbury the binding-time analysis technique, to handle polymorphic languages, again restricted to first order. Other than a general approach to higher-order analysis suggested by Hughes, and an ad hoc implementation of higher-order binding-time analysis by Mogensen, neither of which had any formal notion of correctness, there has been no successful generalisation to higher-order analysis. We present a complete redevelopment of monomorphic projection-based program analysis from first principles, starting by considering the analysis of functions (rather than programs) to establish bounds on the intrinsic power of projection-based analysis, showing also that projection-based analysis can capture interesting termination properties. The development of program analysis proceeds in two distinct steps: first for first-order, then higher order. Throughout we maintain a rigorous notion of correctness and prove that our techniques satisfy their correctness conditions. Our higher-order strictness analysis technique is able to capture various so-called data-structure-strictness properties such as head strictness: the fact that a function may be safely assumed to evaluate the head of every cons cell in a list for which it evaluates the cons cell. Our technique, and Hunt's PER-based technique (originally proposed at about the same time as ours), are the first techniques of any kind to capture such properties at higher order. Both the first-order and higher-order techniques are the first projection-based techniques to capture joint strictness properties, for example the fact that a function may be safely assumed to evaluate at least one of several arguments. The first-order binding-time analysis technique is essentially the same as Launchbury's; the higher-order technique is the first such formally-based higher-order generalisation. Ours are the first projection-based termination analysis techniques, and are the first techniques of any kind that are able to detect termination properties such as head termination: the fact that termination of a cons cell implies termination of the head. A notable feature of the development is the method by which the first-order analysis semantics are generalised to higher-order: except for the fixed-point constant, the higher-order semantics are all instances of a higher-order semantics parameterised by the constants defining the various first-order semantics.
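
    The head-strictness property described above can be made concrete with a small, assumed Haskell example (it uses none of the thesis's projection machinery): length never demands list elements, while sum demands the head of every cons cell it consumes, so a head-strict analysis could justify evaluating those heads eagerly.

        -- Illustrative contrast between a non-head-strict and a head-strict consumer.
        import Control.Exception (SomeException, evaluate, try)

        main :: IO ()
        main = do
          -- length is not head strict: the undefined elements are never demanded.
          print (length [undefined, undefined :: Int])                 -- prints 2

          -- sum is head strict: computing it demands every element.
          r <- try (evaluate (sum [1, undefined :: Int])) :: IO (Either SomeException Int)
          case r of
            Left _  -> putStrLn "sum demanded the undefined element, as expected"
            Right n -> print n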

    A parallel functional language compiler for message-passing multicomputers

    The research presented in this thesis is about the design and implementation of Naira, a parallel, parallelising compiler for a rich, purely functional programming language. The source language of the compiler is a subset of Haskell 1.2. The front end of Naira is written entirely in the Haskell subset being compiled. Naira has been successfully parallelised and it is the largest successfully parallelised Haskell program, having achieved good absolute speedups on a network of SUN workstations. Having the same basic structure as other production compilers of functional languages, Naira's parallelisation technology should carry forward to other functional language compilers. The back end of Naira is written in C and generates parallel code in the C language which is envisioned to be run on distributed-memory machines. The code generator is based on a novel compilation scheme specified using a restricted form of Milner's π-calculus which achieves asynchronous communication. We present the first working implementation of this scheme on distributed-memory message-passing multicomputers with split-phase transactions. Simulated assessment of the generated parallel code indicates good parallel behaviour. Parallelism is introduced using explicit, advisory user annotations in the source program, and there are two major aspects of the use of annotations in the compiler. First, the front end of the compiler is parallelised so as to improve its efficiency at compilation time when it is compiling input programs. Secondly, the input programs to the compiler can themselves contain annotations based on which the compiler generates the multi-threaded parallel code. These, therefore, make Naira, unusually and uniquely, both a parallel and a parallelising compiler. We adopt a medium-grained approach to granularity where function applications form the unit of parallelism and load distribution. We have experimented with two different task distribution strategies, deterministic and random, and have also experimented with thread-based and quantum-based scheduling policies. Our experiments show that there is little efficiency difference for regular programs but the quantum-based scheduler performs best on programs with irregular parallelism. The compiler has been successfully built, parallelised and assessed using both idealised and realistic measurement tools: we obtained significant compilation speed-ups on a variety of simulated parallel architectures. The simulated results are supported by the best results obtained on real hardware for such a large program: we measured an absolute speedup of 2.5 on a network of 5 SUN workstations. The compiler has also been shown to have good parallelising potential, based on popular test programs. Results of assessing Naira's generated unoptimised parallel code are comparable to those produced by other successful parallel implementation projects.
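
    The advisory-annotation idea described above can be sketched as follows, under the assumption that the par and pseq combinators from GHC's Control.Parallel module (in the parallel package) stand in for Naira's own annotation syntax, which is not shown here: par merely advises the runtime that a function application may be evaluated in parallel, leaving the program's meaning unchanged.

        -- Illustrative only: advisory parallelism annotations in the par/pseq style,
        -- with a function application as the unit of parallelism.
        import Control.Parallel (par, pseq)

        work :: Int -> Int
        work n = sum [1 .. n]                  -- stands in for a real computation

        main :: IO ()
        main =
          let a = work 2000000                 -- one function application ...
              b = work 3000000                 -- ... and another
          in a `par` (b `pseq` print (a + b))  -- spark a, evaluate b, then combine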

    Parallel programming using functional languages

    It has been argued for many years that functional programs are well suited to parallel evaluation. This thesis investigates this claim from a programming perspective; that is, it investigates parallel programming using functional languages. The approach taken has been to determine the minimum programming which is necessary in order to write efficient parallel programs. This has been attempted without the aid of clever compile-time analyses. It is argued that parallel evaluation should be explicitly expressed, by the programmer, in programs. To achieve this, a lazy functional language is extended with parallel and sequential combinators. The mathematical nature of functional languages means that programs can be formally derived by program transformation. To date, most work on program derivation has concerned sequential programs. In this thesis Squigol has been used to derive three parallel algorithms. Squigol is a functional calculus for program derivation, which is becoming increasingly popular. It is shown that some aspects of Squigol are suitable for parallel program derivation, while other aspects are specifically orientated towards sequential algorithm derivation. In order to write efficient parallel programs, parallelism must be controlled so as to limit storage usage, the number of tasks and the minimum size of tasks. In particular, over-eager evaluation or generating excessive numbers of tasks can consume too much storage. Also, tasks can be too small to be worth evaluating in parallel. Several program techniques for parallelism control were tried. These were compared with a run-time system heuristic for parallelism control. It was discovered that the best control was effected by a combination of run-time system and programmer control of parallelism. One of the problems with parallel programming using functional languages is that non-deterministic algorithms cannot be expressed. A bag (multiset) data type is proposed to allow a limited form of non-determinism to be expressed. Bags can be given a non-deterministic parallel implementation. However, provided the operations used to combine bag elements are associative and commutative, the result of bag operations will be deterministic. The onus is on the programmer to prove this, but usually this is not difficult. Also, bags' insensitivity to ordering means that more transformations are directly applicable than if, say, lists were used instead. It is necessary to be able to reason about and measure the performance of parallel programs. For example, sometimes algorithms which seem intuitively to be good parallel ones are not. For some higher-order functions it is possible to devise parameterised formulae describing their performance. This is done for divide-and-conquer functions, which enables constraints to be formulated which guarantee that they have good performance. Pipelined parallelism is difficult to analyse. Therefore a formal semantics for calculating the performance of pipelined programs is devised. This is used to analyse the performance of a pipelined Quicksort. By treating the performance semantics as a set of transformation rules, the simulation of parallel programs may be achieved by transforming programs. Some parallel programs perform poorly due to programming errors. A pragmatic method of debugging such programming errors is illustrated by some examples.
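
    As a concrete, assumed illustration of two ingredients discussed above (explicit parallel and sequential combinators, plus programmer control of task size), the following Haskell sketch uses par and pseq from the parallel package and stops sparking new tasks once the work falls below a threshold. It is not code from the thesis.

        -- Illustrative divide-and-conquer sum with a granularity threshold.
        import Control.Parallel (par, pseq)

        parSum :: Int -> [Int] -> Int
        parSum threshold xs
          | length xs <= threshold = sum xs            -- task too small: stay sequential
          | otherwise =
              let (ls, rs) = splitAt (length xs `div` 2) xs
                  l = parSum threshold ls
                  r = parSum threshold rs
              in l `par` (r `pseq` (l + r))            -- evaluate the halves in parallel

        main :: IO ()
        main = print (parSum 10000 [1 .. 1000000])     -- 500000500000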

    The teaching of English in Scottish secondary schools 1940-1990 : a study of change and development

    This study follows the progress of a key school subject towards its slow, partial fulfilment of the 1940s' aspiration for equality of educational opportunity within the post war reconstruction of Scottish society. Its focus is on 'English' at both the level of public pronouncement and of day-to-day classroom experience and on the intricate interactions between these two worlds. Therefore, in addition to analysis of official documentation and school materials, the personal testimony of twenty long-serving participants, practitioners as well as policy makers, is woven into the account. Two factors have helped to elucidate this history: the centralised, uniform nature of the Scottish system; the post-war inheritance of two articulated but competing models of English - the initially dominant Scottish Education Department supported academic syllabus built on knowledge inculcation, national examination and institutional division into 'junior' and 'senior' secondary curricula as against the progressivist alternative of 'the full and harmonious development of the individual' to be sought in 'omnibus' schools. Superficially, 1940-1990 may be viewed as the gradual, orderly movement towards Standard Grade English as a consensual acceptance of the progressivist version, a process facilitated by an opening up of decision-making into a partnership between SED and the profession through such bodies as the Consultative Council and a devolved Examination Board. A detailed investigation of actual practice shows a more ambiguous curricular reality in which pragmatic management and deeply embedded assumptions sustain a contradictory adherence to didactic methodology and rigid assessment procedure. The Scottish experience suggests that curricular change is a necessarily problematic process whose promotion depends upon a sensitive appreciation of its complex rhythms. In Scotland this means using the traditional authority of the centre to establish clear frameworks and appropriate assessment targets within and against which the individual teacher is freed to work out a matching pedagogy and to take control of in-course evaluations. Above all, the educational innovator must be alert to the power of historical inheritance in the construction of classroom practice

    Prototyping parallel functional intermediate languages

    Non-strict higher-order functional programming languages are elegant, concise, mathematically sound and contain few environment-specific features, making them obvious candidates for harnessing high-performance architectures. The validity of this approach has been established by a number of experimental compilers. However, while there have been a number of important theoretical developments in the field of parallel functional programming, implementations have been slow to materialise. The myriad design choices and demands of specific architectures lead to protracted development times. Furthermore, the resulting systems tend to be monolithic entities, and are difficult to extend and test, ultimately discouraging experimentation. The traditional solution to this problem is the use of a rapid prototyping framework. However, as each existing system tends to prefer one specific platform and a particular way of expressing parallelism (including implicit specification), it is difficult to envisage a general-purpose framework. Fortunately, most of these systems have at least one point of commonality: the use of an intermediate form. Typically, these abstract representations explicitly identify all parallel components but without the background noise of syntactic and (potentially arbitrary) implementation details. To this end, this thesis outlines a framework for rapidly prototyping such intermediate languages. Based on the traditional three-phase compiler model, the design process is driven by the development of various semantic descriptions of the language. Executable versions of the specifications help to both debug and informally validate these models. A number of case studies, covering the spectrum of modern implementations, demonstrate the utility of the framework.
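
    To make the idea of an intermediate form with explicitly identified parallel components and an executable semantics concrete, here is a small assumed Haskell sketch (not one of the thesis's case studies): a toy expression language with an explicit Par constructor and a sequential reference evaluator that can be used to test the specification.

        -- Illustrative toy intermediate language with explicit parallel components.
        data Expr
          = Lit Int
          | Add Expr Expr
          | Par Expr Expr        -- operands that an implementation may evaluate in parallel
          deriving Show

        -- Executable reference semantics: Par means the same as Add, so the
        -- parallelism annotation never changes the value of a program.
        eval :: Expr -> Int
        eval (Lit n)   = n
        eval (Add a b) = eval a + eval b
        eval (Par a b) = eval a + eval b

        main :: IO ()
        main = print (eval (Par (Add (Lit 1) (Lit 2)) (Lit 3)))   -- 6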

    The development of the audience for early film in Glasgow before 1914

    This thesis investigates the development of the audience for early cinema in Glasgow. It takes a social-historical approach, considering the established scholarship from Allen, Low, Hansen, Kuhn et al. on the development of early cinema audiences, and overlays this with original archival research to provide examples which are specific to Glasgow. By using a variety of local sources, for example, newspaper archives, local authority minutes, and audience recollections, this thesis will show that although there were commonalities between Glasgow and various other centres such as Aberdeen, Manhattan, Knoxville, or London, there were also conditions which were specific and unique to Glasgow. In that sense this thesis is a local insight into a national argument. A secondary aim of this work is to examine the relationship between Glasgow audiences and the moving image. This thesis will examine the sense of civic pride which cinema brought to the city, as well as considering the rise of a new generation of fans. These ‘cinema natives’, as I term them, grew up with the moving image and as such had an enduring connection with the movies. The thesis is broadly organised in three parts. The first part (Chapters 1-3) charts the history of entertainment in Glasgow as a city of spectacle and display. The Glasgow Fair holiday played an important part in establishing leisure in the city’s social calendar and led to the setting up of an entertainment quarter. This section will also consider pre-cinema traditions and the growth of Victorian leisure culture. It will also show how cinema in Glasgow spread very quickly so that within 18 months it encompassed a wide demographic range. The mid-section (Chapters 4-6) outlines the regulatory framework in which cinema emerged in Glasgow. Once fixed-site exhibition developed from 1908, there was a moral backlash against cinema with a campaign from Glasgow Parish Council aimed at restricting access for children. At the same time there were also national moves to introduce safety regulation. This section examines how the themes of safety regulation and moral regulation were conflated in an attempt to control the audience. It concludes with the introduction of the Cinematograph Act 1909 and outlines the difficulties of applying this national legislation at a local level.