
    Use of proofs-as-programs to build an analogy-based functional program editor

    This thesis presents a novel application of the technique known as proofs-as-programs. Proofs-as-programs defines a correspondence between proofs in a constructive logic and functional programs. By using this correspondence, a functional program may be represented directly as the proof of a specification, and so the program may be analysed within this proof framework. CYNTHIA is a program editor for the functional language ML which uses proofs-as-programs to analyse users' programs as they are written. So that the user requires no knowledge of proof theory, the underlying proof representation is completely hidden. The proof framework allows programs written in CYNTHIA to be checked for syntactic correctness, well-typedness, well-definedness and termination. CYNTHIA also embodies the idea of programming by analogy: rather than starting from scratch, users always begin with an existing function definition. They then apply a sequence of high-level editing commands which transform this starting definition into the one required. These commands preserve correctness and also increase programming efficiency by automating commonly occurring steps. The design and implementation of CYNTHIA are described and its role as a novice programming environment is investigated. Use by experts is possible, but only a subset of ML is currently supported. Two major trials have shown that CYNTHIA is well suited as a teaching tool. Users of CYNTHIA make fewer programming errors, and its feedback facilities make it easier to track down the source of errors when they do occur.
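
    As an informal illustration of the proofs-as-programs idea (a minimal sketch, not CYNTHIA's actual editing commands or proof representation), a structurally recursive ML-style definition corresponds to a proof by induction over its argument's datatype, which is what makes properties such as termination checkable; in OCaml:

        (* Length of a list, written by structural recursion.              *)
        (* Under proofs-as-programs this definition corresponds to a proof *)
        (* by induction on the list datatype: the [] branch is the base    *)
        (* case, and the _ :: rest branch uses the induction hypothesis    *)
        (* (the recursive call on the strictly smaller list rest), so the  *)
        (* function is evidently terminating and well-defined.             *)
        let rec length (xs : 'a list) : int =
          match xs with
          | [] -> 0
          | _ :: rest -> 1 + length rest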

    A Linear Logic approach to RESTful web service modelling and composition

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. RESTful Web Services are gaining increasing attention from both the service and the Web communities. The rising number of services being implemented and made available on the Web is creating a demand for modelling techniques that can abstract REST design from the implementation in order better to specify, analyse and implement large-scale RESTful Web systems. Such techniques can also help by providing suitable RESTful Web Service composition methods, which can reduce costs by efficiently re-using the large number of services that are already available and by exploiting existing services for complex business purposes. This research considers RESTful Web Services as state transition systems and proposes a novel Linear Logic based approach, the first of its kind, for both the modelling and the composition of RESTful Web Services. The thesis demonstrates the capabilities of resource-sensitive Linear Logic for modelling five key REST constraints and proposes a two-stage approach to service composition involving Linear Logic theorem proving and proof-as-process based on the π-calculus. Whereas previous approaches have focused on each aspect of the composition of RESTful Web Services individually (e.g. execution or high-level modelling), this work bridges the gap between abstract formal modelling and application-level execution in an efficient and effective way. The approach not only ensures the completeness and correctness of the resulting composed services but also produces their process models naturally, providing the possibility to translate them into executable business languages. Furthermore, the research encodes the proposed modelling and composition method into the Coq proof assistant, which enables both the Linear Logic theorem proving and the π-calculus extraction to be conducted semi-automatically. The feasibility and versatility studies performed in two disparate user scenarios (shopping and biomedical service composition) show that the proposed method provides a good level of scalability when the numbers of services and resources grow.
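
    As a hedged illustration of the modelling style (hypothetical resource names, not the thesis's actual encoding of the REST constraints), a RESTful state transition can be expressed as a linear implication, so that service composition becomes theorem proving over available resources:

        % A hypothetical transition in which a checkout service consumes a cart
        % and a payment and produces an order; \otimes groups resources that must
        % be available together, and \multimap consumes its antecedent.
        \mathit{Cart} \otimes \mathit{Payment} \multimap \mathit{Order}
        % Composition as proof search: from the initial resources and the
        % services' axioms, derive the goal resource.
        \mathit{Cart},\ \mathit{Payment},\ (\mathit{Cart} \otimes \mathit{Payment} \multimap \mathit{Order}) \ \vdash\ \mathit{Order}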

    Reasoning about correctness properties of a coordination programming language

    Safety-critical systems place additional requirements on the programming language used to implement them compared with traditional environments. Examples of features that influence the suitability of a programming language in such environments include complexity of definitions, expressive power, bounded space and time, and verifiability. Hume is a novel programming language whose design targets the first three of these, in some ways contradictory, features: fully expressive languages cannot guarantee bounds on time and space, and low-level languages which can guarantee space and time bounds are often complex and thus error-prone. In Hume, this contradiction is resolved by a two-layered architecture: a high-level, fully expressive language is built on top of a low-level coordination language which can guarantee space and time bounds. This thesis explores the verification of Hume programs. It targets safety properties, which are the most important type of correctness properties, of the low-level coordination language, which is believed to be the most error-prone. Deductive verification in Lamport's Temporal Logic of Actions (TLA) is utilised, in turn validated through algorithmic experiments. This deductive verification is mechanised by first embedding TLA in the Isabelle theorem prover, and then embedding Hume on top of this. Verification of temporal invariants is explored in this setting. In Hume, program transformation is a key feature, often required to guarantee space and time bounds of high-level constructs. Verification of transformations is thus an integral part of this thesis. The work on both invariant verification and, in particular, transformation verification has pinpointed several weaknesses of the Hume language. Motivated and influenced by this, an extension to Hume, called Hierarchical Hume, is developed and embedded in TLA. Several case studies of transformation and invariant verification of Hierarchical Hume in Isabelle are conducted, and an approach towards a calculus for transformations is examined. James Watt Scholarship; Engineering and Physical Sciences Research Council (EPSRC) Platform grant GR/SO177.
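
    As a hedged sketch of the style of proof obligation involved (the standard TLA invariance rule, not the thesis's actual Hume embedding), establishing a temporal invariant Inv for a specification with initial condition Init and next-state relation Next reduces to showing that Inv holds initially and is preserved by every step:

        % Schematic TLA invariance rule: [Next]_vars allows either a Next step
        % or a stuttering step leaving vars unchanged; Inv' is Inv evaluated
        % in the primed (next-state) variables.
        \frac{\mathit{Init} \Rightarrow \mathit{Inv}
              \qquad
              \mathit{Inv} \land [\mathit{Next}]_{\mathit{vars}} \Rightarrow \mathit{Inv}'}
             {\mathit{Init} \land \Box[\mathit{Next}]_{\mathit{vars}} \Rightarrow \Box\,\mathit{Inv}}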

    An improved method for the mechanisation of inductive proof


    Verification of Pointer-Based Programs with Partial Information

    The proliferation of software across all aspects of people's lives means that software failure can bring catastrophic results. It is therefore highly desirable to be able to develop software that is verified to meet its expected specification. This has also been identified as a key objective in one of the UK Grand Challenges (GC6) (Jones et al., 2006; Woodcock, 2006). However, many difficult problems still remain in achieving this objective, partially due to the wide use of (recursive) shared mutable data structures, which are hard to keep track of statically in a precise and concise way. This thesis aims at building a verification system for both memory safety and functional correctness of programs manipulating pointer-based data structures, which can deal with two scenarios where only partial information about the program is available: the verifier may be supplied with only a partial program specification, or with a full specification but only part of the program code. For the first scenario, previous state-of-the-art works (Nguyen et al., 2007; Chin et al., 2007; Nguyen and Chin, 2008; Chin et al., 2010) generally require users to provide full specifications for each method of the program to be verified. Such approaches demand considerable intellectual effort from users, who are also liable to make mistakes when writing such specifications. This thesis proposes a new approach to program verification that allows users to provide only partial specifications for methods. Our approach then refines the given annotations into more complete specifications by discovering missing constraints. The discovered constraints may involve both numerical and multiset properties that can later be confirmed or revised by users. Meanwhile, we further augment our approach by requiring only partial specifications to be given for the primary methods of a program; specifications for loops and auxiliary methods can then be systematically discovered by our augmented mechanism, with the help of information propagated from the primary methods. This work is aimed at verifying beyond shape properties, with the eventual goal of analysing both memory safety and functional properties for pointer-based data structures. Initial experiments have confirmed that we can automatically refine partial specifications with non-trivial constraints, thus making it easier for users to handle specifications with richer properties. For the second scenario, many programs contain invocations of unknown components, and hence only part of the program code is available to the verifier. As previous works generally require the whole of the program code to be present, we target the verification of memory safety and functional correctness of programs manipulating pointer-based data structures where the program code is only partially available due to invocations of unknown components. Provided with a Hoare-style specification {Pre} prog {Post}, where the program prog contains calls to some unknown procedure unknown, we infer a specification mspec_u for the unknown part from the calling contexts, such that the problem of verifying prog can be safely reduced to the problem of proving that the unknown procedure (once its code is available) meets the derived specification mspec_u. The expected specification mspec_u is automatically calculated using an abduction-based shape analysis specifically designed for a combined abstract domain. We have implemented a system to validate the viability of our approach, with encouraging experimental results.
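
    As a hedged, schematic example of the abduction step (hypothetical predicates and code, not output of the authors' system), the calling context of an unknown call determines the specification it must satisfy:

        % prog:  x := new_node();  unknown(x);   -- caller must finally establish list(x)
        % After allocation, x points to a single node; the caller's postcondition
        % requires the list predicate over x.  Abduction therefore derives the
        % following specification that the unknown procedure must meet:
        \{\ x \mapsto \mathit{node}(\_, \mathit{null})\ \}\ \ \mathtt{unknown}(x)\ \ \{\ \mathit{list}(x)\ \}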

    Program Analysis in A Combined Abstract Domain

    Automated verification of heap-manipulating programs is a challenging task due to the complexity of aliasing and the mutability of the data structures used in these programs. The properties of a number of important data structures, such as sorted lists, priority queues and height-balanced trees, relate not to a single domain but to multiple combined domains, and the safety, and sometimes the efficiency, of programs relies on these properties. This thesis focuses on developing a verification system for both functional correctness and memory safety of such programs, which involve heap-based data structures. Two automated inference mechanisms are presented for heap-manipulating programs in this thesis. Firstly, an abstract-interpretation-based approach is proposed to synthesise program invariants in a combined pure and shape domain; newly designed abstraction, join and widening operators are defined for the combined domain. Secondly, a compositional analysis approach is described that discovers both pre- and post-conditions of programs with a bi-abduction technique in the combined domain. Both inference approaches have been implemented, and the obtained results validate their feasibility and precision. The outcomes of the thesis confirm that it is possible and practical to analyse heap-manipulating programs automatically and precisely by using abstract interpretation in a sophisticated combined domain.
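
    As a hedged illustration of what a combined shape-and-pure domain can express (a standard separation-logic-style definition, not necessarily the thesis's exact notation), a sorted singly linked list couples a shape predicate with numeric constraints on length and minimum value:

        % sortll(x, n, mn): x points to a sorted list of length n with minimum mn.
        \mathit{sortll}(x, n, \mathit{mn}) \ \equiv\
          (x = \mathit{null} \land n = 0 \land \mathsf{emp})
          \ \lor\
          \exists v, q, n', \mathit{mn}'.\
            x \mapsto \mathit{node}(v, q) \ast \mathit{sortll}(q, n', \mathit{mn}')
            \land n = n' + 1 \land v \le \mathit{mn}' \land \mathit{mn} = v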

    Deductive synthesis of recursive plans in linear logic

    Centre for Intelligent Systems and their Applications. Conventionally, the problem of plan formation in Artificial Intelligence deals with the generation of plans in the form of a sequence of actions. This thesis describes an approach to extending the expressiveness of plans to include conditional branches and recursion. This allows problems to be solved at a higher level, such that a single plan in such a language is capable of solving a class of problems rather than a single problem instance: a plan of fixed size may solve arbitrarily large problem instances. To form such plans, we take a deductive planning approach, in which the formation of the plan goes hand-in-hand with the construction of the proof that the plan specification is realisable. The formalism used here for specifying and reasoning with planning problems is Girard's Intuitionistic Linear Logic (ILL), which is attractive for planning problems because state change can be expressed directly as linear implication, with no need for frame axioms. We extract plans by means of the relationship between proofs in ILL and programs in the style of Abramsky. We extend the ILL proof rules to account for induction over inductively defined types, thereby allowing recursive plans to be synthesised. We also adapt Abramsky's framework to partially evaluate and execute the plans in the extended language. We give a proof search algorithm tailored towards the fragment of ILL employed (excluding induction rule selection). A system implementation, Lino, comprises modules for proof checking, automated proof search, plan extraction and partial evaluation of plans. We demonstrate the encodings and solutions in our framework of various planning domains involving recursion. We compare the capabilities of our approach with the previous approaches of Manna and Waldinger, Ghassem-Sani and Steel, and Stephen and Biundo. We claim that our approach gives a good balance between coverage of the problems that can be described and the tractability of proof search.
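
    As a hedged illustration of the encoding style (a hypothetical blocks-world action, not the thesis's actual domains), state change is expressed directly as linear implication, and a planning problem becomes a sequent whose proof yields the plan:

        % A one-step action: stacking block a onto block b consumes holding(a)
        % and clear(b) and produces on(a,b) and clear(a); no frame axioms are
        % needed, since untouched resources are simply carried through the context.
        \mathit{holding}(a) \otimes \mathit{clear}(b) \ \multimap\ \mathit{on}(a,b) \otimes \mathit{clear}(a)
        % Planning as deduction: derive the goal from the initial resources and
        % the action axioms; the extracted proof term is the (possibly recursive) plan.
        \Gamma_{\mathit{actions}},\ \mathit{Init}\ \vdash\ \mathit{Goal}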

    Verification and Validation of JavaScript

    JavaScript is a prototype-based, dynamically typed language with scope chains and higher-order functions. Third-party web applications embedded in web pages rely on JavaScript to run inside every browser. Because of its dynamic nature, a JavaScript program is easily exploited by malicious manipulations and safety-breach attacks. Therefore, it is highly desirable, when developing a JavaScript application, to be able to verify that it meets its expected specification and that it is safe. One of the challenges in achieving this objective is that it is hard to statically keep track of a heap-manipulating JavaScript program due to the mutability of its data structures. This thesis focuses on developing a verification framework for both functional correctness and safety of JavaScript programs that involve heap-based data structures. Two automated inference-based verification frameworks are constructed, based upon a variant of separation logic. The first framework defines a suitable subset of JavaScript, together with a set of operational semantics rules, a specification language and a set of inference rules; an axiomatic framework is then presented to discover both pre- and post-conditions of a JavaScript program. Given a Hoare-style specification {Pre} prog {Post}, where the program prog consists of statements of the subset language, the problem of verifying the program is reduced to the problem of proving that the execution of the statements meets the derived specification. The second framework increases the expressiveness of the subset language to include the this keyword, which can cause safety issues in JavaScript programs; it revises the operational and inference rules to handle the newly added feature, and a safety verification algorithm is defined. Both verification frameworks have been proved sound, and the results obtained from evaluations validate the feasibility and precision of the proposed approaches. The outcomes of this thesis confirm that it is possible to analyse heap-manipulating JavaScript programs automatically and precisely and to discover unsafe programs.
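
    As a hedged sketch (hypothetical object and field names, not the thesis's actual specification language), the kind of separation-logic triple such a framework establishes for a heap-updating JavaScript statement looks like:

        % The object bound to x owns a field count holding n before the update
        % and n + 1 afterwards; the rest of the heap is untouched (framed out).
        \{\ x \mapsto \{\mathtt{count}: n\}\ \}\ \ \mathtt{x.count = x.count + 1}\ \ \{\ x \mapsto \{\mathtt{count}: n + 1\}\ \}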