
    A reification calculus for model-oriented software specification

    This paper presents a transformational approach to the derivation of implementations from model-oriented specifications of abstract data types. The purpose of this research is to reduce the number of formal proofs required in model refinement, which hinder software development. It is shown to be applicable to the transformation of models written in Meta-IV (the specification language of VDM) towards their refinement into, for example, Pascal or relational DBMSs. The approach includes the automatic synthesis of retrieve functions between models, and of data-type invariants. The underlying algebraic semantics is the so-called final semantics “à la Wand”: a specification “is” a model (heterogeneous algebra) which is the final object (up to isomorphism) in the category of all its implementations. The transformational calculus presented in this paper follows from exploring the properties of finite, recursively defined sets. This work extends the well-known strategy of program transformation to model transformation, adding to previous work on a transformational style for operation decomposition in Meta-IV. The model calculus is also useful for improving model-oriented specifications.
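
    To make the retrieve-function idea concrete, here is a hypothetical Haskell sketch (not taken from the paper; the model types, the invariant inv, the retrieve function and the insert operations are all invented for illustration): an abstract model given as a finite set is reified into a concrete model given as a duplicate-free list, and one proof obligation is checked on a single example.

        -- Abstract model: a finite set of integers; concrete model: a list.
        import qualified Data.Set as Set
        import Data.List (nub)

        type AbstractModel = Set.Set Int   -- model-oriented specification state
        type ConcreteModel = [Int]         -- implementation-oriented state

        -- Data-type invariant on the concrete model: no duplicate elements.
        inv :: ConcreteModel -> Bool
        inv xs = xs == nub xs

        -- Retrieve function: recover the abstract state from a concrete one.
        retrieve :: ConcreteModel -> AbstractModel
        retrieve = Set.fromList

        -- One operation, specified abstractly and implemented concretely.
        insertAbs :: Int -> AbstractModel -> AbstractModel
        insertAbs = Set.insert

        insertCon :: Int -> ConcreteModel -> ConcreteModel
        insertCon x xs = if x `elem` xs then xs else x : xs

        -- A reification proof obligation, checked here on a single example:
        -- retrieving after the concrete step agrees with the abstract step.
        main :: IO ()
        main = do
          let c = [3, 1, 2]
          print (inv c)
          print (retrieve (insertCon 2 c) == insertAbs 2 (retrieve c))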

    Abstraction Barriers and Refinement in the Polymorphic Lambda Calculus

    This thesis examines specification refinement in the setting of polymorphic type theory and a complementary logic for relational parametricity. The starting point is the specification of abstract data types as done in the discipline of algebraic specification. Here, algebras are seen to match the standard notion of data type, i.e., a data representation together with operations on that data representation. An abstract data type is then a collection of data types sharing some well-defined abstract properties. In algebraic specification, these properties are specified algebraically by axioms in some suitable logic. Specification refinement then encompasses the idea that high-level specifications may be stepwise refined to executable programs that satisfy the initial specification; all in the framework of formal language and logic. This makes certain aspects of program development amenable to formal, computer-aided proofs of correctness. On the other hand, the discipline of type theory, lambda calculus, and its semantics is the prime field for research on programming languages. This framework is capable of characterising essentially any existing sequential programming-language feature, including advanced features such as recursive types, polymorphism and class-based object orientation. Furthermore, type theory provides a powerful framework for mechanised reasoning. This thesis is a contribution to lifting the idea of algebraic specification refinement into the more powerful domain of type theory and lambda calculus, thus giving the opportunity to expand in a sensible way a traditionally first-order and functional framework to a wider range of programming aspects. We take a particular account of specification refinement and express it in a type-theoretic setting consisting of the polymorphic lambda calculus and a logic for relational parametricity. Key elements of algebraic specification are internalised in the syntax, e.g., data types, viz. algebras, are inhabitants of existential type, the latter providing essential data abstraction. For data types with only first-order operations, this setting automatically resolves certain issues of specification refinement, such as observational equivalence, stability and input sorts. After establishing a correspondence at first order, thus implanting the idea of algebraic specification refinement into the type-theoretic setting, the scene is set for lifting the idea of algebraic specification refinement to any number of programming features. In this thesis we focus on the generalisations to higher-order functions and to polymorphism. A simulation relation between two data types is a relation between their data representations that is preserved by their respective sets of operations. Using simulation relations is a classical way of explaining data refinement and observational equivalence. This combines with specification refinement to form specification refinement up to observational equivalence. With higher-order operations, however, we encounter in the logic a phenomenon related to what happens on the semantic level, i.e., the standard notion of refinement relation in the form of logical relations does not compose and the correspondence with observational equivalence is lost. In the logic it turns out that the standard notion of simulation relation fails to take into account a certain aspect of the abstraction barrier provided by existential types.
We remedy this by proposing an alternative notion of simulation relation that observes this abstraction barrier more closely. We do this in two related ways; one relates to syntactic models while the other relates to a non-syntactic PER-model more apt for interpretive investigations. In algebraic specification, there is a universal proof method for specification refinement up to observational equivalence. This method can be imported soundly into the type-theoretic setting by asserting certain axioms. At first order, showing soundness for these axioms is straightforward w.r.t. the standard parametric PER model for the logic. At higher order there are two problems. First, these axioms seemingly do not hold in the standard model. Secondly, the axioms speak in terms of simulation relations. At higher order, it is pertinent to have versions of the axioms featuring the abstraction barrier-observing simulation relations above, and to prove soundness for these poses an additional challenge. We show that the pure higher-order aspect of this problem can be solved by giving a setoid-based semantics. For the remaining task, we continue working from the observation that standard definitions do not observe abstraction barriers closely enough. Hence, we propose an alternative interpretation into the PER-model for data types that captures the abstraction barrier provided by existential types. The main contribution of this thesis is thus in generalising a prominent account of specification refinement to higher order and polymorphism via type theory incorporating relational parametricity. We also shed light on shortcomings in the logic, as well as in the standard semantics, regarding the abstraction barrier provided by existential types. Two central contributions, namely abstraction barrier-observing simulation relations and abstraction barrier-observing semantics for data types, are the result of observing these shortcomings. Finally, the work in this thesis also lays a foundation on which to adapt specification refinement to an object-oriented setting, because the theoretical concepts underlying object orientation can be seen as extensions of those for abstract data types.
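
    The abstraction barrier provided by existential types, and the role of a simulation relation between two data representations, can be illustrated with a small hypothetical Haskell sketch (the counter type, its two representations and the relation simulates are invented here; the thesis works in the polymorphic lambda calculus with a logic for relational parametricity, not in Haskell):

        {-# LANGUAGE ExistentialQuantification #-}
        -- A counter ADT: a hidden representation, an initial value, a step
        -- operation, and an observer.  The representation type is existentially
        -- quantified, which is the abstraction barrier under discussion.
        data Counter = forall rep. Counter rep (rep -> rep) (rep -> Int)

        -- Implementation A: the count itself.
        counterA :: Counter
        counterA = Counter (0 :: Int) (+ 1) id

        -- Implementation B: a list whose length is the count.
        counterB :: Counter
        counterB = Counter ([] :: [()]) (() :) length

        -- A simulation relation between the two representations: the integer
        -- equals the length of the list.  It holds initially, is preserved by
        -- the step operation, and related states give equal observations.
        simulates :: Int -> [()] -> Bool
        simulates n xs = n == length xs

        -- Clients see only the packaged operations, so no client of type
        -- Counter -> Int can tell the two implementations apart.
        observe :: Counter -> Int
        observe (Counter z step out) = out (step (step z))

        main :: IO ()
        main = print (observe counterA == observe counterB, simulates 2 [(), ()])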

    Incompleteness of relational simulations in the blocking paradigm

    Refinement is the notion of development between formal specifications. For specifications given in a relational formalism, downward and upward simulations are the standard method to verify that a refinement holds, their usefulness resting upon their soundness and joint completeness. This is known to be true for total relational specifications, and has been claimed to hold for partial relational specifications in both the non-blocking and blocking interpretations. In this paper we show that downward and upward simulations in the blocking interpretation, where domains are guards, are not jointly complete. This contradicts earlier claims in the literature. We illustrate this with an example (based on one recently constructed by Reeves and Streader) and then construct a proof to show why joint completeness fails in general. © 2010 Elsevier B.V. All rights reserved.
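
    As background for the simulation conditions discussed above, the following hypothetical Haskell sketch (not the paper's counterexample; the toy state spaces, operations and retrieve relation are invented, and conventions for stating the simulation conditions vary across the literature) checks a commutation condition and a guard-style applicability condition between two finite partial relational specifications by enumeration:

        import Data.List (nub)

        type Rel a b = [(a, b)]

        compose :: (Eq a, Eq b, Eq c) => Rel a b -> Rel b c -> Rel a c
        compose r s = nub [ (a, c) | (a, b) <- r, (b', c) <- s, b == b' ]

        subset :: (Eq a, Eq b) => Rel a b -> Rel a b -> Bool
        subset r s = all (`elem` s) r

        dom :: Eq a => Rel a b -> [a]
        dom = nub . map fst

        -- Abstract and concrete states for a toy one-shot resource.
        data A = AFree | AUsed deriving (Eq, Show)
        data C = CFree | CUsed deriving (Eq, Show)

        aOp :: Rel A A            -- abstract operation, enabled only when free
        aOp = [(AFree, AUsed)]

        cOp :: Rel C C            -- concrete operation
        cOp = [(CFree, CUsed)]

        retr :: Rel C A           -- retrieve relation from concrete to abstract
        retr = [(CFree, AFree), (CUsed, AUsed)]

        -- Commutation: every concrete step, viewed through the retrieve
        -- relation, is matched by an abstract step.
        correctness :: Bool
        correctness = subset (compose cOp retr) (compose retr aOp)

        -- Applicability (guards-as-domains reading): wherever a concrete state
        -- is related to an abstract state in which the abstract operation is
        -- enabled, the concrete operation is enabled too.
        applicability :: Bool
        applicability = and [ c `elem` dom cOp | (c, a) <- retr, a `elem` dom aOp ]

        main :: IO ()
        main = print (correctness && applicability)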

    Adding HL7 version 3 data types to PostgreSQL

    The HL7 standard is widely used to exchange medical information electronically. As a part of the standard, HL7 defines scalar communication data types like physical quantity, point in time and concept descriptor, but also complex types such as interval types, collection types and probabilistic types. Typical HL7 applications will store their communications in a database, resulting in a translation from HL7 concepts and types into database types. Since the data types were not designed to be implemented in a relational database server, this transition is cumbersome and fraught with programmer error. The purpose of this paper is twofold. First, we analyze the HL7 version 3 data type definitions and define a number of conditions that must be met for the data types to be suitable for implementation in a relational database. As a result of this analysis we describe a number of possible improvements in the HL7 specification. Second, we describe an implementation in the PostgreSQL database server and show that the database server can effectively execute scientific calculations with units of measure, supports a large number of operations on time points and intervals, and can perform operations that are akin to a medical terminology server. Experiments on synthetic data show that the user-defined types perform better than an implementation that uses only standard data types from the database server. Comment: 12 pages, 9 figures, 6 tables
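
    The flavour of a unit-aware physical-quantity type, one of the scalar HL7 data types mentioned above, can be sketched as follows (a hypothetical Haskell illustration; the paper's actual implementation is a set of PostgreSQL user-defined types, and the unit table and function names below are made up):

        import qualified Data.Map as Map

        -- A quantity is a value together with a unit of measure.
        data PQ = PQ { value :: Double, unit :: String } deriving Show

        -- Conversion factors to a base unit per dimension (illustrative only).
        toBase :: Map.Map String (Double, String)
        toBase = Map.fromList
          [ ("mg", (0.001, "g")), ("g", (1, "g")), ("kg", (1000, "g"))
          , ("cm", (0.01, "m")), ("m", (1, "m")) ]

        normalise :: PQ -> Maybe PQ
        normalise (PQ v u) = do
          (f, base) <- Map.lookup u toBase
          pure (PQ (v * f) base)

        -- Comparison only makes sense for quantities of the same dimension;
        -- this is the kind of unit-aware operation the abstract refers to.
        comparePQ :: PQ -> PQ -> Maybe Ordering
        comparePQ a b = do
          PQ va ua <- normalise a
          PQ vb ub <- normalise b
          if ua == ub then pure (compare va vb) else Nothing

        main :: IO ()
        main = print (comparePQ (PQ 1500 "mg") (PQ 1 "g"))  -- Just GT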

    Relational Parametricity and Separation Logic

    Separation logic is a recent extension of Hoare logic for reasoning about programs with references to shared mutable data structures. In this paper, we provide a new interpretation of the logic for a programming language with higher types. Our interpretation is based on Reynolds's relational parametricity, and it provides a formal connection between separation logic and data abstraction.
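
    The connection to data abstraction rests on relational parametricity; the following hypothetical Haskell sketch (not the paper's interpretation, which targets separation logic and higher types; the stack-like interface and both representations are invented) recalls the basic effect, namely that a representation-polymorphic client cannot distinguish two related implementations:

        {-# LANGUAGE RankNTypes #-}
        -- A client of an abstract stack-like interface, polymorphic in the
        -- representation type: it can only use the operations it is handed.
        type Client = forall rep. rep                    -- empty stack
                   -> (Int -> rep -> rep)                -- push
                   -> (rep -> Int)                       -- top (0 if empty)
                   -> Int

        client :: Client
        client empty push top = top (push 2 (push 1 empty))

        -- Representation A: a list.  Representation B: a reversed list.
        runA, runB :: Int
        runA = client [] (:) (\s -> case s of { (x:_) -> x; [] -> 0 })
        runB = client [] (\x s -> s ++ [x]) (\s -> case s of { [] -> 0; xs -> last xs })

        -- Because the two representations are related by a relation that the
        -- supplied operations preserve, parametricity guarantees every client
        -- of this type returns equal results on them.
        main :: IO ()
        main = print (runA == runB)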

    Relational Approach to Knowledge Engineering for POMDP-based Assistance Systems as a Translation of a Psychological Model

    Assistive systems for persons with cognitive disabilities (e.g. dementia) are difficult to build due to the wide range of different approaches people can take to accomplishing the same task, and the significant uncertainties that arise both from the unpredictability of clients' behaviours and from noise in sensor readings. Partially observable Markov decision process (POMDP) models have been used successfully as the reasoning engine behind such assistive systems for small multi-step tasks such as hand washing. POMDP models are a powerful yet flexible framework for modelling assistance that can deal with uncertainty and utility. Unfortunately, POMDPs usually require a very labour-intensive, manual procedure for their definition and construction. Our previous work has described a knowledge-driven method for automatically generating POMDP activity recognition and context-sensitive prompting systems for complex tasks. We call the resulting POMDP a SNAP (SyNdetic Assistance Process). The spreadsheet-like result of the analysis does not correspond to the POMDP model directly, so a translation to a formal POMDP representation is required. To date, this translation had to be performed manually by a trained POMDP expert. In this paper, we formalise and automate this translation process using a probabilistic relational model (PRM) encoded in a relational database. We demonstrate the method by eliciting three assistance tasks from non-experts. We validate the resulting POMDP models using case-based simulations to show that they are reasonable for the domains. We also show a complete case study of a designer specifying one database, including an evaluation in a real-life experiment with a human actor.
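
    For orientation, the ingredients that any such translation must ultimately produce can be sketched as a discrete POMDP with a belief update (a hypothetical Haskell sketch; the SNAP models and the PRM-based translation in the paper are far richer, and the toy hand-washing numbers below are made up):

        import qualified Data.Map as Map

        type Prob = Double

        data POMDP s a o = POMDP
          { states      :: [s]
          , transition  :: s -> a -> Map.Map s Prob     -- P(s' | s, a)
          , observation :: s -> a -> Map.Map o Prob     -- P(o | s', a)
          , reward      :: s -> a -> Double
          }

        type Belief s = Map.Map s Prob

        -- Bayesian belief update after taking action a and observing o.
        update :: (Ord s, Ord o) => POMDP s a o -> Belief s -> a -> o -> Belief s
        update m b a o = normalise unnorm
          where
            unnorm = Map.fromList
              [ (s', pObs * pPred)
              | s' <- states m
              , let pObs  = Map.findWithDefault 0 o (observation m s' a)
              , let pPred = sum [ p * Map.findWithDefault 0 s' (transition m s a)
                                | (s, p) <- Map.toList b ]
              ]
            normalise bm = let z = sum (Map.elems bm)
                           in if z == 0 then bm else Map.map (/ z) bm

        -- Tiny hand-washing-flavoured example (entirely made up numbers).
        main :: IO ()
        main = print (update toy b0 "prompt" "wet")
          where
            toy = POMDP
              { states      = ["dirty", "clean"]
              , transition  = \s _ -> if s == "dirty"
                                      then Map.fromList [("dirty", 0.4), ("clean", 0.6)]
                                      else Map.fromList [("clean", 1.0)]
              , observation = \s' _ -> if s' == "clean"
                                       then Map.fromList [("wet", 0.8), ("dry", 0.2)]
                                       else Map.fromList [("wet", 0.1), ("dry", 0.9)]
              , reward      = \_ _ -> 0
              }
            b0 = Map.fromList [("dirty", 0.9), ("clean", 0.1)]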

    Modeling views in the layered view model for XML using UML

    In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources. Meanwhile, since its introduction, Extensible Markup Language (XML) has fast emerged as the dominant standard for storing, describing, and interchanging data among various web and heterogeneous data sources. In combination with XML Schema, XML provides rich facilities for defining and constraining user-defined data semantics and properties, a feature that is unique to XML. In this context, it is interesting to investigate traditional database features, such as view models and view design techniques, for XML. However, traditional view formalisms are strongly coupled to the data language and its syntax, so it proves to be a difficult task to support views in the case of semi-structured data models. Therefore, in this paper we propose a Layered View Model (LVM) for XML with conceptual and schemata extensions. Our work here is threefold: first, we propose an approach to separate the implementation and conceptual aspects of the views, which provides a clear separation of concerns and thus allows the analysis and design of views to be separated from their implementation. Second, we define representations to express and construct these views at the conceptual level. Third, we define a view transformation methodology for XML views in the LVM, which carries out automated transformation to a view schema and a view query expression in an appropriate query language. Also, to validate and apply the LVM concepts, methods and transformations developed, we propose a view-driven application development framework with the flexibility to develop web and database applications for XML at varying levels of abstraction.
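
    The separation the LVM argues for, between a conceptual view definition and the query expression generated from it, can be sketched as follows (a hypothetical Haskell illustration; the LVM's actual representations and transformation methodology are richer, and the XPath-like output syntax here is only indicative):

        import Data.List (intercalate)

        -- A constraint on the source element, stated at the conceptual level.
        data Condition = Condition { attr :: String, op :: String, val :: String }

        -- A conceptual view: independent of any concrete query syntax.
        data ConceptualView = ConceptualView
          { source     :: String        -- source element in the XML schema
          , projection :: [String]      -- elements exposed by the view
          , selection  :: [Condition]   -- constraints on the source element
          }

        -- Transformation from the conceptual level to a query expression.
        toQuery :: ConceptualView -> String
        toQuery v = "//" ++ source v ++ predicate ++ "/(" ++ proj ++ ")"
          where
            predicate
              | null (selection v) = ""
              | otherwise = "[" ++ intercalate " and " (map render (selection v)) ++ "]"
            render c = attr c ++ " " ++ op c ++ " " ++ val c
            proj = intercalate ", " (projection v)

        main :: IO ()
        main = putStrLn (toQuery (ConceptualView "order" ["id", "total"]
                                                 [Condition "total" ">" "100"]))
        -- prints: //order[total > 100]/(id, total)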

    A robust semantics hides fewer errors

    In this paper we explore how formal models are interpreted: to what degree meaning is captured in the formal semantics, and to what degree it remains in the informal interpretation of that semantics. By applying a robust approach to the definition of refinement and semantics, favoured by the event-based community, to state-based theory, we are able to move some aspects from the informal interpretation into the formal semantics.