
    Models for persistence in lazy functional programming systems

    Research into providing support for long-term data in lazy functional programming systems is presented in this thesis. The motivation for this work has been to reap the benefits of integrating lazy functional programming languages and persistence. The benefits are: the programmer need not write code to support long-term data, since this is provided as part of the programming system; persistent data can be used in a type-safe way, since the programming language's type system applies to data across the whole range of persistence; and the benefits of lazy evaluation are extended to the full lifetime of a data value. Whilst data is reachable, any evaluation performed on it persists: a data value changes monotonically over time from an unevaluated state towards a completely evaluated state. Interactive, data-intensive applications such as functional databases can be developed. These benefits are realised by the development of models for persistence in lazy functional programming systems. Two models are proposed which make persistence available to the functional programmer. The first, persistent modules, allows values named in modules to be stored in persistent storage for later reuse. The second, stream persistence, allows dynamic, interactive access to persistent storage. These models are supported by a system architecture which incorporates a persistent abstract machine, PCASE, integrated with a persistent object store. The resulting persistent lazy functional programming system, Staple, is used in prototyping and functional database modelling experiments.
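    As a rough illustration of the idea that evaluation persists with the data, here is a minimal Haskell sketch. The Store, persist and retrieve names are hypothetical stand-ins, not Staple's actual interface, and a real persistent store would survive between program runs rather than living in an IORef.
```haskell
module PersistSketch where

import Data.IORef
import qualified Data.Map.Strict as Map

-- In-memory stand-in for a persistent object store: named slots holding
-- lazily evaluated values. In Staple's model, thunks (and any evaluation
-- already performed on them) would survive between runs; here they only
-- survive within one run.
type Store a = IORef (Map.Map String a)

openStore :: IO (Store a)
openStore = newIORef Map.empty

persist :: Store a -> String -> a -> IO ()
persist store name value = modifyIORef' store (Map.insert name value)

retrieve :: Store a -> String -> IO (Maybe a)
retrieve store name = Map.lookup name <$> readIORef store

main :: IO ()
main = do
  store <- openStore
  -- The stored value is an unevaluated, infinite expression; only the
  -- portion later forced gets evaluated, and under persistence that
  -- evaluation would be kept for the value's whole lifetime.
  persist store "squares" (map (^ 2) [1 ..])
  Just xs <- retrieve store "squares"
  print (take 5 (xs :: [Integer]))
```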

    Integrating testing techniques through process programming

    Integration of multiple testing techniques is required to demonstrate high quality of software. Technique integration has three basic goals: incremental testing capabilities, extensive error detection, and cost-effective application. We are experimenting with the use of process programming as a mechanism for integrating testing techniques. Having set out to integrate data flow testing and RELAY, we proposed synergistic use of these techniques to achieve all three goals. We developed a testing process program much as we would develop a software product, from requirements through design to implementation and evaluation. We found process programming to be effective for explicitly integrating the techniques and achieving the desired synergism. Used in this way, process programming also mitigates many of the other problems that plague testing in the software development process.
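    The fragment below is only a loose sketch of the process-programming idea (the paper's process programs are not written in Haskell, and every name here is invented): the testing process itself becomes an explicit program whose steps share artifacts, so the data flow and RELAY-style steps compose rather than being applied ad hoc.
```haskell
module ProcessSketch where

-- Illustrative artifact passed between process steps; a real process
-- program would carry coverage data, def-use pairs, and failure reports.
data Artifact = Artifact
  { testCases :: [String]
  , notes     :: [String]
  } deriving Show

-- Hypothetical step 1: select tests to cover def-use pairs (data flow).
dataFlowStep :: Artifact -> Artifact
dataFlowStep a = a { notes = notes a ++ ["selected tests for du-pair coverage"] }

-- Hypothetical step 2: RELAY-style check that potential errors are revealed.
relayStep :: Artifact -> Artifact
relayStep a = a { notes = notes a ++ ["checked error revelation (RELAY)"] }

-- The process program: ordinary composition makes the integration explicit.
testingProcess :: Artifact -> Artifact
testingProcess = relayStep . dataFlowStep

main :: IO ()
main = mapM_ putStrLn (notes (testingProcess (Artifact ["t1", "t2"] [])))
```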

    Compiling ER Specifications into Declarative Programs

    This paper proposes an environment to support high-level database programming in a declarative programming language. In order to ensure safe database updates, all access and update operations related to the database are generated from high-level descriptions in the entity-relationship (ER) model. We propose a representation of ER diagrams in the declarative language Curry so that they can be constructed by various tools and then translated into this representation. Furthermore, we have implemented a compiler from this representation into a Curry program that provides access and update operations based on a high-level API for database programming.
    Comment: Paper presented at the 17th Workshop on Logic-based Methods in Programming Environments (WLPE 2007).
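    A minimal sketch of the representation idea, written in Haskell syntax for brevity (the paper's representation is a Curry data term, and these constructor names are invented): an ER diagram becomes an ordinary data value from which a compiler can generate typed access and update operations.
```haskell
module ERSketch where

data Cardinality  = One | Many                     deriving Show
data Attribute    = Attribute String String        deriving Show  -- name, type
data Entity       = Entity String [Attribute]      deriving Show
data Relationship = Relationship
  { relName :: String
  , relEnds :: [(String, Cardinality)]             -- entity name, cardinality
  } deriving Show

data ERDiagram = ERDiagram [Entity] [Relationship] deriving Show

-- A compiler pass would walk this term and emit typed new/get/update/delete
-- operations for each entity; here we just generate their names.
generatedOps :: ERDiagram -> [String]
generatedOps (ERDiagram es _) =
  [ op ++ name | Entity name _ <- es, op <- ["new", "get", "update", "delete"] ]

uni :: ERDiagram
uni = ERDiagram
  [ Entity "Student" [Attribute "name" "String", Attribute "matNr" "Int"]
  , Entity "Lecture" [Attribute "title" "String"] ]
  [ Relationship "Attends" [("Student", Many), ("Lecture", Many)] ]

main :: IO ()
main = mapM_ putStrLn (generatedOps uni)
```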

    Twelve Theses on Reactive Rules for the Web

    Reactivity, the ability to detect and react to events, is an essential functionality in many information systems. In particular, Web systems such as online marketplaces, adaptive (e.g., recommender) systems, and Web services react to events such as Web page updates or data posted to a server. This article investigates issues of relevance in designing high-level programming languages dedicated to reactivity on the Web. It presents twelve theses on features desirable for a language of reactive rules tuned to programming Web and Semantic Web applications.
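    For concreteness, here is a small event-condition-action sketch of the kind of reactive rule the theses concern; the Rule type and react driver are invented for illustration, not a language proposed in the article.
```haskell
module ReactiveSketch where

data Event = PageUpdated String | DataPosted String
  deriving Show

-- An event-condition-action rule over some application state s.
data Rule s = Rule
  { onEvent   :: Event -> Bool      -- event query
  , condition :: s -> Bool          -- test on the current state
  , action    :: s -> s             -- update to perform
  }

-- Apply every rule whose event part and condition both match.
react :: [Rule s] -> Event -> s -> s
react rules ev st =
  foldl (\s r -> if onEvent r ev && condition r s then action r s else s) st rules

-- Example: re-index a page whenever it is updated.
reindex :: Rule [String]
reindex = Rule
  { onEvent   = \e -> case e of { PageUpdated _ -> True; _ -> False }
  , condition = const True
  , action    = ("reindexed page" :)
  }

main :: IO ()
main = print (react [reindex] (PageUpdated "/index.html") [])
```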

    Inductive benchmarking for purely functional data structures

    Every designer of a new data structure wants to know how well it performs in comparison with others. But finding, coding and testing applications as benchmarks can be tedious and time-consuming. Besides, how a benchmark uses a data structure may considerably affect its apparent efficiency, so the choice of applications may bias the results. We address these problems by developing a tool for inductive benchmarking. This tool, Auburn, can generate benchmarks across a wide distribution of uses. We precisely define 'the use of a data structure', upon which we build the core algorithms of Auburn: how to generate a benchmark from a description of use, and how to extract a description of use from an application. We then apply inductive classification techniques to obtain decision trees for the choice between competing data structures. We test Auburn by benchmarking several implementations of three common data structures: queues, random-access lists and heaps. These and other results show Auburn to be a useful and accurate tool, but they also reveal some limitations of the approach.
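    As a sketch of what "generating a benchmark from a description of use" can look like (Auburn's real descriptions are datatype-usage graphs; the Usage type and weights below are invented stand-ins), a profile of operation frequencies drives random generation of operation sequences. Requires the random package.
```haskell
module BenchSketch where

import System.Random (StdGen, mkStdGen, randomR)

data Op = Push | Pop | Peek deriving Show

-- A description of use: the relative frequency of each operation.
type Usage = [(Op, Int)]

-- Draw one operation according to the weights in the usage description.
pick :: Usage -> StdGen -> (Op, StdGen)
pick usage g =
  let total   = sum (map snd usage)
      (n, g') = randomR (1, total) g
      go ((op, w) : rest) k = if k <= w then op else go rest (k - w)
      go []               _ = error "empty usage description"
  in (go usage n, g')

-- Generate a benchmark: a sequence of operations drawn from the usage.
benchmark :: Usage -> Int -> StdGen -> [Op]
benchmark _     0 _ = []
benchmark usage n g = let (op, g') = pick usage g
                      in op : benchmark usage (n - 1) g'

main :: IO ()
main = print (benchmark [(Push, 5), (Pop, 3), (Peek, 2)] 12 (mkStdGen 42))
```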

    Applied type system

    We present a type system that can effectively facilitate the use of types in capturing invariants in stateful programs that may involve (sophisticated) pointer manipulation. With its root in the recently developed framework Applied Type System (ATS), the type system imposes a level of abstraction on program states by introducing a novel notion of recursive stateful views, and then relies on a form of linear logic to reason about such views. We consider the design and formalization of the type system to constitute the primary contribution of the paper. In addition, we mention a prototype implementation of the type system and give a variety of examples that attest to the practicality of programming with recursive stateful views.
    Funding: National Science Foundation (CCR-0224244, CCR-0229480).
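    ATS's stateful views and linear reasoning cannot be reproduced in a short Haskell fragment, but as a loose analogy only (all names invented), the same spirit of capturing invariants in types can be shown with a GADT whose type tracks a structural property, so that violating the invariant is a type error rather than a runtime check.
```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
module ViewSketch where

data Nat = Z | S Nat

-- A list whose length is part of its type: the invariant is checked by
-- the compiler, much as ATS checks invariants about program state.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Total head: the type rules out the empty-list case entirely.
safeHead :: Vec ('S n) a -> a
safeHead (Cons x _) = x

main :: IO ()
main = print (safeHead (Cons (1 :: Int) Nil))
```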

    GUBS, a Behavior-based Language for Open System Dedicated to Synthetic Biology

    In this article, we propose a domain-specific language, GUBS (Genomic Unified Behavior Specification), dedicated to the behavioral specification of synthetic biological devices, viewed as discrete open dynamical systems. GUBS is a rule-based declarative language. By contrast to a closed system, a program is always a partial description of the behavior of the system. The semantics of the language accounts for the existence of hidden, non-specified actions that may alter the behavior of the programmed device. The compilation framework follows a scheme similar to automatic theorem proving, aiming at improving the safety of synthetic biological design.
    Comment: In Proceedings MeCBIC 2012, arXiv:1211.347
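    To make the open-system point concrete, here is an illustrative sketch (not GUBS syntax; the rules and the satisfies check are invented): a partial specification is satisfied by any observed trace that contains the specified responses, even when hidden, unspecified actions are interleaved.
```haskell
module GubsSketch where

data Rule = Rule { trigger :: String, response :: String } deriving Show

-- A partial specification of a device's behaviour.
spec :: [Rule]
spec = [ Rule "arabinose present" "express GFP"
       , Rule "arabinose absent"  "repress GFP" ]

-- Open-system check: every triggered rule's response occurs somewhere in
-- the observed trace, which may interleave arbitrary hidden actions.
satisfies :: [String] -> [Rule] -> Bool
satisfies trace rules =
  and [ response r `elem` trace | r <- rules, trigger r `elem` trace ]

main :: IO ()
main = print (satisfies ["arabinose present", "hidden action", "express GFP"] spec)
```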

    Active data structures on GPGPUs

    Active data structures support operations that may affect a large number of elements of an aggregate data structure. They are well suited to extremely fine-grain parallel systems, including circuit parallelism. General-purpose GPUs were designed to support regular graphics algorithms, but their intermediate level of granularity makes them potentially viable also for active data structures. We consider the characteristics of active data structures and discuss the feasibility of implementing them on GPGPUs. We describe GPU implementations of two such data structures (ESF arrays and index intervals), assess their performance, and discuss the potential of active data structures as an unconventional programming model that can exploit the capabilities of emerging fine-grain architectures such as GPUs.
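    As a conceptual sketch of the programming model only (plain Haskell lists, no GPU execution; it does not reproduce ESF arrays or index intervals): a single logical insert into a sorted structure is work every cell can perform locally, which is what maps one-thread-per-element onto a GPGPU.
```haskell
module ActiveSketch where

-- Each "cell" decides locally what its next value is: keep the smaller of
-- its own value and the value rippling in from the left. Sequentially this
-- is a scan; on fine-grain hardware every cell updates in lock step.
insertSorted :: Int -> [Int] -> [Int]
insertSorted = go
  where
    go v []       = [v]
    go v (c : cs)
      | v <= c    = v : go c cs   -- the displaced value ripples rightwards
      | otherwise = c : go v cs

main :: IO ()
main = print (insertSorted 5 [1, 4, 9, 12])   -- [1,4,5,9,12]
```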

    Evolving database systems: a persistent view

    Submitted to POS7. This work was supported in St Andrews by EPSRC Grant GR/J67611, "Delivering the Benefits of Persistence".
    Orthogonal persistence ensures that information will exist for as long as it is useful, for which it must have the ability to evolve with the growing needs of the application systems that use it. This may involve evolution of the data, metadata, programs and applications, as well as of the users' perception of what the information models. The need for evolution has been well recognised in the traditional (data processing) database community, and the cost of failing to evolve can be gauged by the resources being invested in interfacing with legacy systems. Zdonik has identified new classes of application, such as scientific, financial and hypermedia, that require new approaches to evolution. These applications are characterised by their need to store large amounts of data whose structure must evolve as it is discovered by the applications that use it. This requires that the data be mapped dynamically to an evolving schema. Here, we discuss the problems of evolution in these new classes of application within an orthogonally persistent environment and outline some approaches to these problems.
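    One common shape such dynamic mapping takes, sketched below with invented types: old values are upgraded to the evolved schema lazily, on access, so readers always see the latest schema without an offline rewrite of the store.
```haskell
module EvolveSketch where

data PersonV1 = PersonV1 { name1 :: String }
  deriving Show
data PersonV2 = PersonV2 { name2 :: String, email :: Maybe String }
  deriving Show

-- Schema evolution as a total mapping from old values to new ones; fields
-- the old schema lacked get an explicit default.
upgrade :: PersonV1 -> PersonV2
upgrade (PersonV1 n) = PersonV2 { name2 = n, email = Nothing }

-- Stored data may be at either version; readers always see the latest.
data Stored = V1 PersonV1 | V2 PersonV2

readLatest :: Stored -> PersonV2
readLatest (V1 p) = upgrade p
readLatest (V2 p) = p

main :: IO ()
main = print (readLatest (V1 (PersonV1 "Ada")))
```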