
    Logic programming in the context of multiparadigm programming: the Oz experience

    Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, resulting in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature. Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming".
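
    The paper's examples are in Oz, which is not reproduced here; as a rough illustration of the algorithmic/search distinction only (the sketch and all names in it are ours, not the paper's), the following Haskell program contrasts a problem with a known efficient algorithm against a problem expressed as nondeterministic choice plus a pruning constraint:

        import Control.Monad (guard)

        -- Algorithmic: a known efficient algorithm (Euclid's gcd).
        gcd' :: Integer -> Integer -> Integer
        gcd' a 0 = a
        gcd' a b = gcd' b (a `mod` b)

        -- Search: nondeterministic choice plus a constraint, in the
        -- style of search-based logic programming.
        triples :: Integer -> [(Integer, Integer, Integer)]
        triples n = do
          a <- [1 .. n]
          b <- [a .. n]
          c <- [b .. n]
          guard (a*a + b*b == c*c)   -- the constraint prunes the search
          return (a, b, c)

        main :: IO ()
        main = do
          print (gcd' 252 105)   -- 21
          print (triples 20)     -- six triples, starting with (3,4,5)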

    First-Class Subtypes

    First-class type equalities, in the form of generalized algebraic data types (GADTs), are commonly found in functional programs. However, first-class representations of other relations between types, such as subtyping, are not yet directly supported in most functional programming languages. We present several encodings of first-class subtypes using existing features of the OCaml language (made more convenient by the proposed modular implicits extension), show that the encodings are interconvertible, and illustrate their utility with several examples. Comment: In Proceedings ML 2017, arXiv:1905.0590
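
    The paper's encodings are in OCaml; as a rough analogue (this Haskell sketch and its names Equal, Sub, upcast are ours), a first-class subtype witness can be carried around as an explicit upcast, alongside the familiar GADT equality witness:

        {-# LANGUAGE GADTs #-}

        -- First-class type equality, as in ordinary GADT programming.
        data Equal a b where
          Refl :: Equal a a

        -- First-class subtyping: a witness that any `a` may be used
        -- where a `b` is expected, carried as an explicit upcast.
        newtype Sub a b = Sub { upcast :: a -> b }

        reflS :: Sub a a                          -- subtyping is reflexive,
        reflS = Sub id

        transS :: Sub a b -> Sub b c -> Sub a c   -- transitive,
        transS (Sub f) (Sub g) = Sub (g . f)

        fromEq :: Equal a b -> Sub a b            -- and implied by equality.
        fromEq Refl = Sub id

        main :: IO ()
        main = print (upcast (transS reflS reflS) (42 :: Int))   -- 42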

    Fine-grained visualization pipelines and lazy functional languages

    The pipeline model in visualization has evolved from a conceptual model of data processing into a widely used architecture for implementing visualization systems. In the process, a number of capabilities have been introduced, including streaming of data in chunks, distributed pipelines, and demand-driven processing. Visualization systems have invariably been built on stateful programming technologies, and these capabilities have had to be implemented explicitly within the lower layers of a complex hierarchy of services. The good news for developers is that applications built on top of this hierarchy can access these capabilities without concern for how they are implemented. The bad news is that, by freezing capabilities into low-level services, expressive power and flexibility are lost. In this paper we express visualization systems in a programming language that more naturally supports this kind of processing model. Lazy functional languages support fine-grained demand-driven processing, a natural form of streaming, and pipeline-like function composition for assembling applications. The technology thus appears well suited to visualization applications. Using surface extraction algorithms as illustrative examples, and the lazy functional language Haskell, we argue the benefits of clear and concise expression combined with fine-grained, demand-driven computation. Just as visualization provides insight into data, functional abstraction provides new insight into visualization.
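
    A minimal sketch of the idea (ours, not the paper's code; the surface-extraction stages are replaced by a toy thresholding stage): pipeline stages are ordinary functions over lazy lists, so composing them yields a streaming, demand-driven pipeline for free.

        -- A signal field as a lazily produced stream of samples.
        type Field = [Double]

        source :: Field                          -- conceptually infinite source
        source = map sin [0, 0.1 ..]

        threshold :: Double -> Field -> [Bool]   -- one pipeline stage
        threshold t = map (> t)

        -- Composing stages gives demand-driven evaluation: only the 10
        -- samples the final consumer demands are ever computed.
        main :: IO ()
        main = print (take 10 (threshold 0.5 source))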

    Optimisation and parallelism in synchronous digital circuit simulators

    Digital circuit simulation often requires a large amount of computation, resulting in long run times. We consider several techniques for optimising a brute-force synchronous circuit simulator: an algorithm using an event queue that avoids recalculating quiescent parts of the circuit, a marking algorithm that is similar to the event queue but avoids a central data structure, and a lazy algorithm that avoids calculating signals whose values are not needed. Two target architectures for the simulator are used: a sequential CPU and a parallel GPGPU. The interactions between the different optimisations are discussed, and performance is measured while the algorithms simulate a simple but realistic scalable circuit.
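
    The abstract does not show the simulator itself; as a sketch of the brute-force baseline that the event-queue, marking, and lazy algorithms improve on (representation and names are ours), a synchronous circuit can be modelled in Haskell as functions over lazy per-cycle streams, with every gate recomputed every cycle:

        type Signal = [Bool]           -- one value per clock cycle

        -- A unit delay (register) with an initial value; feedback loops
        -- work because the output is productive before input is demanded.
        delay :: Bool -> Signal -> Signal
        delay v0 s = v0 : s

        xor2 :: Signal -> Signal -> Signal
        xor2 = zipWith (/=)

        -- A toggle flip-flop: the output flips whenever enable is high.
        toggle :: Signal -> Signal
        toggle enable = out
          where out = delay False (xor2 enable out)

        main :: IO ()
        main = print (take 8 (toggle (cycle [True])))
        -- [False,True,False,True,False,True,False,True]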

    Efficient data structures for backtrack search SAT solvers

    The implementation of efficient Propositional Satisfiability (SAT) solvers entails the utilization of highly efficient data structures, as illustrated by most of the recent state-of-the-art SAT solvers. However, it is in general hard to compare existing data structures, since different solvers are often characterized by fairly different algorithmic organizations and techniques, and by different search strategies and heuristics. This paper evaluates data structures for backtrack search SAT solvers under a common, unbiased SAT framework. In addition, the advantages and drawbacks of each existing data structure are identified. Finally, new data structures are proposed that are competitive with the most efficient data structures currently available, and that may be preferable for the next generation of SAT solvers.
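
    The abstract does not name the structures it evaluates; one well-known candidate in this design space is the two-watched-literal scheme, sketched below in Haskell (all names are ours): each clause watches two of its literals, and a watch list maps each literal to the clauses watching it, so an assignment only visits clauses watching the falsified literal.

        import qualified Data.IntMap.Strict as IM

        type Lit    = Int              -- positive/negative ints, DIMACS-style
        type Clause = [Lit]

        -- Watch lists: literal |-> clauses to revisit when it is falsified.
        type Watches = IM.IntMap [Clause]

        watchAll :: [Clause] -> Watches
        watchAll = foldr watchClause IM.empty
          where
            watchClause c@(l1:l2:_) w =
              IM.insertWith (++) l1 [c] (IM.insertWith (++) l2 [c] w)
            watchClause _ w = w        -- unit/empty clauses handled elsewhere

        -- When literal `l` becomes false, only these clauses are inspected;
        -- every other clause stays untouched, which is the point.
        toVisit :: Lit -> Watches -> [Clause]
        toVisit l = IM.findWithDefault [] l

        main :: IO ()
        main = do
          let w = watchAll [[1, -2, 3], [-1, 2], [2, 3, -4]]
          print (toVisit 2 w)          -- clauses currently watching literal 2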

    Optimizing Abstract Abstract Machines

    The technique of abstracting abstract machines (AAM) provides a systematic approach for deriving computable approximations of evaluators that are easily proved sound. This article contributes a complementary step-by-step process for subsequently going from a naive analyzer derived under the AAM approach to an efficient and correct implementation. The end result of the process is a two to three order-of-magnitude improvement over the systematically derived analyzer, making it competitive with hand-optimized implementations that compute fundamentally less precise results. Comment: Proceedings of the International Conference on Functional Programming 2013 (ICFP 2013), Boston, Massachusetts, September 2013.
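
    The paper works with CESK-style machines, not reproduced here; the following toy Haskell sketch (ours, and far simpler than the paper's setting) illustrates only the central AAM move of swapping a concrete value domain for a finite abstract one, which makes the evaluator a computable, sound approximation:

        data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

        -- The evaluator is written once, over an abstract value domain.
        class Domain v where
          inject      :: Int -> v
          plus, times :: v -> v -> v

        eval :: Domain v => Expr -> v
        eval (Lit n)   = inject n
        eval (Add a b) = plus  (eval a) (eval b)
        eval (Mul a b) = times (eval a) (eval b)

        -- Concrete instance: ordinary integers.
        instance Domain Int where
          inject = id
          plus   = (+)
          times  = (*)

        -- Abstract instance: a finite domain of signs, sound but lossy.
        data Sign = Neg | Zero | Pos | AnySign deriving Show

        instance Domain Sign where
          inject n | n < 0     = Neg
                   | n == 0    = Zero
                   | otherwise = Pos
          plus Zero s   = s
          plus s Zero   = s
          plus Pos Pos  = Pos
          plus Neg Neg  = Neg
          plus _ _      = AnySign      -- e.g. Pos + Neg: sign unknown
          times Zero _  = Zero
          times _ Zero  = Zero
          times Pos Pos = Pos
          times Neg Neg = Pos
          times Pos Neg = Neg
          times Neg Pos = Neg
          times _ _     = AnySign

        main :: IO ()
        main = do
          let e = Add (Lit 3) (Mul (Lit (-2)) (Lit 5))
          print (eval e :: Int)    -- -7, the precise answer
          print (eval e :: Sign)   -- AnySign: computable but approximate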

    Functional programming languages for verification tools: experiences with ML and Haskell

    We compare Haskell with ML as programming languages for verification tools, based on our experience developing TRUTH in Haskell and the Edinburgh Concurrency Workbench (CWB) in ML. We discuss not only technical language features but also the "worlds" of the languages, for example, the availability of tools and libraries.