
    Logic programming in the context of multiparadigm programming: the Oz experience

    Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature. Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming".
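
    To make the algorithmic/search distinction concrete, here is a minimal sketch, written in Python rather than Oz; the toy problems and names are assumptions chosen for illustration, not examples from the paper. An algorithmic problem has a known deterministic procedure, while a search problem is answered by enumerating candidates against constraints.

```python
# Illustrative sketch (Python, not Oz) of the two problem classes the abstract
# distinguishes. The toy problems below are assumptions for illustration only.
from itertools import product

def append(xs, ys):
    """Algorithmic problem: a known, efficient, deterministic algorithm."""
    return xs + ys

def solve_by_search():
    """Search problem: no dedicated algorithm used here, so enumerate
    candidates and keep those satisfying x + y == z and x * y == z."""
    return [(x, y, z)
            for x, y, z in product(range(1, 10), repeat=3)
            if x + y == z and x * y == z]

print(append([1, 2], [3, 4]))   # deterministic: exactly one answer, no search
print(solve_by_search())        # search: [(2, 2, 4)] found by exhaustive enumeration
```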

    Distributed and decentralized control in fully distributed processing systems

    Issued as Quarterly progress reports nos. 1-5, and Final technical report, Project no. G-36-649. The final technical report has the title: Distributed and decentralized control in fully distributed processing systems

    Computing fuzzy rough approximations in large scale information systems

    Rough set theory is a popular and powerful machine learning tool. It is especially suitable for dealing with information systems that exhibit inconsistencies, i.e., objects that have the same values for the conditional attributes but a different value for the decision attribute. In line with the emerging granular computing paradigm, rough set theory groups objects together based on the indiscernibility of their attribute values. Fuzzy rough set theory extends rough set theory to data with continuous attributes, and detects degrees of inconsistency in the data. Key to this is turning the indiscernibility relation into a gradual relation, acknowledging that objects can be similar to a certain extent. In very large datasets with millions of objects, computing the gradual indiscernibility relation (or, in other words, the soft granules) is very demanding, both in terms of runtime and in terms of memory. It is, however, required for the computation of the lower and upper approximations of concepts in the fuzzy rough set analysis pipeline. Current non-distributed implementations in R are limited by memory capacity. For example, we found that a state-of-the-art non-distributed implementation in R could not handle 30,000 rows and 10 attributes on a node with 62GB of memory. This is clearly insufficient to scale fuzzy rough set analysis to massive datasets. In this paper we present a parallel and distributed solution based on the Message Passing Interface (MPI) to compute fuzzy rough approximations in very large information systems. Our results show that our parallel approach scales with problem size to information systems with millions of objects. To the best of our knowledge, no other parallel and distributed solutions have been proposed so far in the literature for this problem.
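
    As an illustration of what the lower and upper approximations compute, here is a minimal, non-distributed sketch using the standard fuzzy rough definitions with Łukasiewicz connectives. The toy data and the choice of similarity measure are assumptions; this is not the paper's MPI implementation, but it shows the O(n²) pairwise computation that the paper distributes.

```python
# Minimal non-distributed sketch of fuzzy rough lower/upper approximations,
# using standard definitions (Lukasiewicz implicator and t-norm). Toy data and
# similarity measure are assumptions for illustration only.
import numpy as np

def similarity(X):
    """Gradual indiscernibility: 1 - mean normalised attribute distance."""
    rng = X.max(axis=0) - X.min(axis=0)
    rng[rng == 0] = 1.0
    diff = np.abs(X[:, None, :] - X[None, :, :]) / rng   # pairwise per-attribute distances
    return 1.0 - diff.mean(axis=2)                       # n x n similarity matrix

def lower_upper(R, A):
    """Lower: inf_y I(R(x,y), A(y)); upper: sup_y T(R(x,y), A(y))."""
    I = np.minimum(1.0, 1.0 - R + A[None, :])  # Lukasiewicz implicator
    T = np.maximum(0.0, R + A[None, :] - 1.0)  # Lukasiewicz t-norm
    return I.min(axis=1), T.max(axis=1)

X = np.array([[0.1, 0.2], [0.1, 0.25], [0.9, 0.8]])  # conditional attributes
A = np.array([1.0, 0.0, 1.0])                        # membership in the decision concept
R = similarity(X)
low, up = lower_upper(R, A)
print(low, up)  # degree to which each object certainly / possibly belongs to A
```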

    Practical Distributed Control Synthesis

    Classic distributed control problems have an interesting dichotomy: they are either trivial or undecidable. If we allow the controllers to fully synchronize, then synthesis is trivial. In this case, controllers can effectively act as a single controller with complete information, resulting in a trivial control problem. But when we eliminate communication and restrict the supervisors to locally available information, the problem becomes undecidable. In this paper we argue in favor of a middle way. Communication is, in most applications, expensive, and should hence be minimized. We therefore study a solution that tries to communicate only scarcely and, while allowing communication in order to make joint decisions, favors local decisions over joint decisions that require communication. Comment: In Proceedings INFINITY 2011, arXiv:1111.267

    On the Distributability of Mobile Ambients

    Modern society depends on distributed software systems, and different modelling languages, such as mobile ambients, were developed to verify them. To assess the quality of mobile ambients as a foundational model for distributed computation, we analyse the level of synchronisation between distributed components that they can express. For this, we rely on previously established synchronisation patterns. It turns out that mobile ambients are not fully distributed, because they can express enough synchronisation to exhibit a synchronisation pattern called M. However, they can express strictly less synchronisation than the standard pi-calculus. For this reason, we can show that there is no good and distributability-preserving encoding from the standard pi-calculus into mobile ambients, and also no such encoding from mobile ambients into the join-calculus, i.e., the expressive power of mobile ambients lies between these two languages. Finally, we discuss how these results can be used to obtain a fully distributed variant of mobile ambients. Comment: In Proceedings EXPRESS/SOS 2018, arXiv:1808.08071. Conference version of arXiv:1808.0159

    Datalog as a parallel general purpose programming language

    The increasing parallelism available in computers demands new programming languages that make parallel programming dramatically easier and less error-prone. It is proposed that Datalog with negation and timestamps is a suitable basis for a general purpose programming language for sequential, parallel and distributed computers. This paper develops a fully incremental bottom-up interpreter for Datalog that supports a wide range of execution strategies, with trade-offs affecting efficiency, parallelism and control of resource usage. Examples show how the language can accept real-time external inputs and outputs, and mimic assignment, all without departing from its pure logical semantics.
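
    To give a flavour of bottom-up Datalog evaluation, here is a minimal sketch of a naive fixpoint computation for the classic transitive-closure program. It is an assumption-laden illustration: the paper's interpreter is fully incremental and supports negation and timestamps, none of which is modelled here.

```python
# Minimal sketch of bottom-up Datalog evaluation (naive fixpoint) for:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).
# The incremental, timestamped machinery of the paper is not modelled.

def transitive_closure(edges):
    path = set(edges)                 # path(X, Y) :- edge(X, Y).
    while True:
        # Apply the recursive rule to all currently derived facts.
        new = {(x, z) for (x, y) in path for (y2, z) in edges if y == y2}
        if new <= path:               # fixpoint reached: no new facts derivable
            return path
        path |= new

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(edges)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```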

    Research on fully distributed data processing systems

    Issued as Quarterly progress reports, nos. 1-11, and Project report, Project no. G-36-64