
    Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality

    We survey diverse approaches to the notion of information, from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts published in the same volume. Part II is dedicated to the relation between logic and information systems, within the scope of Kolmogorov's algorithmic information theory. We present a recent application of Kolmogorov complexity: classification using compression, an idea with a provocative implementation by authors such as Bennett, Vitanyi and Cilibrasi. This stresses how Kolmogorov complexity, besides being a foundation for randomness, is also related to classification. Another approach to classification is also considered: the so-called "Google classification". It rests on another original and attractive idea, connected from a conceptual point of view both to classification using compression and to Kolmogorov complexity. We present and unify these different approaches to classification in terms of Bottom-Up versus Top-Down operational modes, pointing out their fundamental principles and the underlying duality. We look at the way these two dual modes are used in different approaches to information systems, particularly the relational model for databases introduced by Codd in the 1970s. This allows us to point out diverse forms of a fundamental duality. These operational modes are also reinterpreted in the context of the comprehension schema of axiomatic set theory ZF. This leads us to develop how Kolmogorov complexity is linked to intensionality, abstraction, classification and information systems.
    Comment: 43 pages
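
    The compression-based classification attributed above to Bennett, Vitanyi and Cilibrasi is usually realised through the Normalized Compression Distance (NCD). Below is a minimal sketch in Python, using zlib as a stand-in for the ideal (uncomputable) compressor; the sample data are invented for illustration:

    ```python
    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized Compression Distance: approximates the uncomputable
        information distance by replacing Kolmogorov complexity with the
        length of a real compressor's output."""
        cx = len(zlib.compress(x))
        cy = len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Objects with small pairwise NCD share compressible structure and can
    # be clustered without any domain-specific feature engineering.
    a = b"the quick brown fox jumps over the lazy dog " * 20
    b = b"the quick brown fox leaps over the lazy dog " * 20
    c = bytes(range(256)) * 8

    print(ncd(a, b))  # near 0: near-duplicate texts
    print(ncd(a, c))  # near 1: unrelated data
    ```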

    A semantics and implementation of a causal logic programming language

    The increasingly widespread availability of multicore and manycore computers demands new programming languages that make parallel programming dramatically easier and less error-prone. This paper describes a semantics for a new class of declarative programming languages that support massive amounts of implicit parallelism.
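
    The abstract does not show the language itself; as a loose, hypothetical illustration of implicit parallelism in a declarative setting, the Python sketch below evaluates two independent subgoals of a conjunction concurrently, since neither depends on the other's result. All names are invented:

    ```python
    # Hypothetical sketch: conjuncts with no data dependency impose no
    # evaluation order, so a declarative runtime is free to evaluate
    # them in parallel without the programmer asking for it.
    from concurrent.futures import ThreadPoolExecutor

    def parents(person):            # invented subgoal 1
        return {("parent", person, "pat")}

    def employers(person):          # invented subgoal 2
        return {("employer", person, "acme")}

    with ThreadPoolExecutor() as pool:
        f1 = pool.submit(parents, "alice")    # independent goals are
        f2 = pool.submit(employers, "alice")  # scheduled concurrently
        answers = f1.result() | f2.result()
    print(answers)
    ```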

    Incremental View Maintenance For Collection Programming

    In the context of incremental view maintenance (IVM), delta query derivation is an essential technique for speeding up the processing of large, dynamic datasets. The goal is to generate delta queries that, given a small change in the input, can update the materialized view more efficiently than via recomputation. In this work we propose the first solution for the efficient incrementalization of positive nested relational calculus (NRC+) on bags (with integer multiplicities). More precisely, we model the cost of NRC+ operators and classify queries as efficiently incrementalizable if their delta has a strictly lower cost than full re-evaluation. Then, we identify IncNRC+, a large fragment of NRC+ that is efficiently incrementalizable, and we provide a semantics-preserving translation that takes any NRC+ query to a collection of IncNRC+ queries. Furthermore, we prove that incremental maintenance for NRC+ is within the complexity class NC0, and we showcase how recursive IVM, a technique that has provided significant speedups over traditional IVM in the case of flat queries [25], can also be applied to IncNRC+.
    Comment: 24 pages (12 pages plus appendix)
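
    A minimal sketch of the delta-query idea for bags with integer multiplicities, using Python's Counter as the bag type. The flat join and the relations are invented for illustration; the paper's contribution concerns full nested NRC+, not this simple case:

    ```python
    from collections import Counter

    # Bags (multisets with integer multiplicities) modelled as Counters.
    R = Counter({("a", 1): 2, ("b", 2): 1})  # relation R(x, y)
    S = Counter({(1, "p"): 1, (2, "q"): 3})  # relation S(y, z)

    def join(r, s):
        """Bag join on the shared attribute; multiplicities multiply."""
        out = Counter()
        for (x, y1), m in r.items():
            for (y2, z), n in s.items():
                if y1 == y2:
                    out[(x, z)] += m * n
        return out

    V = join(R, S)  # materialized view

    # A small change to R: insert one tuple ("c", 2).
    dR = Counter({("c", 2): 1})

    # Delta rule for the join (S unchanged): delta(R join S) = dR join S.
    # Updating V costs |dR| * |S| instead of re-evaluating over all of R.
    V.update(join(dR, S))
    R.update(dR)

    assert V == join(R, S)  # maintenance agrees with recomputation
    ```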

    Data Constructors: On the Integration of Rules and Relations

    Although the goals and means of rule-based and data-based systems are too different to be fully integrated at the present time, it seems appropriate to investigate a closer integration of language constructs and a better cooperation of execution models for both kinds of approaches. In this paper, we propose a new language construct called a constructor that, when applied to a base relation, causes relation membership to become true for all tuples constructable through the predicates provided by the constructor definition. The approach is shown to provide expressive power at least equivalent to PROLOG's declarative semantics while blending well both with a strongly typed modular programming language and with a relational calculus query formalism. A three-step compilation, optimization, and evaluation methodology for expressions with constructed relations is described that integrates constructors with the surrounding database programming environment. In particular, many recursive queries can be evaluated more efficiently within the set-construction framework of database systems than with proof-oriented methods typical for a rule-based approach.
    Information Systems Working Papers Series
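
    The claim that many recursive queries evaluate more efficiently under set construction than under proof-oriented methods is commonly illustrated by semi-naive fixpoint evaluation. The sketch below computes a transitive closure this way over an invented base relation; it shows the general set-oriented technique, not the paper's constructor syntax:

    ```python
    # Semi-naive evaluation: each round joins only the newly derived
    # tuples against the base relation, and set construction removes
    # duplicates, so no tuple is derived twice.
    edge = {("a", "b"), ("b", "c"), ("c", "d")}

    closure = set(edge)
    delta = set(edge)
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edge if y == y2}
        delta = new - closure  # keep only genuinely new tuples
        closure |= delta

    print(sorted(closure))  # all pairs connected by a directed path
    ```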

    Variadic genericity through linguistic reflection: a performance evaluation

    This work is partially supported by the EPSRC through Grant GR/L32699 “Compliant System Architecture” and by ESPRIT through Working Group EP22552 “PASTEL”.
    The use of variadic genericity within schema definitions increases the variety of databases that may be captured by a single specification. For example, a class of databases of engineering part objects, in which each database instance varies in the types of the parts and the number of part types, should lend itself to a single definition. However, precise specification of such a schema is beyond the capability of polymorphic type systems and schema definition languages. It is possible to capture such generality by introducing a level of interpretation, in which the variation in types and in the number of fields is encoded in a general data structure. Queries that interpret the encoded information can be written against this general data structure. An alternative approach to supporting such variadic genericity is to generate a precise database containing tailored data structures and queries for each different instance of the virtual schema. This involves source code generation and dynamic compilation, a process known as linguistic reflection. The motivation is that once generated, the specific queries may execute more efficiently than their generic counterparts, since the generic code is “compiled away”. This paper compares the two approaches and gives performance measurements for an example using the persistent languages Napier88 and PJama.
    Postprint
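
    A hypothetical Python analogue of the two strategies being compared (the paper itself uses Napier88 and PJama): a generic query that interprets the encoded schema at run time, versus a query whose source is generated and dynamically compiled for one concrete schema so that the interpretation is "compiled away". The encoding and all names are invented:

    ```python
    # Generic store: field names are encoded as data in each instance.
    rows = [
        {"fields": {"part": "bolt", "mass": 10}},
        {"fields": {"part": "nut", "mass": 4}},
    ]

    def generic_query(rows, field):
        # Interpretive route: look the field name up in the encoding
        # on every access.
        return [r["fields"][field] for r in rows]

    def generate_query(field):
        # Linguistic reflection: emit source specialised to `field`,
        # then compile it dynamically.
        src = f"def q(rows):\n    return [r['fields']['{field}'] for r in rows]\n"
        ns = {}
        exec(src, ns)
        return ns["q"]

    specialised = generate_query("mass")
    assert generic_query(rows, "mass") == specialised(rows)
    ```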

    Factorised Representations of Query Results

    Query tractability has been traditionally defined as a function of input database and query sizes, or of both input and output sizes, where the query result is represented as a bag of tuples. In this report, we introduce a framework that allows us to investigate tractability beyond this setting. The key insight is that, although the cardinality of a query result can be exponential, its structure can be very regular and thus factorisable into a nested representation whose size is only polynomial in the size of both the input database and the query. For a given query result, there may be several equivalent representations, and we quantify the regularity of the result by its readability: the minimum, over all its representations, of the maximum number of occurrences of any tuple in that representation. We give a characterisation of select-project-join queries based on the bounds on the readability of their results for any input database. We complement it with an algorithm that can find asymptotically optimal upper bounds and corresponding factorised representations.
    Comment: 44 pages, 13 figures
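
    A minimal sketch of the factorisation idea: a product-shaped result has |A| * |B| tuples when listed flatly, but a nested representation of size |A| + |B| denotes the same result, and each tuple occurs exactly once in it (readability 1). The encoding below is invented for illustration:

    ```python
    A = ["a1", "a2", "a3"]
    B = ["b1", "b2"]

    flat = [(a, b) for a in A for b in B]                 # size |A| * |B|
    factorised = ("product", ("union", A), ("union", B))  # size |A| + |B|

    def enumerate_tuples(f):
        """Expand a factorised representation into flat tuples."""
        kind = f[0]
        if kind == "union":
            for v in f[1]:
                yield (v,)
        elif kind == "product":
            for left in enumerate_tuples(f[1]):
                for right in enumerate_tuples(f[2]):
                    yield left + right

    assert set(enumerate_tuples(factorised)) == set(flat)
    ```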

    Benchmarking graph databases with gMark
