
    Logic Programming as Constructivism

    The features of logic programming that seem unconventional from the viewpoint of classical logic can be explained in terms of constructivistic logic. We motivate and propose a constructivistic proof theory of non-Horn logic programming. Then, we apply this formalization to establish results of practical interest. First, we show that 'stratification' can be motivated in a simple and intuitive way. Relying on similar motivations, we introduce the larger classes of 'loosely stratified' and 'constructively consistent' programs. Second, we give a formal basis for introducing quantifiers into queries and logic programs by defining 'constructively domain independent' formulas. Third, we extend the Generalized Magic Sets procedure to loosely stratified and constructively consistent programs, by relying on a 'conditional fixpoint' procedure.
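
    The stratification property discussed above is mechanically checkable: a program is stratified exactly when strata can be assigned to predicates so that every negated body predicate sits in a strictly lower stratum than the head. A minimal sketch of such a check, assuming a simple rule encoding at the predicate level (the rule set and names are illustrative, not from the paper):

        # Stratification test for a Datalog-like program.
        # A rule is (head, [(pred, positive?), ...]).
        def stratify(rules):
            preds = {h for h, _ in rules} | {p for _, b in rules for p, _ in b}
            stratum = {p: 0 for p in preds}
            bound = len(preds)
            changed = True
            while changed:
                changed = False
                for head, body in rules:
                    for pred, positive in body:
                        need = stratum[pred] + (0 if positive else 1)
                        if stratum[head] < need:
                            stratum[head] = need
                            if stratum[head] > bound:   # cycle through negation
                                return None
                            changed = True
            return stratum  # maps each predicate to its stratum

        # reachable(X,Y) <- edge(X,Y).
        # reachable(X,Y) <- edge(X,Z), reachable(Z,Y).
        # unreachable(X,Y) <- node(X), node(Y), not reachable(X,Y).
        rules = [("reachable", [("edge", True)]),
                 ("reachable", [("edge", True), ("reachable", True)]),
                 ("unreachable", [("node", True), ("node", True), ("reachable", False)])]
        print(stratify(rules))  # unreachable lands one stratum above reachable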

    Fixpoints and Bounded Fixpoints for Complex Objects

    We investigate a query language for complex-object databases, designed to (1) express only tractable queries, and (2) be as expressive over flat relations as first-order logic with fixpoints. The language is obtained by extending the nested relational algebra NRA with a bounded fixpoint operator. As in the flat case, all PTime computable queries over ordered databases are expressible in this language. The main result consists in proving that this language is a conservative extension of first-order logic with fixpoints, or of the while-queries (depending on the interpretation of the bounded fixpoint: inflationary or partial). The proof technique uses indexes to encode complex objects into flat relations, and is strong enough to allow for the encoding of NRA with unbounded fixpoints into flat relations. We also define a logic-based language with fixpoints, the nested relational calculus, and prove that its range-restricted version is equivalent to NRA with bounded fixpoints.
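
    The two interpretations of the fixpoint operator that the result distinguishes are easy to state concretely. A minimal sketch over flat relations, assuming set-valued relations of tuples (a bounded fixpoint would additionally restrict iteration to values already present in the database):

        # Inflationary vs. partial fixpoint of a step function F over relations.
        def inflationary_fixpoint(F, R=frozenset()):
            while True:
                nxt = R | F(R)              # inflationary: never remove tuples
                if nxt == R:
                    return R
                R = nxt

        def partial_fixpoint(F, R=frozenset(), max_iters=10_000):
            for _ in range(max_iters):
                nxt = frozenset(F(R))       # partial: take F's output as-is
                if nxt == R:
                    return R
                R = nxt
            return None                      # no fixpoint reached: undefined

        edge = {(1, 2), (2, 3), (3, 4)}
        step = lambda R: edge | {(x, w) for (x, y) in R for (z, w) in edge if y == z}
        print(sorted(inflationary_fixpoint(step)))   # transitive closure of edge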

    Towards Tractable Algebras for Bags

    Bags, i.e., sets with duplicates, are often used to implement relations in database systems. In this paper, we study the expressive power of algebras for manipulating bags. The algebra we present is a simple extension of the nested relation algebra. Our aim is to investigate how the use of bags in the language extends its expressive power and increases its complexity. We consider two main issues, namely (i) the impact of the depth of bag nesting on the expressive power and (ii) the complexity and the expressive power induced by the algebraic operations. We show that the bag algebra is more expressive than the nested relation algebra (at all levels of nesting), and that the difference may be subtle. We establish a hierarchy based on the structure of algebra expressions. This hierarchy is shown to be closely related to the properties of the powerset operator.
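
    The basic bag operations that such an algebra manipulates are easy to make concrete. A minimal sketch using Python's built-in multiset type; the operation names are chosen for illustration and are not the paper's operator set:

        from collections import Counter

        # Core bag (multiset) operations over Counter-encoded bags.
        def additive_union(b1, b2):   # multiplicities add up
            return b1 + b2

        def monus(b1, b2):            # bag difference, floored at zero
            return b1 - b2

        def dedup(b):                 # duplicate elimination: bag -> set-like bag
            return Counter(dict.fromkeys(b, 1))

        r = Counter({"a": 2, "b": 1})
        s = Counter({"a": 1, "c": 3})
        print(additive_union(r, s))   # Counter({'a': 3, 'c': 3, 'b': 1})
        print(monus(r, s))            # Counter({'a': 1, 'b': 1})
        print(dedup(r))               # Counter({'a': 1, 'b': 1})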

    Logic Programming: Context, Character and Development

    Logic programming has been attracting increasing interest in recent years. Its first realisation in the form of PROLOG demonstrated concretely that Kowalski's view of computation as controlled deduction could be implemented with tolerable efficiency, even on existing computer architectures. Since that time logic programming research has intensified. The majority of computing professionals have remained unaware of the developments, however, and for some the announcement that PROLOG had been selected as the core language for the Japanese 'Fifth Generation' project came as a total surprise. This thesis aims to describe the context, character and development of logic programming. It explains why a radical departure from existing software practices needs to be seriously discussed; it identifies the characteristic features of logic programming, and the practical realisation of these features in current logic programming systems; and it outlines the programming methodology which is proposed for logic programming. The problems and limitations of existing logic programming systems are described and some proposals for development are discussed. The thesis is in three parts. Part One traces the development of programming since the early days of computing. It shows how the problems of software complexity which were addressed by the 'structured programming' school have not been overcome: the software crisis remains severe and seems to require fundamental changes in software practice for its solution. Part Two describes the foundations of logic programming in the procedural interpretation of Horn clauses. Fundamental to logic programming is shown to be the separation of the logic of an algorithm from its control. At present, however, both the logic and the control aspects of logic programming present problems; the first in terms of the extent of the language which is used, and the second in terms of the control strategy which should be applied in order to produce solutions. These problems are described, and various proposals, including some which have been incorporated into implemented systems, are discussed. Part Three discusses the software development methodology which is proposed for logic programming. Some of the experience of practical applications is related. Logic programming is considered with respect to its potential for parallel execution and its relationship to functional programming, and some possible criticisms of the problem-solving potential of logic are described. The conclusion is that although logic programming inevitably has some problems which are yet to be solved, it seems to offer answers to several issues which are at the heart of the software crisis. The potential contribution of logic programming towards the development of software should be substantial.
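
    The 'computation as controlled deduction' view can be made concrete even at the propositional level, where the clauses supply the logic component and the iteration strategy supplies the control component. A minimal sketch of bottom-up deduction over propositional Horn clauses (all names illustrative; actual PROLOG instead uses top-down SLD resolution over first-order clauses):

        # Propositional Horn clauses: (head, frozenset of body atoms).
        # Facts are clauses with an empty body.
        def consequences(clauses):
            known = set()
            changed = True
            while changed:                      # control: naive bottom-up iteration
                changed = False
                for head, body in clauses:      # logic: the clauses themselves
                    if head not in known and body <= known:
                        known.add(head)
                        changed = True
            return known

        program = [("wet", frozenset({"rain"})),
                   ("slippery", frozenset({"wet"})),
                   ("rain", frozenset())]
        print(consequences(program))            # {'rain', 'wet', 'slippery'}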

    Recovering Structural Information for Better Static Analysis

    Static analysis aims to achieve an understanding of program behavior, by means of automatic reasoning that requires only the program's source code and not any actual execution. To reach a truly broad level of program understanding, static analysis techniques need to create an abstraction of memory that covers all possible executions. Such abstract models may quickly degenerate after losing essential structural information about the memory objects they describe, due to the use of specific programming idioms and language features, or because of practical analysis limitations. In many cases, some of the lost memory structure may be retrieved, though it requires complex inference that takes advantage of indirect uses of types. Such recovered structural information may, then, greatly benefit static analysis. This dissertation shows how we can recover structural information, first (i) in the context of C/C++, and next, in the context of higher-level languages without direct memory access, like Java, where we identify two primary causes of losing memory structure: (ii) the use of reflection, and (iii) analysis of partial programs. We show that, in all cases, the recovered structural information greatly benefits static analysis of the program. For C/C++, we introduce a structure-sensitive pointer analysis that refines its abstraction based on type information that it discovers on-the-fly. This analysis is implemented in cclyzer, a static analysis tool for LLVM bitcode. Next, we present techniques that extend a standard Java pointer analysis by building on top of state-of-the-art handling of reflection. The principle is similar to that of our structure-sensitive analysis for C/C++: track the use of reflective objects, during pointer analysis, to gain important insights on their structure, which can be used to “patch” the handling of reflective operations on the running analysis, in a mutually recursive fashion. Finally, to address the challenge of analyzing partial Java programs in full generality, we define the problem of “program complementation”: given a partial program, we seek to provide definitions for its missing parts so that the “complement” satisfies all static and dynamic typing requirements induced by the code under analysis. Essentially, complementation aims to recover the structure of phantom types. Apart from discovering missing class members (i.e., fields and methods), satisfying the subtyping constraints leads to the formulation of a novel typing problem in the OO context, regarding type hierarchy complementation. We offer algorithms to solve this problem in various inheritance settings, and implement them in JPhantom, a practical tool for Java bytecode complementation.
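
    The type hierarchy complementation problem admits a compact illustration in its simplest setting. Under the simplifying assumptions of single inheritance and only positive subtyping constraints, chaining the classes along any topological order of the constraints yields a valid complement; the sketch below (with illustrative class names and an assumed root) shows just this special case, not JPhantom's full algorithms:

        from graphlib import TopologicalSorter

        # Simplified single-inheritance complementation: given positive
        # subtype constraints sub <: sup over phantom classes, invent a
        # hierarchy (here, a single chain) that satisfies them all.
        def complement_hierarchy(constraints):
            graph = {}
            for sub, sup in constraints:   # sup must be ordered before sub
                graph.setdefault(sub, set()).add(sup)
                graph.setdefault(sup, set())
            order = list(TopologicalSorter(graph).static_order())
            # Chain each class under its predecessor: every constraint holds
            # because a class's ancestors are exactly the classes before it.
            return {cls: order[i - 1] if i else "java.lang.Object"
                    for i, cls in enumerate(order)}

        constraints = [("A", "B"), ("A", "C"), ("D", "B")]
        # One valid chain rooted (by assumption) at java.lang.Object:
        print(complement_hierarchy(constraints))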

    Topics in Programming Languages, a Philosophical Analysis through the case of Prolog

    Programming languages seldom find proper anchorage in philosophy of logic, language and science. What is more, philosophy of language seems to be restricted to natural languages and linguistics, and even philosophy of logic is rarely framed in terms of programming languages. The logic programming paradigm and Prolog are, thus, the most adequate paradigm and programming language to work on this subject, combining natural language processing and linguistics, logic programming and construction methodology on both algorithms and procedures, on an overall philosophizing declarative status. Not only this, but the dimension of the Fifth Generation Computer system related to strong AI, wherein Prolog took a major role, and its historical frame in the very crucial dialectic between procedural and declarative paradigms, structuralist and empiricist biases, serves, in exemplary form, to treat philosophy of logic, language and science in the contemporary age as well. In recounting Prolog's philosophical, mechanical and algorithmic harbingers, the opportunity is open to various routes. We herein shall exemplify some: the mechanical-computational background explored by Pascal, Leibniz, Boole, Jacquard, Babbage, Konrad Zuse, until reaching the ACE (Alan Turing) and EDVAC (von Neumann), offering the backbone of computer architecture, and the work of Turing, Church, Gödel, Kleene, von Neumann, Shannon, and others on computability, in parallel lines, thoroughly studied in detail, permits us to interpret the evolving realm of programming languages. The proper line from lambda-calculus to the Algol family, the declarative and procedural split with the C language and Prolog, and the ensuing branching and programming languages explosion and further delimitation, are thereupon inspected so as to relate them to the proper syntax, semantics and philosophical élan of logic programming and Prolog.

    Logical Foundations of Database Transformations for Databases with Complex Types

    Database transformations consist of queries and updates, which are the two fundamental types of computations in any database: the first provides the capability to retrieve data and the second is used to maintain databases in light of ever-changing application domains. With the rising popularity of web-based applications and service-oriented architectures, the development of database transformations must address new challenges, which frequently call for establishing a theoretical framework that unifies both queries and updates over complex-value databases. This dissertation aims to lay down the foundations for such a theoretical framework of database transformations in the context of complex-value databases. We use an approach that has successfully been used for the characterisation of sequential algorithms: the sequential Abstract State Machine (ASM) thesis, which captures the semantics and behaviour of sequential algorithms. Exploiting the similarity of general computations and database transformations, we characterise the latter by five postulates: the sequential time postulate, the abstract state postulate, the bounded exploration postulate, the background postulate, and the bounded non-determinism postulate. The last two postulates reflect the specific form of transformations for databases. The five postulates exactly capture database transformations. Furthermore, we provide a logical proof system for database transformations that is sound and complete.
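
    The sequential time postulate mentioned above views a computation as the iteration of a one-step transformation that fires a finite update set on the current state. A minimal sketch of this reading, with an illustrative state representation and rule (not taken from the dissertation):

        # A state is a finite function (here: a dict of locations to values).
        # One ASM step computes a finite update set and fires it atomically.
        def step(state, rule):
            updates = rule(state)           # bounded exploration: the rule
            new_state = dict(state)         # inspects finitely many locations
            new_state.update(updates)       # fire the update set atomically
            return new_state

        def run(state, rule, steps):        # sequential time: step iteration
            for _ in range(steps):
                state = step(state, rule)
            return state

        # Illustrative rule: a trivial bounded counter update.
        rule = lambda s: {"counter": s["counter"] + 1} if s["counter"] < 3 else {}
        print(run({"counter": 0}, rule, 5))  # {'counter': 3}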

    Computational Complexity And Algorithms For Dirty Data Evaluation And Repairing

    In this dissertation, we study the dirty data evaluation and repairing problem in relational databases. Dirty data is usually inconsistent, inaccurate, incomplete and stale. Existing theories describe consistency using integrity constraints, such as data dependencies. However, integrity constraints are good at detecting inconsistency, but not at evaluating its degree, and they cannot guide data repairing. This dissertation first studies the computational complexity of, and algorithms for, database inconsistency evaluation. We define and use the minimum tuple deletion to evaluate database inconsistency. For this minimum tuple deletion problem, we study the relationship between the size of the rule set and the computational complexity. We show that it is NP-hard to approximate the minimum tuple deletion within a factor of 17/16, even for instances with only three functional dependencies and four attributes involved. A near-optimal approximation algorithm for computing the minimum tuple deletion is proposed, with a ratio of 2 − 1/2^r, where r is the number of given functional dependencies. To guide data repairing, this dissertation also investigates repairing methods driven by query feedback, formally studying two decision problems, the functional-dependency-restricted deletion propagation and insertion propagation problems, corresponding to deletion and insertion feedback. A comprehensive analysis of both combined and data complexity is provided, considering different relational operators and feedback types. We identify the intractable and tractable cases to picture the complexity hierarchy of these problems, and provide efficient algorithms for the tractable cases. Two improvements are proposed: one computes a minimum vertex cover of the conflict graph to improve the upper bound for the tuple deletion problem, and the other gives a better dichotomy for the deletion and insertion propagation problems in the absence of functional dependencies, considering data, combined and parameterized complexity respectively.
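
    The vertex cover connection in the first improvement is direct: taking tuples as vertices and each pair of tuples jointly violating a functional dependency as an edge, any set of deletions restoring consistency is a vertex cover of this conflict graph. A minimal sketch pairing this construction with the classical matching-based 2-approximation (the relation and FD encodings are illustrative):

        # Pairs of tuples violating an FD lhs -> rhs form the edges of a
        # conflict graph; deleting a vertex cover restores consistency.
        def conflict_edges(tuples, fds):
            edges = set()
            for i, t in enumerate(tuples):
                for j in range(i + 1, len(tuples)):
                    u = tuples[j]
                    for lhs, rhs in fds:
                        if all(t[a] == u[a] for a in lhs) and \
                           any(t[a] != u[a] for a in rhs):
                            edges.add((i, j))
            return edges

        def vertex_cover_2approx(edges):   # maximal matching: 2-approximation
            cover = set()
            for u, v in edges:
                if u not in cover and v not in cover:
                    cover |= {u, v}
            return cover

        # One FD: {name} -> {city}; the first two rows conflict.
        rows = [{"name": "ann", "city": "oslo"},
                {"name": "ann", "city": "bonn"},
                {"name": "bob", "city": "rome"}]
        print(vertex_cover_2approx(conflict_edges(rows, [({"name"}, {"city"})])))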

    1957-2007: 50 Years of Higher Order Programming Languages

    Fifty years ago one of the greatest breakthroughs in computer programming and in the history of computers happened – the appearance of FORTRAN, the first higher-order programming language. From that time until now hundreds of programming languages have been invented and different programming paradigms defined, all with the main goal of making computer programming easier and accessible to as many people as possible. Many battles were fought among scientists as well as among developers around concepts of programming, programming languages and paradigms. It can be said that programming paradigms and programming languages were very often a trigger for many changes and improvements in computer science as well as in the computer industry. Definitely, computer programming is one of the cornerstones of computer science. Today there are many tools that help in the process of programming, but there are still programming tasks that can be solved only manually. Therefore, programming is still one of the most creative parts of interaction with computers. Programmers should choose a programming language according to the task they have to solve, but very often they choose it according to their personal preferences, their beliefs and many other subjective reasons. Nevertheless, the market of programming languages can be merciless to languages, as history was merciless to some people, even whole nations. Programming languages and developers are born, live and die, leaving more or fewer traces and successors, and it is not always the best that survives. The history of programming languages is closely connected to the history of computers and of computer science itself. Every single development in one of them has its reflections in the other. This paper gives a short overview of the last fifty years of computer programming and programming languages, and of many ideas that influenced other aspects of computer science. In particular, programming paradigms are described, with their intentions and goals, as well as the most significant languages of each paradigm.

    Constructive Reasoning for Semantic Wikis

    One of the main design goals of social software, such as wikis, is to support and facilitate interaction and collaboration. This dissertation explores challenges that arise from extending social software with advanced facilities such as reasoning and semantic annotations, and presents tools in the form of a conceptual model, structured tags, a rule language, and a set of novel forward chaining and reason maintenance methods for processing such rules that help to overcome the challenges. Wikis and semantic wikis were usually developed in an ad-hoc manner, without much thought about the underlying concepts. A conceptual model suitable for a semantic wiki that takes advanced features such as annotations and reasoning into account is proposed. Moreover, so-called structured tags are proposed as a semi-formal knowledge representation step between informal and formal annotations. The focus of rule languages for the Semantic Web has been predominantly on expert users and on the interplay of rule languages and ontologies. KWRL, the KiWi Rule Language, is proposed as a rule language for a semantic wiki that is easily understandable for users, as it is aware of the conceptual model of a wiki and is inconsistency-tolerant, and that can be efficiently evaluated, as it builds upon Datalog concepts. The requirement for fast response times of interactive software translates in our work to bottom-up evaluation (materialization) of rules (views) ahead of time – that is, when rules or data change, not when they are queried. Materialized views have to be updated when data or rules change. While incremental view maintenance was intensively studied in the past and literature on the subject is abundant, the existing methods have surprisingly many disadvantages: they do not provide all information desirable for explanation of derived information, they require evaluation of possibly substantially larger Datalog programs with negation, they recompute the whole extension of a predicate even if only a small part of it is affected by a change, and they require adaptation for handling general rule changes. A particular contribution of this dissertation consists in a set of forward chaining and reason maintenance methods with a simple declarative description that are efficient and that derive and maintain the information necessary for reason maintenance and explanation. The reasoning methods and most of the reason maintenance methods are described in terms of a set of extended immediate consequence operators whose properties are proven in the classical logic programming framework. In contrast to existing methods, the reason maintenance methods in this dissertation work by evaluating the original Datalog program – they do not introduce negation if it is not present in the input program – and only the affected part of a predicate's extension is recomputed. Moreover, our methods directly handle changes in both data and rules; a rule change does not need to be handled as a special case. A framework of support graphs, a data structure inspired by the justification graphs of classical reason maintenance, is proposed. Support graphs enable a unified description and a formal comparison of the various reasoning and reason maintenance methods, and define a notion of derivation such that the number of derivations of an atom is always finite, even in the recursive Datalog case. A practical approach to implementing reasoning, reason maintenance, and explanation in the KiWi semantic platform is also investigated. It is shown how an implementation may benefit from using a graph database instead of or along with a relational database.
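
    The flavour of materialization with reason maintenance can be conveyed in miniature: materialize a negation-free Datalog program bottom-up, record which rule instances support each derived atom, and on a fact deletion first withdraw every atom that depends, possibly indirectly, on the deleted fact, then re-derive what is still provable. The sketch below is a propositional delete-and-rederive simplification in the spirit of classical view maintenance, with illustrative names, not the dissertation's support-graph methods:

        from collections import defaultdict

        # Propositional Datalog rules: (head, frozenset of body atoms).
        def materialize(rules, base):
            known = set(base)
            changed = True
            while changed:                   # naive bottom-up evaluation
                changed = False
                for head, body in rules:
                    if head not in known and body <= known:
                        known.add(head)
                        changed = True
            return known

        def supports(rules, known):          # support graph: why atoms hold
            sup = defaultdict(set)
            for head, body in rules:
                if head in known and body <= known:
                    sup[head].add(body)
            return sup

        def delete(rules, base, known, fact):
            base.discard(fact)
            sup = supports(rules, known)
            overdeleted = {fact}
            changed = True
            while changed:                   # overdelete every atom with an
                changed = False              # affected derivation...
                for head in list(known - base - overdeleted):
                    if any(b & overdeleted for b in sup[head]):
                        overdeleted.add(head)
                        changed = True
            # ...then re-derive only what is still provable.
            return materialize(rules, base | (known - overdeleted))

        rules = [("a", frozenset({"f"})), ("a", frozenset({"g"})),
                 ("b", frozenset({"a"}))]
        base = {"f", "g"}
        known = materialize(rules, base)
        print(delete(rules, base, known, "f"))   # {'g', 'a', 'b'}: a survives via g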