
    Computing cost estimates for proof strategies

    In this paper we extend the work of Treitel and Genesereth on calculating cost estimates for alternative proof methods of logic programs. We consider four methods: (1) forward chaining by semi-naive bottom-up evaluation, (2) goal-directed forward chaining by semi-naive bottom-up evaluation after Generalized Magic-Sets rewriting, (3) backward chaining by OLD resolution, and (4) memoing backward chaining by OLDT resolution. The methods can interact during a proof. After motivating the advantages of each of the proof methods, we show how the effort for a proof can be estimated. The calculation is based on indirect domain knowledge such as the number of initial facts and the number of possible values for variables. From this information we can estimate the probability that facts are derived multiple times. An important factor in evaluating a proof strategy is whether these duplicates are eliminated. For a systematic analysis we distinguish between the in costs and the out costs of a rule: the out costs correspond to the number of calls of a rule, while the in costs are the costs of proving the premises of a clause. We then show how the selection of a proof method for one rule influences the effort of other rules. Finally, we discuss the problems of estimating costs for recursive rules and propose a solution for a restricted case.
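    The following sketch illustrates the kind of estimate involved; it is not the paper's formulas. It assumes a uniform, independent distribution of derived facts, and the names (expected_distinct, rule_costs) and the naive join-size product are illustrative choices, not taken from the paper:

```python
# Toy cost model for a single rule (illustrative assumptions, not the paper's formulas).

def expected_distinct(n_derivations: int, domain_size: int) -> float:
    """Expected number of distinct facts among n_derivations uniform draws
    from domain_size possible facts (standard occupancy estimate)."""
    if domain_size == 0:
        return 0.0
    return domain_size * (1.0 - (1.0 - 1.0 / domain_size) ** n_derivations)

def rule_costs(calls, premise_sizes, domain_size, eliminate_duplicates):
    """Estimate the in costs and out costs of one rule.

    calls                -- number of times the rule is called (drives the out costs)
    premise_sizes        -- estimated number of matching facts per premise
    domain_size          -- number of possible instances of the rule head
    eliminate_duplicates -- whether duplicate derivations are removed
    """
    join_size = 1
    for n in premise_sizes:          # naive join-size product over the premises
        join_size *= n
    in_costs = calls * join_size     # effort to prove the premises
    derivations = calls * join_size  # head derivations, duplicates included
    produced = (expected_distinct(derivations, domain_size)
                if eliminate_duplicates else derivations)
    return {"out_costs": calls, "in_costs": in_costs, "facts_produced": produced}

print(rule_costs(calls=10, premise_sizes=[5, 4], domain_size=50,
                 eliminate_duplicates=True))
```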

    Query Evaluation in Deductive Databases

    It is desirable to answer queries posed to deductive databases by computing fixpoints, because such computations are directly amenable to set-oriented fact processing. However, the classical fixpoint procedures based on bottom-up processing, the naive and semi-naive methods, are rather primitive and often inefficient. In this article, we rely on bottom-up meta-interpretation to formalize a new fixpoint procedure that performs a different kind of reasoning: we specify a top-down query answering method, which we call the Backward Fixpoint Procedure. We then reconsider query evaluation methods for recursive databases. First, we show that the methods based on rewriting on the one hand, and the methods based on resolution on the other, implement the Backward Fixpoint Procedure. Second, we interpret the rewritings of the Alexander and Magic Set methods as specializations of the Backward Fixpoint Procedure. Finally, we argue that such a rewriting is also needed in a database context for efficiently implementing the resolution-based methods. Thus, the methods based on rewriting and the methods based on resolution implement the same top-down evaluation of the original database rules by means of auxiliary rules processed bottom-up.
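    As background for the fixpoint computations referred to above, the following minimal sketch shows semi-naive bottom-up evaluation for a single recursive rule (transitive closure of an edge relation). It illustrates only the classical procedure, not the Backward Fixpoint Procedure, and the relation and names are an assumed toy example:

```python
def transitive_closure(edges):
    """Semi-naive fixpoint for:  tc(X,Z) <- edge(X,Z).  tc(X,Z) <- tc(X,Y), edge(Y,Z)."""
    total = set(edges)   # all tc facts derived so far
    delta = set(edges)   # tc facts that are new since the last iteration
    while delta:
        # Semi-naive step: join only the new tc facts against edge,
        # instead of re-joining the whole tc relation (the naive method).
        derived = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = derived - total          # keep only genuinely new facts
        total |= delta
    return total

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(edges)))
```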

    Linear Tabulated Resolution Based on Prolog Control Strategy

    Infinite loops and redundant computations are long-recognized open problems in Prolog. Two ways have been explored to resolve these problems: loop checking and tabling. Loop checking can cut infinite loops, but it cannot be both sound and complete even for function-free logic programs. Tabling seems to be an effective way to resolve infinite loops and redundant computations. However, existing tabulated resolutions, such as OLDT-resolution, SLG-resolution, and Tabulated SLS-resolution, are non-linear because they rely on the solution-lookup mode in formulating tabling. The principal disadvantage of non-linear resolutions is that they cannot be implemented using a simple stack-based memory structure like that in Prolog. Moreover, some strictly sequential operators such as cuts may not be handled as easily as in Prolog. In this paper, we propose a hybrid method to resolve infinite loops and redundant computations. We combine the ideas of loop checking and tabling to establish a linear tabulated resolution called TP-resolution. TP-resolution has two distinctive features: (1) It makes linear tabulated derivations in the same way as Prolog except that infinite loops are broken and redundant computations are reduced; it handles cuts as effectively as Prolog. (2) It is sound and complete for positive logic programs with the bounded-term-size property. The underlying algorithm can be implemented by an extension to any existing Prolog abstract machine such as the WAM or ATOAM. Comment: To appear as the first accepted paper in Theory and Practice of Logic Programming (http://www.cwi.nl/projects/alp/TPLP).
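    The effect of tabling can be illustrated in a few lines: on a cyclic graph, plain Prolog-style depth-first backward chaining over reach/2 loops forever, whereas keeping an answer table per subgoal and iterating to a fixpoint terminates with all answers. This toy sketch only conveys the tabling idea; it is not TP-resolution and ignores linearity, cuts, and the stack-based implementation issues discussed above:

```python
# Reachability over a cyclic graph.  Plain depth-first evaluation of
#   reach(X,Y) :- edge(X,Y).   reach(X,Y) :- edge(X,Z), reach(Z,Y).
# loops on the a <-> b cycle; an answer table per subgoal, iterated to a
# fixpoint (the essence of tabling), terminates with all answers.
edges = {("a", "b"), ("b", "a"), ("b", "c")}

def tabled_reach(graph):
    nodes = {u for (u, v) in graph} | {v for (u, v) in graph}
    table = {x: set() for x in nodes}    # answers found so far for reach(x, _)
    changed = True
    while changed:                        # repeat until no table grows
        changed = False
        for (x, y) in graph:
            new = {y} | table[y]          # reuse y's tabled answers instead of re-deriving them
            if not new <= table[x]:
                table[x] |= new
                changed = True
    return table

print(tabled_reach(edges))   # e.g. 'a' reaches {'a', 'b', 'c'}
```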

    Principles and Implementation of Deductive Parsing

    We present a system for generating parsers based directly on the metaphor of parsing as deduction. Parsing algorithms can be represented directly as deduction systems, and a single deduction engine can interpret such deduction systems so as to implement the corresponding parser. The method generalizes easily to parsers for augmented phrase-structure formalisms, such as definite-clause grammars and other logic grammar formalisms, and has been used for rapid prototyping of parsing algorithms for a variety of formalisms including variants of tree-adjoining grammars, categorial grammars, and lexicalized context-free grammars. Comment: 69 pages, includes full Prolog code.
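    The heart of such a system is a small, generic deduction engine that maintains an agenda of pending items and a chart of proven ones. The sketch below uses an assumed toy CNF grammar and CKY-style items for illustration, and is written in Python rather than the Prolog of the paper:

```python
# Parsing as deduction: items (A, i, j) are facts ("category A spans words[i:j]"),
# grammar rules act as inference rules, and a generic agenda/chart loop performs
# the deduction.  The grammar below is an assumed toy example in CNF.
binary = {("NP", "VP"): "S", ("Det", "N"): "NP", ("V", "NP"): "VP"}
lexical = {"the": "Det", "dog": "N", "cat": "N", "chased": "V"}

def parse(words):
    chart = set()                                                   # proven items
    agenda = [(lexical[w], i, i + 1) for i, w in enumerate(words)]  # axioms
    while agenda:
        item = agenda.pop()
        if item in chart:
            continue                                  # already proven, skip
        chart.add(item)
        a, i, j = item
        # Combine the new item with adjacent items already in the chart.
        for (b, start, end) in list(chart):
            if start == j and (a, b) in binary:       # new item is the left child
                agenda.append((binary[(a, b)], i, end))
            if end == i and (b, a) in binary:         # new item is the right child
                agenda.append((binary[(b, a)], start, j))
    return ("S", 0, len(words)) in chart

print(parse("the dog chased the cat".split()))   # True
```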

    GEM: a Distributed Goal Evaluation Algorithm for Trust Management

    Trust management is an approach to access control in distributed systems where access decisions are based on policy statements issued by multiple principals and stored in a distributed manner. In trust management, the policy statements of a principal can refer to other principals' statements; thus, the process of evaluating an access request (i.e., a goal) consists of finding a "chain" of policy statements that allows access to the requested resource. Most existing goal evaluation algorithms for trust management either rely on a centralized evaluation strategy, which consists of collecting all the relevant policy statements in a single location (and therefore do not guarantee the confidentiality of intensional policies), or do not detect the termination of the computation (i.e., when all the answers to a goal have been computed). In this paper we present GEM, a distributed goal evaluation algorithm for trust management systems that relies on function-free logic programming for the specification of policy statements. GEM detects termination in a completely distributed way without disclosing intensional policies, thereby preserving their confidentiality. We demonstrate that the algorithm terminates and is sound and complete with respect to the standard semantics for logic programs. Comment: To appear in Theory and Practice of Logic Programming (TPLP).
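    As a point of reference for the problem statement (not GEM itself, whose contribution is precisely to avoid this), the centralized strategy mentioned above can be sketched as follows: gather every principal's statements in one place and compute the goal's answers by a fixpoint over the delegation chains. The policy language, role names, and principals below are simplified assumptions:

```python
# Simplified policy statements (assumed, illustrative syntax, not GEM's):
#   a delegation ("hospital.doctor", "clinic.doctor") reads
#   hospital.doctor(X) <- clinic.doctor(X); direct assertions list known members.
delegations = {("hospital.doctor", "clinic.doctor"),
               ("clinic.doctor", "partner.staff")}
direct = {("partner.staff", "alice"), ("clinic.doctor", "bob")}

def answers(goal_role):
    """Centralized evaluation: all statements are gathered, then a fixpoint
    finds every principal reachable through a chain of policy statements."""
    members = set(direct)
    changed = True
    while changed:
        changed = False
        for (upper, lower) in delegations:
            for (role, who) in list(members):
                if role == lower and (upper, who) not in members:
                    members.add((upper, who))   # extend the chain by one statement
                    changed = True
    return {who for (role, who) in members if role == goal_role}

print(answers("hospital.doctor"))   # {'alice', 'bob'}
```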

    SLT-Resolution for the Well-Founded Semantics

    Global SLS-resolution and SLG-resolution are two representative mechanisms for top-down evaluation of the well-founded semantics of general logic programs. Global SLS-resolution is linear for query evaluation but suffers from infinite loops and redundant computations. In contrast, SLG-resolution resolves infinite loops and redundant computations by means of tabling, but it is not linear. The principal disadvantage of a non-linear approach is that it cannot be implemented using a simple, efficient stack-based memory structure, nor can it be easily extended to handle some strictly sequential operators such as cuts in Prolog. In this paper, we present a linear tabling method, called SLT-resolution, for top-down evaluation of the well-founded semantics. SLT-resolution is a substantial extension of SLDNF-resolution with tabling. Its main features include: (1) It resolves infinite loops and redundant computations while preserving linearity. (2) It is terminating, sound, and complete w.r.t. the well-founded semantics for programs with the bounded-term-size property and non-floundering queries; its time complexity is comparable with that of SLG-resolution and polynomial for function-free logic programs. (3) Because of its linearity for query evaluation, SLT-resolution bridges the gap between the well-founded semantics and standard Prolog implementation techniques. It can be implemented as an extension to any existing Prolog abstract machine such as the WAM or ATOAM. Comment: Slight modification.
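    For readers unfamiliar with the semantics being evaluated, the well-founded model of a ground program can be characterized by the alternating fixpoint, sketched below for a small assumed example program. This bottom-up sketch describes only the declarative target; it is not SLT-resolution, which computes the same semantics top-down with tabling:

```python
# Ground rules: (head, positive_body, negative_body), meaning
#   head <- positive_body, not negative_body.
program = [
    ("p", [], ["q"]),      # p <- not q
    ("q", [], ["p"]),      # q <- not p
    ("r", [], ["r"]),      # r <- not r
    ("s", [], []),         # s.
]

def least_model(positive_rules):
    """Least model of a negation-free ground program."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in positive_rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def gamma(rules, assumed_true):
    """Drop rules whose negative premise is violated by assumed_true, strip the
    remaining negative literals, and take the least model of what is left."""
    reduct = [(h, pos) for (h, pos, neg) in rules
              if not any(a in assumed_true for a in neg)]
    return least_model(reduct)

def well_founded(rules):
    """Alternating fixpoint: returns (true atoms, atoms that are true or undefined)."""
    true, not_false = set(), None
    while True:
        new_not_false = gamma(rules, true)         # overestimate of what holds
        new_true = gamma(rules, new_not_false)     # underestimate of what holds
        if new_true == true and new_not_false == not_false:
            return true, not_false
        true, not_false = new_true, new_not_false

print(well_founded(program))   # s is true; p, q, r are undefined
```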

    Poetry at the first steps of Artificial Intelligence

    This paper is about Artificial Intelligence (AI) attempts at writing poetry, usually referred to with the term “poetry generation”. Poetry generation started out from Digital Humanities, which developed out of humanities computing; nowadays, however, it is part of Computational Creativity, a field that tackles several areas of art and science. The paper first examines why poetry was chosen over other literary genres as a field for experimentation. Mention is made of the characteristics of poetry (namely arbitrariness and absurdity) that make it fertile ground for such endeavors, and also of various text- and reader-centered literary approaches that favored experimentation even by human poets. A rough historical look at poetry generation follows, together with a review of the methods employed, either for fun or as academic projects, along the lines of Lamb et al.'s (2017) taxonomy, which distinguishes between mere poetry generation and result enhancement. Another taxonomy by Gonçalo Oliveira (2017), which divides poetry generation into form and content issues, is also briefly presented. The results of poetry generators are evaluated as generally poor, and the reasons for this failure are examined: the inability of computers to understand any word as a sign with a signified, the lack of general intelligence, process- (rather than output-) driven attempts, etc. Then, computer-like results from a number of human poetic movements are presented as a juxtaposition: DADA, stream of consciousness, OuLiPo, LangPo, Flarf, blackout/erasure poetry. The equivalence between (i) human poets who are concerned more with experimentation than with good results and (ii) computer scientists who are process-driven leads to a discussion of the characteristics of humanness, of the possibility of granting future AI personhood, and of the need to see our world in terms of a new, more refined ontology.

    The Practice of Basic Informatics 2020

    Version 2020/04/02. Kyoto University provides courses on 'The Practice of Basic Informatics' as part of its Liberal Arts and Sciences Program. The course is taught at many schools and departments, and course contents vary to meet the requirements of these schools and departments. This textbook is made open to the students of all schools that teach these courses. As stated in Chapter 1, this book is written with the aim of building ICT skills for study at university, that is, ICT skills for academic activities. Some topics may not be taught in class; however, the book is written for self-study by students. We include many exercises in this textbook so that instructors can select some of them for their classes, to accompany their teaching plans. The courses are given in the computer laboratories of the university, and the contents of this textbook assume that Windows 10 and Microsoft Office 2016 are available in these laboratories. In Chapter 13, we include an introduction to computer programming; we chose Python as the programming language because, on the one hand, it is easy for beginners to learn and, on the other, it is widely used in academic research. To help students check the progress of their self-study, we have attached the assessment criteria (a 'rubric') of this course as an Appendix. Current ICT is a product of the endeavors of many people. The "Great Idea" columns are included to show appreciation for such work. Dr. Yumi Kitamura and Dr. Hirohisa Hioki wrote Chapters 4 and 13, respectively. The remaining chapters were written by Dr. Hajime Kita. From the revision for the 2018 edition onward, Dr. Hiroyuki Sakai has participated in the author group, and Dr. Donghui Lin also joined for the 2019 English edition. The authors hope that this textbook helps you to improve your academic ICT skill set. The content included in this book was selected based on the reference course plan discussed by the course development team for informatics at the Institute for Liberal Arts and Sciences. In writing this textbook, we received advice and suggestions on Chapters 2 and 3 from the staff of the Network Section, Information Infrastructure Division, Planning and Information Management Department, Kyoto University; on Chapter 3 also from Mr. Sosuke Suzuki of NTT Communications Corporation; and on Chapter 4 from Rumi Haratake, Machiko Sakurai, and Taku Sakamoto of the User Support Division, Kyoto University Library. Dr. Masako Okamoto of the Center for the Promotion of Excellence in Higher Education, Kyoto University, helped us in the revision of the 2018 Japanese Edition. The authors would like to express their sincere gratitude to the people who supported them.