
    The Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence

    Intelligent systems based on first-order logic on the one hand, and on artificial neural networks (also called connectionist systems) on the other, differ substantially. It would be very desirable to combine the robust neural-network machinery with symbolic knowledge representation and reasoning paradigms such as logic programming in a way that retains the strengths of both paradigms. Current state-of-the-art research, however, falls far short of this ultimate goal. We see one of the main obstacles to be overcome in the question of how symbolic knowledge can be encoded by means of connectionist systems: satisfactory answers to this question will naturally lead the way to knowledge extraction algorithms and to integrated neural-symbolic systems. Comment: In Proceedings of INFORMATION'2004, Tokyo, Japan, to appear. 12 page
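
    One well-studied way to encode symbolic knowledge in a connectionist system (not necessarily the construction this paper has in mind) is to realize the immediate-consequence operator T_P of a propositional logic program as a small threshold network, with one hidden unit per clause and one output unit per atom, so that iterating the network computes the program's least model. Below is a minimal sketch in Python over an assumed toy program; all names are illustrative.

        # Minimal sketch: T_P of a propositional logic program as a
        # two-layer threshold network (one hidden unit per clause,
        # one output unit per atom). Toy program chosen for illustration.
        import numpy as np

        atoms = ["a", "b", "c", "d"]
        # Program: a <- b, c.   b.   c <- b.   d <- a, d.
        clauses = [("a", ["b", "c"]), ("b", []), ("c", ["b"]), ("d", ["a", "d"])]
        idx = {p: i for i, p in enumerate(atoms)}

        # Hidden layer: clause unit j fires iff every body atom of clause j is true.
        W_h = np.zeros((len(clauses), len(atoms)))
        theta_h = np.zeros(len(clauses))
        for j, (_, body) in enumerate(clauses):
            for p in body:
                W_h[j, idx[p]] = 1.0
            theta_h[j] = len(body) - 0.5

        # Output layer: an atom unit fires iff some clause with that head fires.
        W_o = np.zeros((len(atoms), len(clauses)))
        for j, (head, _) in enumerate(clauses):
            W_o[idx[head], j] = 1.0

        def tp(state):
            """One application of T_P as a forward pass over threshold units."""
            hidden = (W_h @ state > theta_h).astype(float)
            return (W_o @ hidden > 0.5).astype(float)

        # Iterating from the empty interpretation reaches the least model.
        state = np.zeros(len(atoms))
        for _ in range(len(atoms) + 1):
            state = tp(state)
        print({p for p in atoms if state[idx[p]] > 0})   # least model {'a', 'b', 'c'}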

    A framework for incremental learning of logic programs

    In this paper, a framework for incremental learning is proposed. In this framework, the predicates already learned are used as background knowledge when learning new predicates. The programs learned in this way have a nice modular structure with conceptually separate components, and this modularity gives the advantages of portability, reliability, and efficient compilation and execution. Starting with a simple idea of Miyano et al. [21,22] for identifying classes of programs in which all terms occurring in SLD-derivations starting from a query are no bigger than the terms in the initial query, we identify a reasonably large class of polynomial-time learnable logic programs. These programs can be learned from a given sequence of examples and a logic program defining the already-known predicates. Our class properly contains the class of innermost simple programs of [32] and the class of hereditary programs of [21,22]. Standard programs for gcd, multiplication, quick-sort, reverse, and merge are a few examples of programs that can be handled by our results but not by the earlier results of [21,22,32].
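
    To illustrate the modular structure the framework produces, where a newly defined predicate is built only on top of predicates already learned and supplied as background knowledge, here is a minimal sketch. It uses plain Python functions as stand-ins for the logic programs in the paper; the predicate names (mod/3, gcd/3) and the example itself are assumptions for illustration only.

        # Hypothetical sketch of the modular structure: the new predicate gcd/3
        # is defined only in terms of the already-learned background predicate
        # mod/3, mirroring how the framework reuses learned predicates as
        # background knowledge. (In the paper these are logic programs, not
        # Python functions.)

        def mod_pred(x, y, r):
            """Background predicate mod(X, Y, R): R is X modulo Y."""
            return y > 0 and x % y == r

        def gcd_pred(x, y, g):
            """Newly defined predicate gcd(X, Y, G), built on top of mod/3 only."""
            if y == 0:
                return x == g
            r = x % y                      # the unique R with mod(X, Y, R)
            return mod_pred(x, y, r) and gcd_pred(y, r, g)

        assert gcd_pred(12, 18, 6)
        assert not gcd_pred(12, 18, 4)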

    E-Generalization Using Grammars

    We extend the notion of anti-unification to cover equational theories and present a method based on regular tree grammars to compute a finite representation of E-generalization sets. We present a framework to combine Inductive Logic Programming and E-generalization that includes an extension of Plotkin's lgg theorem to the equational case. We demonstrate the potential power of E-generalization with three example applications: computation of suggestions for auxiliary lemmas in equational inductive proofs, computation of construction laws for given term sequences, and learning of screen editor command sequences. Comment: 49 pages, 16 figures, author address given in header is now outdated, full version of an article in the "Artificial Intelligence Journal", appeared as technical report in 2003. An open-source C implementation and some examples are found at the Ancillary file
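
    The syntactic starting point for this line of work is Plotkin's least general generalization (lgg): anti-unify two terms by keeping the function symbols on which they agree and introducing a shared variable wherever they disagree. The sketch below implements only this syntactic base case in Python, with terms encoded as nested tuples and all names chosen for illustration; E-generalization, as described in the abstract, goes further by computing generalization sets modulo an equational theory via regular tree grammars.

        # Minimal sketch of syntactic anti-unification (Plotkin's lgg).
        # Terms are nested tuples: ("f", t1, ..., tn); constants are ("c",).
        # E-generalization (the abstract's contribution) works modulo an
        # equational theory and is NOT captured by this syntactic version.

        def lgg(s, t, table):
            if s == t:
                return s
            # Same functor and arity: anti-unify argument by argument.
            if s[0] == t[0] and len(s) == len(t):
                return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
            # Disagreement: reuse one variable per distinct pair (s, t).
            if (s, t) not in table:
                table[(s, t)] = ("X%d" % len(table),)
            return table[(s, t)]

        # Example: the lgg of f(a, g(a)) and f(b, g(b)) is f(X0, g(X0)).
        t1 = ("f", ("a",), ("g", ("a",)))
        t2 = ("f", ("b",), ("g", ("b",)))
        print(lgg(t1, t2, {}))   # ('f', ('X0',), ('g', ('X0',)))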

    The SP theory of intelligence: benefits and applications

    This article describes existing and expected benefits of the "SP theory of intelligence", and some potential applications. The theory aims to simplify and integrate ideas across artificial intelligence, mainstream computing, and human perception and cognition, with information compression as a unifying theme. It combines conceptual simplicity with descriptive and explanatory power across several areas of computing and cognition. In the "SP machine" -- an expression of the SP theory which is currently realized in the form of a computer model -- there is potential for an overall simplification of computing systems, including software. The SP theory promises deeper insights and better solutions in several areas of application including, most notably, unsupervised learning, natural language processing, autonomous robots, computer vision, intelligent databases, software engineering, information compression, medical diagnosis and big data. There is also potential in areas such as the semantic web, bioinformatics, structuring of documents, the detection of computer viruses, data fusion, new kinds of computer, and the development of scientific theories. The theory promises seamless integration of structures and functions within and between different areas of application. The potential value, worldwide, of these benefits and applications is at least $190 billion each year. Further development would be facilitated by the creation of a high-parallel, open-source version of the SP machine, available to researchers everywhere. Comment: arXiv admin note: substantial text overlap with arXiv:1212.022

    Constraints on predicate invention

    This chapter describes an inductive learning method that derives logic programs and invents predicates when needed. The basic idea is to form the least common anti-instance (LCA) of selected seed examples. If the LCA is too general, it forms the starting point of a general-to-specific search which is guided by various constraints on argument dependencies and critical terms. A distinguishing feature of the method is its ability to introduce new predicates. Predicate invention involves three steps. First, the need for a new predicate is discovered and the arguments of the new predicate are determined using the same constraints that guide the search. In the second step, instances of the new predicate are abductively inferred. These instances form the input for the last step, where the definition of the new predicate is induced by recursively applying the method again. We also outline how such a system could be more tightly integrated with an abductive learning system.
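
    To make the first step concrete: the least common anti-instance of two ground seed examples keeps the argument positions on which they agree and abstracts the positions on which they differ into shared variables; if the result is too general, it becomes the starting point of the general-to-specific search. Below is a minimal, hypothetical sketch in Python, restricted to flat atoms; the predicate and example names are assumptions, not taken from the chapter.

        # Hypothetical sketch: least common anti-instance (LCA) of two ground
        # seed atoms with the same predicate symbol, abstracting disagreeing
        # arguments into shared variables. Flat (non-nested) arguments only.

        def lca(atom1, atom2):
            pred1, args1 = atom1
            pred2, args2 = atom2
            assert pred1 == pred2 and len(args1) == len(args2)
            table, out = {}, []
            for a, b in zip(args1, args2):
                if a == b:
                    out.append(a)
                else:
                    out.append(table.setdefault((a, b), "X%d" % len(table)))
            return (pred1, tuple(out))

        # Seed examples for a hypothetical predicate last(List, Element):
        e1 = ("last", (("a",), "a"))
        e2 = ("last", (("b", "c"), "c"))
        print(lca(e1, e2))   # ('last', ('X0', 'X1')) -- too general, so the
                             # general-to-specific search would specialize it.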

    Anti-unification and Generalization: A Survey

    Anti-unification (AU), also known as generalization, is a fundamental operation used for inductive inference and is the dual operation to unification, an operation at the foundation of theorem proving. Interest in AU from the AI and related communities is growing, but in the absence of a systematic study of the concept and surveys of existing work, investigations often resort to developing application-specific methods that may be covered by existing approaches. We provide the first survey of AU research and its applications, together with a general framework for categorizing existing and future developments. Comment: Accepted at IJCAI 2023 - Survey Trac

    Transdisciplinarity seen through Information, Communication, Computation, (Inter-)Action and Cognition

    Just as oil acted as a basic raw material and key driving force of industrial society, information acts as a raw material and principal mover of the knowledge society in knowledge production, propagation and application. New developments in information processing and information communication technologies allow increasingly complex and accurate descriptions, representations and models, which are often multi-parameter, multi-perspective, multi-level and multidimensional. This leads to the necessity of collaborative work between different domains with corresponding specialist competences, sciences and research traditions. We present several major transdisciplinary unification projects for information and knowledge, which proceed on the descriptive and logical levels and on the level of generative mechanisms. A parallel process of boundary crossing and transdisciplinary activity is going on in the applied domains. Technological artifacts are becoming increasingly complex and their design is strongly user-centered, which brings in not only function and various technological qualities but also other aspects including aesthetics, user experience, ethics, and sustainability with its social and environmental dimensions. When integrating knowledge from a variety of fields, with contributions from different groups of stakeholders, numerous challenges are met in establishing a common view and a common course of action. In this context, information is our environment, and informational ecology determines both epistemology and spaces for action. We present some insights into the current state of the art of transdisciplinary theory and practice of information studies and informatics. We depict different facets of transdisciplinarity as we see them from our different research fields, which include information studies, computability, human-computer interaction, multi-operating-systems environments and philosophy. Comment: Chapter in a forthcoming book: Information Studies and the Quest for Transdisciplinarity - Forthcoming book in World Scientific. Mark Burgin and Wolfgang Hofkirchner, Editor