7,516 research outputs found

    Logic-Based Analogical Reasoning and Learning

    Analogy-making is at the core of human intelligence and creativity, with applications to such diverse tasks as commonsense reasoning, learning, language acquisition, and storytelling. This paper contributes to the foundations of artificial general intelligence by developing an abstract algebraic framework for logic-based analogical reasoning and learning in the setting of logic programming. The main idea is to define analogy in terms of modularity and to derive abstract forms of concrete programs from a 'known' source domain, which can then be instantiated in an 'unknown' target domain to obtain analogous programs. To this end, we introduce algebraic operations for syntactic program composition and concatenation and illustrate, through numerous examples, that programs have natural decompositions. Moreover, we show how composition gives rise to a qualitative notion of syntactic program similarity. We then argue that reasoning and learning by analogy is the task of solving analogical proportions between logic programs. Interestingly, our work suggests a close relationship between modularity, generalization, and analogy, which we believe should be explored further. In a broader sense, this paper is a first step towards an algebraic and mainly syntactic theory of logic-based analogical reasoning and learning in knowledge representation and reasoning systems, with potential applications to fundamental AI problems such as commonsense reasoning and computational learning and creativity.
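    The source-to-target transfer described in this abstract can be illustrated with a purely syntactic toy sketch (the domain and all symbol names below are invented for illustration and are not the paper's formalism): generalize a source logic program by replacing its domain symbols with placeholders, then instantiate the resulting template with symbols from an analogous target domain.

```python
import re

def replace_all(text, mapping):
    # Single-pass, collision-safe substitution (longest keys tried first).
    keys = sorted(mapping, key=len, reverse=True)
    pattern = re.compile("|".join(re.escape(k) for k in keys))
    return pattern.sub(lambda m: mapping[m.group(0)], text)

def abstract_form(clauses, symbols):
    # Generalize: map each source-domain symbol to a fresh placeholder.
    mapping = {s: f"${i}" for i, s in enumerate(symbols)}
    return [replace_all(c, mapping) for c in clauses], mapping

def instantiate(template, mapping, targets):
    # Fill the placeholders with target-domain symbols.
    fill = dict(zip(mapping.values(), targets))
    return [replace_all(c, fill) for c in template]

# Source domain: evenness of natural numbers built from zero and successor.
source = ["even(0).", "even(s(s(N))) :- even(N)."]
template, mapping = abstract_form(source, ["even", "0", "s"])
# template == ["$0($1).", "$0($2($2(N))) :- $0(N)."]

# Hypothetical analogous target domain (names invented here).
target = instantiate(template, mapping, ["even_len", "nil", "tail2"])
# target == ["even_len(nil).", "even_len(tail2(tail2(N))) :- even_len(N)."]
```

The template plays the role of the abstract program form shared by source and target; real analogical proportions between logic programs require more than token replacement, but the sketch conveys the generalize-then-instantiate shape of the idea.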

    Mill on logic

    Working within the broad lines of general consensus that mark out the core features of John Stuart Mill’s (1806–1873) logic, as set forth in his A System of Logic (1843–1872), this chapter provides an introduction to Mill’s logical theory by reviewing his position on the relationship between induction and deduction, and the role of general premises and principles in reasoning. Locating induction, understood as a kind of analogical reasoning from particulars to particulars, as the basic form of inference that is both free-standing and the sole load-bearing structure in Mill’s logic, the chapter briefly inspects the foundations of Mill’s logical system. Several naturalistic features are identified, including its subject matter (human reasoning), its empiricism, which requires that only particular, experiential claims can function as basic reasons, and its ultimate foundations in ‘spontaneous’ inference. The chapter concludes by comparing Mill’s naturalized logic to Russell’s (1907) regressive method for identifying the premises of mathematics.

    Facts, skills and intuition: A typology of personal knowledge

    This paper introduces a knowledge model in which the types of knowledge are formed according to the nature of knowledge. First we use Ryle’s distinction between “that” and “how” knowledge, to which we add three further types. The five knowledge types are then synthesized using Polanyi’s distinction between focal and subsidiary awareness. The resulting model distinguishes three types of knowledge: facts, skills, and intuition, all three having focal and subsidiary parts. We believe that this knowledge model is comprehensive in the sense that it can classify any knowledge, and that it has great explanatory power, as demonstrated through illustrative examples. Moreover, the model is elegant and easy to use, which facilitates our understanding of the domain of personal knowledge. We therefore expect our findings to be useful for both researchers and educators in the field of knowledge and knowledge management.

    Computational and Biological Analogies for Understanding Fine-Tuned Parameters in Physics

    In this philosophical paper, we explore computational and biological analogies to address the fine-tuning problem in cosmology. We first clarify what it means for physical constants or initial conditions to be fine-tuned. We review important distinctions, such as that between dimensionless and dimensional physical constants, and the classification of constants proposed by Lévy-Leblond. Then we explore how two great analogies, computational and biological, can give new insights into our problem. This paper includes a preliminary study to examine the two analogies. Importantly, analogies are both useful and fundamental cognitive tools, but can also be misused or misinterpreted. The idea that our universe might be modelled as a computational entity is analysed, and we discuss the distinction between physical laws and initial conditions using algorithmic information theory. Smolin introduced the theory of "Cosmological Natural Selection" with a biological analogy in mind. We examine an extension of this analogy involving intelligent life, and discuss whether and how this extension could be legitimated. Keywords: origin of the universe, fine-tuning, physical constants, initial conditions, computational universe, biological universe, role of intelligent life, cosmological natural selection, cosmological artificial selection, artificial cosmogenesis. Comment: 25 pages, Foundations of Science, in press.

    Interpretation-driven mapping: A framework for conducting search and re-representation in parallel for computational analogy in design

    This paper presents a framework for the interactions between the processes of mapping and re-representation within analogy-making. Analogical reasoning systems for use in design tasks require representations that are open to being reinterpreted. The framework, interpretation-driven mapping, casts the process of constructing an analogical relationship as requiring iterative, parallel interactions between mapping and interpreting. This paper argues that this interpretation-driven approach focuses research on a fundamental problem in analogy-making: how do the representations that make new mappings possible emerge during the mapping process? The framework is useful both for describing existing analogy-making models and for designing future ones. The paper presents Idiom, a computational model informed by the framework, which learns ways to reinterpret the representations of objects as it maps between them. The results of an implementation in the domain of visual analogy are presented to demonstrate its feasibility, and analogies constructed by the system are presented as examples. The interpretation-driven mapping framework is then used to compare representational change in Idiom to that in three previously published systems.
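    Why re-representation can change which mapping wins, as this abstract argues, can be shown with a deliberately tiny toy (this is an invented illustration, not the Idiom system or its visual domain): the same source object maps to different targets depending on how the objects are interpreted.

```python
def jaccard(a, b):
    # Set-overlap similarity between two representations.
    u = a | b
    return len(a & b) / len(u) if u else 0.0

def best_match(src, targets, rep):
    # Map src to the target whose representation overlaps it most.
    return max(targets, key=lambda t: jaccard(rep(src), rep(t)))

src = "red_square"
targets = ["dare_quest", "blue_square"]

# Initial interpretation: an object is just its set of characters.
by_chars = best_match(src, targets, set)
# Re-interpretation: an object is its set of named parts.
by_tokens = best_match(src, targets, lambda x: set(x.split("_")))

print(by_chars)   # "dare_quest"  -- misleading surface overlap
print(by_tokens)  # "blue_square" -- the structurally sensible mapping
```

Here re-representing the objects (characters versus named parts) flips the preferred mapping, which is the kind of interaction between interpreting and mapping the framework is about; the real model interleaves the two processes rather than running them one after the other.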