
    Combining Enumeration and Deductive Techniques in order to Increase the Class of Constructible Infinite Models

    A new method for building infinite models of first-order formulae is presented. The method combines enumeration techniques with existing deductive (in a broad sense) ones. Its soundness and completeness w.r.t. the class of models that can be represented by equational constraints are proven. This shows that the use of enumeration techniques strictly increases the power of existing methods for building Herbrand models, which are not complete in this sense. Some strategies are proposed to reduce the search space. We give examples and show how to use this approach to interactively build a model of a formula introduced by Goldfarb in his proof of the undecidability of the Gödel class with identity. This formula is satisfiable but has no finite model.
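
    To make the enumeration side of such model building concrete, here is a minimal sketch (not the paper's algorithm) of enumerating the Herbrand universe level by level over a small made-up signature; a model builder can interleave such an enumeration with deductive steps. The signature (constant a, function symbols f/1 and g/2) is purely illustrative.

```python
from itertools import product

# Made-up signature: one constant and two function symbols (f unary, g binary).
CONSTANTS = ["a"]
FUNCTIONS = {"f": 1, "g": 2}

def herbrand_levels(max_depth):
    """Yield the ground terms of the Herbrand universe, grouped by term depth."""
    seen = set(CONSTANTS)
    yield list(CONSTANTS)                      # depth 0: the constants
    for _ in range(max_depth):
        new_terms = []
        for fn, arity in FUNCTIONS.items():
            for args in product(sorted(seen), repeat=arity):
                term = f"{fn}({','.join(args)})"
                if term not in seen:
                    new_terms.append(term)
        seen.update(new_terms)
        yield new_terms                        # terms of the next depth

for depth, terms in enumerate(herbrand_levels(2)):
    print(f"depth {depth}: {terms}")
```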

    Incompleteness via paradox and completeness

    This paper explores the relationship borne by the traditional paradoxes of set theory and semantics to formal incompleteness phenomena. A central tool is the application of the Arithmetized Completeness Theorem to systems of second-order arithmetic and set theory in which various “paradoxical notions” for first-order languages can be formalized. I will first discuss the setting in which this result was originally presented by Hilbert & Bernays (1939) and also how it was later adapted by Kreisel (1950) and Wang (1955) in order to obtain formal undecidability results. A generalization of this method will then be presented whereby Russell’s paradox, a variant of Mirimanoff’s paradox, the Liar, and the Grelling-Nelson paradox may be uniformly transformed into incompleteness theorems. Some additional observations are then framed relating these results to the unification of the set-theoretic and semantic paradoxes, the intensionality of arithmetization (in the sense of Feferman, 1960), and axiomatic theories of truth.

    On a notion of abduction and relevance for first-order logic clause sets

    I propose techniques for explaining entailment and non-entailment in first-order logic, relying on deductive and abductive reasoning respectively. First, given an unsatisfiable clause set, one can ask which clauses are necessary for any possible deduction (syntactically relevant), usable for some deduction (syntactically semi-relevant), or unusable (syntactically irrelevant). I propose a first-order formalization of this notion and demonstrate how it lifts to the explanation of an entailment w.r.t. an axiom set defined in certain description logic fragments. The notion is accompanied by a semantic characterization via conflict literals (contradictory simple facts): from an unsatisfiable clause set, a pair of conflict literals is always deducible. A relevant clause is necessary to derive any conflict literal, a semi-relevant clause is necessary to derive some conflict literal, and an irrelevant clause is not useful in deriving any conflict literal. This provides a picture of why an explanation holds that goes beyond what the predominant notion of a minimal unsatisfiable set can offer. The need to test whether a clause is (syntactically) semi-relevant leads to a generalization of a well-known resolution strategy: resolution equipped with the set-of-support strategy is refutationally complete on a clause set N and SOS M if and only if there is a resolution refutation from N ∪ M using a clause in M. This result non-trivially improves the original formulation. Second, abductive reasoning helps find extensions of a knowledge base that entail a missing consequence (called the observation). This is useful not only for repairing incomplete knowledge bases but also for explaining possibly unexpected observations. I focus in particular on TBox abduction in the description logic EL (still a first-order fragment via a model-preserving translation scheme), which is rather lightweight but prevalent in practice. The solution space can be huge or even infinite, so different notions of minimality can help sort the chaff from the grain. I argue that the existing ones are insufficient and introduce connection minimality. This criterion offers an interpretation of Occam's razor in which hypotheses are accepted only when they help obtain the entailment without arbitrarily using axioms unrelated to the problem at hand. In addition, I provide a first-order technique to compute the connection-minimal hypotheses in a sound and complete way. The key technique relies on prime implicates. While the negation of a single prime implicate can already serve as a first-order hypothesis, a connection-minimal hypothesis that follows the syntactic restrictions of EL (a set of simple concept inclusions) requires a combination of them. Termination, achieved by bounding the term depth in the prime implicates, is provable by considering only those prime implicates that are also subset-minimal. I also present an evaluation on ontologies from the medical domain, using a prototype implementation with SPASS as the prime implicate generation engine.
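
    The set-of-support result above concerns full first-order resolution; the following is only a minimal propositional sketch showing the shape of the restriction, with clauses as frozensets of signed atoms ('p', '~p') and a made-up example for N and M.

```python
def negate(lit):
    """Complement of a propositional literal written as 'p' or '~p'."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All binary resolvents of two clauses represented as frozensets of literals."""
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def sos_refutation(n, m):
    """Given-clause saturation under the set-of-support restriction:
    every inference uses at least one clause derived from the support set m.
    Returns True iff the empty clause is derived."""
    usable = set(n)        # background clauses N, never resolved among themselves
    sos = set(m)           # support set M plus everything derived from it
    processed = set()
    while sos:
        given = sos.pop()
        if not given:      # empty clause: contradiction found
            return True
        processed.add(given)
        for other in usable | processed:
            for res in resolvents(given, other):
                if not res:
                    return True
                if res not in usable and res not in processed:
                    sos.add(res)
    return False

# Made-up example: N = {p or q, not-p or q} is satisfiable on its own,
# while N together with M = {not-q} is unsatisfiable, so a refutation
# using a clause of M exists.
N = {frozenset({"p", "q"}), frozenset({"~p", "q"})}
M = {frozenset({"~q"})}
print(sos_refutation(N, M))        # True
```

    The restriction shows up in the loop structure: the given clause always comes from the support set or its descendants, so two background clauses are never resolved with each other.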

    Parameterized analysis of complexity


    Instructional strategies in explicating the discovery function of proof for lower secondary school students

    In this paper, we report on the analysis of teaching episodes, selected from our pedagogical and cognitive research on geometry teaching, that illustrate how carefully chosen instructional strategies can guide Grade 8 students to see and appreciate the discovery function of proof in geometry.

    The Universality Problem

    The theme of this thesis is to explore the universality problem in set theory in connection with model theory, to present some methods for finding universality results, to analyse how these methods have been applied, to present some results, and to emphasise some philosophical questions that these aspects raise. A fundamental aspect of the universality problem is to find what determines the existence of universal objects. That means we have to examine the methods used to prove their existence or non-existence, the role of cardinal arithmetic, combinatorics, etc. The proof methods used in the mathematical part are mostly set-theoretic, but some methods from model theory and category theory are also present. A graph might be the simplest, but it is also one of the most useful, notions in mathematics. We show that there is a faithful functor F from the category L of linear orders to the category G of graphs that preserves model-theoretic universality results (classes of objects having universal models in exactly the same cardinals, and also having the same universality spectrum). Trees are combinatorial objects with a central role in set theory. The universality of trees is connected to the universality of linear orders, but it also seems to present more challenges, which we survey together with some results. We show that there is no embedding between an ℵ₂-Souslin tree and a non-special wide ℵ₂-tree T with no cofinal branches. Furthermore, using the notion of an ascent path, we prove that the class of non-special ℵ₂-Souslin trees with an ω-ascent path has maximal complexity number, 2^ℵ₂ = ℵ₃. Within the general framework of the universality problem in set theory and model theory, while emphasising their approaches and their connections with regard to this topic, we examine the possibility of drawing some philosophical conclusions connected to, among others, the notions of mathematical knowledge, mathematical object, and proof.

    Mechanised metamathematics: an investigation of first-order logic and set theory in constructive type theory

    In this thesis, we investigate several key results in the canon of metamathematics, applying the contemporary perspective of formalisation in constructive type theory and mechanisation in the Coq proof assistant. Concretely, we consider the central completeness, undecidability, and incompleteness theorems of first-order logic, as well as properties of the axiom of choice and the continuum hypothesis in axiomatic set theory. Due to their fundamental role in the foundations of mathematics and their technical intricacies, these results have a long tradition of codification as standard literature and, in more recent investigations, increasingly serve as benchmarks for computer mechanisation. With the present thesis, we continue this tradition by uniformly analysing the aforementioned cornerstones of metamathematics in the formal framework of constructive type theory. This programme offers novel insights into the constructive content of completeness, a synthetic approach to undecidability and incompleteness that largely eliminates the notorious tedium obscuring the essence of their proofs, as well as natural representations of set theory in the form of a second-order axiomatisation and of a fully type-theoretic account. The mechanisation concerning first-order logic is organised as a comprehensive Coq library open to usage and contribution by external users.
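
    As a loose illustration of what representing first-order logic inside a formal system involves, the sketch below gives a Python analogue of a deep embedding of first-order syntax together with Tarski-style satisfaction over a finite model. The constructors, the satisfies function, and the example model are invented for illustration and do not mirror the Coq library's actual definitions.

```python
from dataclasses import dataclass

# Toy deep embedding of a fragment of first-order syntax.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Pred:                 # atomic formula P(x1, ..., xn) over variables
    name: str
    args: tuple

@dataclass(frozen=True)
class Not:
    sub: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class ForAll:
    var: str
    body: object

def satisfies(domain, interp, assignment, phi):
    """Does the model (domain, interp) satisfy phi under the given assignment?"""
    if isinstance(phi, Pred):
        return tuple(assignment[v.name] for v in phi.args) in interp[phi.name]
    if isinstance(phi, Not):
        return not satisfies(domain, interp, assignment, phi.sub)
    if isinstance(phi, And):
        return (satisfies(domain, interp, assignment, phi.left)
                and satisfies(domain, interp, assignment, phi.right))
    if isinstance(phi, ForAll):
        return all(satisfies(domain, interp, {**assignment, phi.var: d}, phi.body)
                   for d in domain)
    raise TypeError(f"unknown formula constructor: {phi!r}")

# Example: over domain {0, 1} with P holding of every element, "forall x. P(x)" is true.
domain = {0, 1}
interp = {"P": {(0,), (1,)}}
phi = ForAll("x", Pred("P", (Var("x"),)))
print(satisfies(domain, interp, {}, phi))    # True
```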

    The Material Theory of Induction

    The fundamental burden of a theory of inductive inference is to determine which are the good inductive inferences or relations of inductive support, and why it is that they are so. The traditional approach is modeled on that taken in accounts of deductive inference. It seeks universally applicable schemas or rules, or a single formal device such as the probability calculus. After millennia of halting efforts, none of these approaches has been unequivocally successful, and debates between approaches persist. The Material Theory of Induction identifies the source of these enduring problems in the assumption taken at the outset: that inductive inference can be accommodated by a single formal account with universal applicability. Instead, it argues that there is no single, universally applicable formal account. Rather, each domain has an inductive logic native to it. The content of that logic, and where it can be applied, are determined by the facts prevailing in that domain. Paying close attention to how inductive inference is conducted in science, and copiously illustrated with real-world examples, The Material Theory of Induction will initiate a new tradition in the analysis of inductive inference.
