
    PSA 2020

    These preprints were automatically compiled into a PDF from the collection of papers deposited in PhilSci-Archive in conjunction with PSA 2020.

    The Significance of Evidence-based Reasoning in Mathematics, Mathematics Education, Philosophy, and the Natural Sciences

    In this multi-disciplinary investigation we show how an evidence-based perspective of quantification, in terms of algorithmic verifiability and algorithmic computability, admits evidence-based definitions of well-definedness and effective computability, which yield two unarguably constructive interpretations of the first-order Peano Arithmetic PA, over the structure N of the natural numbers, that are complementary, not contradictory. The first yields the weak, standard interpretation of PA over N, which is well-defined with respect to assignments of algorithmically verifiable Tarskian truth values to the formulas of PA under the interpretation. The second yields a strong, finitary interpretation of PA over N, which is well-defined with respect to assignments of algorithmically computable Tarskian truth values to the formulas of PA under the interpretation. We situate our investigation within a broad analysis of quantification vis-à-vis:
    * Hilbert's epsilon-calculus
    * Gödel's omega-consistency
    * The Law of the Excluded Middle
    * Hilbert's omega-Rule
    * An Algorithmic omega-Rule
    * Gentzen's Rule of Infinite Induction
    * Rosser's Rule C
    * Markov's Principle
    * The Church-Turing Thesis
    * Aristotle's particularisation
    * Wittgenstein's perspective of constructive mathematics
    * An evidence-based perspective of quantification
    By showing how these are formally inter-related, we highlight the fragility of both the persisting, theistic, classical/Platonic interpretation of quantification grounded in Hilbert's epsilon-calculus, and the persisting, atheistic, constructive/Intuitionistic interpretation of quantification rooted in Brouwer's belief that the Law of the Excluded Middle is non-finitary. We then consider some consequences for mathematics, mathematics education, philosophy, and the natural sciences of an agnostic, evidence-based, finitary interpretation of quantification that challenges classical paradigms in all these disciplines.
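    The distinction the abstract turns on can be stated schematically (our rendering, not the authors' exact formulation; F is an illustrative arithmetical predicate and the deciding algorithms are Turing machines):

        % Algorithmically verifiable: for each numeral n there is some
        % algorithm (possibly a different one for each n) deciding F(n).
        \text{verifiable:}\quad \forall n\; \exists A_n\; \bigl(A_n \text{ halts on } n \text{ and correctly decides } F(n)\bigr)

        % Algorithmically computable: a single algorithm decides F(n) uniformly in n.
        \text{computable:}\quad \exists A\; \forall n\; \bigl(A \text{ halts on } n \text{ and correctly decides } F(n)\bigr)

    On this reading, the standard interpretation assigns truth via the first, instance-wise notion, while the finitary interpretation demands the second, uniform notion; the quantifier swap is exactly what separates the two interpretations.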

    A Groundwork for A Logic of Objects

    The history of philosophy is rich with theories about objects: theories of object kinds, their nature, the status of their existence, and so on. In recent years philosophical logicians have attempted to formalize some of these theories, yielding many fruitful results. My thesis adds to this tradition in philosophical logic by developing a second-order logical system that may serve as a groundwork for a multitude of theories of objects (e.g. concrete and abstract objects, impossible objects, fictional objects, and others). Through the addition of what we may call sortal quantifiers (i.e. quantifiers that bind individual variables ranging over objects of three distinct sorts), we develop a groundwork for a logic that captures concrete and non-concrete objects. We then extend this groundwork with a single new operator and the modal operators of a Priorian temporal logic. From this extension, our formal system can represent and define concrete, abstract, fictional, and impossible objects, as well as formally axiomatize informal theories of them.
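    To give a feel for sortal quantification (a hypothetical notation of our own; the thesis's symbols, sort names, and axioms may differ), one can index quantifiers by the sort of object their variables range over, say c for concrete, a for abstract, and f for fictional:

        \exists_c x\, \varphi(x)   % some concrete object satisfies phi
        \forall_a y\, \psi(y)      % every abstract object satisfies psi
        \exists_f z\, \bigl(\mathrm{Detective}(z) \wedge \neg \exists_c x\; x = z\bigr)
                                   % some fictional object is a detective and is
                                   % not identical to any concrete object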

    Decidable fragments of first-order logic and of first-order linear arithmetic with uninterpreted predicates

    First-order logic is one of the most prominent formalisms in computer science and mathematics. Since there is no algorithm capable of solving its satisfiability problem, first-order logic is said to be undecidable. The classical decision problem is the quest for a delineation between the decidable and the undecidable parts. The results presented in this thesis shed more light on this boundary and open new perspectives on the landscape of known decidable fragments. In the first part we focus on the new concept of separateness of variables and explore its applicability to the classical decision problem and beyond. Two disjoint sets of first-order variables are separated in a given formula if none of its atoms contains variables from both sets. This notion facilitates the definition of decidable extensions of many well-known decidable first-order fragments. We demonstrate this for several prefix fragments, several guarded fragments, the two-variable fragment, and the fluted fragment. Although the extensions exhibit the same expressive power as the respective originals, certain logical properties can be expressed much more succinctly. In two cases the succinctness gap cannot be bounded using elementary functions. This fact already hints at computationally hard satisfiability problems. Indeed, we derive non-elementary lower bounds for the separated fragment, an extension of the Bernays-Schönfinkel-Ramsey fragment (∃*∀*-prefix sentences). On the semantic level, separateness of quantified variables may lead to weaker dependencies than we encounter in general. We investigate this property in the context of model-checking games. The focus of the second part of the thesis is on linear arithmetic with uninterpreted predicates. Two novel decidable fragments are presented, both based on the Bernays-Schönfinkel-Ramsey fragment. On the negative side, we identify several small fragments of the language for which satisfiability is undecidable.

    Investigations of first-order logic look back on a long tradition. It is well known that the associated satisfiability problem cannot, in general, be solved algorithmically; the logic is therefore called undecidable. This observation throws a spotlight on the fundamental limits of what computers can do in general, and of automated reasoning in particular. The classical decision problem is understood today as the exploration of the boundary between the decidable and the undecidable parts of first-order logic, where the fragments under investigation are delineated by clearly graspable and computable syntactic properties. Many researchers have contributed to this investigation and have discovered and explored numerous decidable and undecidable fragments. The present dissertation continues this tradition with a series of mostly positive results and opens new perspectives on a number of fragments that have been studied over the last hundred years. The first part of the thesis centres on the syntactic concept of separateness of variables, and its applicability to the decision problem and beyond is explored. Two sets of individual variables count as separated with respect to a given formula if, in every atom of the formula, variables from at most one of the two sets occur. With the help of this easily understood notion, many well-known decidable fragments of first-order logic can be extended to larger classes of formulas that are still decidable. This approach is set out in detail for nine fragments, among them several prefix fragments, the two-variable fragment, and the so-called guarded and fluted fragments. It turns out that all the extended fragments also contain the monadic first-order fragment without equality. Although the extended syntax does not, in the cases considered, bring increased expressive power, certain relationships can be formulated much more succinctly with it. In at least two cases this discrepancy cannot be bounded by an elementary function. This gives a first hint that solving the satisfiability problem algorithmically for the extended fragments involves a very high computational effort. Indeed, a non-elementary lower bound on the corresponding time requirements is derived for the so-called separated fragment, an extension of the well-known Bernays-Schönfinkel-Ramsey fragment. Beyond this, the influence of separateness of individual variables is investigated at the semantic level, where dependencies between quantified variables can be strongly weakened by their separateness. For the closer formal study of such dependencies, termed weak, so-called Hintikka games are employed. The focus of the second part of the thesis is the decision problem for linear arithmetic over the rational numbers in combination with uninterpreted predicates. Two previously unknown decidable fragments of this language are presented, both building on the Bernays-Schönfinkel-Ramsey fragment. Furthermore, new negative results are developed, and several undecidable fragments are presented that require only a very restricted part of the language.
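    To make the key definition concrete (a toy example of ours, not taken from the thesis): in the first formula below the variable sets {x} and {y} are separated, because no atom contains variables from both sets; in the second they are not, because the atom R(x, y) mixes them.

        \forall x\, \exists y\, \bigl(P(x) \wedge Q(y)\bigr)   % {x} and {y} separated
        \forall x\, \exists y\, R(x, y)                        % not separated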

    Separateness of Variables -- A Novel Perspective on Decidable First-Order Fragments

    The classical decision problem, as it is understood today, is the quest for a delineation between the decidable and the undecidable parts of first-order logic based on elegant syntactic criteria. In this paper, we treat the concept of separateness of variables and explore its applicability to the classical decision problem. Two disjoint sets of first-order variables are separated in a given formula if variables from the two sets never co-occur in any atom of that formula. This simple notion facilitates extending many well-known decidable first-order fragments significantly and in a way that preserves decidability. We demonstrate this for several prefix fragments, several guarded fragments, the two-variable fragment, and the fluted fragment. Altogether, we investigate nine such extensions more closely. Interestingly, each of them contains the relational monadic first-order fragment without equality. Although the extensions exhibit the same expressive power as the respective originals, certain logical properties can be expressed much more succinctly. In three cases the succinctness gap cannot be bounded using any elementary function. (44 pages, 1 figure)
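    One way to see why separateness helps preserve decidability (a standard quantifier-manipulation illustration of ours, in miniature; the paper's actual constructions are more general): when the quantified variables are separated, the matrix splits into parts that each mention only one set of variables, so quantifiers can be pushed inward and re-ordered. For example, over nonempty domains,

        \forall x\, \exists y\, \bigl(\neg P(x) \vee Q(y)\bigr)
        \;\equiv\; \bigl(\forall x\, \neg P(x)\bigr) \vee \bigl(\exists y\, Q(y)\bigr)
        \;\equiv\; \exists y\, \forall x\, \bigl(\neg P(x) \vee Q(y)\bigr)

    which turns a forall-exists prefix into an exists-forall prefix of the Bernays-Schönfinkel-Ramsey shape.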

    Extension polymorphism

    Any system that models a real-world application has to evolve to remain consistent with its changing domain. Dealing with evolution in an effective manner is particularly important for systems that may store large amounts of data, such as databases and persistent languages. In persistent programming systems, one of the important issues in dealing with evolution is the specification of code that will continue to work in a type-safe way despite changes to type definitions. Polymorphism is one mechanism which allows code to work over many types. Inclusion polymorphism is often said to be a model of type evolution. However, observing type changes in persistent systems has shown that types most commonly exhibit additive evolution. Even though inclusion captures this pattern in the case of record types, it does not always do so for other type constructors. The confusion of subtyping, inheritance, and evolution often leads to unsound or, at best, dynamically typed systems. Existing solutions to this problem do not completely address the requirements of type evolution in persistent systems. The aim of this thesis is to develop a form of polymorphism that is suitable for modelling additive evolution in persistent systems. The proposed strategy is to study patterns of evolution for the most generally used type constructors in persistent languages and to define a new relation, called extension, which models these patterns. This relation is defined independently of any existing relations used for dealing with evolution. A programming language mechanism is then devised to provide polymorphism over this relation. The polymorphism thus defined is called extension polymorphism. This thesis presents work involving the design and definition of extension polymorphism and an implementation of a type checker for this polymorphism. A proof of soundness for a type system which supports extension polymorphism is also presented.
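    The record-type case, where additive evolution and inclusion polymorphism happen to coincide, can be sketched in a structurally typed language (illustrative TypeScript of ours, not the thesis's persistent-language setting; all names are invented):

        // A persistent store initially holds Person records of this shape.
        interface Person {
          name: string;
          born: number;
        }

        // Additive evolution: the type gains a field; existing fields are untouched.
        interface PersonV2 extends Person {
          email: string;
        }

        // Code written against the original type keeps working on evolved data,
        // because PersonV2 is a structural subtype of Person.
        function describe(p: Person): string {
          return `${p.name} (born ${p.born})`;
        }

        const p2: PersonV2 = { name: "Ada", born: 1815, email: "ada@example.org" };
        console.log(describe(p2)); // old code, additively evolved data: still type safe

    The thesis's observation is that for other constructors (e.g. variants and function types) additive evolution and subtyping come apart, which is what the separate extension relation is designed to capture.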

    Quine and Boolos on second-order logic : an examination of the debate

    The aim of this thesis is to examine the debate between Quine and Boolos over the logical status of higher-order logic, with Quine taking the position that higher-order logic is more properly understood as set theory and Boolos arguing in opposition that higher-order logic is of a genuinely logical character. My purpose here will be to stay as neutral as possible over the question of whether or not higher-order logic counts as logic and to focus instead on the exposition of the debate itself as exemplified in the work of Quine and Boolos. Chapter I will be a detailed consideration of Quine's conception of logic and its place within the wider context of his philosophy. Only once this backdrop is in place will I examine his views on higher-order logic. In Chapter II, I turn to Boolos's response to Quine: his attempt to examine the extent to which we may want to count higher-order logic as logic and the extent to which we may want to count it as set theory. With each point Boolos raises, I attempt to give what I think would have been Quine's reply. Finally, in Chapter III, I consider Boolos's attempt to show that monadic second-order logic (MSOL) should be understood as pure logic, as it does not commit us to the existence of classes, as we may take the standard interpretation of MSOL to do. I discuss here some of the major reactions to Boolos's plural interpretation (Resnik, Parsons, and Linnebo), and conclude with more speculative remarks on what Quine's own response might have been. Throughout this thesis, my primary method has been one of close textual analysis.
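    A stock example from this literature makes vivid what Boolos's plural interpretation buys (our illustration): the Geach-Kaplan sentence "Some critics admire only one another", which has no first-order paraphrase in terms of the admiration predicate alone, is naturally formalized in MSOL, where Axy abbreviates "x admires y":

        \exists X \Bigl( \exists x\, Xx \;\wedge\; \forall x\, \forall y\, \bigl( (Xx \wedge Axy) \rightarrow (x \neq y \wedge Xy) \bigr) \Bigr)

    Boolos reads the second-order quantifier plurally, as "there are some critics such that ...", so that the sentence's truth commits us to critics but not to a class or set of them.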

    Proof-theoretic investigations into integrated logical and functional programming

    This thesis is a proof-theoretic investigation of logic programming based on hereditary Harrop logic (as in lambdaProlog). After studying various proof systems for first-order hereditary Harrop logic, we define the proof-theoretic semantics of a logic LFPL, intended as the basis of logic programming with functions, which extends higher-order hereditary Harrop logic by providing definition mechanisms for functions in such a way that the logical specification of a function, rather than the function itself, may be used in proof search. In Chap. 3, we define, for the first-order hereditary Harrop fragment of LJ, the class of uniform linear focused (ULF) proofs (suitable for goal-directed search with backchaining and unification) and show that the ULF-proofs are in 1-1 correspondence with the expanded normal deductions, in Prawitz's sense. We give a system of proof-term annotations for LJ-proofs (where proof-terms uniquely represent proofs). We define a rewriting system on proof-terms (whose rules represent a subset of Kleene's permutations in LJ) and show that its irreducible proof-terms are those representing ULF-proofs and that it is weakly normalising. We also show that the composition of Prawitz's mappings between LJ and NJ, restricted to ULF-proofs, is the identity. We take the view of logic programming where: a program P is a set of formulae; a goal G is a formula; and the different means of achieving G w.r.t. P correspond to the expanded normal deductions of G from the assumptions in P (rather than the traditional view, whereby the different means of goal-achievement correspond to the different answer substitutions). LFPL is defined in Chap. 4, by means of a sequent calculus. As in LeFun, it extends logic programming with functions and provides mechanisms for defining names for functions, maintaining proof search as the computation mechanism (contrary to languages such as ALF, Babel, Curry and Escher, based on equational logic, where the computation mechanism is some form of rewriting). LFPL also allows definitions declaring logical properties of functions, called definitions of dependent type. Such definitions are of the form (f, x) =def (A, w) : ∃x:ρ F, where f is a name for A and x is a name for w, a proof-term witnessing that the formula [A/x]F holds (i.e. A meets the specification ∃x:ρ F). When searching for proofs, it may suffice to use the formula [A/x]F rather than A itself. We present an interpretation of LFPL into NNlambdanorm, a natural deduction system for hereditary Harrop logic with lambda-terms. The means of goal-achievement in LFPL are interpreted in NNlambdanorm essentially by cut-elimination, followed by an interpretation of cut-free sequent calculus proofs as normal deductions. We show that the use of definitions of dependent type may speed up proof search, because the equivalent proofs using no such definitions may be much longer, and because normalisation may be done lazily, since not all parts of the proof need to be exhibited. We sketch two methods for implementing LFPL, based on goal-directed proof search, differing in the mechanism for selecting definitions of dependent type on which to backchain. We discuss techniques for handling the redundancy arising from the equivalence of each proof using such a definition to one using no such definitions.
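    For orientation (a standard presentation following Miller et al.; the thesis's higher-order and LFPL extensions build on this first-order core), the goals G and definite clauses D of hereditary Harrop logic can be given by a mutually recursive grammar, with A ranging over atoms:

        G ::= A \mid G \wedge G \mid G \vee G \mid \exists x\, G \mid D \supset G \mid \forall x\, G
        D ::= A \mid G \supset A \mid D \wedge D \mid \forall x\, D

    Goal-directed (uniform) proof search reads each goal connective as a search instruction, for instance D \supset G augments the program with D and then pursues G, and once the goal is atomic it backchains on a program clause.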

    Synchronization and Control of Quantitative Systems

