Computational reverse mathematics and foundational analysis
Reverse mathematics studies which subsystems of second order arithmetic are
equivalent to key theorems of ordinary, non-set-theoretic mathematics. The main
philosophical application of reverse mathematics proposed thus far is
foundational analysis, which explores the limits of different foundations for
mathematics in a formally precise manner. This paper gives a detailed account
of the motivations and methodology of foundational analysis, which have
heretofore been largely left implicit in the practice. It then shows how this
account can be fruitfully applied in the evaluation of major foundational
approaches by a careful examination of two case studies: a partial realization
of Hilbert's program due to Simpson [1988], and predicativism in the extended
form due to Feferman and Schütte.
Shore [2010, 2013] proposes that equivalences in reverse mathematics be
proved in the same way as inequivalences, namely by considering only
ω-models of the systems in question. Shore refers to this approach as
computational reverse mathematics. This paper shows that despite some
attractive features, computational reverse mathematics is inappropriate for
foundational analysis, for two major reasons. Firstly, the computable
entailment relation employed in computational reverse mathematics does not
preserve justification for the foundational programs above. Secondly,
computable entailment is a $\Pi^1_1$-complete relation, and hence employing it
commits one to theoretical resources which outstrip those available within any
foundational approach that is proof-theoretically weaker than $\Pi^1_1$-CA$_0$.
Comment: Submitted. 41 pages.
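For orientation, the ω-model semantics underlying computable entailment can be sketched in standard notation (this formulation is a gloss, not quoted from the paper): an ω-model fixes the first-order part as the standard natural numbers and varies only the collection of sets, and computable entailment holds when the conclusion is true in every ω-model of the premises.

```latex
% An omega-model of second-order arithmetic: the first-order part is
% the standard naturals; only the set universe S varies.
\mathcal{M} = (\mathbb{N}, S, +, \times, 0, 1, <), \qquad S \subseteq \mathcal{P}(\mathbb{N})

% Computable entailment: T entails psi iff every omega-model of T
% satisfies psi.
T \models_{\omega} \psi \quad\Longleftrightarrow\quad
\forall \mathcal{M}\, \bigl(\mathcal{M} \models T \;\Rightarrow\; \mathcal{M} \models \psi\bigr)
```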
The Strength of Abstraction with Predicative Comprehension
Frege's theorem says that second-order Peano arithmetic is interpretable in
Hume's Principle and full impredicative comprehension. Hume's Principle is one
example of an abstraction principle, while another paradigmatic example is
Basic Law V from Frege's Grundgesetze. In this paper we study the strength of
abstraction principles in the presence of predicative restrictions on the
comprehension schema, and in particular we study a predicative Fregean theory
which contains all the abstraction principles whose underlying equivalence
relations can be proven to be equivalence relations in a weak background
second-order logic. We show that this predicative Fregean theory interprets
second-order Peano arithmetic.
Comment: Forthcoming in Bulletin of Symbolic Logic. Slight change in title
from previous version, at request of referee.
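For reference, the two abstraction principles named in the abstract above have standard formulations (not quoted from the paper; here '#' is the number-of operator, 'ε' the extension operator, and F ≈ G abbreviates the second-order claim that there is a bijection between the Fs and the Gs):

```latex
% Hume's Principle: the number of Fs equals the number of Gs iff
% F and G are equinumerous.
\#F = \#G \;\longleftrightarrow\; F \approx G

% Basic Law V: the extension of F equals the extension of G iff
% F and G are coextensive. (Inconsistent with full comprehension.)
\varepsilon F = \varepsilon G \;\longleftrightarrow\; \forall x\,(Fx \leftrightarrow Gx)
```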
On the mathematical and foundational significance of the uncountable
We study the logical and computational properties of basic theorems of
uncountable mathematics, including the Cousin and Lindelöf lemmas published in
1895 and 1903. Historically, these lemmas were among the first formulations of
open-cover compactness and the Lindelöf property, respectively. These notions
are of great conceptual importance: the former is commonly viewed as a way of
treating uncountable sets like e.g. $[0,1]$ as 'almost finite', while the
latter allows one to treat uncountable sets like e.g. $\mathbb{R}$ as 'almost
countable'. This reduction of the uncountable to the finite/countable turns out
to have a considerable logical and computational cost: we show that the
aforementioned lemmas, and many related theorems, are extremely hard to prove,
while the associated sub-covers are extremely hard to compute. Indeed, in terms
of the standard scale (based on comprehension axioms), a proof of these lemmas
requires at least the full extent of second-order arithmetic, a system
originating from Hilbert-Bernays' Grundlagen der Mathematik. This observation
has far-reaching implications for the Grundlagen's spiritual successor, the
program of Reverse Mathematics, and the associated Gödel hierarchy. We also
show that the Cousin lemma is essential for the development of the gauge
integral, a generalisation of the Lebesgue and improper Riemann integrals that
also uniquely provides a direct formalisation of Feynman's path integral.
Comment: 35 pages with one figure. The content of this version extends the
published version in that Sections 3.3.4 and 3.4 below are new. Small
corrections/additions have also been made to reflect new developments.
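For concreteness, the Cousin lemma discussed above is standardly stated as follows (a standard formulation for the unit interval, not quoted from the paper): every gauge, i.e. every positive-valued function δ on [0,1], admits finitely many points whose δ-neighbourhoods cover the interval.

```latex
% Cousin's lemma: every gauge delta on [0,1] admits a finite set of
% points whose delta-neighbourhoods form an open cover.
\forall \delta : [0,1] \to \mathbb{R}^{+}\;
\exists\, t_1, \ldots, t_k \in [0,1]\;
\Bigl( [0,1] \subseteq \bigcup_{i=1}^{k} \bigl(t_i - \delta(t_i),\; t_i + \delta(t_i)\bigr) \Bigr)
```

It is this passage from an arbitrary (possibly uncountable) cover to a finite sub-cover that the abstract identifies as computationally expensive.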
Hypatia's silence. Truth, justification, and entitlement.
Hartry Field distinguished two concepts of type-free truth: scientific truth and disquotational truth. We argue that scientific type-free truth cannot do justificatory work in the foundations of mathematics. We also present an argument, based on Crispin Wright's theory of cognitive projects and entitlement, that disquotational truth can do justificatory work in the foundations of mathematics. The price to pay for this is that the concept of disquotational truth requires non-classical logical treatment.
Predicativity and parametric polymorphism of Brouwerian implication
A common objection to the definition of intuitionistic implication in the
Proof Interpretation is that it is impredicative. I discuss the history of that
objection, argue that in Brouwer's writings predicativity of implication is
ensured through parametric polymorphism of functions on species, and compare
this construal with the alternative approaches to predicative implication of
Goodman, Dummett, Prawitz, and Martin-Löf.
Comment: Added further references (Pistone, Poincaré, Tabatabai, Van Atten).
Logicism, Ontology, and the Epistemology of Second-Order Logic
In two recent papers, Bob Hale has attempted to free second-order logic of the 'staggering existential assumptions' with which Quine famously attempted to saddle it. I argue, first, that the ontological issue is at best secondary: the crucial issue about second-order logic, at least for a neo-logicist, is epistemological. I then argue that neither Crispin Wright's attempt to characterize a 'neutralist' conception of quantification that is wholly independent of existential commitment, nor Hale's attempt to characterize the second-order domain in terms of definability, can serve a neo-logicist's purposes. The problem, in both cases, is similar: neither Wright nor Hale is sufficiently sensitive to the demands that impredicativity imposes. Finally, I defend my own earlier attempt to finesse this issue, in "A Logic for Frege's Theorem", from Hale's criticisms.
Potential infinity, abstraction principles and arithmetic (Leśniewski Style)
This paper starts with an explanation of how the logicist research program can be approached within the framework of Leśniewski’s systems. One nice feature of the system is that Hume’s Principle is derivable in it from an explicit definition of natural numbers. I generalize this result to show that all predicative abstraction principles corresponding to second-level relations, which are provably equivalence relations, are provable. However, the system fails, despite being much neater than the construction of Principia Mathematica (PM). One of the key reasons is that, just as in the case of the system of PM, without the assumption that infinitely many objects exist, (renderings of) most of the standard axioms of Peano Arithmetic are not derivable in the system. I prove that introducing modal quantifiers meant to capture the intuitions behind potential infinity results in the (renderings of) axioms of Peano Arithmetic (PA) being valid in all relational models (i.e. Kripke-style models, to be defined later on) of the extended language. The second, historical part of the paper contains a user-friendly description of Leśniewski’s own arithmetic and a brief investigation into its properties