    The strength of the tree theorem for pairs in reverse mathematics

    No natural principle is currently known to lie strictly between the arithmetic comprehension axiom (ACA_0) and Ramsey's theorem for pairs (RT^2_2) in reverse mathematics. The tree theorem for pairs (TT^2_2) is, however, a good candidate. The tree theorem states that for every finite coloring of tuples of comparable nodes in the full binary tree, there is a monochromatic subtree isomorphic to the full tree. The principle TT^2_2 is known to lie between ACA_0 and RT^2_2 over RCA_0, but its exact strength remains open. In this paper, we prove that RT^2_2 together with weak König's lemma (WKL_0) does not imply TT^2_2, thereby answering a question of Montalbán. This separation is a case in point of the method of Lerman, Solomon and Towsner for designing a computability-theoretic property that discriminates between two statements in reverse mathematics. We therefore emphasize the different steps leading to this separation, so that the paper can serve as a tutorial for separating principles in reverse mathematics.
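    For reference, a schematic formalization of TT^2_2 as described in the abstract (our rendering in standard notation, not quoted from the paper; [S]^2_comp denotes the pairs of comparable nodes of S):

        % TT^2_2: every 2-coloring of pairs of comparable nodes of the full
        % binary tree 2^{<omega} admits a monochromatic subtree isomorphic to it.
        \forall f\colon [2^{<\omega}]^{2}_{\mathrm{comp}} \to 2\;\;
          \exists S \subseteq 2^{<\omega}\;
          \bigl( S \cong 2^{<\omega} \ \wedge\ f \ \text{is constant on}\ [S]^{2}_{\mathrm{comp}} \bigr)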

    Strong Types for Direct Logic

    This article follows on the introductory article “Direct Logic for Intelligent Applications” [Hewitt 2017a]. Strong Types enable new mathematical theorems to be proved, including the Formal Consistency of Mathematics. Strong Types are also extremely important in Direct Logic because they block all known paradoxes [Cantini and Bruni 2017]. Blocking known paradoxes makes Direct Logic safer for use in Intelligent Applications by preventing security holes. Inconsistency Robustness is the performance of information systems with pervasively inconsistent information. The Inconsistency Robustness of the community of professional mathematicians is their performance in repeatedly repairing contradictions over the centuries. In the Inconsistency Robustness paradigm, deriving contradictions has been a progressive development and not a “game stopper.” Contradictions can be helpful instead of being something to be “swept under the rug” by denying their existence, which has been repeatedly attempted by authoritarian theoreticians (beginning with some Pythagoreans). Such denial has delayed mathematical development. This article reports how considerations of Inconsistency Robustness have recently influenced the foundations of mathematics for Computer Science, continuing a tradition of developing the sociological basis for foundations. Mathematics here means the common foundation of all classical mathematical theories, from Euclid to the mathematics used to prove Fermat's Last Theorem [McLarty 2010]. Direct Logic provides categorical axiomatizations of the Natural Numbers, Real Numbers, Ordinal Numbers, Set Theory, and the Lambda Calculus, meaning that up to a unique isomorphism there is only one model that satisfies the respective axioms. Good evidence for the consistency of Classical Direct Logic derives from how it blocks the known paradoxes of classical mathematics. Humans have spent millennia devising paradoxes for classical mathematics. Having a powerful system like Direct Logic is important in computer science because computers must be able to formalize all logical inferences (including inferences about their own inference processes) without requiring recourse to human intervention. Any inconsistency in Classical Direct Logic would be a potential security hole because it could be used to cause computer systems to adopt invalid conclusions. After [Church 1934], logicians faced the following dilemma: • 1st-order theories cannot be powerful, lest they fall into inconsistency because of Church's Paradox. • 2nd-order theories contravene the philosophical doctrine that theorems must be computationally enumerable. The above issues can be addressed by requiring Mathematics to be strongly typed, so that: • Mathematics self-proves that it is “open” in the sense that theorems are not computationally enumerable. • Mathematics self-proves that it is formally consistent. • Strong mathematical theories for Natural Numbers, Ordinals, Set Theory, the Lambda Calculus, Actors, etc. are inferentially decidable, meaning that every true proposition is provable and every proposition is either provable or disprovable. Furthermore, theorems of these theories are not enumerable by a provably total procedure.
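    A schematic reading of “inferentially decidable” as used in the last bullet (our rendering, not notation from the article): for every proposition φ of such a theory,

        % truth implies provability, and every proposition is settled
        % one way or the other
        (\vDash \varphi \;\Rightarrow\; \vdash \varphi)
          \quad\wedge\quad
        (\vdash \varphi \;\vee\; \vdash \neg\varphi)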

    Structural and Topological Graph Theory and Well-Quasi-Ordering

    In their Graph Minors series, Neil Robertson and Paul Seymour, among other great results, proved Wagner's conjecture, today known as the Robertson–Seymour theorem. At every step on the way to the final proof, each special case of the conjecture they proved was a consequence of a “structure theorem” asserting, roughly, that sufficiently general graphs contain minors or other sub-objects that are useful for the proof, or equivalently, that graphs which do not contain a useful minor have a certain restricted structure, which again yields information useful for the proof. The main object of this thesis is the presentation of relatively short proofs of several special cases of the Robertson–Seymour theorem, illustrating the interplay between structural graph theory and the well-quasi-ordering of graphs. We also present the proof of perhaps the most important special case of the Robertson–Seymour theorem, which states that embeddability in any fixed surface can be characterized by forbidding finitely many minors. The latter result is deduced as a well-quasi-ordering result, indicating the interplay between topological graph theory and well-quasi-ordering theory. Finally, we survey results on the well-quasi-ordering of classes of graphs under relations other than the minor relation.
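    A compact statement of the theorem at the center of the thesis (our rendering; ⪯ denotes the minor relation):

        % Robertson–Seymour: finite graphs are well-quasi-ordered by the
        % minor relation, i.e. every infinite sequence of graphs contains
        % a pair G_i ⪯ G_j with i < j.
        \forall\, G_1, G_2, G_3, \ldots \;\; \exists\, i < j \;\; G_i \preceq G_j

    Equivalently, every minor-closed class of graphs can be characterized by finitely many forbidden minors; the surface-embeddability case discussed above is an instance of this equivalent form.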

    Topological Complexity of Sets Defined by Automata and Formulas

    In this thesis we consider languages of infinite words or trees defined by automata of various types or by formulas of various logics. We ask about the highest possible position in the Borel or the projective hierarchy inhabited by sets defined in a given formalism. The answer to this question is called the topological complexity of the formalism. It is shown that the topological complexity of Monadic Second Order Logic extended with the unbounding quantifier (introduced by Bojańczyk to express some asymptotic properties) over ω-words is the whole projective hierarchy. We also give the exact topological complexities of related classes of languages recognized by nondeterministic ωB-, ωS- and ωBS-automata studied by Bojańczyk and Colcombet, and a lower complexity bound for an alternating variant of ωBS-automata. We present a series of results concerning bi-unambiguous languages of infinite trees, i.e. languages recognized by unambiguous parity tree automata whose complements are also recognized by unambiguous parity automata. We give an example of a bi-unambiguous tree language G that is analytic-complete. We present an operation σ on tree languages with the property that σ(L) is topologically harder than any language in the sigma-algebra generated by the languages continuously reducible to L. If the operation is applied to a bi-unambiguous language, then the result is also bi-unambiguous. We then show that the application of the operation can be iterated to obtain harder and harder languages. We also define another operation that enables a limit step of the iteration. Using the two operations we are able to construct a sequence of bi-unambiguous languages of increasing topological complexity, of length at least ω².
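    One way to render the hardness property of the operation σ described above (our schematic reading, not notation from the thesis; ≤_W denotes continuous, i.e. Wadge, reducibility):

        % σ(L) lies outside the σ-algebra generated by the languages
        % continuously reducible to L, and is therefore topologically
        % harder than every member of that algebra.
        \sigma(L) \;\notin\; \sigma\text{-}\mathrm{Alg}\bigl(\{\, M : M \le_{\mathrm{W}} L \,\}\bigr)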

    Ins and outs of Russell's theory of types

    The thesis examines A.N. Whitehead and B. Russell's Ramified Theory of Types (RTT). It consists of three parts. The first part is devoted to understanding the source of the impredicativity implicit in the induction principle. The question I raise here is whether second-order explicit definitions are responsible for the cases in which impredicativity turns pathological. The second part considers the interplay between the vicious-circle principle and the no-class theory. The main goal is to give an explanation of the predicative restrictions entailed by the vicious-circle principle. The explanation is that set-existence is parasitic upon prior predicative specifications. The justification for this claim is given by employing the method of a hierarchy of languages. Supposing the natural number structure and the language of Peano Arithmetic (PA) as given, I describe the construction of a set-theoretic language equipped with substitutionally interpreted quantifiers ranging over arithmetically definable sets. The third part considers the proposition-theoretic version of Russell's antinomy. A solution to this paradox is offered on the basis of the ramified hierarchy of propositions.
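    For orientation, the induction principle in its second-order form, whose impredicativity the first part analyses (a standard rendering, not quoted from the thesis):

        % the quantifier ∀X ranges over all properties of numbers, including
        % properties defined by formulas that quantify over that very
        % totality; this circularity is the impredicativity at issue.
        \forall X \,\bigl( X(0) \,\wedge\, \forall n\,( X(n) \to X(n+1) ) \;\to\; \forall n\, X(n) \bigr)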

    Computability in constructive type theory

    We give a formalised and machine-checked account of computability theory in the Calculus of Inductive Constructions (CIC), the constructive type theory underlying the Coq proof assistant. We first develop synthetic computability theory, pioneered by Richman, Bridges, and Bauer, where one treats all functions as computable, eliminating the need for a model of computation. We assume a novel parametric axiom for synthetic computability and give proofs of results like Rice's theorem, the Myhill isomorphism theorem, and the existence of Post's simple and hypersimple predicates, relying on no other axioms such as Markov's principle or choice axioms. As a second step, we introduce models of computation. We give a concise overview of definitions of various standard models and contribute machine-checked simulation proofs, which pose a non-trivial engineering effort. We identify a notion of synthetic undecidability relative to a fixed halting problem, allowing axiom-free machine-checked proofs of undecidability. We contribute such undecidability proofs for the historical foundational problems of computability theory, which require the identification of invariants left out in the literature and now form the basis of the Coq Library of Undecidability Proofs. We then identify the weak call-by-value λ-calculus L as a sweet spot for programming in a model of computation. We introduce a certifying extraction framework and analyse an axiom stating that every function of type ℕ → ℕ is L-computable.
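    A schematic rendering of the axiom analysed at the end of the abstract (our notation, not the thesis's: Λ_L stands for the set of terms of the calculus L, n̄ for an encoding of the number n, and ▷* for reduction in L):

        % CT for L: every function on the natural numbers is computed
        % by some term of the weak call-by-value λ-calculus L.
        \forall f\colon \mathbb{N} \to \mathbb{N}\;\;
          \exists s \in \Lambda_L\;\;
          \forall n \in \mathbb{N}\;\; s\,\overline{n} \,\rhd^{*}\, \overline{f(n)}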

    Safe data structure visualisation
