178 research outputs found

    Recovery operators, paraconsistency and duality

    There are two foundational, but not fully developed, ideas in paraconsistency, namely, the duality between the paraconsistent and intuitionistic paradigms, and the introduction of logical operators that express meta-logical notions in the object language. The aim of this paper is to show how these two ideas can be adequately accomplished by the Logics of Formal Inconsistency (LFIs) and by the Logics of Formal Undeterminedness (LFUs). LFIs recover the validity of the principle of explosion in a paraconsistent scenario, while LFUs recover the validity of the principle of excluded middle in a paracomplete scenario. We introduce definitions of duality between inference rules and connectives that allow the comparison of rules and connectives belonging to different logics. Two formal systems are studied, the logics mbC and mbD, that display the duality between paraconsistency and paracompleteness as a duality between inference rules added to a common core; in the case studied here, this common core is classical positive propositional logic (CPL+). The logics mbC and mbD are equipped with recovery operators that restore classical logic for, respectively, consistent and determined propositions. These two logics are then combined, obtaining a pair of logics of formal inconsistency and undeterminedness (LFIUs), namely mbCD and mbCDE. The logic mbCDE exhibits some nice duality properties. Moreover, it is simultaneously paraconsistent and paracomplete, and is able to recover the principles of excluded middle and explosion at once. The last sections offer an algebraic account of these logics by adapting the swap-structures semantics framework of the LFIs to the LFUs. This semantics highlights some subtle aspects of these logics and allows us to prove decidability by means of finite non-deterministic matrices.
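    For orientation, the recovery pattern can be sketched as follows; this is a hedged illustration of the standard LFI/LFU axiom shapes, and the paper's own notation and axioms for mbC and mbD may differ. A consistency operator restores explosion only for propositions marked consistent, and a dual determinedness operator restores excluded middle only for propositions marked determined.

```latex
% Hedged sketch; \circ and \star are illustrative names, not necessarily
% the paper's notation. Gentle explosion, recovered only for consistent \alpha:
\circ\alpha \;\to\; \bigl(\alpha \to (\neg\alpha \to \beta)\bigr)
% Dually, excluded middle recovered only for determined \alpha:
\star\alpha \;\to\; (\alpha \lor \neg\alpha)
```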

    Determinism and looping in combinatory PDL

    In this paper some propositional modal logics of programs are considered, based on the system CPDL (Combinatory PDL), an extension of PDL with proper names for states. These proper names are atomic formulae that are satisfied at exactly one state in each model. Among other things (e.g., decidability and finite-model-property results), a version of Streett's conjecture that his axioms do axiomatize the infinite repetition construct repeat is established with respect to CPDL.
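    As a hedged sketch of the two ingredients (illustrative notation, not necessarily the paper's): a proper name i is interpreted as a singleton set of states, and Streett's repeat construct asserts that a program can be iterated forever.

```latex
% Hedged sketch: a proper name (nominal) i denotes exactly one state,
\|i\|^{\mathcal{M}} = \{s_i\} \quad \text{for a unique state } s_i,
% and repeat(\pi) holds at s iff \pi admits an infinite run from s:
\mathcal{M}, s \models \mathsf{repeat}(\pi) \;\iff\;
  \exists\, s = s_0, s_1, s_2, \ldots \text{ with } (s_k, s_{k+1}) \in \|\pi\|^{\mathcal{M}} \text{ for all } k \ge 0.
```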

    On Model-Checking Higher-Order Effectful Programs (Long Version)

    Model checking is one of the most powerful techniques for verifying systems and programs, and since the pioneering results by Knapik et al., Ong, and Kobayashi, it has been known to be applicable to functional programs with higher-order types against properties expressed by formulas of monadic second-order logic. What happens when the program in question, in addition to higher-order functions, also exhibits algebraic effects such as probabilistic choice or global store? The results in the literature range from those, mostly positive, about nondeterministic effects, to those about probabilistic effects, in the presence of which even mere reachability becomes undecidable. This work takes a fresh and general look at the problem, first of all showing that there is an elegant and natural way of viewing higher-order programs producing algebraic effects as ordinary higher-order recursion schemes. We then move on to consider effect handlers, showing that in their presence the model-checking problem is bound to be undecidable in the general case, while it stays decidable when handlers have a simple syntactic form, still sufficient to capture so-called generic effects. Along the way we hint at what a general specification language could look like, thereby justifying some of the results in the literature and deriving new ones.
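    A hedged, much-simplified sketch of the first idea, with illustrative names only (this is not the paper's construction): the effect operation of a program is kept as an uninterpreted node in the tree the program generates, so the effectful higher-order program can be treated like an ordinary higher-order recursion scheme producing that tree.

```haskell
-- Hedged sketch: "Choose" stands for a binary algebraic effect (e.g. a
-- nondeterministic or probabilistic choice) kept as an uninterpreted
-- tree constructor rather than being executed.
data Tree = Leaf String | Choose Tree Tree
  deriving Show

-- A higher-order producer in the style of a recursion scheme: it takes a
-- function argument and unfolds an infinite tree, recording every use of
-- the effect as a Choose node.
produce :: (Int -> Tree) -> Int -> Tree
produce step n = Choose (step n) (produce step (n + 1))

-- Truncate the lazily generated infinite tree for inspection.
takeTree :: Int -> Tree -> Tree
takeTree 0 _            = Leaf "..."
takeTree d (Choose l r) = Choose (takeTree (d - 1) l) (takeTree (d - 1) r)
takeTree _ t            = t

main :: IO ()
main = print (takeTree 3 (produce (Leaf . show) 0))
```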

    Do Goedel's incompleteness theorems set absolute limits on the ability of the brain to express and communicate mental concepts verifiably?

    Classical interpretations of Goedel's formal reasoning imply that the truth of some arithmetical propositions of any formal mathematical language, under any interpretation, is essentially unverifiable. However, a language of general scientific discourse cannot allow its mathematical propositions to be interpreted ambiguously. Such a language must, therefore, define mathematical truth verifiably. We consider a constructive interpretation of classical Tarskian truth, and of Goedel's reasoning, under which any formal system of Peano Arithmetic is verifiably complete. We show how some paradoxical concepts of quantum mechanics can be expressed, and interpreted, naturally under a constructive definition of mathematical truth. Comment: 73 pages; this is an updated version of the NQ essay; an HTML version is available at http://alixcomsi.com/Do_Goedel_incompleteness_theorems.ht

    Mechanised metamathematics : an investigation of first-order logic and set theory in constructive type theory

    In this thesis, we investigate several key results in the canon of metamathematics, applying the contemporary perspective of formalisation in constructive type theory and mechanisation in the Coq proof assistant. Concretely, we consider the central completeness, undecidability, and incompleteness theorems of first-order logic as well as properties of the axiom of choice and the continuum hypothesis in axiomatic set theory. Due to their fundamental role in the foundations of mathematics and their technical intricacies, these results have a long tradition of codification as standard literature and, in more recent investigations, increasingly serve as a benchmark for computer mechanisation. With the present thesis, we continue this tradition by uniformly analysing the aforementioned cornerstones of metamathematics in the formal framework of constructive type theory. This programme offers novel insights into the constructive content of completeness, a synthetic approach to undecidability and incompleteness that largely eliminates the notorious tedium obscuring the essence of their proofs, as well as natural representations of set theory in the form of a second-order axiomatisation and of a fully type-theoretic account. The mechanisation concerning first-order logic is organised as a comprehensive Coq library open to usage and contribution by external users.
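    Two of the statement shapes involved can be sketched as follows; this is a hedged illustration in generic notation, not the thesis's own Coq definitions. Completeness relates semantic entailment to deduction, and the synthetic approach expresses undecidability via many-one reductions from a fixed undecidable seed problem such as the halting problem.

```latex
% Hedged sketch, generic notation. Completeness of a first-order proof
% system \vdash with respect to Tarski-style entailment \vDash:
\mathcal{T} \vDash \varphi \;\Longrightarrow\; \mathcal{T} \vdash \varphi
% Synthetic undecidability of a problem P, via a many-one reduction from a
% seed problem H (e.g. the halting problem of a fixed model of computation):
H \preceq_m P, \qquad \text{where} \quad
  p \preceq_m q \;:\equiv\; \exists f.\;\forall x.\; p(x) \leftrightarrow q(f(x)).
```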

    Hierarchical combination of intruder theories

    Recently, automated deduction tools have proved to be very effective for detecting attacks on cryptographic protocols. These analyses can be improved, so as to find more subtle weaknesses, by a more accurate modelling of the operators employed by protocols. Several works have shown how to handle a single algebraic operator (associated with a fixed intruder theory) or how to combine several operators satisfying disjoint theories. However, several interesting equational theories, such as exponentiation with an abelian group law for exponents, remain out of the scope of these techniques. This has motivated us to introduce a new notion of hierarchical combination for non-disjoint intruder theories and to show decidability results for the deduction problem in these theories. We have also shown that, under natural hypotheses, hierarchical intruder constraints can be decided. This result applies to an exponentiation theory that appears to be more general than the one considered before.
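    A hedged example of the kind of non-disjoint theory at stake (Diffie-Hellman-style exponentiation whose exponents carry an abelian group structure; the paper's exact signature may differ):

```latex
% Hedged example: exponentiation interacting with an abelian group law
% \cdot on exponents, so the two subtheories share the exponent symbols.
\exp(\exp(g, x), y) = \exp(g, x \cdot y)
% Abelian group axioms for the exponents:
x \cdot y = y \cdot x, \qquad (x \cdot y) \cdot z = x \cdot (y \cdot z), \qquad
x \cdot 1 = x, \qquad x \cdot x^{-1} = 1
```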

    Strong Types for Direct Logic

    This article follows on the introductory article “Direct Logic for Intelligent Applications” [Hewitt 2017a]. Strong Types enable new mathematical theorems to be proved, including the Formal Consistency of Mathematics. Also, Strong Types are extremely important in Direct Logic because they block all known paradoxes [Cantini and Bruni 2017]. Blocking known paradoxes makes Direct Logic safer for use in Intelligent Applications by preventing security holes. Inconsistency Robustness is the performance of information systems with pervasively inconsistent information. The Inconsistency Robustness of the community of professional mathematicians is their performance in repeatedly repairing contradictions over the centuries. In the Inconsistency Robustness paradigm, deriving contradictions has been a progressive development and not a “game stopper.” Contradictions can be helpful instead of being something to be “swept under the rug” by denying their existence, which has been repeatedly attempted by authoritarian theoreticians (beginning with some Pythagoreans). Such denial has delayed mathematical development. This article reports how considerations of Inconsistency Robustness have recently influenced the foundations of mathematics for Computer Science, continuing a tradition of developing the sociological basis for foundations. Mathematics here means the common foundation of all classical mathematical theories from Euclid to the mathematics used to prove Fermat's Last Theorem [McLarty 2010]. Direct Logic provides categorical axiomatizations of the Natural Numbers, Real Numbers, Ordinal Numbers, Set Theory, and the Lambda Calculus, meaning that, up to a unique isomorphism, there is only one model that satisfies the respective axioms. Good evidence for the consistency of Classical Direct Logic derives from how it blocks the known paradoxes of classical mathematics. Humans have spent millennia devising paradoxes for classical mathematics. Having a powerful system like Direct Logic is important in computer science because computers must be able to formalize all logical inferences (including inferences about their own inference processes) without requiring recourse to human intervention. Any inconsistency in Classical Direct Logic would be a potential security hole because it could be used to cause computer systems to adopt invalid conclusions. After [Church 1934], logicians faced the following dilemma:
    • 1st-order theories cannot be powerful lest they fall into inconsistency because of Church’s Paradox.
    • 2nd-order theories contravene the philosophical doctrine that theorems must be computationally enumerable.
    The above issues can be addressed by requiring Mathematics to be strongly typed, so that:
    • Mathematics self-proves that it is “open” in the sense that theorems are not computationally enumerable.
    • Mathematics self-proves that it is formally consistent.
    • Strong mathematical theories for Natural Numbers, Ordinals, Set Theory, the Lambda Calculus, Actors, etc. are inferentially decidable, meaning that every true proposition is provable and every proposition is either provable or disprovable. Furthermore, theorems of these theories are not enumerable by a provably total procedure.
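    For orientation on the categoricity claim, the classical reference point (not Direct Logic's own axiomatization) is Dedekind's theorem: with the full second-order induction axiom, any two models of the Peano axioms are related by a unique isomorphism.

```latex
% Classical reference point, not Direct Logic's own formulation: full
% second-order induction quantifies over all properties P of numbers,
\forall P.\;\bigl(P(0) \land \forall n.\,(P(n) \to P(n{+}1))\bigr) \to \forall n.\,P(n)
% and Dedekind's theorem states that any two models of 0, successor and
% this axiom are related by a unique isomorphism.
```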

    Foundations of Software Science and Computation Structures

    This open access book constitutes the proceedings of the 23rd International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 31 regular papers presented in this volume were carefully reviewed and selected from 98 submissions. The papers cover topics such as categorical models and logics; language theory, automata, and games; modal, spatial, and temporal logics; type theory and proof theory; concurrency theory and process calculi; rewriting theory; semantics of programming languages; program analysis, correctness, transformation, and verification; logics of programming; software specification and refinement; models of concurrent, reactive, stochastic, distributed, hybrid, and mobile systems; emerging models of computation; logical aspects of computational complexity; models of software security; and logical foundations of databases.

    A Simple Constraint-solving Decision Procedure for Protocols with Exclusive or

    We present a procedure for deciding the security of protocols employing the exclusive-or operator. This procedure relies on a direct combination of a constraint solver for security protocols with a unification algorithm for the exclusive-or theory. Hence, compared to previous ones, it is much simpler and more easily amenable to automation. The principle of the approach can be applied to other theories too.
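    A hedged sketch of the equational reasoning such a procedure builds on (illustrative names only, not the paper's algorithm): exclusive-or terms are handled modulo associativity, commutativity, x + x = 0 and x + 0 = x, so a term can be normalised to the collection of atoms occurring an odd number of times.

```haskell
import Data.List (group, sort)

-- Hedged sketch: atoms stand for protocol messages, treated as opaque here.
type Atom = String

-- An XOR term is the exclusive-or of a list of atoms; the empty list is 0.
newtype Xor = Xor [Atom]
  deriving Show

-- Normalise modulo AC, x+x = 0 and x+0 = x: sort the atoms and keep only
-- those with odd multiplicity.
normalise :: Xor -> Xor
normalise (Xor xs) = Xor [head g | g <- group (sort xs), odd (length g)]

-- Equality modulo the XOR theory: the kind of check that equational
-- unification and constraint-solving steps are built on.
equalXor :: Xor -> Xor -> Bool
equalXor s t = atoms (normalise s) == atoms (normalise t)
  where atoms (Xor a) = a

main :: IO ()
main = print (equalXor (Xor ["k", "m", "k"]) (Xor ["m"]))  -- True
```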