Proof search issues in some non-classical logics
This thesis develops techniques and ideas on proof search. Proof search is used with one of two meanings. Proof search can be thought of either as the search for a yes/no answer to a query (theorem proving), or as the search for all proofs of a formula (proof enumeration). This thesis is an investigation into issues in proof search in both these senses for some non-classical logics.
Gentzen systems are well suited for use in proof search in both senses. The rules of Gentzen sequent calculi are such that implementations can be directed by the top level syntax of sequents, unlike other logical calculi such as natural deduction. All the calculi for proof search in this thesis are Gentzen sequent calculi.
In Chapter 2, permutation of inference rules for Intuitionistic Linear Logic is studied. A focusing calculus, ILLF, in the style of Andreoli ([And92]) is developed. This calculus allows only one proof in each equivalence class of proofs equivalent up to permutations of inferences. The issue here is both theorem proving and proof enumeration.
For certain logics, normal natural deductions provide a proof-theoretic semantics. Proof enumeration is then the enumeration of all these deductions. Herbelin’s cut-free LJT ([Her95], here called MJ) is a Gentzen system for intuitionistic logic allowing derivations that correspond in a 1–1 way to the normal natural deductions of intuitionistic logic. This calculus is therefore well suited to proof enumeration. Such calculi are called ‘permutation-free’ calculi. In Chapter 3, MJ is extended to a calculus for an intuitionistic modal logic (due to Curry) called Lax Logic. We call this calculus PFLAX. The proof theory of MJ is extended to PFLAX.
Chapter 4 presents work on theorem proving for propositional logics using a history mechanism for loop-checking. This mechanism is a refinement of one developed by Heuerding et al ([HSZ96]). It is applied to two calculi for intuitionistic logic and also to two modal logics: Lax Logic and intuitionistic S4. The calculi for intuitionistic logic are compared both theoretically and experimentally with other decision procedures for the logic.
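The thesis refines Heuerding-style histories for specific sequent calculi; as a generic, hedged illustration of the underlying idea (fail a branch whenever a sequent reappears on the current search path), here is a minimal goal-directed prover for a small implicational fragment of intuitionistic logic. The clause encoding, the `prove` function, and its backchaining strategy are my own simplifications and deliberately incomplete; only the history check reflects the loop-checking mechanism discussed above:

```python
def prove(ctx, goal, history=frozenset()):
    """Goal-directed search with a history-based loop check.

    ctx:  frozenset of hypotheses; atoms are strings,
          implications are triples ('imp', antecedent, consequent).
    """
    seq = (ctx, goal)
    if seq in history:        # loop check: this sequent already occurs
        return False          # on the current path, so prune the branch
    history = history | {seq}
    if isinstance(goal, tuple) and goal[0] == 'imp':
        # right rule for ->, selected by the goal's top-level connective:
        # move the antecedent into the context, prove the consequent
        return prove(ctx | {goal[1]}, goal[2], history)
    if goal in ctx:           # axiom: atomic goal among the hypotheses
        return True
    # backchain on any implication whose consequent is the goal
    return any(prove(ctx, f[1], history)
               for f in ctx
               if isinstance(f, tuple) and f[0] == 'imp' and f[2] == goal)

print(prove(frozenset(), ('imp', 'p', 'p')))       # True
print(prove(frozenset({('imp', 'p', 'p')}), 'p'))  # False, but terminates
```

Because no sequent may repeat on a path and only finitely many sequents can be built from the subformulae of the input, the search always terminates; without the history check, the second call above would loop forever.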
Chapter 5 is a short investigation of embedding intuitionistic logic in Intuitionistic Linear Logic. A new embedding of intuitionistic logic in Intuitionistic Linear Logic is given. For the hereditary Harrop fragment of intuitionistic logic, this embedding induces the calculus MJ for intuitionistic logic.
In Chapter 6 a ‘permutation-free’ calculus is given for Intuitionistic Linear Logic. Again, its proof-theoretic properties are investigated. The calculus is proved to be sound and complete with respect to a proof-theoretic semantics and (weak) cut-elimination is proved.
Logic programming can be thought of as proof enumeration in constructive logics. All the proof enumeration calculi in this thesis have been developed with logic programming in mind. We discuss at the appropriate points the relationship between the calculi developed here and logic programming.
Appendix A contains presentations of the logical calculi used and Appendix B contains the sets of benchmark formulae used in Chapter 4.
Linear Logic and Noncommutativity in the Calculus of Structures
In this thesis I study several deductive systems for linear logic, its fragments, and some noncommutative extensions. All systems will be designed within the calculus of structures, which is a proof-theoretical formalism for specifying logical systems, in the tradition of Hilbert's formalism, natural deduction, and the sequent calculus. Systems in the calculus of structures are based on two simple principles: deep inference and top-down symmetry. Together they have remarkable consequences for the properties of the logical systems. For example, for linear logic it is possible to design a deductive system in which all rules are local. In particular, the contraction rule is reduced to an atomic version, and there is no global promotion rule. I will also show an extension of multiplicative exponential linear logic by a noncommutative, self-dual connective which is not representable in the sequent calculus. All systems enjoy the cut elimination property. Moreover, this can be proved independently from the sequent calculus via techniques that are based on the new top-down symmetry. Furthermore, for all systems, I will present several decomposition theorems which constitute a new type of normal form for derivations.
Randomization does not help much, comparability does
Following Fisher, it is widely believed that randomization "relieves the experimenter from the anxiety of considering innumerable causes by which the data may be disturbed." In particular, it is said to control for known and unknown nuisance factors that may considerably challenge the validity of a result. Looking for quantitative advice, we study a number of straightforward, mathematically simple models. However, they all demonstrate that the optimism with respect to randomization is wishful thinking rather than based on fact. In small to medium-sized samples, random allocation of units to treatments typically yields a considerable imbalance between the groups, i.e., confounding due to randomization is the rule rather than the exception.
In the second part of this contribution, we extend the reasoning to a number of traditional arguments for and against randomization. This discussion is rather non-technical, and at times even "foundational" (Frequentist vs. Bayesian). However, its result turns out to be quite similar. While randomization's contribution remains questionable, comparability contributes much to a compelling conclusion. Summing up, classical experimentation based on sound background theory and the systematic construction of exchangeable groups seems to be advisable.
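As a self-contained illustration of the abstract's central claim (this simulation is my own and not one of the paper's models; the function name and all parameters are illustrative), randomly splitting a small sample into two groups typically leaves a visible imbalance on a binary nuisance covariate:

```python
import random

def average_imbalance(n_per_group=10, trials=2000, seed=0):
    """Mean absolute difference in covariate prevalence between two
    randomly allocated groups, averaged over many replications."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # each unit carries an unobserved binary nuisance factor
        units = [rng.randint(0, 1) for _ in range(2 * n_per_group)]
        rng.shuffle(units)                 # random allocation to groups
        g1, g2 = units[:n_per_group], units[n_per_group:]
        total += abs(sum(g1) - sum(g2)) / n_per_group
    return total / trials

print(average_imbalance())  # well above 0.1 for groups of ten
```

With ten units per group the groups typically differ by well over ten percentage points in covariate prevalence, in line with the claim that confounding due to randomization is the rule rather than the exception in small samples.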
Logiche sottostrutturali e calcolo di Lambek
Thesis archived under Ordinance no. 227/2017 of 25 July. This thesis aims to give a brief idea of the problem of substructural logics and the tools needed to discuss them.
In the first part of the first chapter we introduce the sequent calculus for classical logic, presenting its rules and its formalism, so as to have the essential tools to get to the heart of the matter. In the second part we approach the problem of substructural logics from a historical point of view, identifying its origins and retracing, in broad outline, its history.
In the second chapter we treat the structural rules systematically. We first ask what role they play in defining the characteristics of a logic, and how they relate to the operational rules. We then focus on the benefits (or drawbacks) they bring to the calculus.
In the third chapter we survey a range of substructural logics and see how the calculi change as the postulates vary. We also see how sequents can be read.
In the fourth chapter we examine alternative methods for reintroducing the structural rules into our calculus. Indeed, while we have good reasons to eliminate them, it is undeniable that they increase the deductive power of the calculus. We therefore try to reintroduce them at a higher level. We consider only four methodologies; many others can be found in the literature, but the most widely used are certainly hypersequents and display calculi.
In the fifth and sixth chapters we deal with the algebra of substructural logics. In particular, in the fifth chapter we provide all the tools needed to discuss algebraic semantics (the subject of the sixth chapter). In the first part we give the preliminary definitions needed to define the -lattices, which will be the basic algebraic structure from which we build the algebraic models for all substructural logics.
In the last chapter we focus on our case study: the Lambek calculus. We first see the original variant and then see how to extend it into what is known as the free Lambek calculus, devised by Ono.
As already said, this work is a brief and simple introduction to a problem that, over the last twenty years (and perhaps more), has become one of the most discussed topics in contemporary logic. Many logicians devote themselves to the study of this particular class of logics. Substructural logics, besides being interesting from a philosophical and proof-theoretic point of view, have many applications in algebra; in particular, from the Lambek calculi, or rather from their extensions, a particular branch of logic has developed: algebraic proof theory. The fathers of this border discipline are A. Ciabattoni, N. Galatos and K. Terui.
Learning from samples using coherent lower previsions
This thesis's main subject is deriving, proposing, and studying predictive and parametric inference models that are based on the theory of coherent lower previsions. One important side subject also appears: obtaining and discussing extreme lower probabilities.
In the chapter ‘Modeling uncertainty’, I give an introductory overview of the theory of coherent lower previsions ─ also called the theory of imprecise probabilities ─ and its underlying ideas. This theory allows us to give a more expressive ─ and a more cautious ─ description of uncertainty. This overview is original in the sense that ─ more than other introductions ─ it is based on the intuitive theory of coherent sets of desirable gambles.
I show in the chapter ‘Extreme lower probabilities’ how to obtain the most extreme forms of uncertainty that can be modeled using lower probabilities. Every other state of uncertainty describable by lower probabilities can be formulated in terms of these extreme ones. The importance of the results I obtain and extensively discuss in this area is currently mostly theoretical.
The chapter ‘Inference models’ treats learning from samples from a finite, categorical space. My most basic assumption about the sampling process is that it is exchangeable, for which I give a novel definition in terms of desirable gambles. My investigation of the consequences of this assumption leads us to some important representation theorems: uncertainty about (in)finite sample sequences can be modeled entirely in terms of category counts (frequencies). I build on this to give an elucidating derivation from first principles for two popular inference models for categorical data ─ the predictive imprecise Dirichlet-multinomial model and the parametric imprecise Dirichlet model; I apply these models to game theory and learning Markov chains.
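The predictive imprecise Dirichlet-multinomial model mentioned above admits a standard closed form for the lower and upper probabilities of the next observation: with category counts n_j, total N, and hyperparameter s, they are n_j/(N+s) and (n_j+s)/(N+s). A minimal sketch of this textbook form (the function name, the example counts, and the choice s = 2 are illustrative, not taken from the thesis's derivation):

```python
def idm_predictive(counts, s=2.0):
    """Lower/upper predictive probability of each observed category
    under the imprecise Dirichlet-multinomial model."""
    n = sum(counts.values())
    return {c: (k / (n + s), (k + s) / (n + s)) for c, k in counts.items()}

# after observing category 'A' three times and 'B' once, with s = 2:
print(idm_predictive({'A': 3, 'B': 1}))
# 'A' gets (0.5, 5/6) and 'B' gets (1/6, 0.5)
```

The gap between lower and upper probability shrinks as N grows, reflecting the cautious-then-converging behavior of imprecise-probabilistic inference; a category never observed keeps lower probability 0 but upper probability s/(N+s).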
In the last chapter, ‘Inference models for exponential families’, I enlarge the scope to exponential family sampling models; examples are normal sampling and Poisson sampling. I first thoroughly investigate exponential families and the related conjugate parametric and predictive previsions used in classical Bayesian inference models based on conjugate updating. These previsions serve as a basis for the new imprecise-probabilistic inference models I propose. Compared to the classical Bayesian approach, mine allows us to be much more cautious when trying to express what we know about the sampling model; this caution is reflected in behavior (conclusions drawn, predictions made, decisions made) based on these models. Lastly, I show how the proposed inference models can be used for classification with the naive credal classifier.
Historical and Conceptual Foundations of Information Physics
The main objective of this dissertation is to philosophically assess how the use of informational concepts in the field of classical thermostatistical physics has historically evolved from the late 1940s to the present day. I will first analyze in depth the main notions that form the conceptual basis on which 'informational physics' historically unfolded, encompassing (i) different entropy, probability and information notions, (ii) their multiple interpretative variations, and (iii) the formal, numerical and semantic-interpretative relationships among them. In the following, I will assess the history of informational thermophysics during the second half of the twentieth century. Firstly, I analyze the intellectual factors that gave rise to this current in the late forties (i.e., popularization of Shannon's theory, interest in a naturalized epistemology of science, etc.), then study its consolidation in the Brillouinian and Jaynesian programs, and finally show how Carnap (1977) and his disciples tried to criticize this tendency within the scientific community.
Then, I evaluate how informational physics became a predominant intellectual current in the scientific community in the nineties, made possible by the convergence of Jaynesianism and Brillouinism in proposals such as that of Tribus and McIrvine (1971) or Bekenstein (1973) and the application of algorithmic information theory to the thermophysical domain. As a sign of its radicality at this historical stage, I explore the main proposals to include information as part of our physical reality, such as Wheeler’s (1990), Stonier’s (1990) or Landauer’s (1991), detailing the main philosophical arguments (e.g., Timpson, 2013; Lombardi et al. 2016a) against those inflationary attitudes towards information. Following this historical assessment, I systematically analyze whether the descriptive exploitation of informational concepts has historically contributed to providing us with knowledge of thermophysical reality via (i) explaining thermal processes such as equilibrium approximation, (ii) advantageously predicting thermal phenomena, or (iii) enabling understanding of thermal properties such as thermodynamic entropy. I argue that these epistemic shortcomings would make it impossible to draw ontological conclusions in a justified way about the physical nature of information. In conclusion, I will argue that the historical exploitation of informational concepts has not contributed significantly to the epistemic progress of thermophysics. This would lead us to characterize informational proposals as 'degenerate science' (à la Lakatos 1978a) regarding classical thermostatistical physics, or as theoretically underdeveloped regarding the study of the cognitive dynamics of scientists in this physical domain.