7 research outputs found

    Strong Types for Direct Logic

    This article follows on the introductory article “Direct Logic for Intelligent Applications” [Hewitt 2017a]. Strong Types enable new mathematical theorems to be proved, including the Formal Consistency of Mathematics. Strong Types are also extremely important in Direct Logic because they block all known paradoxes [Cantini and Bruni 2017]. Blocking known paradoxes makes Direct Logic safer for use in Intelligent Applications by preventing security holes. Inconsistency Robustness is the performance of information systems with pervasively inconsistent information. The Inconsistency Robustness of the community of professional mathematicians is their performance in repeatedly repairing contradictions over the centuries. In the Inconsistency Robustness paradigm, deriving contradictions has been a progressive development rather than a “game stopper.” Contradictions can be helpful instead of being something to be “swept under the rug” by denying their existence, which has been repeatedly attempted by authoritarian theoreticians (beginning with some Pythagoreans). Such denial has delayed mathematical development. This article reports how considerations of Inconsistency Robustness have recently influenced the foundations of mathematics for Computer Science, continuing a tradition of developing the sociological basis for foundations. Mathematics here means the common foundation of all classical mathematical theories, from Euclid to the mathematics used to prove Fermat’s Last Theorem [McLarty 2010]. Direct Logic provides categorical axiomatizations of the Natural Numbers, Real Numbers, Ordinal Numbers, Set Theory, and the Lambda Calculus, meaning that up to a unique isomorphism there is only one model that satisfies the respective axioms. Good evidence for the consistency of Classical Direct Logic derives from how it blocks the known paradoxes of classical mathematics; humans have spent millennia devising paradoxes for classical mathematics. Having a powerful system like Direct Logic is important in computer science because computers must be able to formalize all logical inferences (including inferences about their own inference processes) without requiring recourse to human intervention. Any inconsistency in Classical Direct Logic would be a potential security hole because it could be used to cause computer systems to adopt invalid conclusions. After [Church 1934], logicians faced the following dilemma:
    • 1st-order theories cannot be powerful lest they fall into inconsistency because of Church’s Paradox.
    • 2nd-order theories contravene the philosophical doctrine that theorems must be computationally enumerable.
    The above issues can be addressed by requiring Mathematics to be strongly typed so that:
    • Mathematics self-proves that it is “open” in the sense that theorems are not computationally enumerable.
    • Mathematics self-proves that it is formally consistent.
    • Strong mathematical theories for the Natural Numbers, Ordinals, Set Theory, the Lambda Calculus, Actors, etc. are inferentially decidable, meaning that every true proposition is provable and every proposition is either provable or disprovable. Furthermore, theorems of these theories are not enumerable by a provably total procedure.
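
    As a hedged illustration of the mechanism this abstract appeals to, the following is a textbook example of a typed membership relation blocking a paradox; it is a minimal sketch, not Hewitt’s Direct Logic formalization. Under unrestricted comprehension one can form Russell’s set and derive a contradiction:

        \[
          R \;=\; \{\, x \mid x \notin x \,\}
          \qquad\Longrightarrow\qquad
          R \in R \;\Longleftrightarrow\; R \notin R .
        \]
        % If membership is typed, e.g.  \in \;:\; \tau \times \mathrm{Set}(\tau) \to \mathrm{Prop},
        % then the formula  x \in x  would require  x : \tau  and  x : \mathrm{Set}(\tau)  at once,
        % so the defining property of R is ill-typed and the contradiction cannot be derived.

    The same move, ruling out ill-typed self-referential propositions before any inference happens, is the kind of thing the abstract credits Strong Types with doing for all known paradoxes of classical mathematics.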

    Making Presentation Math Computable

    This open-access book addresses the issue of translating mathematical expressions from LaTeX to the syntax of Computer Algebra Systems (CAS). Over the past decades, especially in the Science, Technology, Engineering, and Mathematics (STEM) domain, LaTeX has become the de facto standard for typesetting mathematical formulae in publications. Since scientists are generally required to publish their work, LaTeX has become an integral part of today's publishing workflow. On the other hand, modern research increasingly relies on CAS to simplify, manipulate, compute, and visualize mathematics. However, existing LaTeX import functions in CAS are limited to simple arithmetic expressions and are therefore insufficient for most use cases. Consequently, the workflow of experimenting and publishing in the sciences often includes time-consuming and error-prone manual conversions between presentational LaTeX and computational CAS formats. To address the lack of a reliable and comprehensive translation tool between LaTeX and CAS, this thesis makes the following three contributions. First, it provides an approach to semantically enhance LaTeX expressions with sufficient semantic information for translation into CAS syntaxes. Second, it demonstrates LaCASt, the first context-aware LaTeX-to-CAS translation framework. Third, the thesis provides a novel approach to evaluate the performance of LaTeX-to-CAS translations on large-scale datasets with automatic verification of equations in digital mathematical libraries.
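
    As a hedged sketch of the underlying translation problem (using SymPy's bundled LaTeX parser rather than LaCASt; the sample expressions are illustrative assumptions, and the parser requires the antlr4-python3-runtime package to be installed), the snippet below shows both what a LaTeX-to-CAS import can do and where context-free parsing falls short:

        # Minimal sketch of LaTeX -> CAS translation using SymPy (not LaCASt).
        from sympy import simplify
        from sympy.parsing.latex import parse_latex

        # Simple arithmetic-style expressions translate cleanly ...
        expr = parse_latex(r"\frac{x^{2} - 1}{x - 1}")
        print(expr)            # (x**2 - 1)/(x - 1)
        print(simplify(expr))  # x + 1  -- the CAS can now compute with the formula

        # ... but presentational LaTeX is often ambiguous without context:
        # "\pi(x)" may denote the prime-counting function or "pi times x", and
        # "f(x+1)" may be a function application or a product. Resolving such
        # cases from the surrounding text is what a context-aware translator
        # such as LaCASt is designed to do.

    The point of the sketch is the gap it exposes: a plain parser maps each symbol to a single default meaning, whereas the thesis argues for first enriching the LaTeX with semantic information and only then translating it into a specific CAS syntax.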

    Simulations, explication, compréhension : essai d’analyse critique

    In this paper I investigate the potential explanatory value of computer simulations, which are often said to be suitable tools for the prediction, emulation or imitation of phenomena, but not for their explanation. Similarly, they are described as providing brute-force (number-crunching) methods that are helpful for investigating the behaviour of physical systems but less so when it comes to understanding them. Be this as it may, rightly or wrongly, simulations seem to raise specific problems concerning their potential explanatory value. I try to systematically analyze these problems, using existing theories of explanation as analytical guidelines. I first investigate how simulations fare concerning truth (section 2) and proceed by analyzing whether they satisfy the deductivity and nomicity requirements, which play a central role in Hempel’s model of explanation (section 3). I also study whether simulations are appropriate vehicles for the relevant causal information which a good explanation is expected to provide (section 4). I then analyze how much the informational repleteness and the inferential or computational stodginess of computer simulations are problems for the development of explanatory knowledge and our understanding of phenomena (section 5). I finally analyze how much computer simulations can have a unificatory role, as good explanations arguably do (section 6). Overall, I try to analyze why computer simulations still raise specific problems concerning explanation while seemingly being able to meet the criteria that good explanations should fulfil. I suggest that some of these reasons are to be found in the epistemology of explanatory activity, in methodological expectations towards good explanations, and in the specific uses that are made of computer simulations to investigate complex systems, in addition to the fact that computer simulations are not human-sized activities and that, as such, they may fail to provide first-person understanding of phenomena.

    Social processes, program verification and all that

    Contains fulltext: 75843.pdf (publisher's version) (Open Access). 24 p.