
    The Significance of Evidence-based Reasoning for Mathematics, Mathematics Education, Philosophy and the Natural Sciences

    In this multi-disciplinary investigation we show how an evidence-based perspective of quantification---in terms of algorithmic verifiability and algorithmic computability---admits evidence-based definitions of well-definedness and effective computability, which yield two unarguably constructive interpretations of the first-order Peano Arithmetic PA---over the structure N of the natural numbers---that are complementary, not contradictory. The first yields the weak, standard, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically verifiable Tarskian truth values to the formulas of PA under the interpretation. The second yields a strong, finitary, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically computable Tarskian truth values to the formulas of PA under the interpretation. We situate our investigation within a broad analysis of quantification vis-à-vis:
    * Hilbert's epsilon-calculus
    * Gödel's omega-consistency
    * The Law of the Excluded Middle
    * Hilbert's omega-Rule
    * An Algorithmic omega-Rule
    * Gentzen's Rule of Infinite Induction
    * Rosser's Rule C
    * Markov's Principle
    * The Church-Turing Thesis
    * Aristotle's particularisation
    * Wittgenstein's perspective of constructive mathematics
    * An evidence-based perspective of quantification.
    By showing how these are formally inter-related, we highlight the fragility of both the persisting, theistic, classical/Platonic interpretation of quantification grounded in Hilbert's epsilon-calculus; and the persisting, atheistic, constructive/Intuitionistic interpretation of quantification rooted in Brouwer's belief that the Law of the Excluded Middle is non-finitary. We then consider some consequences for mathematics, mathematics education, philosophy, and the natural sciences, of an agnostic, evidence-based, finitary interpretation of quantification that challenges classical paradigms in all these disciplines
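
    The distinction that does the work here, between algorithmic verifiability and algorithmic computability, can be made concrete with a small sketch. The Python below is purely illustrative and not from the paper: the predicate F and the helper are stand-ins chosen for this example, meant only to contrast instance-wise evidence for the numerals of a quantified formula with a single uniform decision procedure.

```python
# Illustrative sketch only (not the paper's formalism).
# F(n): a decidable property of a single numeral n.
def F(n: int) -> bool:
    return (n * n - n) % 2 == 0      # "n*n - n is even", true for every natural n

# Algorithmic verifiability (instance-wise evidence): for each given numeral n
# there is a terminating check of F(n).  Running finitely many such checks
# never settles the universally quantified assertion (Ax)F(x) as a whole.
def verify_instances(bound: int) -> bool:
    return all(F(n) for n in range(bound))

# Algorithmic computability (uniform evidence): a single algorithm evaluates
# F(n) for every n.  For this toy F the function above already is one, since
# n*n - n is always even; but in general a formula can be verifiable instance
# by instance without any uniform decision procedure existing.
print(verify_instances(1000))    # True: 1000 separate, finite verifications
```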

    Ciências da computação de Alan Turing: Uma viagem pessoal (Alan Turing's computer science: a personal journey)

    [Excerpt] The modern computer scientist, interested in the correctness of his or her artefacts, navigates a terrain built on the answers to a few fundamental questions. First he or she may ask "What is provable, refutable, or decidable?", immediately followed by "What is inconsistent, and what can be validated?". Moreover, it is safe to say that, when measured against the scope of the artefact, he or she will mostly have to rely on answers to yet another question: "What can be learned?". Finally, when the interest extends to the security of the artefact and the privacy of its user, the picture is complicated by further questions, namely "What is random? What is obscure and leak-proof, in the information-theoretic sense?". Across the whole landscape of Computer Science and Engineering (CSE) one can detect Turing's footprints, some faint, but others very clear and incisive. In this brief monograph I will try to trace some of those footprints, choosing the ones I believe to be the most significant for my view of CSE. The question that most interested Turing was none of the above, but rather: What is computable and, among those 'computable things', what is feasible? Turing's role (Turing 1936; Soare 2016) in constructing an answer to at least the first part of that question is well known. The feasibility of computations came later and is the domain of Complexity Theory; its importance, and Turing's role in it, were recognised some time ago (Hartmanis 1994). A few decades earlier, the beginning of the 20th century saw the flourishing of new foundations for mathematical thought: the Zermelo-Fraenkel axiomatisation of sets, constructivism & intuitionism, and Hilbert's decision problems are some of the programmes of that era; beyond their impact on mathematics as a whole, these programmes had a direct effect on the foundations of CSE; indeed, we can say that they "are" three quarters of the foundations of CSE, the remaining quarter being, of course, computability. [...
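
    A minimal reminder of what hangs on "What is computable?" may be useful at this point. The Python sketch below is my own illustration, not part of the excerpt: it rehearses the standard diagonal argument that no total, correct halts(f, x) decider can exist, which is the negative half of the answer Turing supplied in 1936.

```python
# Illustration only: the classic diagonal argument against a halting decider.
# Suppose some total function halts(f, x) correctly reported whether f(x)
# terminates.  Then the following construction is contradictory.

def make_diag(halts):
    def diag(f):
        if halts(f, f):      # if the decider says f(f) halts...
            while True:      # ...loop forever,
                pass
        return None          # ...otherwise halt immediately.
    return diag

# For d = make_diag(halts), the call d(d) halts exactly when halts(d, d) says
# it does not, so no correct, total `halts` can exist.  Feasibility ("what is
# feasible among the computable?") is a further refinement, studied by
# complexity theory.
```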

    Three Dogmas of First-Order Logic and some Evidence-based Consequences for Constructive Mathematics of differentiating between Hilbertian Theism, Brouwerian Atheism and Finitary Agnosticism

    We show how removing faith-based beliefs in current philosophies of classical and constructive mathematics admits formal, evidence-based, definitions of constructive mathematics; of a constructively well-defined logic of a formal mathematical language; and of a constructively well-defined model of such a language. We argue that, from an evidence-based perspective, classical approaches which follow Hilbert's formal definitions of quantification can be labelled `theistic'; whilst constructive approaches based on Brouwer's philosophy of Intuitionism can be labelled `atheistic'. We then adopt what may be labelled a finitary, evidence-based, `agnostic' perspective and argue that Brouwerian atheism is merely a restricted perspective within the finitary agnostic perspective, whilst Hilbertian theism contradicts the finitary agnostic perspective. We then consider the argument that Tarski's classic definitions permit an intelligence---whether human or mechanistic---to admit finitary, evidence-based, definitions of the satisfaction and truth of the atomic formulas of the first-order Peano Arithmetic PA over the domain N of the natural numbers in two, hitherto unsuspected and essentially different, ways. We show that the two definitions correspond to two distinctly different---not necessarily evidence-based but complementary---assignments of satisfaction and truth to the compound formulas of PA over N. We further show that the PA axioms are true over N, and that the PA rules of inference preserve truth over N, under both the complementary interpretations; and draw some unsuspected constructive consequences of such complementarity for the foundations of mathematics, logic, philosophy, and the physical sciences
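
    Because the argument turns on finitary, evidence-based satisfaction of the atomic formulas of PA over N, a toy evaluator may help fix ideas. The Python sketch below is my own illustration, not the paper's formalism: terms are encoded as nested tuples, and satisfaction of an atomic equality s = t under an assignment is settled by actually computing both terms over N.

```python
# Minimal sketch (the representation is illustrative, not the paper's):
# evaluating atomic formulas of first-order arithmetic over N.
# Terms: ("0",), ("S", t), ("+", t1, t2), ("*", t1, t2), ("var", "x")

def eval_term(t, assignment):
    tag = t[0]
    if tag == "0":   return 0
    if tag == "S":   return eval_term(t[1], assignment) + 1
    if tag == "+":   return eval_term(t[1], assignment) + eval_term(t[2], assignment)
    if tag == "*":   return eval_term(t[1], assignment) * eval_term(t[2], assignment)
    if tag == "var": return assignment[t[1]]
    raise ValueError(tag)

def satisfies_atomic(eq, assignment):
    """Tarskian satisfaction of an atomic formula s = t: compute both sides."""
    s, t = eq
    return eval_term(s, assignment) == eval_term(t, assignment)

# Example: x + S(0) = S(x) under the assignment x := 3  -->  True
x_plus_1 = ("+", ("var", "x"), ("S", ("0",)))
succ_x   = ("S", ("var", "x"))
print(satisfies_atomic((x_plus_1, succ_x), {"x": 3}))   # True
```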

    Interpreting quantum nonlocality as platonic information

    The "hidden variables" or "guiding equation" explanation for the measurement of quantum nonlocality (entanglement) effects can be interpreted as instantiation of Platonic information. Because these Bohm-deBroglie principles are already external to the material objects that they theoretically affect, interpreting them as Platonic is feasible. Taking an approach partially suggested by Quantum Information Theory which views quantum phenomena as sometimes observable-measurable information, this thesis defines hidden variables/guiding equation as information. This approach enables us to bridge the divide between the abstract Platonic realm and the physical world. The unobservable quantum wavefunction collapse is interpreted as Platonic instantiation. At each interaction, the wave function for a quantum system collapses. Instantly, Platonic information is instantiated in the system

    Computations and Computers in the Sciences of Mind and Brain

    Computationalism says that brains are computing mechanisms, that is, mechanisms that perform computations. At present, there is no consensus on how to formulate computationalism precisely or adjudicate the dispute between computationalism and its foes, or between different versions of computationalism. An important reason for the current impasse is the lack of a satisfactory philosophical account of computing mechanisms. The main goal of this dissertation is to offer such an account. I also believe that the history of computationalism sheds light on the current debate. By tracing different versions of computationalism to their common historical origin, we can see how the current divisions originated and understand their motivation. Reconstructing debates over computationalism in the context of their own intellectual history can contribute to philosophical progress on the relation between brains and computing mechanisms and help determine how brains and computing mechanisms are alike, and how they differ. Accordingly, my dissertation is divided into a historical part, which traces the early history of computationalism up to 1946, and a philosophical part, which offers an account of computing mechanisms. The two main ideas developed in this dissertation are that (1) computational states are to be identified functionally, not semantically, and (2) computing mechanisms are to be studied by functional analysis. The resulting account of computing mechanisms, which I call the functional account of computing mechanisms, can be used to identify computing mechanisms and the functions they compute. I use the functional account of computing mechanisms to taxonomize computing mechanisms based on their different computing power, and I use this taxonomy of computing mechanisms to taxonomize different versions of computationalism based on the functional properties that they ascribe to brains. By doing so, I begin to tease out empirically testable statements about the functional organization of the brain that different versions of computationalism are committed to. I submit that when computationalism is reformulated in the more explicit and precise way I propose, the disputes about computationalism can be adjudicated on the grounds of empirical evidence from neuroscience
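
    The dissertation's proposal to taxonomize computing mechanisms by their computing power can be given a small concrete face. The Python below is my own illustration, not the author's: parity is computed by a mechanism with a single bit of state, whereas recognising balanced brackets requires unbounded memory, so any mechanism that computes it sits strictly higher in such a taxonomy.

```python
# Illustration only: two functions computed by mechanisms of different power.

def parity(bits: str) -> int:
    """Computable by a 2-state finite-state mechanism: one bit of memory."""
    state = 0
    for b in bits:
        if b == "1":
            state ^= 1
    return state

def balanced(brackets: str) -> bool:
    """Needs an unbounded counter (pushdown power): no finite-state
    mechanism computes this function for inputs of arbitrary length."""
    depth = 0
    for c in brackets:
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0

print(parity("10110"))      # 1
print(balanced("(()())"))   # True
```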

    A predicative variant of a realizability tripos for the Minimalist Foundation.

    Here we present a predicative variant of a realizability tripos validating the intensional level of the Minimalist Foundation extended with the Formal Church Thesis. Maietti, Maria Emilia; Maschio, Samuele
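
    Since the record above is terse, a toy reminder of the underlying realizability idea may be helpful. The Python sketch below is only the textbook Kleene-style picture, not the predicative tripos construction of the paper: a realizer for a for-all/exists statement is a program that computes a witness for every input, in the spirit of the Formal Church Thesis that all such evidence is algorithmic.

```python
# Toy illustration of realizability (not the tripos construction itself):
# a realizer for "for every n there exists a prime p with p > n" is a
# program that, given n, returns such a p; that p is prime and p > n is
# then checkable by a finite computation.

def is_prime(p: int) -> bool:
    if p < 2:
        return False
    return all(p % d for d in range(2, int(p ** 0.5) + 1))

def realizer(n: int) -> int:
    """Witness-producing program: returns a prime strictly greater than n."""
    p = n + 1
    while not is_prime(p):
        p += 1
    return p

for n in (0, 10, 100):
    p = realizer(n)
    assert is_prime(p) and p > n
    print(n, "->", p)    # 0 -> 2, 10 -> 11, 100 -> 101
```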

    Fexprs as the basis of Lisp function application; or, $vau: the ultimate abstraction

    Abstraction creates custom programming languages that facilitate programming for specific problem domains. It is traditionally partitioned according to a two-phase model of program evaluation, into syntactic abstraction enacted at translation time, and semantic abstraction enacted at run time. Abstractions pigeon-holed into one phase cannot interact freely with those in the other, since they are required to occur at logically distinct times. Fexprs are a Lisp device that subsumes the capabilities of syntactic abstraction, but is enacted at run-time, thus eliminating the phase barrier between abstractions. Lisps of recent decades have avoided fexprs because of semantic ill-behavedness that accompanied fexprs in the dynamically scoped Lisps of the 1960s and 70s. This dissertation contends that the severe difficulties attendant on fexprs in the past are not essential, and can be overcome by judicious coordination with other elements of language design. In particular, fexprs can form the basis for a simple, well-behaved Scheme-like language, subsuming traditional abstractions without a multi-phase model of evaluation. The thesis is supported by a new Scheme-like language called Kernel, created for this work, in which each Scheme-style procedure consists of a wrapper that induces evaluation of operands, around a fexpr that acts on the resulting arguments. This arrangement enables Kernel to use a simple direct style of selectively evaluating subexpressions, in place of most Lisps' indirect quasiquotation style of selectively suppressing subexpression evaluation. The semantics of Kernel are treated through a new family of formal calculi, introduced here, called vau calculi. Vau calculi use direct subexpression-evaluation style to extend lambda calculus, eliminating a long-standing incompatibility between lambda calculus and fexprs that would otherwise trivialize their equational theories. The impure vau calculi introduce non-functional binding constructs and unconventional forms of substitution. This strategy avoids a difficulty of Felleisen's lambda-v-CS calculus, which modeled impure control and state using a partially non-compatible reduction relation, and therefore only approximated the Church-Rosser and Plotkin's Correspondence Theorems. The strategy here is supported by an abstract class of Regular Substitutive Reduction Systems, generalizing Klop's Regular Combinatory Reduction Systems
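
    Because the abstract turns on the split between operands acted on unevaluated and arguments obtained by evaluation, a minimal sketch may help. The Python below only mimics the idea and is not Kernel: the class name Operative, the wrap helper, and the expression encoding are invented for this illustration, and Kernel's actual $vau, wrap, and environment semantics are considerably richer.

```python
# Minimal sketch of the fexpr idea in Python (representation and names are
# invented for this illustration; Kernel's actual semantics is richer).
# Expressions: numbers, symbols (str), or lists (combiner, operand, ...).

def evaluate(expr, env):
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    combiner = evaluate(expr[0], env)
    return combiner(expr[1:], env)        # operands are passed UNevaluated

class Operative:
    """A fexpr-like combiner: receives its operands unevaluated, together
    with the dynamic environment of the call."""
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, operands, env):
        return self.fn(operands, env)

def wrap(operative):
    """Applicative: evaluate the operands first, then hand the resulting
    arguments to the underlying operative (Scheme-style procedure call)."""
    def applicative(operands, env):
        args = [evaluate(o, env) for o in operands]
        return operative(args, env)
    return Operative(applicative)

def _if(operands, env):
    # Operands arrive unevaluated, so only the selected branch is evaluated.
    test, then_branch, else_branch = operands
    return evaluate(then_branch, env) if evaluate(test, env) else evaluate(else_branch, env)

env = {
    "$if": Operative(_if),
    "+":   wrap(Operative(lambda args, env: args[0] + args[1])),
    "x":   41,
}
print(evaluate(["$if", 1, ["+", "x", 1], ["boom", 0]], env))   # 42; "boom" is never looked up
```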

    Guide to Discrete Mathematics


    Frameworks, models, and case studies

    This thesis focuses on models of conceptual change in science and philosophy. In particular, I developed a new bootstrapping methodology for studying conceptual change, centered around the formalization of several popular models of conceptual change and the collective assessment of their improved formal versions via nine evaluative dimensions. Among the models of conceptual change treated in the thesis are Carnap’s explication, Lakatos’ concept-stretching, Toulmin’s conceptual populations, Waismann’s open texture, Mark Wilson’s patches and facades, Sneed’s structuralism, and Paul Thagard’s conceptual revolutions. In order to analyze and compare the conception of conceptual change provided by these different models, I rely on several historical reconstructions of episodes of scientific conceptual change. The historical episodes of scientific change that figure in this work include the emergence of the morphological concept of fish in biological taxonomies, the development of scientific conceptions of temperature, the Church-Turing thesis and related axiomatizations of effective calculability, the history of the concept of polyhedron in 17th and 18th century mathematics, Hamilton’s invention of the quaternions, the history of the pre-abstract group concepts in 18th and 19th century mathematics, the expansion of Newtonian mechanics to phenomena involving viscous-fluid forces, and the chemical revolution. I will also present five different formal and informal improvements of four specific models of conceptual change. I will first present two different improvements of Carnapian explication, a formal and an informal one. My informal improvement of Carnapian explication will consist of a more fine-grained version of the procedure that adds an intermediate, third step to the two steps of Carnapian explication. I will show how this novel three-step version of explication is more suitable than the traditional two-step procedure for handling complex cases of explication. My second, formal improvement of Carnapian explication will be a full explication of the concept of explication itself within the theory of conceptual spaces. By virtue of this formal improvement, the whole procedure of explication together with its application procedures and its pragmatic desiderata will be reconceptualized as a precise procedure involving topological and geometrical constraints inside the theory of conceptual spaces. My third improved model of conceptual change will consist of a formal explication of Darwinian models of conceptual change that will make extensive use of Godfrey-Smith’s population-based Darwinism to target explicitly mathematical conceptual change. My fourth improvement will be dedicated instead to Wilson’s indeterminate model of conceptual change. I will show how Wilson’s very informal framework can be explicated within a modified version of the structuralist model-theoretic reconstructions of scientific theories. Finally, the fifth improved model of conceptual change will be a belief-revision-like logical framework that reconstructs Thagard’s model of conceptual revolution as specific revision and contraction operations that work on conceptual structures. At the end of this work, a general conception of conceptual change in science and philosophy emerges, thanks to the combined action of the three layers of my methodology. This conception takes conceptual change to be a multi-faceted phenomenon centered around the dynamics of groups of concepts.
According to this conception, concepts are best reconstructed as plastic and inter-subjective entities equipped with a non-trivial internal structure and subject to a certain degree of localized holism. Furthermore, conceptual dynamics can be judged from a weakly normative perspective, bound to be dependent on shared values and goals. Conceptual change is then best understood, according to this conception, as a ubiquitous phenomenon underlying all of our intellectual activities, from science to ordinary linguistic practices. As such, conceptual change does not pose any particular problem to value-laden notions of scientific progress, objectivity, and realism. At the same time, this conception prompts all our concept-driven intellectual activities, including philosophical and metaphilosophical reflections, to take into serious consideration the phenomenon of conceptual change. An important consequence of this conception, and of the analysis that generated it, is in fact that an adequate understanding of the dynamics of philosophical concepts is a prerequisite for analytic philosophy to develop a realistic and non-idealized depiction of itself and its activities
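
    As one small illustration of what the "revision and contraction operations that work on conceptual structures" mentioned in this abstract could amount to in practice, the Python sketch below is my own toy, not the thesis's framework: a conceptual structure is a set of kind-of links, contraction retracts a link, and a naive revision reclassifies a concept by first retracting the links that conflict with the new one.

```python
# Illustrative sketch only (not the thesis's formal framework):
# a conceptual structure as a set of kind-of links between concepts,
# with contraction and a naive revision operation defined on it.

KindLink = tuple  # (sub_concept, super_concept)

def contract(structure: set, link: KindLink) -> set:
    """Contraction: give up a kind-of link (and nothing else, in this toy)."""
    return structure - {link}

def revise(structure: set, link: KindLink) -> set:
    """Naive revision: retract any link that places the sub-concept under a
    different superordinate, then add the new link (mimicking the way a
    conceptual revolution reclassifies a concept)."""
    sub, _ = link
    kept = {l for l in structure if l[0] != sub}
    return kept | {link}

# Toy example loosely echoing the classic whale reclassification:
structure = {("whale", "fish"), ("fish", "animal"), ("mammal", "animal")}
structure = revise(structure, ("whale", "mammal"))
print(sorted(structure))
# [('fish', 'animal'), ('mammal', 'animal'), ('whale', 'mammal')]
```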