
    Ramsification and semantic indeterminacy

    Is it possible to maintain classical logic, stay close to classical semantics, and yet accept that language might be semantically indeterminate? The article gives an affirmative answer by Ramsifying classical semantics, which yields a new semantic theory that remains much closer to classical semantics than supervaluationism but at the same time avoids the problematic classical presupposition of semantic determinacy. The resulting Ramsey semantics is developed in detail; it is shown to supply a classical concept of truth and to fully support the rules and metarules of classical logic, and it is applied to vague terms as well as to theoretical or open-ended terms from mathematics and science. The theory also demonstrates how diachronic or synchronic interpretational continuity across languages is compatible with semantic indeterminacy.

    Vindicating the verifiability criterion


    Logic and Philosophy. A Reconstruction

    The article recapitulates what logic is traditionally about and works out two roles it has played in philosophy: the role of an instrument and that of a philosophical discipline in its own right. Using Tarski’s philosophical-logical work as a case study, it develops a logical reconstructionist methodology of philosophical logic that extends and refines Rudolf Carnap’s account of explication and rational reconstruction. The methodology overlaps with, but also partially diverges from, contemporary anti-exceptionalism about logic.

    Mechanizing Induction

    In this chapter we deal with “mechanizing” induction, i.e. with ways in which theoretical computer science approaches inductive generalization. In the field of Machine Learning, algorithms for induction are developed. Depending on the form of the available data, these algorithms can differ considerably: some combine geometric and statistical ideas, while others use classical reasoning based on logical formalisms. We are, however, not so much interested in the algorithms themselves as in the philosophical and theoretical foundations they share. Thus, in the first of two parts, we examine different approaches and inductive assumptions in two particular learning settings. While many machine learning algorithms perform well on a wide range of tasks, the learned hypothesis is often difficult to interpret. For example, an algorithm can, surprisingly, determine the gender of the author of a given text with about 80 percent accuracy [Argamon and Shimoni, 2003], yet it takes a human some extra effort to understand on the basis of which criteria the algorithm is able to do so. In that respect, the advantage of logic-based approaches is obvious: if the output hypothesis is a formula of predicate logic, it is easy to interpret. However, if decision trees or algorithms from the area of inductive logic programming are based purely on classical logic, they suffer from the fact that most universal statements fail for exceptional cases, and classical logic offers no convenient way of representing statements that are meant to hold in the “normal case”. Thus, in the second part we focus on approaches to Nonmonotonic Reasoning that try to handle this problem. Both Machine Learning and Nonmonotonic Reasoning have been partially anticipated by work in philosophy of science and philosophical logic. At the same time, recent developments in theoretical computer science are expected to trigger further progress in philosophical theories of inference, confirmation, theory revision, learning, and the semantics and pragmatics of conditionals. We hope this survey will contribute to this kind of progress by building bridges between computational, logical, and philosophical accounts of induction.
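    The contrast drawn above between classical universal statements and defeasible “normal case” statements can be illustrated with a minimal sketch (all names and rules here are illustrative, not drawn from any particular nonmonotonic-reasoning system in the chapter):

```python
# Minimal sketch of default ("normal case") reasoning: more specific
# information overrides a general default, which classical universal
# statements cannot express without listing every exception.

def flies(animal, facts):
    """Apply the most specific applicable default for the given animal."""
    kinds = facts.get(animal, set())
    if "penguin" in kinds:
        return False   # exception: penguins normally do not fly
    if "bird" in kinds:
        return True    # default: birds normally fly
    return None        # no applicable default

facts = {"tweety": {"bird"}, "pingu": {"bird", "penguin"}}

print(flies("tweety", facts))  # True: the general bird default applies
print(flies("pingu", facts))   # False: the penguin exception overrides it
```

Adding the fact that Pingu is a penguin retracts the earlier conclusion that Pingu flies, which is exactly the nonmonotonic behavior classical consequence lacks.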

    Probability for the Revision Theory of Truth

    We investigate how to assign probabilities to sentences that contain a type-free truth predicate. These probability values track how often a sentence is satisfied in transfinite revision sequences, following Gupta and Belnap’s revision theory of truth. This answers an open problem posed by Leitgeb, which asks how one might describe transfinite stages of the revision sequence using such probability functions. We offer a general construction and explore additional constraints that lead to desirable properties of the resulting probability function. One such property is Leitgeb’s Probabilistic Convention T, which says that the probability of φ equals the probability that φ is true.
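    The idea of tracking how often a sentence is satisfied along a revision sequence can be illustrated with a finite toy case (this is only a sketch of the frequency idea, not the paper’s transfinite construction): the Liar sentence flips its truth value at every revision step, so its relative frequency of truth tends to 1/2.

```python
from fractions import Fraction

def liar_stages(n, start=False):
    """First n revision stages of the Liar: each stage negates the previous one."""
    value, stages = start, []
    for _ in range(n):
        stages.append(value)
        value = not value  # revision step for the Liar sentence
    return stages

def truth_frequency(stages):
    """Relative frequency with which the sentence is true along the stages."""
    return Fraction(sum(stages), len(stages))

print(truth_frequency(liar_stages(1000)))  # 1/2
```

On this picture, assigning the Liar probability 1/2 records that it is true at exactly half of the revision stages.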