
    The modal logic of Reverse Mathematics

    The implication relationship between subsystems in Reverse Mathematics has an underlying logic, which can be used to deduce certain new Reverse Mathematics results from existing ones in a routine way. We use techniques of modal logic to formalize the logic of Reverse Mathematics into a system that we name s-logic. We argue that s-logic captures precisely the "logical" content of the implication and nonimplication relations between subsystems in Reverse Mathematics. We present a sound, complete, decidable, and compact tableau-style deductive system for s-logic, and explore in detail two fragments that are particularly relevant to Reverse Mathematics practice and automated theorem proving of Reverse Mathematics results.
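
    As a toy illustration of the "routine deduction" idea (a hypothetical sketch, not the s-logic tableau calculus itself), new implications between subsystems can be read off from known ones by closing the implication relation under transitivity:

        # Hypothetical sketch: closing a set of known subsystem implications
        # under transitivity to obtain derived implications. This illustrates
        # only the "routine" part of the deduction, not s-logic itself.
        def implication_closure(known):
            """Return the transitive closure of (stronger, weaker) pairs."""
            closure = set(known)
            changed = True
            while changed:
                changed = False
                for (a, b) in list(closure):
                    for (c, d) in list(closure):
                        if b == c and (a, d) not in closure:
                            closure.add((a, d))
                            changed = True
            return closure

        # Standard example: ACA0 implies WKL0 and WKL0 implies RCA0,
        # so the derived pair ("ACA0", "RCA0") appears in the output.
        print(implication_closure({("ACA0", "WKL0"), ("WKL0", "RCA0")}))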

    "If-then" as a version of "Implies"

    Russell’s role in the controversy about the paradoxes of material implication is usually presented as a tale of how even the greatest minds can fall prey to basic conceptual confusions. Quine accused him of making a silly mistake in Principia Mathematica: Russell interpreted “if-then” as a version of “implies” and called it material implication. Quine’s accusation is that this decision involved a use-mention fallacy, because the antecedent and consequent of “if-then” are used instead of being mentioned as the premise and the conclusion of an implication relation. In his opinion, the criticisms of and alternatives to material implication presented by C. I. Lewis and others would never have been made in the first place had Russell simply called the Philonian construction “material conditional” instead of “material implication”. Quine’s interpretation of the topic became hugely influential, if not universally accepted. This paper presents the following criticisms of that interpretation: (1) the notion of material implication does not involve a use-mention fallacy, since the components of “if-then” are mentioned and not used; (2) Quine’s belief that the components of “if-then” are used was motivated by a conditional-assertion view of conditionals that is widely controversial and faces numerous difficulties; (3) if anything, it was Quine who could be accused of fallacious reasoning: he ignored that in the assertion of a conditional it is the whole proposition that is asserted, not its constituents; (4) the Philonian construction remains counter-intuitive even if it is called “material conditional”; (5) the Philonian construction is more plausible when it is interpreted as a material implication.
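
    For reference, the Philonian (material) construction at issue is the truth-functional conditional, false only when the antecedent is true and the consequent false:

        A  B  |  A -> B
        T  T  |    T
        T  F  |    F
        F  T  |    T
        F  F  |    T

    The "paradoxes" arise because on this reading any conditional with a false antecedent, or with a true consequent, comes out true.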

    Moderate Modal Skepticism

    This paper examines "moderate modal skepticism", a form of skepticism about metaphysical modality defended by Peter van Inwagen in order to blunt the force of certain modal arguments in the philosophy of religion. Van Inwagen’s argument for moderate modal skepticism assumes Yablo's (1993) influential world-based epistemology of possibility. We raise two problems for this epistemology of possibility, which undermine van Inwagen's argument. We then consider how one might motivate moderate modal skepticism by relying on a different epistemology of possibility, which does not face these problems: Williamson’s (2007: ch. 5) counterfactual-based epistemology. Two ways of motivating moderate modal skepticism within that framework are found unpromising. Nevertheless, we also find a way of vindicating an epistemological thesis that, while weaker than moderate modal skepticism, is strong enough to support the methodological moral van Inwagen wishes to draw.

    The natural history of bugs: using formal methods to analyse software related failures in space missions

    Space missions force engineers to make complex trade-offs between many different constraints including cost, mass, power, functionality and reliability. These constraints create a continual need to innovate. Many advances rely upon software, for instance to control and monitor the next generation ‘electron cyclotron resonance’ ion-drives for deep space missions. Programmers face numerous challenges. It is extremely difficult to conduct valid ground-based tests for the code used in space missions. Abstract models and simulations of satellites can be misleading. These issues are compounded by the use of ‘band-aid’ software to fix design mistakes and compromises in other aspects of space systems engineering. Programmers must often re-code missions in flight. This introduces considerable risks. It should, therefore, not be a surprise that so many space missions fail to achieve their objectives. The costs of failure are considerable. Small launch vehicles, such as the U.S. Pegasus system, cost around $18 million. Payloads range from $4 million up to $1 billion for security-related satellites. These costs do not include consequent business losses. In 2005, Intelsat wrote off $73 million from the failure of a single uninsured satellite. It is clearly important that we learn as much as possible from those failures that do occur. The following pages examine the roles that formal methods might play in the analysis of software failures in space missions.

    G\"odel's Notre Dame Course

    This is a companion to a paper by the authors entitled "Gödel's natural deduction", which presented and made comments about the natural deduction system in Gödel's unpublished notes for the elementary logic course he gave at the University of Notre Dame in 1939. In that earlier paper, which was itself a companion to a paper that examined the links between some philosophical views ascribed to Gödel and general proof theory, one can find a brief summary of Gödel's notes for the Notre Dame course. In order to put the earlier paper in proper perspective, a more complete summary of these interesting notes, with comments concerning them, is given here. Comment: 18 pages, minor additions. arXiv admin note: text overlap with arXiv:1604.0307

    Knowability Relative to Information

    We present a formal semantics for epistemic logic, capturing the notion of knowability relative to information (KRI). Like Dretske, we move from the platitude that what an agent can know depends on her (empirical) information. We treat operators of the form K_AB (‘B is knowable on the basis of information A’) as variably strict quantifiers over worlds with a topic- or aboutness-preservation constraint. Variable strictness models the non-monotonicity of knowledge acquisition while allowing knowledge to be intrinsically stable. Aboutness-preservation models the topic-sensitivity of information, allowing us to invalidate controversial forms of epistemic closure while validating less controversial ones. Thus, unlike the standard modal framework for epistemic logic, KRI accommodates plausible approaches to the Kripke-Harman dogmatism paradox, which bear on non-monotonicity, or on topic-sensitivity. KRI also strikes a better balance between agent idealization and a non-trivial logic of knowledge ascriptions.
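
    A minimal sketch of the kind of clause involved (a reconstruction under assumed notation, not the paper's exact formulation: t(·) assigns topics, f_A(w) selects the relevant A-worlds):

        w \models K_A B \iff t(B) \sqsubseteq t(A) \ \text{and}\ \forall w' \in f_A(w):\ w' \models B

    Here the dependence of the selected worlds f_A(w) on the information A and the world of evaluation gives variable strictness, while the topic-inclusion requirement t(B) ⊑ t(A) enforces aboutness preservation.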

    An alternative Gospel of structure: order, composition, processes

    We survey some basic mathematical structures, which arguably are more primitive than the structures taught at school. These structures are orders, with or without composition, and (symmetric) monoidal categories. We list several `real life' incarnations of each of these. This paper also serves as an introduction to these structures and their current and potentially future uses in linguistics, physics and knowledge representation. Comment: Introductory chapter to C. Heunen, M. Sadrzadeh, and E. Grefenstette. Quantum Physics and Linguistics: A Compositional, Diagrammatic Discourse. Oxford University Press, 201
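
    As a toy illustration of the two ways of combining processes that the monoidal setting keeps track of (a hypothetical sketch, not taken from the chapter):

        # Hypothetical sketch: processes as plain functions, composed either
        # sequentially (one after the other) or in parallel (side by side on
        # pairs), echoing composition and the monoidal (tensor) product.
        def seq(f, g):
            """Sequential composition: first f, then g."""
            return lambda x: g(f(x))

        def par(f, g):
            """Parallel composition: f and g act on the two halves of a pair."""
            return lambda pair: (f(pair[0]), g(pair[1]))

        double = lambda n: 2 * n
        shout = lambda s: s.upper()

        print(seq(double, str)(21))           # prints 42 (as a string)
        print(par(double, shout)((3, "ok")))  # prints (6, 'OK')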

    Maximum entropy methods as the bridge between macroscopic and microscopic theory

    This paper investigates a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First, the set of admissible moments must be established; under the conditions presented in this work the set is open, bounded and convex, allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, with particular consideration given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and infinity before taking bounded variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Finally, the difficulties in approximating the singular potential by everywhere-defined functions, in particular by polynomials, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve shape properties of the singular potential.
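
    In outline (a sketch under assumed notation, not the paper's exact setup), a singular potential of the Ball-Majumdar type assigns to an admissible moment vector m the constrained optimum

        \psi(m) = \inf\Big\{ \int_\Omega \rho(x)\,\ln\rho(x)\,dx \;:\; \rho \ge 0,\ \int_\Omega \rho\,dx = 1,\ \int_\Omega a(x)\,\rho(x)\,dx = m \Big\},

    where a(x) collects the state-space observables whose averages form the macroscopic variables; -\psi(m) is then the maximum possible entropy compatible with the observed averages m, and \psi blows up as m approaches the boundary of the admissible set.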