    Comparing theories: the dynamics of changing vocabulary. A case-study in relativity theory

    There are several first-order logic (FOL) axiomatizations of special relativity theory in the literature, all looking essentially different but claiming to axiomatize the same physical theory. In this paper, we develop a comparison, in the framework of mathematical logic, between these FOL theories for special relativity. For this comparison, we use a version of mathematical definability theory in which new entities can also be defined besides new relations over already available entities. In particular, we build an interpretation of the reference-frame oriented theory SpecRel into the observationally oriented Signalling theory of James Ax. This interpretation provides SpecRel with an operational/experimental semantics. Then we make precise, "quantitative" comparisons between these two theories using the notion of definitional equivalence. This is an application of logic to the philosophy of science and physics in the spirit of Johan van Benthem's work.
    Comment: 27 pages, 8 figures. To appear in the Springer book series Trends in Logic.
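
    The key tool here, definitional equivalence, has a standard classical formulation. The sketch below is a hedged rendering of that classical notion; the paper uses a generalized version in which new entities, not only new relations, may be defined, which this formulation does not capture.

    ```latex
    % Classical definitional equivalence (a sketch; the paper generalizes it
    % so that new entities, not just new relations, can be defined).
    % Theories $T_1$ and $T_2$ in disjoint vocabularies are definitionally
    % equivalent iff they have a common definitional extension $T^{+}$:
    \[
    T_1 \equiv_{\Delta} T_2
    \quad\Longleftrightarrow\quad
    \exists\, T^{+} \text{ in the joint vocabulary that is a definitional
    extension of both } T_1 \text{ and } T_2 .
    \]
    ```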

    Lewis meets Brouwer: constructive strict implication

    C. I. Lewis invented modern modal logic as a theory of "strict implication". Over the classical propositional calculus one can just as well work with the unary box connective. Intuitionistically, however, the strict implication has greater expressive power than the box and allows one to make distinctions invisible in the ordinary syntax. In particular, the logic determined by the most popular semantics of intuitionistic K becomes a proper extension of the minimal normal logic of the binary connective. Even an extension of this minimal logic with the "strength" axiom, classically near-trivial, preserves the distinction between the binary and the unary setting. In fact, this distinction and the strong constructive strict implication itself have also been discovered by the functional programming community in their study of "arrows" as contrasted with "idioms". Our particular focus is on arithmetical interpretations of the intuitionistic strict implication in terms of preservativity in extensions of Heyting's Arithmetic.
    Comment: Our invited contribution to the collection "L.E.J. Brouwer, 50 years later".
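
    The arrows-versus-idioms contrast mentioned above can be made concrete in Haskell. The sketch below is illustrative rather than taken from the paper: every idiom (Applicative functor) induces a "static" arrow whose effect structure is fixed in advance, and the Arrow method first is precisely the "strength" the abstract refers to.

    ```haskell
    import Control.Applicative (liftA2)
    import Control.Arrow (Arrow (..))
    import Control.Category (Category (..))
    import Prelude hiding (id, (.))

    -- A minimal sketch (not from the paper): every idiom induces a "static"
    -- arrow in which the effect structure is fixed up front and cannot
    -- depend on the data flowing through the computation.
    newtype Static f b c = Static (f (b -> c))

    instance Applicative f => Category (Static f) where
      id = Static (pure id)
      Static g . Static h = Static (liftA2 (.) g h)

    instance Applicative f => Arrow (Static f) where
      arr h = Static (pure h)
      -- 'first' is the arrow-theoretic "strength": a value of type d rides
      -- along unchanged. Classically the corresponding axiom is near-trivial;
      -- constructively it separates the binary from the unary setting.
      first (Static g) = Static (fmap (\k (b, d) -> (k b, d)) g)
    ```

    General arrows, e.g. the Kleisli arrows of a monad, are strictly more expressive than such static ones, since later effects may depend on earlier results; this mirrors the gap between the binary strict implication and the unary box.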

    The Small-Is-Very-Small Principle

    The central result of this paper is the small-is-very-small principle for restricted sequential theories. The principle says, roughly, that whenever the given theory shows that a property has a small witness, i.e., a witness in every definable cut, then it shows that the property has a very small witness, i.e., a witness below a given standard number. We draw various consequences from the central result. For example (in rough formulations): (i) Every restricted, recursively enumerable sequential theory has a finitely axiomatized extension that is conservative w.r.t. formulas of complexity ≤ n. (ii) Every sequential model has, for any n, an extension that is elementary for formulas of complexity ≤ n, in which the intersection of all definable cuts is the natural numbers. (iii) We have reflection for Σ⁰₂-sentences with sufficiently small witness in any consistent restricted theory U. (iv) Suppose U is recursively enumerable and sequential, and suppose that every recursively enumerable sequential V that locally interprets U also globally interprets U. Then U is mutually globally interpretable with a finitely axiomatized sequential theory. The paper contains some careful groundwork developing partial satisfaction predicates in sequential theories for the complexity measure "depth of quantifier alternations".
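
    Schematically, and only as a paraphrase of the abstract (the notation here is ours; "restricted" and "sequential" carry precise technical meanings in the paper):

    ```latex
    % Small-is-very-small, schematically: if U proves that P has a witness
    % in every U-definable cut I, then U already proves that P has a witness
    % below some fixed standard number n.
    \[
    \bigl(\text{for every } U\text{-definable cut } I:\;
          U \vdash \exists x\, (x \in I \wedge P(x))\bigr)
    \;\Longrightarrow\;
    U \vdash \exists x\, (x < \underline{n} \wedge P(x))
    \quad\text{for some standard } n .
    \]
    ```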

    The Intuitive Appeal of Explainable Machines

    Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.

    European Union regulations on algorithmic decision-making and a "right to explanation"

    We summarize the potential impact that the European Union's new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which "significantly affect" users. The law will also effectively create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.
    Comment: Presented at the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY.

    Some definable Galois theory and examples

    We make explicit certain results around the Galois correspondence in the context of definable automorphism groups, and point out the relation to some recent papers dealing with the Galois theory of algebraic differential equations when the constants are not "closed" in suitable senses. We also improve the definitions and results on generalized strongly normal extensions.
    Comment: arXiv admin note: text overlap with arXiv:1309.633
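
    For orientation, the correspondence the abstract generalizes has the following shape; this is a sketch of the standard picture for a strongly normal extension, not the paper's statement:

    ```latex
    % Classical Galois correspondence for a strongly normal extension
    % K <= L (a standard-picture sketch, not the paper's generalized form):
    % an inclusion-reversing bijection between definable subgroups of the
    % Galois group and intermediate (differential) fields,
    \[
    H \;\longmapsto\; \mathrm{Fix}(H),
    \qquad
    F \;\longmapsto\; \mathrm{Gal}(L/F),
    \]
    % for H a definable subgroup of Gal(L/K) and K <= F <= L intermediate.
    ```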

    Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR

    There has been much discussion of the right to explanation in the EU General Data Protection Regulation, and its existence, merits, and disadvantages. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions around their value. Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR. We suggest data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
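
    The "smallest change" idea admits a direct formalization. The following is a hedged sketch consistent with the abstract, where f is the decision model, x the original instance, y' the desired outcome, and d a distance on inputs (all notation ours):

    ```latex
    % Nearest-counterfactual formulation (a sketch; the choice of distance d,
    % e.g. a weighted L1 metric, is application-dependent):
    \[
    x^{*} \;=\; \arg\min_{x'} \; d(x, x')
    \quad\text{subject to}\quad f(x') = y' .
    \]
    ```

    The explanation offered to the data subject is then the difference between x and x*, which supports all three aims without exposing the internal logic of f.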