116 research outputs found

    A Formal Proof of the Expressiveness of Deep Learning

    Deep learning has had a profound impact on computer science in recent years, with applications to image recognition, language processing, bioinformatics, and more. Recently, Cohen et al. provided theoretical evidence for the superiority of deep learning over shallow learning. We formalized their mathematical proof using Isabelle/HOL. The Isabelle development simplifies and generalizes the original proof, while working around the limitations of the HOL type system. To support the formalization, we developed reusable libraries of formalized mathematics, including results about matrix rank, the Borel measure, and multivariate polynomials, as well as a library for tensor analysis.
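The Isabelle libraries themselves are beyond the abstract's scope, but the notion of matrix rank that the formalization covers can be illustrated with a small sketch (a hypothetical helper using exact rational arithmetic, not the Isabelle development itself):

```python
from fractions import Fraction

def matrix_rank(rows):
    """Rank of a matrix via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    cols = len(m[0]) if m else 0
    for col in range(cols):
        # find a pivot row at or below the current rank
        pivot = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        # eliminate the column entries below the pivot
        for r in range(rank + 1, len(m)):
            factor = m[r][col] / m[rank][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank
```

For instance, `matrix_rank([[1, 2], [2, 4]])` yields 1, since the second row is a multiple of the first. Exact rationals avoid the floating-point pivot-threshold issues a numerical rank computation would face.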

    Representing grammar, meaning and knowledge

    Among the areas of expertise relevant for successful natural language understanding are grammar, meaning and background knowledge, all of which must be represented in order to decode messages from text (or speech). The present paper sketches one cooperation of grammar and meaning representations -- with some remarks about knowledge representation -- which allows the representations involved to be heterogeneous even while cooperating closely. The modules cooperate in what might be called a PLURALIST fashion, with few assumptions about the representations involved. In point of fact, the proposal is compatible with state-of-the-art representations from all three areas. The paper proceeds from the nearly universal assumption that the grammar formalism is feature-based and insufficiently expressive for use in meaning representation. It then demonstrates how feature formalisms may be employed as a semantic metalanguage so that semantic constraints may be expressed in a single formalism with grammatical constraints. This allows a tight coupling of syntax and semantics, the incorporation of nonsyntactic constraints (e.g., from knowledge representation) and the opportunity to underspecify meanings in novel ways -- including, e.g., ways which distinguish ambiguity from underspecification (vagueness). We retain scepticism vis-à-vis more ASSIMILATIONIST proposals for the interaction of these -- i.e., proposals which foresee common formalisms for grammar, meaning and knowledge representation. While such proposals rightfully claim to allow for closer integration, they fail to account for the motivations which distinguish the formalisms -- elaborate expressive strength in the case of semantic representations, monotonic (and preferably decidable) computation in the case of grammar formalisms, and the characterization of taxonomic reasoning in the case of knowledge representation.
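The feature-based setting the paper assumes rests on unification of feature structures. A minimal sketch of that operation, assuming structures are plain nested dictionaries and ignoring reentrancy (shared substructure), which full feature formalisms support:

```python
def unify(f, g):
    """Unify two feature structures (nested dicts or atomic values).

    Returns the merged structure, or None on a feature clash.
    Reentrancy (structure sharing) is deliberately omitted here.
    """
    if isinstance(f, dict) and isinstance(g, dict):
        out = dict(f)
        for feat, val in g.items():
            if feat in out:
                sub = unify(out[feat], val)
                if sub is None:
                    return None  # clash inside a shared feature
                out[feat] = sub
            else:
                out[feat] = val
        return out
    return f if f == g else None
```

Unifying `{"agr": {"num": "sg"}}` with `{"agr": {"per": 3}}` merges the agreement information, while unifying `{"num": "sg"}` with `{"num": "pl"}` fails -- the same mechanism can then enforce semantic as well as grammatical constraints.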

    Tactic-Based Modeling of Cognitive Inference on Logically Structured Notation

    Computational (algorithmic) models of high-level cognitive inference tasks such as logical inference, mathematical inference, and decision making can have both theoretical and practical impact. They can improve our theoretical understanding of how people think and also provide practical direction for applications such as automated reasoning systems, systems attuned to user-interaction in decision-critical environments, and computer-aided education. To support those benefits, cognitive models need to be detailed, compositional, based in well-understood mathematics, and, to whatever extent possible, descriptively accurate. We introduce a new, interdisciplinary approach that could be used to develop cognitive models of high-level inference with these properties. Two significant aspects of this approach are tactics and eyetracking methods. Tactics are used to express high-level inferences in fully formalized mathematics for automated theorem proving systems; eyetracking methods provide insight into real-time and microcognitive information processing by permitting analysis of the visual attention of people performing cognitive tasks. Combining tactics and eyetracking methods with traditional techniques from applied logic, artificial intelligence, and cognitive science can result in more deeply detailed and accurate cognitive models. We demonstrate the feasibility of this new approach to modeling by describing its application to a calculational logic system that supports schematic reasoning via metalinguistic operations (such as textual substitution) without resorting to higher-order logic. We discuss several computational, psychological, and pedagogical insights that resulted from this approach, and we present a detailed, tactic-based model of calculational logic inference. 
Specific results include: an explanation of calculational logic as a formalized metalogic; a tactic-based implementation of calculational logic inference; some pedagogical observations on the teaching of calculational logic; and experimental results that demonstrate that eyetracking methods can provide insight into theorem proving that could not be achieved by studies of written work alone.
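The metalinguistic operation at the heart of calculational logic, textual substitution, can be sketched over expressions encoded as nested tuples (an illustrative encoding, not the paper's implementation; it assumes operator names are distinct from variable names):

```python
def subst(expr, var, replacement):
    """Textual substitution expr[var := replacement].

    Expressions are nested tuples (op, arg1, arg2, ...) with
    variables represented as bare strings.
    """
    if expr == var:
        return replacement
    if isinstance(expr, tuple):
        return tuple(subst(e, var, replacement) for e in expr)
    return expr

# Instantiate the schematic theorem  p \/ ~p  with  p := (q /\ r):
lem = ("or", "p", ("not", "p"))
instance = subst(lem, "p", ("and", "q", "r"))
```

This is exactly the kind of schematic reasoning the abstract mentions: the substitution is performed at the metalevel, so the object logic itself never needs higher-order quantification.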

    Visualizing Quantum Circuit Probability -- estimating computational action for quantum program synthesis

    This research applies concepts from algorithmic probability to Boolean and quantum combinatorial logic circuits. A tutorial-style introduction to states and various notions of state complexity is presented. Thereafter, the probability of states in the circuit model of computation is defined. Classical and quantum gate sets are compared to select some characteristic sets. The reachability and expressibility of these gate sets in a space-time-bounded setting are enumerated and visualized. These results are studied in terms of computational resources, universality, and quantum behavior. The article suggests how applications like geometric quantum machine learning, novel quantum algorithm synthesis, and quantum artificial general intelligence can benefit from studying circuit probabilities.
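The idea of space-time-bounded reachability for a gate set can be sketched classically with a toy reversible gate set on a two-bit register (an illustrative construction, not the article's quantum setting):

```python
# Classical reversible gates on a 2-bit register, as maps on states.
GATES = {
    "X0":   lambda s: (1 - s[0], s[1]),     # NOT on wire 0
    "X1":   lambda s: (s[0], 1 - s[1]),     # NOT on wire 1
    "CX01": lambda s: (s[0], s[1] ^ s[0]),  # CNOT: wire 0 controls wire 1
}

def reachable(start, depth):
    """All states reachable from `start` using at most `depth` gates."""
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {g(s) for s in frontier for g in GATES.values()} - seen
        seen |= frontier
    return seen
```

From the state (0, 0), one gate reaches three of the four states and two gates reach all four; counting how many circuits produce each state, rather than merely whether it is reachable, is the step toward the circuit probabilities the article studies.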

    PHILOSOPHICAL APPROACHES TO EVALUATING CRITICAL THINKING AS DIMINISHED EMPATHY: A QUALITATIVE ANALYSIS OF NEWS FRAMING OF STUDENT LOAN FORGIVENESS

    Critical thinking has long been recognized across disciplines as being rooted solely in problem-solving and logical argument construction. Using Miranda Fricker’s Epistemic Injustice: Power and the Ethics of Knowing as its core theoretical framework, this study aims to deconstruct the ways in which news framing has shaped critical thinking over time, exploring how thinking has been socially understood in an otherwise largely technologically immersed world. Using a rhetorical criticism approach, 33 news articles and segments are analyzed from a variety of popular news sources across several commonly used platforms. Findings indicate that framing bias echoes hermeneutical injustice, propagating a systematic devaluation of individuated experience through the use of numeric abstraction. Future research directions include exploring methods of cultural shift to reconsider empathy and creativity as an integral part of critical thinking, as an extension of mathematics and logic.

    Logics of Responsibility

    The study of responsibility is a complicated matter. The term is used in different ways in different fields, and it is easy to engage in everyday discussions as to why someone should be considered responsible for something. Typically, the backdrop of these discussions involves social, legal, moral, or philosophical problems. A clear pattern in all these spheres is the intent of issuing standards for when---and to what extent---an agent should be held responsible for a state of affairs. This is where logic lends a hand. The development of expressive logics---to reason about agents' decisions in situations with moral consequences---involves devising unequivocal representations of components of behavior that are highly relevant to systematic responsibility attribution and to systematic blame-or-praise assignment. To put it plainly, expressive syntactic-and-semantic frameworks help us analyze responsibility-related problems in a methodical way. This thesis builds a formal theory of responsibility. The main tool used toward this aim is modal logic and, more specifically, a class of modal logics of action known as stit theory. The underlying motivation is to provide theoretical foundations for using symbolic techniques in the construction of ethical AI. Thus, this work constitutes a contribution to formal philosophy and symbolic AI. The thesis's methodology consists in the development of stit-theoretic models and languages to explore the interplay between the following components of responsibility: agency, knowledge, beliefs, intentions, and obligations. Said models are integrated into a framework that is rich enough to provide logic-based characterizations for three categories of responsibility: causal, informational, and motivational responsibility. The thesis is structured as follows. Chapter 2 discusses at length stit theory, a logic that formalizes the notion of agency in the world over an indeterministic conception of time known as branching time.
The idea is that agents act by constraining possible futures to definite subsets. On the road to formalizing informational responsibility, Chapter 3 extends stit theory with traditional epistemic notions (knowledge and belief). Thus, the chapter formalizes important aspects of agents' reasoning in the choice and performance of actions. In a context of responsibility attribution and excusability, Chapter 4 extends epistemic stit theory with measures of optimality of actions that underlie obligations. In essence, this chapter formalizes the interplay between agents' knowledge and what they ought to do. On the road to formalizing motivational responsibility, Chapter 5 adds intentions and intentional actions to epistemic stit theory and reasons about the interplay between knowledge and intentionality. Finally, Chapter 6 merges the previous chapters' formalisms into a rich logic that is able to express and model different modes of the aforementioned categories of responsibility. Technically, the most important contributions of this thesis lie in the axiomatizations of all the introduced logics. In particular, the proofs of soundness and completeness results involve long, step-by-step procedures that make use of novel techniques.
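The core stit ("seeing to it that") operator the thesis builds on can be sketched at a single moment: an agent sees to it that phi at a history iff phi holds throughout the choice cell containing that history. A Chellas-style toy model, not the thesis's full framework (the histories, partition, and proposition below are all hypothetical):

```python
# Histories through one moment, partitioned by agent i's choices:
# each cell collects the histories agent i's choice cannot tell apart.
choice_i = [{"h1", "h2"}, {"h3", "h4"}]

def stit(choice_partition, phi, h):
    """[i stit] phi at history h: phi holds at every history in the
    choice cell that the agent selects by realizing h."""
    cell = next(c for c in choice_partition if h in c)
    return all(phi(h2) for h2 in cell)

# phi: "the window is open" holds on h1, h2 and h3, but not h4.
phi = lambda h: h in {"h1", "h2", "h3"}
```

At h1 the agent sees to it that phi (phi holds on all of {h1, h2}); at h3 it does not, because h4 lies in the same cell -- there phi is true by circumstance, not guaranteed by the agent's choice.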

    Ptarithmetic

    The present article introduces ptarithmetic (short for "polynomial time arithmetic") -- a formal number theory similar to the well known Peano arithmetic, but based on the recently born computability logic (see http://www.cis.upenn.edu/~giorgi/cl.html) instead of classical logic. The formulas of ptarithmetic represent interactive computational problems rather than just true/false statements, and their "truth" is understood as existence of a polynomial time solution. The system of ptarithmetic elaborated in this article is shown to be sound and complete. Sound in the sense that every theorem T of the system represents an interactive number-theoretic computational problem with a polynomial time solution and, furthermore, such a solution can be effectively extracted from a proof of T. And complete in the sense that every interactive number-theoretic problem with a polynomial time solution is represented by some theorem T of the system. The paper is self-contained, and can be read without any previous familiarity with computability logic.

    Automated Deduction – CADE 28

    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.