
    Don't Blame Distributional Semantics if it can't do Entailment

    Distributional semantics has had enormous empirical success in Computational Linguistics and Cognitive Science in modeling various semantic phenomena, such as semantic similarity, and distributional models are widely used in state-of-the-art Natural Language Processing systems. However, the theoretical status of distributional semantics within a broader theory of language and cognition is still unclear: What does distributional semantics model? Can it be, on its own, a fully adequate model of the meanings of linguistic expressions? The standard answer is that distributional semantics is not fully adequate in this regard, because it falls short on some of the central aspects of formal semantic approaches: truth conditions, entailment, reference, and certain aspects of compositionality. We argue that this standard answer rests on a misconception: These aspects do not belong in a theory of expression meaning, they are instead aspects of speaker meaning, i.e., communicative intentions in a particular context. In a slogan: words do not refer, speakers do. Clearing this up enables us to argue that distributional semantics on its own is an adequate model of expression meaning. Our proposal sheds light on the role of distributional semantics in a broader theory of language and cognition, its relationship to formal semantics, and its place in computational models.

    Comment: To appear in Proceedings of the 13th International Conference on Computational Semantics (IWCS 2019), Gothenburg, Sweden.

    Suszko's Problem: Mixed Consequence and Compositionality

    Suszko's problem is the problem of finding the minimal number of truth values needed to semantically characterize a syntactic consequence relation. Suszko proved that every Tarskian consequence relation can be characterized using only two truth values. Malinowski showed that this number can equal three if some of Tarski's structural constraints are relaxed. By so doing, Malinowski introduced a case of so-called mixed consequence, allowing the notion of a designated value to vary between the premises and the conclusions of an argument. In this paper we give a more systematic perspective on Suszko's problem and on mixed consequence. First, we prove general representation theorems relating structural properties of a consequence relation to their semantic interpretation, uncovering the semantic counterpart of substitution-invariance, and establishing that (intersective) mixed consequence is fundamentally the semantic counterpart of the structural property of monotonicity. We use those to derive maximum-rank results proved recently in a different setting by French and Ripley, as well as by Blasio, Marcos and Wansing, for logics with various structural properties (reflexivity, transitivity, none, or both). We strengthen these results into exact rank results for non-permeable logics (roughly, those which distinguish the role of premises and conclusions). We discuss the underlying notion of rank, and the associated reduction proposed independently by Scott and Suszko. As emphasized by Suszko, that reduction fails to preserve compositionality in general, meaning that the resulting semantics is no longer truth-functional. 
    We propose a modification of that notion of reduction, allowing us to prove that over compact logics with what we call regular connectives, rank results are maintained even if we request the preservation of truth-functionality and additional semantic properties.

    Keywords: Suszko's thesis; truth value; logical consequence; mixed consequence; compositionality; truth-functionality; many-valued logic; algebraic logic; substructural logics; regular connective.
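    For orientation, the Tarskian structural constraints at issue, and the mixed-consequence idea, can be stated in a standard textbook form (this formulation is not taken from the paper itself):

```latex
% Tarskian structural properties of a consequence relation \vdash:
\text{Reflexivity:}\quad \varphi \vdash \varphi
\qquad
\text{Monotonicity:}\quad \Gamma \vdash \Delta \ \Rightarrow\ \Gamma, \Gamma' \vdash \Delta, \Delta'
\qquad
\text{Cut:}\quad \Gamma \vdash \Delta, \varphi \ \text{ and }\ \Gamma, \varphi \vdash \Delta \ \Rightarrow\ \Gamma \vdash \Delta

% Mixed consequence: premises and conclusions are evaluated against possibly
% different designated sets D_p and D_c of truth values (D_p = D_c is the pure case):
\Gamma \models \Delta
  \iff
\forall v:\ \bigl(\forall \gamma \in \Gamma,\ v(\gamma) \in D_p\bigr)
  \Rightarrow
  \bigl(\exists \delta \in \Delta,\ v(\delta) \in D_c\bigr)
```

    Malinowski's three-valued q-consequence and strict-tolerant consequence are standard instances in which D_p and D_c differ.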

    Logical Analysis and Later Mohist Logic: Some Comparative Reflections [abstract]

    Any philosophical method that treats the analysis of the meaning of a sentence or expression in terms of a decomposition into a set of conceptually basic constituent parts must do some theoretical work to explain the puzzles of intensionality. This is because intensional phenomena appear to violate the principle of compositionality, and the assumption of compositionality is the principal justification for thinking that an analysis will reveal the real semantical import of a sentence or expression through a method of decomposition. Accordingly, a natural strategy for dealing with intensionality is to argue that it is really just an isolable, aberrant class of linguistic phenomena that poses no general threat to the thesis that meaning is basically compositional. On the other hand, the later Mohists give us good reason to reject this view. What we learn from them is that there may be basic limitations in any analytical technique that presupposes that meaning is perspicuously represented only when it has been fully decomposed into its constituent parts. The purpose of this paper is to (a) explain why the Mohists found the issue of intensionality to be so important in their investigations of language, and (b) defend the view that Mohist insights reveal basic limitations in any technique of analysis that is uncritically applied with a decompositional approach in mind, as are those often pursued in the West in the context of more general epistemological and metaphysical programs.

    How to Hintikkize a Frege

    The paper deals with the main contribution of the Finnish logician Jaakko Hintikka: epistemic logic, in particular the 'static' version of the system, based on the formal analysis of the concepts of knowledge and belief. I propose to take a different look at this philosophical logic and to consider it from the opposite point of view, that of the philosophy of logic. First, two theories of meaning are described and associated with two competing theories of linguistic competence. In a second step, I draw the conclusion that Hintikka's epistemic logic constitutes a sort of internalisation of meaning, through the introduction of epistemic modal operators into an object language. In this respect, viewing meaning as the result of a linguistic competence makes epistemic logic nothing less than a logic of unified meaning and understanding.
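    For orientation, the epistemic modal operators mentioned here are standardly given Kripke-style truth conditions (a textbook formulation, not taken from the paper):

```latex
% Knowledge operator K_a over a model M = (W, \{R_a\}_{a \in A}, V):
% K_a \varphi holds at world w iff \varphi holds at every world
% that agent a considers possible from w.
M, w \models K_a \varphi
  \iff
\forall v \in W \,\bigl(\, w \, R_a \, v \;\Rightarrow\; M, v \models \varphi \,\bigr)
```

    Here R_a is agent a's epistemic accessibility relation; the belief operator B_a is interpreted the same way over a (typically serial, non-reflexive) doxastic relation.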

    Semantic universals and typology


    Language, logic and ontology: uncovering the structure of commonsense knowledge

    The purpose of this paper is twofold: (i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and (ii) we argue that natural language, which is the best known theory of our (shared) commonsense knowledge, should itself be used as a guide to discovering the structure of commonsense knowledge. In addition to suggesting a systematic method for the discovery of the structure of commonsense knowledge, the method we propose seems also to provide an explanation for a number of phenomena in natural language, such as metaphor, intensionality, and the semantics of nominal compounds. Admittedly, our ultimate goal is quite ambitious: nothing less than the systematic 'discovery' of a well-typed ontology of commonsense knowledge, and the subsequent formulation of the long-awaited goal of a meaning algebra.

    Does the Principle of Compositionality Explain Productivity? For a Pluralist View of the Role of Formal Languages as Models

    One of the main motivations for having a compositional semantics is the account of the productivity of natural languages. Formal languages are often part of the account of productivity, i.e., of how beings with finite capacities are able to produce and understand a potentially infinite number of sentences, by offering a model of this process. This account of productivity consists in the generation of proofs in a formal system, which is taken to represent the way speakers grasp the meaning of an indefinite number of sentences. The informational basis is restricted to what is represented in the lexicon. This constraint is considered a requirement for the account of productivity, or at least of an important feature of productivity, namely, that we can automatically grasp the meaning of a huge number of complex expressions, far beyond what can be memorized. However, empirical results in psycholinguistics, and especially particular patterns of ERP, show that the brain integrates information from different sources very fast, without any felt effort on the part of the speaker. This shows that formal procedures do not explain productivity. However, formal models are still useful in the account of how we get at the semantic value of a complex expression, once we have the meanings of its parts, even if there is no formal explanation of how we get at those meanings. A practice-oriented view of modeling gives an adequate interpretation of this result: formal compositional semantics may be a useful model for some explanatory purposes concerning natural languages, without being a good model for dealing with other explananda.
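    The compositional picture the abstract discusses can be illustrated with a minimal sketch (not from the paper): a finite lexicon plus one composition rule (function application) assigns meanings to unboundedly many expressions, which is the core of the productivity argument. All names below (LEXICON, interpret) are illustrative.

```python
# Minimal sketch of a compositional semantics: meanings of complex
# expressions are computed from the meanings of their parts alone.

# Lexicon: names denote individuals; relational words denote curried functions.
LEXICON = {
    "alice": "alice",
    "bob": "bob",
    "knows": lambda obj: lambda subj: ("knows", subj, obj),
    "and": lambda right: lambda left: ("and", left, right),
}

def interpret(tree):
    """The meaning of a node depends only on the meanings of its parts
    and the way they are combined (here: function application)."""
    if isinstance(tree, str):
        return LEXICON[tree]
    left, right = interpret(tree[0]), interpret(tree[1])
    # Apply whichever daughter denotes a function to the other one.
    return left(right) if callable(left) else right(left)

# "alice knows bob", parsed as [alice [knows bob]]:
print(interpret(("alice", ("knows", "bob"))))   # -> ('knows', 'alice', 'bob')

# Larger sentences reuse the same finite resources -- productivity:
s1 = ("alice", ("knows", "bob"))
s2 = ("bob", ("knows", "alice"))
print(interpret((s1, ("and", s2))))
```

    The point at issue in the abstract is whether such a lexicon-restricted procedure models what speakers actually do, not whether it can be defined.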