9 research outputs found

    Montague Grammar Induction

    We propose a computational model for inducing full-fledged combinatory categorial grammars from behavioral data. This model contrasts with prior computational models of selection in representing syntactic and semantic types as structured (rather than atomic) objects, enabling direct interpretation of the modeling results relative to standard formal frameworks. We investigate the grammar our model induces when fit to a lexicon-scale acceptability judgment dataset, Mega Acceptability, focusing in particular on the types our model assigns to clausal complements and the predicates that select them.
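
    The contrast between structured and atomic types can be made concrete with a small sketch (hypothetical code, not the authors' implementation): if CCG categories are encoded as recursive objects whose parts can be inspected, an induced type for a clause-embedding predicate is directly readable against standard categorial notation.

```python
from dataclasses import dataclass
from typing import Union

# A CCG category is either a primitive or a functor built from two categories.
# Representing categories this way (rather than as opaque symbols) is what
# lets a model's output be read off against standard categorial grammars.

@dataclass(frozen=True)
class Prim:
    name: str          # e.g. "S", "NP", or a clause type like "S[that]"

@dataclass(frozen=True)
class Functor:
    result: "Cat"      # category produced after combination
    slash: str         # "/" seeks its argument to the right, "\\" to the left
    arg: "Cat"         # category of the sought argument

Cat = Union[Prim, Functor]

def show(c: Cat) -> str:
    if isinstance(c, Prim):
        return c.name
    return f"({show(c.result)}{c.slash}{show(c.arg)})"

# A clause-embedding verb like "believe": it selects a that-clause to its
# right and an NP subject to its left.
believe = Functor(Functor(Prim("S"), "\\", Prim("NP")), "/", Prim("S[that]"))
print(show(believe))   # ((S\NP)/S[that])
```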

    The end of vagueness : technological epistemicism, surveillance capitalism, and explainable Artificial Intelligence

    Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability to identify and classify most of the occasions in which languages are used. We show that the knowability of this information is incompatible with what a certain theory of vagueness—epistemicism—says about vagueness. We argue that one way the epistemicist could respond to this threat is to claim that this process brought about the end of vagueness. However, we suggest an alternative interpretation, namely that epistemicism is false, but there is a weaker doctrine we dub technological epistemicism, which is the view that vagueness is due to ignorance of linguistic usage, but the ignorance can be overcome. The idea is that knowing more of the relevant data and how to process it enables us to know the semantic values of our words and sentences with higher confidence and precision. Finally, we argue that humans are probably not going to believe what future AI algorithms tell us about the sharp boundaries of our vague words unless the AI involved can be explained in terms understandable by humans. That is, if people are going to accept that AI can tell them about the sharp boundaries of the meanings of their words, then it is going to have to be XAI.
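
    The paper's empirical premise, that large-scale usage data could in principle pin down sharp boundaries for vague predicates, can be illustrated with a toy sketch (entirely invented data and method, not anything from the paper): fit a classifier to judgments about a vague word such as "tall" and read the candidate boundary off the 50% crossover.

```python
import numpy as np

# Toy illustration of "technological epistemicism": given (height, judged-tall)
# usage data for the vague predicate "tall", fit a logistic curve and treat the
# 50% crossover as the candidate sharp boundary. All data here are invented.

rng = np.random.default_rng(0)
heights = rng.uniform(150, 200, 5000)                  # speaker heights in cm
p_true = 1 / (1 + np.exp(-(heights - 180) / 4))        # hidden usage pattern
judged_tall = rng.random(5000) < p_true                # noisy observed usage

# Fit logistic regression by plain gradient ascent on the log-likelihood.
w, b = 0.0, 0.0
x = (heights - heights.mean()) / heights.std()
for _ in range(5000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w += 1.0 * np.mean((judged_tall - p) * x)
    b += 1.0 * np.mean(judged_tall - p)

# The fitted boundary is where w*x + b = 0, mapped back to centimetres.
boundary = -b / w * heights.std() + heights.mean()
print(f"estimated sharp boundary for 'tall': {boundary:.1f} cm")
```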

    Gradient morphophonology: Evidence from Uyghur vowel harmony

    For the Structuralists and early Generativists (e.g. Bloomfield 1933; Chomsky & Halle 1968), all grammatical knowledge was by definition discrete and categorical. Since phonetic patterns are gradient, early work argued that phonetics was extra-grammatical. However, a significant body of work has since shown that phonetic patterns are language-specific and must constitute part of a speaker's knowledge about their language (e.g. Keating 1985). As a result, linguistic knowledge is not ontologically categorical. For other areas of the grammar, though, much work continues to assume that linguistic knowledge is categorical. In this paper, I investigate the categoricality of phonological patterns using acoustic vowel harmony data from Uyghur. By comparing subphonemic variation in Uyghur with attested patterns of phonetic reduction and interpolation, I demonstrate that gradience is not always derivable from phonetic forces. On these grounds I argue that vowel harmony, and by extension phonology, may be gradient. Furthermore, the claim that gradience plays a larger role in linguistic representations is also supported by a number of descriptive works, which suggest that gradient harmony may occur in a wide range of languages. Building on this experimental and typological evidence, I thus contend that gradience is not restricted to phonetics but pervades both the phonological and phonetic modules of the grammar.
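
    A schematic sketch of the paper's comparative logic, with invented numbers standing in for the acoustic measurements: if the observed distance-decay of harmony in suffix vowels matches neither a categorical (flat) pattern nor a stipulated coarticulatory interpolation baseline, the gradience is plausibly phonological rather than phonetic.

```python
import numpy as np

# Schematic version of the comparison (invented numbers): if suffix-vowel
# backness (indexed by normalized F2) shifts with the harmony trigger but
# decays with distance, harmony looks gradient. Categorical phonology
# predicts a flat, distance-insensitive shift; pure phonetic interpolation
# predicts a decay fully determined by the trailing-off of coarticulation.

distances = np.array([1, 2, 3, 4])          # syllables from the trigger vowel

# Invented mean F2 shifts (z-scores) of suffix vowels after back-vowel roots.
observed_shift = np.array([-1.10, -0.85, -0.62, -0.45])

# Baseline 1: categorical harmony -> constant shift at every distance.
categorical = np.full(4, observed_shift.mean())

# Baseline 2: coarticulatory interpolation -> shift halves each syllable
# (a stipulated decay rate, standing in for measured reduction patterns).
interpolation = observed_shift[0] * (0.5 ** (distances - 1))

for name, pred in [("categorical", categorical), ("interpolation", interpolation)]:
    rmse = np.sqrt(np.mean((observed_shift - pred) ** 2))
    print(f"{name:14s} RMSE vs. observed: {rmse:.3f}")
# If neither baseline fits, the decay is gradient but not phonetically
# derivable -- the shape of the argument for gradient phonology.
```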

    Linguistic Competence and New Empiricism in Philosophy and Science

    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. Similarly, linguistic competence in this framework was regarded as innate, rule-governed, domain-specific, and fundamentally different from performance, i.e., the idiosyncrasies and factors governing linguistic behavior. I analyze state-of-the-art connectionist deep learning models of natural language processing, most notably large language models, to see what they can tell us about linguistic competence. Deep learning is a statistical pattern-classification technique in which artificial intelligence researchers train artificial neural networks containing multiple layers on gargantuan amounts of textual and/or visual data. I argue that these models suggest that linguistic competence should be construed as stochastic, pattern-based, and stemming from domain-general mechanisms. Moreover, I distinguish syntactic from semantic competence, and I show for each the ramifications of endorsing a connectionist research program as opposed to the traditional symbolic cognitive science and transformational-generative grammar. I assemble a unifying front consisting of usage-based theories, construction grammar, and embodied approaches to cognition to show that the more multimodal and diverse such models are in their architectural features and training data, the stronger the case for a connectionist account of linguistic competence. I also propose discarding the competence versus performance distinction as theoretically inferior, clearing the way for the novel, integrative account of linguistic competence, rooted in connectionism and empiricism, that I propose and defend in the dissertation.
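
    One concrete reading of "stochastic, pattern-based" competence can be sketched with an off-the-shelf pretrained model (GPT-2 via Hugging Face transformers is an arbitrary choice here, not one made in the dissertation): a language model's string probabilities yield graded acceptability preferences for minimal pairs without consulting any explicit rule.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# A trained language model assigns probabilities to strings, and grammatical
# strings tend to receive higher probability than minimally different
# ungrammatical ones. The model choice (GPT-2) is incidental.

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_prob(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns mean token-level
        # cross-entropy; negate and scale to get total log-probability.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

pair = ("The keys to the cabinet are on the table.",
        "The keys to the cabinet is on the table.")
for s in pair:
    print(f"{log_prob(s):9.2f}  {s}")
# A higher score for the agreeing sentence reflects graded, usage-derived
# "knowledge" of agreement -- no hardwired rule is consulted.
```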

    A case for deep learning in semantics: Response to Pater

    No full text