
    ā€œMirativityā€ does not exist: įø„dug in ā€œLhasaā€ Tibetan and other suspects

    Largely through the efforts of Scott DeLancey the grammatical category "mirative" has gained currency in linguistics. DeLancey bases his elaboration of this category on a misunderstanding of the semantics of ḥdug in "Lhasa" Tibetan. Rather than showing "surprising information", linguists working on Tibetan have long described ḥdug as a sensory evidential. Much of the evidence DeLancey and Aikhenvald present for mirativity in other languages is also susceptible to explanation in terms of sensory evidence or appears close to Lazard's "mediative" (1999) or Johanson's "indirective" (2000). Until an independent grammatical category for "new information" is described in a way which precludes analysis in terms of sensory evidence or other well-established evidential categories, mirativity should be excluded from the descriptive arsenal of linguistic analysis.

    Modalities in homotopy type theory

    Univalent homotopy type theory (HoTT) may be seen as a language for the category of ∞-groupoids. It is being developed as a new foundation for mathematics and as an internal language for (elementary) higher toposes. We develop the theory of factorization systems, reflective subuniverses, and modalities in homotopy type theory, including their construction using a "localization" higher inductive type. This produces in particular the (n-connected, n-truncated) factorization system as well as internal presentations of subtoposes, through lex modalities. We also develop the semantics of these constructions.
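    As a rough illustration of what the abstract means by a modality, one common presentation packages an operator ◯ with a unit map and an induction principle into modal families. The sketch below is illustrative only: the field names are invented, and the coherence and uniqueness conditions that a genuine modality requires are omitted.

    ```lean
    -- Hypothetical sketch of a modality as an operator ◯ with a unit η
    -- and a "modal induction" rule; names are illustrative, not from
    -- any particular library, and coherence laws are omitted.
    structure ModalitySketch where
      op   : Type → Type                  -- the modal operator ◯
      unit : {A : Type} → A → op A        -- η : A → ◯A
      -- To map out of ◯A into a family of modal types, it suffices
      -- to say what happens on elements of A (via the unit).
      ind  : {A : Type} → {B : op A → Type} →
        ((a : A) → op (B (unit a))) → (x : op A) → op (B x)
    ```

    The n-truncation operator mentioned in the abstract is the standard example: ◯A = ‖A‖ₙ with η the truncation constructor.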

    From Frequency to Meaning: Vector Space Models of Semantics

    Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
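    The first of the three matrix classes the abstract names, the term-document matrix, can be sketched in a few lines: rows are terms, columns are documents, and documents that share terms end up with similar column vectors. This is a minimal illustration with made-up documents, not code from the surveyed projects.

    ```python
    # Minimal term-document VSM sketch: count matrix + cosine similarity.
    # The documents and variable names here are invented for illustration.
    import math
    from collections import Counter

    docs = {
        "d1": "the cat sat on the mat",
        "d2": "the cat chased the mouse",
        "d3": "stocks rose on strong earnings",
    }

    # Vocabulary (row labels) and per-document term counts (columns).
    vocab = sorted({w for text in docs.values() for w in text.split()})
    counts = {d: Counter(text.split()) for d, text in docs.items()}

    def doc_vector(d):
        """Column of the term-document matrix for document d."""
        return [counts[d][t] for t in vocab]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    sim_cat = cosine(doc_vector("d1"), doc_vector("d2"))
    sim_mix = cosine(doc_vector("d1"), doc_vector("d3"))
    assert sim_cat > sim_mix  # the two cat documents are more alike
    ```

    Word-context and pair-pattern matrices follow the same pattern with different rows and columns (terms × neighboring terms, and word pairs × joining patterns, respectively).
    
    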