‘Mirativity’ does not exist: ḥdug in ‘Lhasa’ Tibetan and other suspects
Largely through the efforts of Scott DeLancey, the grammatical category ‘mirative’ has gained currency in linguistics. DeLancey bases his elaboration of this category on a misunderstanding of the semantics of ḥdug in ‘Lhasa’ Tibetan. Rather than marking ‘surprising information’, ḥdug has long been described by linguists working on Tibetan as a sensory evidential. Much of the evidence DeLancey and Aikhenvald present for mirativity in other languages is also susceptible to explanation in terms of sensory evidence, or appears close to Lazard’s ‘mediative’ (1999) or Johanson’s ‘indirective’ (2000). Until an independent grammatical category for ‘new information’ is described in a way which precludes analysis in terms of sensory evidence or other well-established
evidential categories, mirativity should be excluded from the descriptive arsenal of linguistic analysis.
Modalities in homotopy type theory
Univalent homotopy type theory (HoTT) may be seen as a language for the
category of ∞-groupoids. It is being developed as a new foundation for
mathematics and as an internal language for (elementary) higher toposes. We
develop the theory of factorization systems, reflective subuniverses, and
modalities in homotopy type theory, including their construction using a
"localization" higher inductive type. This produces in particular the
(n-connected, n-truncated) factorization system as well as internal
presentations of subtoposes, through lex modalities. We also develop the
semantics of these constructions.
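As a rough illustration of the central notion in this abstract, the shape of a reflective subuniverse can be sketched as a structure in Lean-style notation. This is a simplified sketch, not the paper's definition: the field names (`isModal`, `O`, `unit`, `extend`) are illustrative, the universal property is stated only as an extension operation with a computation rule rather than as an equivalence, and a modality would additionally require the modal types to be closed under dependent sums.

```lean
universe u

-- A simplified sketch of a reflective subuniverse: a class of "modal"
-- types, a reflector O, and a unit map, such that any map from A into
-- a modal type B extends along the unit O A → B.
structure ReflectiveSubuniverse where
  isModal : Type u → Prop            -- which types count as modal
  O       : Type u → Type u          -- the reflector (modal operator)
  O_modal : ∀ A, isModal (O A)       -- O A is always modal
  unit    : ∀ A, A → O A             -- the unit of the reflection
  -- universal property, weakened to existence of an extension:
  extend  : ∀ {A B : Type u}, isModal B → (A → B) → O A → B
  -- the extension agrees with the original map on the image of the unit
  extend_unit : ∀ {A B : Type u} (hB : isModal B) (f : A → B) (a : A),
      extend hB f (unit A a) = f a
```

In the paper's setting, `extend` together with a uniqueness condition is packaged as the statement that precomposition with the unit is an equivalence, and the lex modalities mentioned in the abstract are those reflections that additionally preserve pullbacks.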
From Frequency to Meaning: Vector Space Models of Semantics
Computers understand very little of the meaning of human language. This
profoundly limits our ability to give instructions to computers, the ability of
computers to explain their actions to us, and the ability of computers to
analyse and process text. Vector space models (VSMs) of semantics are beginning
to address these limits. This paper surveys the use of VSMs for semantic
processing of text. We organize the literature on VSMs according to the
structure of the matrix in a VSM. There are currently three broad classes of
VSMs, based on term-document, word-context, and pair-pattern matrices, yielding
three classes of applications. We survey a broad range of applications in these
three categories and we take a detailed look at a specific open source project
in each category. Our goal in this survey is to show the breadth of
applications of VSMs for semantics, to provide a new perspective on VSMs for
those who are already familiar with the area, and to provide pointers into the
literature for those who are less familiar with the field.
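A toy illustration (not drawn from the paper) of the first matrix class this survey describes: a term-document matrix, in which cell (t, d) counts occurrences of term t in document d, with document similarity computed as the cosine between column vectors. The two example documents are invented for demonstration.

```python
import math
from collections import Counter

# Two tiny example documents (hypothetical data).
docs = {
    "d1": "the cat sat on the mat",
    "d2": "the dog sat on the log",
}

# Shared vocabulary and per-document term counts
# (the columns of a term-document matrix).
terms = sorted({w for text in docs.values() for w in text.split()})
counts = {d: Counter(text.split()) for d, text in docs.items()}

def column(d):
    """Document d as a count vector over the shared vocabulary."""
    return [counts[d][t] for t in terms]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm

sim = cosine(column("d1"), column("d2"))  # shared terms: "the", "sat", "on"
```

Here `sim` comes out to 0.75: the documents share the high-weight term "the" plus "sat" and "on", which dominates the two distinguishing content words. The word-context and pair-pattern matrices the survey describes follow the same recipe with different row/column choices.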
- …