    Open System Categorical Quantum Semantics in Natural Language Processing

    Originally inspired by categorical quantum mechanics (CQM; Abramsky and Coecke, LiCS'04), the categorical compositional distributional model of natural language meaning of Coecke, Sadrzadeh and Clark provides a conceptually motivated procedure to compute the meaning of a sentence, given its grammatical structure within a Lambek pregroup and a vectorial representation of the meaning of its parts. The predictions of this first model have outperformed those of other models in mainstream empirical language processing tasks on large-scale data. Moreover, just as CQM allows for varying the model in which quantum axioms are interpreted, one can also vary the model in which word meaning is interpreted. In this paper we show that further developments in categorical quantum mechanics are relevant to natural language processing too. Firstly, Selinger's CPM-construction allows for explicitly taking lexical ambiguity into account and for distinguishing between the two inherently different notions of homonymy and polysemy. In terms of the model in which we interpret word meaning, this means a passage from the vector space model to density matrices. Despite this change of model, standard empirical methods for comparing meanings can be easily adopted, which we demonstrate by a small-scale experiment on real-world data. This experiment moreover provides preliminary evidence for the validity of our proposed new model of word meaning. Secondly, commutative classical structures, as well as their non-commutative counterparts that arise in the image of the CPM-construction, allow for encoding relative pronouns, verbs and adjectives. Finally, iteration of the CPM-construction, something that has no counterpart in the quantum realm, enables one to accommodate both entailment and ambiguity.
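
    A minimal sketch of the density-matrix representation this abstract describes, under stated assumptions: the sense vectors, mixing weights and function names below are invented toy data, not taken from the paper. An ambiguous word becomes a weighted mixture of sense projectors, and meanings are compared with a normalised trace inner product, one standard density-matrix analogue of cosine similarity.

```python
# Hedged sketch, not code from the paper: an ambiguous word as a density
# matrix (a weighted mixture of sense projectors), in the spirit of the
# CPM-construction. All vectors, weights and names are hypothetical.
import numpy as np

def density_matrix(sense_vectors, weights):
    """rho = sum_i w_i |v_i><v_i| over normalised sense vectors."""
    dim = len(sense_vectors[0])
    rho = np.zeros((dim, dim))
    for v, w in zip(sense_vectors, weights):
        v = v / np.linalg.norm(v)
        rho += w * np.outer(v, v)
    return rho

def similarity(rho, sigma):
    """Normalised trace inner product: a density-matrix analogue of cosine."""
    return np.trace(rho @ sigma) / np.sqrt(
        np.trace(rho @ rho) * np.trace(sigma @ sigma))

# 'bank' as a homonym: an even mixture of two unrelated senses.
finance, river = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
bank = density_matrix([finance, river], [0.5, 0.5])
money = density_matrix([np.array([0.9, 0.1, 0.0])], [1.0])
print(similarity(bank, money))  # leans towards the finance sense
```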

    A Corpus-based Toy Model for DisCoCat

    The categorical compositional distributional (DisCoCat) model of meaning rigorously connects distributional semantics and pregroup grammars, and has found a variety of applications in computational linguistics. From a more abstract standpoint, the DisCoCat paradigm predicates the construction of a mapping from syntax to categorical semantics. In this work we present a concrete construction of one such mapping, from a toy model of syntax for corpora annotated with constituent structure trees, to categorical semantics taking place in a category of free R-semimodules over an involutive commutative semiring R. (Comment: In Proceedings SLPCS 2016, arXiv:1608.0101)
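
    A toy sketch of the semiring-valued setting, under assumptions: the Boolean semiring B = ({0,1}, OR, AND), with the identity as involution, is one simple example of an involutive commutative semiring; the context features and helper name below are invented for illustration, not drawn from the paper.

```python
# Hedged sketch: "word vectors" in the free semimodule over the Boolean
# semiring B are maps from basis contexts to {0, 1}; the inner product uses
# semiring addition (OR) and multiplication (AND), so it records shared
# contexts rather than magnitudes. Data and names are hypothetical.
def inner(u, v):
    """Semiring inner product over B: OR over the basis of pointwise ANDs."""
    return any(u.get(b, 0) and v.get(b, 0) for b in u)

# Hypothetical context features harvested from an annotated corpus.
dog = {"barks": 1, "pet": 1}
cat = {"meows": 1, "pet": 1}
print(inner(dog, cat))  # True: 'dog' and 'cat' share the context "pet"
```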

    Translating and Evolving: Towards a Model of Language Change in DisCoCat

    The categorical compositional distributional (DisCoCat) model of meaning developed by Coecke et al. (2010) has been successful in modeling various aspects of meaning. However, it fails to model the fact that language can change. We give an approach to DisCoCat that allows us to represent language models and translations between them, enabling us to describe translations from one language to another, or changes within the same language. We unify the product space representation given in (Coecke et al., 2010) and the functorial description in (Kartsaklis et al., 2013), in a way that allows us to view a language as a catalogue of meanings. We formalize the notion of a lexicon in DisCoCat, and define a dictionary of meanings between two lexicons. All this is done within the framework of monoidal categories. We give examples of how to apply our methods, and give a concrete suggestion for compositional translation in corpora. (Comment: In Proceedings CAPNS 2018, arXiv:1811.0270)
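
    A hedged sketch of the functorial picture: a translation between two meaning spaces, modelled as a linear map, should respect the monoidal (tensor) structure. The map T and the word vectors below are toy data, not from the paper.

```python
# Hedged sketch: a "dictionary" between two lexicons as a linear map T
# between their meaning spaces. Functoriality with respect to the tensor:
# (T (x) T)(u (x) w) = T(u) (x) T(w). All data here is hypothetical.
import numpy as np

T = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.2, 0.8]])   # toy map: 3-dim source -> 2-dim target

u = np.array([1.0, 0.0, 0.5])    # two source-language word vectors
w = np.array([0.0, 1.0, 1.0])

# Translating componentwise and then tensoring agrees with translating
# the tensor product directly.
lhs = np.kron(T @ u, T @ w)
rhs = np.kron(T, T) @ np.kron(u, w)
print(np.allclose(lhs, rhs))  # True
```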

    Distributional Sentence Entailment Using Density Matrices

    The categorical compositional distributional model of Coecke et al. (2010) suggests a way to combine the grammatical composition of formal, type-logical models with the corpus-based, empirical word representations of distributional semantics. This paper contributes to the project by expanding the model to also capture entailment relations. This is achieved by extending the representations of words from points in meaning space to density operators, which are probability distributions on the subspaces of the space. A symmetric measure of similarity and an asymmetric measure of entailment are defined, where lexical entailment is measured using von Neumann entropy, the quantum variant of Kullback-Leibler divergence. Lexical entailment, combined with the composition map on word representations, provides a method to obtain entailment relations at the level of sentences. Truth-theoretic and corpus-based examples are provided. (Comment: 11 pages)
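
    An illustrative sketch of the entropic quantities the abstract mentions, under assumptions: the toy full-rank density matrices and the hyponym/hypernym pair below are invented, and the functions show the standard definitions rather than the paper's actual entailment measure.

```python
# Hedged sketch: von Neumann entropy and quantum relative entropy (the
# quantum analogue of Kullback-Leibler divergence) on toy density matrices.
import numpy as np

def matrix_log(rho):
    """Matrix logarithm via eigendecomposition (assumes positive spectrum)."""
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.log(w)) @ v.T

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log rho)."""
    return -np.trace(rho @ matrix_log(rho)).real

def relative_entropy(rho, sigma):
    """N(rho || sigma) = tr(rho (log rho - log sigma))."""
    return np.trace(rho @ (matrix_log(rho) - matrix_log(sigma))).real

animal = np.diag([0.5, 0.3, 0.2])     # broad term: more mixed, higher entropy
dog    = np.diag([0.9, 0.05, 0.05])   # narrower term
print(von_neumann_entropy(animal), von_neumann_entropy(dog))
print(relative_entropy(dog, animal))  # small when 'dog' fits inside 'animal'
```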

    A Bestiary of Sets and Relations

    Building on established literature and recent developments in the graph-theoretic characterisation of its CPM category, we provide a treatment of pure state and mixed state quantum mechanics in the category fRel of finite sets and relations. On the way, we highlight the wealth of exotic beasts that hide amongst the extensive operational and structural similarities that the theory shares with more traditional arenas of categorical quantum mechanics, such as the category fdHilb. We conclude our journey by proving that fRel is local, but not without some unexpected twists. (Comment: In Proceedings QPL 2015, arXiv:1511.0118)
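
    A hedged sketch of the setting: morphisms in fRel can be encoded as Boolean matrices, with relational composition as Boolean matrix multiplication and the relational converse (transpose) as the dagger. The concrete relation below is a toy example, not one from the paper.

```python
# Hedged sketch: finite sets and relations as Boolean matrices.
import numpy as np

def compose(S, R):
    """Relational composition: (S . R)[k, i] = OR_j (S[k, j] AND R[j, i])."""
    return (S.astype(int) @ R.astype(int)) > 0

R = np.array([[1, 0],          # a toy relation from a 2-element set
              [1, 1],          # to a 3-element set
              [0, 1]], dtype=bool)

print(compose(R.T, R))         # R^dagger . R: a relation on the 2-element set
```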

    Analysing Ambiguous Nouns and Verbs with Quantum Contextuality Tools

    Psycholinguistic research uses eye-tracking to show that polysemous words are disambiguated differently from homonymous words, and that ambiguous verbs are disambiguated differently from ambiguous nouns. Research in Compositional Distributional Semantics uses cosine distances to show that verbs are disambiguated more efficiently in the context of their subjects and objects than on their own. These two frameworks both focus on one ambiguous word at a time, and neither considers ambiguous phrases with two (or more) ambiguous words. We borrow methods and measures from Quantum Information Theory, namely the framework of Contextuality-by-Default and its degrees of contextual influence, and work with ambiguous subject-verb and verb-object phrases of English in which both the subject/object and the verb are ambiguous. We show that differences in the processing of ambiguous verbs versus ambiguous nouns, as well as between different levels of ambiguity in homonymous versus polysemous nouns and verbs, can be modelled using the averages of the degrees of their contextual influences.
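
    A hedged sketch of the cosine-based disambiguation setup the abstract refers to, using pointwise multiplication as one common composition operation (not necessarily the one used in the cited work); all vectors below are invented toy data.

```python
# Hedged sketch: compose an ambiguous verb with its subject and check which
# verb sense the composed vector is closer to. Pointwise multiplication is
# one standard composition choice; the vectors are hypothetical.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

run_physical = np.array([0.9, 0.1, 0.0])   # 'run' as in jogging
run_manage   = np.array([0.0, 0.2, 0.9])   # 'run' as in running a company
run          = run_physical + run_manage   # the ambiguous verb vector

athlete = np.array([1.0, 0.3, 0.1])
in_context = run * athlete                  # pointwise composition with subject
print(cosine(in_context, run_physical),     # high: context selects this sense
      cosine(in_context, run_manage))       # low
```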