
    A set-based reasoner for the description logic \shdlssx (Extended Version)

    We present a \ke-based implementation of a reasoner for a decidable fragment of (stratified) set theory expressing the description logic \dlssx (\shdlssx, for short). Our application solves the main TBox and ABox reasoning problems for \shdlssx. In particular, it solves the consistency problem for \shdlssx-knowledge bases represented in set-theoretic terms, and a generalization of the \emph{Conjunctive Query Answering} problem in which conjunctive queries with variables of three sorts are admitted. The reasoner, which extends and optimizes a previous prototype for the consistency checking of \shdlssx-knowledge bases (see \cite{cilc17}), is implemented in \textsf{C++}. It supports \shdlssx-knowledge bases serialized in the OWL/XML format, and it also admits rules expressed in SWRL (Semantic Web Rule Language).
    Comment: arXiv admin note: text overlap with arXiv:1804.11222, arXiv:1707.07545, arXiv:1702.0309
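    The abstract does not spell out the reasoner's algorithm, so the following Python sketch is only a toy illustration of the Conjunctive Query Answering problem it mentions, evaluated naively over a small ABox of concept and role assertions. Every individual, concept, role and query below is hypothetical, and the brute-force enumeration has nothing to do with the paper's \ke-based set-theoretic procedure.

```python
from itertools import product

# Toy ABox: concept assertions C(a) and role assertions R(a, b).
# All individuals, concepts and roles are hypothetical examples.
concept_assertions = {("Person", "alice"), ("Person", "bob"), ("Company", "acme")}
role_assertions = {("worksFor", "alice", "acme"), ("knows", "alice", "bob")}
individuals = {"alice", "bob", "acme"}

def answer_conjunctive_query(query, variables):
    """Naively enumerate variable bindings satisfying every atom.

    `query` is a list of atoms: ("C", x) for concepts, ("R", x, y) for roles,
    where x and y are variable names or individual names.
    """
    answers = []
    for binding in product(individuals, repeat=len(variables)):
        env = dict(zip(variables, binding))
        subst = lambda t: env.get(t, t)   # map variables, keep constants
        ok = True
        for atom in query:
            if len(atom) == 2:            # concept atom C(x)
                ok = (atom[0], subst(atom[1])) in concept_assertions
            else:                         # role atom R(x, y)
                ok = (atom[0], subst(atom[1]), subst(atom[2])) in role_assertions
            if not ok:
                break
        if ok:
            answers.append(env)
    return answers

# Q(x) = Person(x) AND worksFor(x, acme): who works for acme?
print(answer_conjunctive_query(
    [("Person", "x"), ("worksFor", "x", "acme")], ["x"]))
# -> [{'x': 'alice'}]
```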

    A Context-theoretic Framework for Compositionality in Distributional Semantics

    Techniques in which words are represented as vectors have proved useful in many applications in computational linguistics; however, there is currently no general semantic formalism for representing meaning in terms of vectors. We present a framework for natural language semantics in which words, phrases and sentences are all represented as vectors, based on a theoretical analysis which assumes that meaning is determined by context. In the theoretical analysis, we define a corpus model as a mathematical abstraction of a text corpus. The meaning of a string of words is assumed to be a vector representing the contexts in which it occurs in the corpus model. Based on this assumption, we can show that the vector representations of words can be considered as elements of an algebra over a field. We note that in applications of vector spaces to representing meanings of words there is an underlying lattice structure; we interpret the partial ordering of the lattice as describing entailment between meanings. We also define the context-theoretic probability of a string and, based on this and the lattice structure, a degree of entailment between strings. We relate the framework to existing methods of composing vector-based representations of meaning, and show that our approach generalises many of these, including vector addition, component-wise multiplication, and the tensor product.
    Comment: Submitted to Computational Linguistics on 20th January 2010 for review
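    As a purely illustrative sketch (not the paper's own definitions), the snippet below applies the three composition operators named in the abstract to toy context vectors and computes a lattice-based degree of entailment. The specific formula, the mass of the component-wise meet relative to the first vector, is one simple choice consistent with reading the lattice order as entailment; it is not necessarily the paper's exact definition.

```python
import numpy as np

# Toy "context vectors": co-occurrence counts over three hypothetical
# context features. Real corpus models would be far higher dimensional.
dog    = np.array([4.0, 1.0, 0.0])
animal = np.array([5.0, 3.0, 2.0])
barks  = np.array([2.0, 0.0, 1.0])

# Composition operators discussed in the abstract.
added      = dog + barks            # vector addition
multiplied = dog * barks            # component-wise multiplication
tensored   = np.outer(dog, barks)   # tensor product (a 3x3 matrix)

# Lattice structure: component-wise min is the meet of two vectors.
# Illustrative degree of entailment: how much of u's mass survives the
# meet with v (1.0 when u <= v component-wise, i.e. full entailment).
def entailment_degree(u, v):
    return np.minimum(u, v).sum() / u.sum()

print(entailment_degree(dog, animal))   # 1.0: every "dog" context is an "animal" context
print(entailment_degree(animal, dog))   # 0.5: the reverse entailment is weaker
```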

    Reasoning About Pragmatics with Neural Listeners and Speakers

    We present a model for pragmatically describing scenes, in which contrastive behavior results from a combination of inference-driven pragmatics and learned semantics. Like previous learned approaches to language generation, our model uses a simple feature-driven architecture (here a pair of neural "listener" and "speaker" models) to ground language in the world. Like inference-driven approaches to pragmatics, our model actively reasons about listener behavior when selecting utterances. For training, our approach requires only ordinary captions, annotated _without_ demonstration of the pragmatic behavior the model ultimately exhibits. In human evaluations on a referring expression game, our approach succeeds 81% of the time, compared to a 69% success rate using existing techniques.
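    The core idea, a speaker that selects utterances by reasoning about how a listener would interpret them, can be sketched in a few lines in a rational-speech-acts style. The hand-written compatibility matrix below stands in for the paper's learned neural listener; the referents, utterances and rationality parameter are all hypothetical, so this is a toy analogue rather than the authors' architecture.

```python
import numpy as np

# Toy referring-expression setup: 3 candidate referents, 3 candidate
# utterances, and a hand-written compatibility matrix standing in for
# the learned neural "listener" semantics.
referents  = ["target", "distractor_1", "distractor_2"]
utterances = ["the red bird", "the bird", "the red one"]
truth = np.array([
    [1.0, 0.0, 0.0],   # "the red bird" fits only the target
    [1.0, 1.0, 0.0],   # "the bird" fits target and distractor_1
    [1.0, 0.0, 1.0],   # "the red one" fits target and distractor_2
])

def literal_listener(u):
    """Distribution over referents given utterance index u."""
    row = truth[u]
    return row / row.sum()

def pragmatic_speaker(target, rationality=5.0):
    """Choose an utterance by reasoning about the listener: prefer
    utterances under which the listener recovers the intended referent."""
    scores = np.array([literal_listener(u)[target] for u in range(len(utterances))])
    probs = np.exp(rationality * scores)
    return probs / probs.sum()

probs = pragmatic_speaker(target=0)
print(utterances[int(np.argmax(probs))])   # -> "the red bird" (most discriminative)
```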

    Inflection and Derivation in a Second Language


    Type-Directed Weaving of Aspects for Polymorphically Typed Functional Languages

    Incorporating the aspect-oriented paradigm into a polymorphically typed functional language enables the declaration of type-scoped advice, in which the effect of an aspect can be harnessed by introducing possibly polymorphic type constraints on the aspect. The amalgamation of aspect orientation and functional programming enables quick behavioral adaptation of functions, clear separation of concerns and expressive type-directed programming. However, proper static weaving of aspects in polymorphic languages with a type-erasure semantics remains a challenge. In this paper, we describe a type-directed static weaving strategy, as well as its implementation, that supports static type inference and static weaving of programs written in an aspect-oriented polymorphically typed functional language, AspectFun. We show examples of type-scoped advice, identify the challenges faced with compile-time weaving in the presence of type-scoped advice, and demonstrate how various advanced aspect features can be handled by our techniques. Lastly, we prove the correctness of the static weaving strategy with respect to the operational semantics of AspectFun.
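    AspectFun itself is not runnable here, so the Python sketch below is only a loose dynamic analogue of type-scoped advice: an advice function fires only on calls whose argument matches the declared type scope. The paper's point is performing the corresponding selection statically, by type inference under a type-erasure semantics, rather than with run-time type tests as in this toy; all names in the sketch are invented.

```python
from functools import wraps

def advise(type_scope, advice):
    """Wrap a function so `advice` runs around calls whose first argument
    matches `type_scope`. This is a *dynamic* analogue: the paper resolves
    the equivalent selection at compile time, via type inference, even
    though types are erased at run time."""
    def weave(fn):
        @wraps(fn)
        def woven(x, *args, **kwargs):
            if isinstance(x, type_scope):          # type-scoped guard
                return advice(fn, x, *args, **kwargs)
            return fn(x, *args, **kwargs)          # advice does not apply
        return woven
    return weave

def log_advice(proceed, x, *args, **kwargs):
    print(f"advised call on {x!r}")
    return proceed(x, *args, **kwargs)

# Hypothetical polymorphic function; advice is scoped to `str` arguments only.
@advise(str, log_advice)
def identity(x):
    return x

identity(42)        # no advice: 42 falls outside the str type scope
identity("hello")   # advice fires, then returns "hello"
```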