    Situation entity annotation

    This paper presents an annotation scheme for a new semantic annotation task with relevance for analysis and computation at both the clause level and the discourse level. More specifically, we label the finite clauses of texts with the type of situation entity (e.g., eventualities, statements about kinds, or statements of belief) they introduce to the discourse, following and extending work by Smith (2003). We take a feature-driven approach to annotation, with the result that each clause is also annotated with fundamental aspectual class, whether the main NP referent is specific or generic, and whether the situation evoked is episodic or habitual. This annotation is performed (so far) on three sections of the MASC corpus, with each clause labeled by at least two annotators. In this paper we present the annotation scheme, statistics of the corpus in its current version, and analyses of both inter-annotator agreement and intra-annotator consistency.
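    A minimal sketch of how one annotated clause could be recorded under the feature-driven scheme the abstract describes. The label inventories, field names, and example clause below are illustrative assumptions, not the paper's actual tag set, which follows and extends Smith (2003):

    ```python
    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical label inventories; the paper's exact categories may differ.
    class SituationEntity(Enum):
        EVENT = "event"                # eventualities
        GENERIC_SENTENCE = "generic"   # statements about kinds
        REPORT = "report"              # statements of belief

    class AspectualClass(Enum):
        STATIVE = "stative"
        DYNAMIC = "dynamic"

    @dataclass
    class ClauseAnnotation:
        """One finite clause with its situation-entity label and the
        three clause-level features mentioned in the abstract."""
        clause_text: str
        situation_entity: SituationEntity
        aspectual_class: AspectualClass
        main_referent_generic: bool   # True if the main NP referent is generic
        habitual: bool                # True if the situation is habitual, not episodic
        annotator_id: str             # each clause is labeled by at least two annotators

    # Invented example clause, for illustration only.
    example = ClauseAnnotation(
        clause_text="Lions hunt in groups.",
        situation_entity=SituationEntity.GENERIC_SENTENCE,
        aspectual_class=AspectualClass.DYNAMIC,
        main_referent_generic=True,
        habitual=True,
        annotator_id="A1",
    )
    ```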

    Modeling Meaning for Description and Interaction

    Language is a powerful tool for communication and coordination, allowing us to share thoughts, ideas, and instructions with others. Accordingly, enabling people to communicate linguistically with digital agents has been among the longest-standing goals in artificial intelligence (AI). However, unlike humans, machines do not naturally acquire the ability to extract meaning from language. One natural solution to this problem is to represent meaning in a structured format and then develop models for processing language into such structures. Unlike natural language, these structured representations can be directly processed and interpreted by existing algorithms. Indeed, much of the digital infrastructure we have built is mediated by structured representations (e.g., programs and APIs). Furthermore, unlike the internal representations of current neural models, structured representations are built to be used and interpreted by people. I focus on methods for parsing language into these dually interpretable representations of meaning. I introduce models that learn to predict structure from language and apply them to a variety of tasks, ranging from linguistic description to interaction with robots and digital assistants. I address three thematic challenges in modeling meaning: abstraction, sensitivity, and ambiguity. In order to be useful, meaning representations must abstract away from the linguistic input. Abstractions differ for each representation used, and must be learned by the model. The process of abstraction entails a kind of invariance: different linguistic inputs mapping to the same meaning. At the same time, meaning is sensitive to slight changes in the linguistic input; here, similar inputs might map to very different meanings. Finally, language is often ambiguous, and many utterances have multiple meanings. In cases of ambiguity, models of meaning must learn that the same input can map to different meanings.
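    A toy sketch of the three challenges the abstract names, using invented utterances and logical forms that are not drawn from the thesis itself:

    ```python
    # Abstraction: different surface forms map to the same meaning representation.
    abstraction = {
        "turn the light on": "set_state(light, on)",
        "switch on the light": "set_state(light, on)",
    }

    # Sensitivity: near-identical inputs map to very different meanings.
    sensitivity = {
        "turn the light on": "set_state(light, on)",
        "turn the light off": "set_state(light, off)",
    }

    # Ambiguity: one utterance maps to multiple candidate meanings,
    # so a parser must produce (or rank) more than one structure.
    ambiguity = {
        "I saw the man with the telescope": [
            "see(speaker, man, instrument=telescope)",
            "see(speaker, man(has=telescope))",
        ],
    }
    ```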