
    Natural Language Understanding: Methodological Conceptualization

    This article presents the results of a theoretical analysis of natural language understanding (NLU) as a methodological problem. Combining structural-ontological and informational-psychological approaches made it possible to describe the subject field of NLU as a composite function of the mind that systemically combines verbal and discursive structural layers. In particular, NLU is presented, on the one hand, as the relation between the discourse of a specific speech message and the meta-discourse of a language, activated in turn by need-motivational factors. On the other hand, it is conceptualized as a process with a specific structure of information metabolism, whose study requires differentiating the affective (emotional) and need-motivational influences on NLU and taking their interaction into account. The article also argues the hypothesis that needs influence NLU in a manner similar to the Yerkes-Dodson pattern, and substantiates the theoretical conclusion that emotions act as the operator of the structural features of the information metabolism of NLU. Accordingly, depending on the modality of emotions in the process of NLU, two scenarios for the implementation of information metabolism are distinguished: reductive and synthetic. An argument is also given for the productive and constitutive role of emotions in the process of NLU.
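
    As a rough illustration of the hypothesized Yerkes-Dodson-style dependence (an inverted-U relation between need intensity and understanding performance), the following Python sketch plots nothing from the article itself; the function form, the peak location, and the width parameter are assumptions chosen purely for illustration.

```python
import math

def nlu_performance(need_intensity, optimum=0.5, width=0.2):
    """Illustrative inverted-U (Yerkes-Dodson-like) curve:
    understanding performance peaks at a moderate need intensity
    and degrades when motivation is either too weak or too strong.
    All parameters here are hypothetical, not taken from the article."""
    return math.exp(-((need_intensity - optimum) ** 2) / (2 * width ** 2))

# Example: performance is highest near the moderate (optimal) intensity.
for intensity in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"need={intensity:.1f} -> performance={nlu_performance(intensity):.2f}")
```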

    Do Multi-Sense Embeddings Improve Natural Language Understanding?

    Learning a distinct representation for each sense of an ambiguous word could lead to more powerful and fine-grained models of vector-space representations. Yet while 'multi-sense' methods have been proposed and tested on artificial word-similarity tasks, we don't know if they improve real natural language understanding tasks. In this paper we introduce a multi-sense embedding model based on Chinese Restaurant Processes that achieves state-of-the-art performance on matching human word similarity judgments, and propose a pipelined architecture for incorporating multi-sense embeddings into language understanding. We then test the performance of our model on part-of-speech tagging, named entity recognition, sentiment analysis, semantic relation identification and semantic relatedness, controlling for embedding dimensionality. We find that multi-sense embeddings do improve performance on some tasks (part-of-speech tagging, semantic relation identification, semantic relatedness) but not on others (named entity recognition, various forms of sentiment analysis). We discuss how these differences may be caused by the different role of word sense information in each of the tasks. The results highlight the importance of testing embedding models in real applications.
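
    To make the Chinese-Restaurant-Process idea behind such multi-sense models concrete, here is a minimal Python sketch of a single CRP assignment step: an occurrence of an ambiguous word either joins an existing sense cluster (with probability growing with the cluster's size and its fit to the context) or opens a new sense (with probability proportional to a concentration parameter). The function name, the affinity score, and the parameter alpha are assumptions for illustration, not the paper's actual model or implementation.

```python
import numpy as np

def crp_sense_assignment(context_vec, sense_vecs, sense_counts, alpha=1.0):
    """Toy Chinese-Restaurant-Process step for multi-sense embeddings:
    an ambiguous word occurrence joins an existing sense cluster with
    probability proportional to (cluster size x context fit), or opens
    a new sense with probability proportional to alpha.
    Illustrative sketch only; not the authors' model."""
    scores = []
    for vec, count in zip(sense_vecs, sense_counts):
        fit = np.exp(context_vec @ vec)            # crude context-sense affinity
        scores.append(count * fit)                  # "rich get richer" CRP term
    scores.append(alpha)                            # chance of opening a new sense
    probs = np.array(scores) / sum(scores)
    return np.random.choice(len(scores), p=probs)   # index == len(sense_vecs) means "new sense"

# Example with a made-up 3-dimensional space and two existing senses of a word.
rng = np.random.default_rng(0)
senses = [rng.normal(size=3), rng.normal(size=3)]
counts = [10, 3]                                    # how often each sense was used so far
context = rng.normal(size=3)
chosen = crp_sense_assignment(context, senses, counts)
print("new sense" if chosen == len(senses) else f"existing sense {chosen}")
```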

    Natural Language Understanding: Instructions for (Present and Future) Use

    In this paper I look at Natural Language Understanding, an area of Natural Language Processing aimed at making sense of text, through the lens of a visionary future: what should we expect a machine to be able to understand, and what are the key dimensions that require the attention of researchers to make this dream come true?