2 research outputs found

    Learning Affect with Distributional Semantic Models

    The affective content of a text depends on the valence and emotion values of its words. At the same time, a word's distributional properties deeply influence its affective content. For instance, a word may become negatively loaded because it tends to co-occur with other negative expressions. Lexical affective values are used as features in sentiment analysis systems and are typically estimated with hand-made resources (e.g. WordNet Affect), which have limited coverage. In this paper we show how distributional semantic models can effectively be used to bootstrap emotive embeddings for Italian words and then to compute affective scores with respect to eight basic emotions. We also show how these emotive scores can be used to learn the positive vs. negative valence of words and to model behavioral data.
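    The core idea of scoring a word against a set of emotions from its embedding can be sketched as follows. This is a minimal illustration, not the paper's actual method: the toy 2-d vectors, the emotion names, and the use of seed-word centroids are all assumptions for demonstration; the paper works with real distributional vectors for Italian and eight basic emotions.

    ```python
    import math

    def cosine(u, v):
        # cosine similarity between two dense vectors
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    def centroid(vectors):
        # component-wise mean of a list of equal-length vectors
        dim = len(vectors[0])
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

    def affective_scores(word_vec, emotion_seeds):
        # score a word against each emotion as the cosine between the
        # word's vector and the centroid of that emotion's seed vectors
        return {emo: cosine(word_vec, centroid(seeds))
                for emo, seeds in emotion_seeds.items()}

    # hypothetical toy embeddings for two emotions
    seeds = {
        "joy":   [[1.0, 0.0], [0.9, 0.1]],
        "anger": [[0.0, 1.0], [0.1, 0.9]],
    }
    scores = affective_scores([1.0, 0.1], seeds)
    ```

    A word's valence could then be read off the same scores, e.g. by comparing its similarity to positively vs. negatively loaded emotions.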

    Evaluating Context Selection Strategies to Build Emotive Vector Space Models

    In this paper we compare different context selection approaches to improve the creation of Emotive Vector Space Models (VSMs). The system builds on the results of an existing approach which showed that VSMs can be created and updated by exploiting crowdsourcing and human annotation. Here, we introduce a method to manipulate the contexts of the VSMs under the assumption that the emotive connotation of a target word is a function of both its syntagmatic and paradigmatic association with the various emotions. To study the differences among the proposed spaces and to confirm the reliability of the system, we report on two experiments: in the first we validated the best candidates extracted from each model, and in the second we compared the models' performance on a random sample of target words. Both experiments were implemented as crowdsourcing tasks.
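    One ingredient of such VSMs, the syntagmatic contexts, can be sketched as plain window-based co-occurrence counting. This is an illustrative sketch only: the window size, tokenization, and toy corpus are assumptions, and the paper's actual context-selection strategies are not reproduced here.

    ```python
    from collections import Counter, defaultdict

    def cooccurrence_vectors(sentences, window=2):
        # syntagmatic contexts: count, for each target word, the words
        # that co-occur with it within a symmetric window
        vecs = defaultdict(Counter)
        for tokens in sentences:
            for i, target in enumerate(tokens):
                lo = max(0, i - window)
                hi = min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        vecs[target][tokens[j]] += 1
        return vecs

    # hypothetical toy corpus of pre-tokenized sentences
    corpus = [["good", "happy", "day"], ["bad", "sad", "day"]]
    vectors = cooccurrence_vectors(corpus, window=1)
    ```

    Paradigmatic association would instead compare the resulting context vectors themselves, pairing words that occur in similar contexts rather than together.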