9,979 research outputs found

    Sentiment Classification Considering Negation and Contrast Transition

    PACLIC 23 / City University of Hong Kong / 3-5 December 2009

    Conventional and structure based sentiment analysis: a survey


    A Knowledge-Based Model for Polarity Shifters

    Polarity shifting can be considered one of the most challenging problems in the context of Sentiment Analysis. Polarity shifters, also known as contextual valence shifters (Polanyi and Zaenen 2004), are linguistic contextual items that can increase, reduce, or neutralise the prior polarity of a word, called the focus, included in an opinion. The automatic detection of such items improves the performance and accuracy of computational systems for opinion mining, but the challenge remains open, mainly for languages other than English. Taking a symbolic approach, we aim to advance the automatic processing of the polarity shifters that affect opinions expressed in tweets, both in English and Spanish. To this end, we describe a novel knowledge-based model to deal with three dimensions of contextual shifters: negation, quantification, and modality (or irrealis).

    This work is part of the project grant PID2020-112827GB-I00, funded by MCIN/AEI/10.13039/501100011033, and the SMARTLAGOON project [101017861], funded by Horizon 2020 - European Union Framework Programme for Research and Innovation.

    Blázquez-López, Y. (2022). A Knowledge-Based Model for Polarity Shifters. Journal of Computer-Assisted Linguistic Research, 6:87-107. https://doi.org/10.4995/jclr.2022.18807
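
    To make the idea of contextual shifters concrete, the Python sketch below applies negation, quantification, and modality rules to the prior polarity of focus words in a tokenised opinion. The lexicons, the three-token context window, and the weighting factors are illustrative assumptions, not the knowledge-based model described in the paper.

        # Minimal sketch of lexicon-based polarity shifting.
        # Lexicons, window size, and weights are illustrative assumptions only.
        PRIOR_POLARITY = {"good": 1.0, "great": 1.5, "bad": -1.0, "terrible": -1.5}
        NEGATORS = {"not", "never", "no"}
        QUANTIFIERS = {"very": 1.5, "extremely": 2.0, "slightly": 0.5}
        MODALS = {"might", "could", "would", "should"}

        def shifted_polarity(tokens):
            """Score a tokenised opinion, shifting the prior polarity of each
            focus word according to the contextual items that precede it."""
            score = 0.0
            for i, tok in enumerate(tokens):
                if tok not in PRIOR_POLARITY:
                    continue
                polarity = PRIOR_POLARITY[tok]
                context = tokens[max(0, i - 3):i]            # small left-hand window
                for c in context:
                    if c in QUANTIFIERS:                     # quantification: intensify or attenuate
                        polarity *= QUANTIFIERS[c]
                    if c in MODALS:                          # modality (irrealis): weaken commitment
                        polarity *= 0.5
                if any(c in NEGATORS for c in context):      # negation: reverse polarity
                    polarity = -polarity
                score += polarity
            return score

        print(shifted_polarity("this phone is not very good".split()))      # -1.5
        print(shifted_polarity("the service might be terrible".split()))    # -0.75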

    Detecting and Explaining Crisis

    Individuals on social media may reveal themselves to be in various states of crisis (e.g. suicide, self-harm, abuse, or eating disorders). Detecting crisis from social media text automatically and accurately can have profound consequences. However, detecting a general state of crisis without explaining why has limited applications. An explanation in this context is a coherent, concise subset of the text that rationalizes the crisis detection. We explore several methods to detect and explain crisis using a combination of neural and non-neural techniques. We evaluate these techniques on a unique dataset obtained from Koko, an anonymous emotional support network available through various messaging applications. We annotate a small subset of the samples labeled with crisis with corresponding explanations. Our best technique significantly outperforms the baseline for both detection and explanation.

    Comment: Accepted at CLPsych, ACL workshop. 8 pages, 5 figures
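
    As a toy illustration of the detect-then-explain framing, the sketch below flags a post as crisis-related when a simple keyword score crosses a threshold and returns the highest-scoring contiguous token span as its explanation. The keyword weights, threshold, and span-selection rule are stand-in assumptions, not the neural and non-neural models evaluated in the paper.

        # Toy detect-then-explain sketch; weights and rules are assumptions, not the paper's models.
        import re

        CRISIS_WEIGHTS = {"hopeless": 0.8, "alone": 0.4, "panic": 0.6, "scared": 0.3}

        def detect_and_explain(post, threshold=0.8, max_span=5):
            """Return (is_crisis, explanation); the explanation is the shortest
            contiguous window of at most max_span tokens with the largest total weight."""
            tokens = re.findall(r"\w+", post.lower())
            weights = [CRISIS_WEIGHTS.get(t, 0.0) for t in tokens]
            if sum(weights) < threshold:
                return False, None
            spans = ((i, j) for i in range(len(tokens))
                     for j in range(i + 1, min(i + max_span, len(tokens)) + 1))
            best = max(spans, key=lambda s: (sum(weights[s[0]:s[1]]), -(s[1] - s[0])))
            return True, " ".join(tokens[best[0]:best[1]])

        print(detect_and_explain("I have been feeling hopeless and alone lately"))
        print(detect_and_explain("Looking forward to the weekend"))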

    Interpretable Word-Level Sentiment Analysis With Attention-Based Multiple Instance Classification Models

    In this study, our main objective is to tackle the black-box nature of popular machine learning models in sentiment analysis and to enhance model interpretability. We aim to gain more insight into the decision-making process of sentiment analysis models, which is often obscure in these complex models. To achieve this goal, we introduce two word-level sentiment analysis models. The first is the attention-based multiple instance classification (AMIC) model. It combines the transparent model structure of multiple instance classification with the self-attention mechanism in deep learning to incorporate contextual information from documents. As demonstrated in an application to a wine review dataset, AMIC can achieve state-of-the-art performance compared to a number of machine learning methods while providing much improved interpretability. The second model, AMIC 2.0, improves AMIC in two key aspects. First, AMIC is limited in integrating positional information because it ignores the order of words in documents. AMIC 2.0 introduces a novel approach to incorporate relative positional information into the self-attention mechanism, enabling the model to capture position-sensitive sentiment and to better understand how word order and proximity influence sentiment expressions. Second, AMIC 2.0 decomposes the sentiment score in AMIC into a context-independent score and a context-dependent score. This decomposition, together with two sentiment shifters that link these scores in the global and local environments of the text respectively, elucidates how document context influences word sentiment, leading to more interpretable results in sentiment analysis. The utility of AMIC 2.0 is demonstrated in an application to a Twitter dataset. AMIC 2.0 improves the overall performance of AMIC, with the additional capability of handling more intricate language subtleties, such as different types of negation. Both AMIC and AMIC 2.0 are trained without a pre-trained sentiment dictionary or seeded sentiment words. Compared to large language models, their computational cost is relatively low, and they can use conventional datasets to generate domain-specific sentiment dictionaries and provide interpretable sentiment analysis results.
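
    The sketch below illustrates, in simplified form, the kind of score decomposition described above: each word receives a context-independent score plus a context-dependent adjustment driven by a global (document-level) shifter and a local (neighbouring-word) shifter, and the document sentiment is an attention-weighted aggregate in the multiple-instance spirit. The random embeddings, parameter vectors, and functional forms are illustrative assumptions, not the published AMIC or AMIC 2.0 formulation.

        # Simplified illustration of a context-independent / context-dependent score
        # decomposition with MIL-style attention; all quantities are toy stand-ins.
        import numpy as np

        rng = np.random.default_rng(0)
        n_words, d = 6, 8                        # toy document: 6 words, 8-dim embeddings
        E = rng.normal(size=(n_words, d))        # stand-in word embeddings

        w_base = rng.normal(size=d)              # context-independent scoring vector
        w_attn = rng.normal(size=d)              # attention scoring vector
        w_local = rng.normal(size=d)             # local-shifter vector
        w_global = rng.normal(size=d)            # global-shifter vector

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        base_scores = E @ w_base                                   # context-independent word scores
        global_shift = np.tanh(E.mean(axis=0) @ w_global)          # one shifter for the whole document
        local_ctx = (np.roll(E, 1, axis=0) + np.roll(E, -1, axis=0)) / 2   # neighbours (circular for simplicity)
        local_shift = np.tanh(local_ctx @ w_local)                 # one shifter per word's neighbourhood

        word_scores = base_scores * (1 + local_shift) * (1 + global_shift)  # context-dependent word sentiment
        attn = softmax(E @ w_attn)                                 # MIL-style attention weights
        doc_sentiment = float(attn @ word_scores)                  # attention-weighted document score
        print(doc_sentiment)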