    I Understand What You Are Saying: Leveraging Deep Learning Techniques for Aspect Based Sentiment Analysis

    Despite the widespread use of online reviews in consumer purchase decision making, the potential value of online reviews in facilitating digital collaboration among product/service providers, consumers, and online retailers remains underexplored. One of the significant barriers to realizing this potential lies in the difficulty of understanding online reviews, owing to their sheer volume and free-text form. To promote digital collaboration, we investigate the aspect-based sentiment dynamics of online reviews by proposing a semi-supervised, deep-learning-facilitated analytical pipeline. This method leverages deep learning techniques for text representation and classification. Additionally, building on previous studies that address aspect extraction and sentiment identification in isolation, we address aspect and sentiment analyses simultaneously. Further, this study presents a novel perspective on the dynamics of aspect-based sentiment by analyzing aspect-based sentiment as a time series. The findings of this study have significant implications for digital collaboration among consumers, product/service providers, and other stakeholders of online reviews.
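    As an illustration of the aspect-based sentiment dynamics this abstract describes, the sketch below aggregates per-review aspect sentiment scores into a monthly time series. It is a minimal sketch, not the authors' pipeline: the record format, aspect labels, and sentiment scores are hypothetical placeholders for the outputs of the deep learning classifiers, which the sketch does not reproduce.

```python
# Minimal sketch: aggregate aspect-level sentiment into a monthly time series.
# The (date, aspect, score) records stand in for classifier outputs and are illustrative.
from collections import defaultdict
from datetime import datetime
from statistics import mean

reviews = [
    ("2021-01-05", "battery", -0.6),
    ("2021-01-20", "battery", -0.2),
    ("2021-02-11", "battery", 0.4),
    ("2021-01-09", "screen", 0.8),
    ("2021-02-14", "screen", 0.7),
]

def aspect_sentiment_series(records):
    """Group sentiment scores by (aspect, month) and average them."""
    buckets = defaultdict(list)
    for date_str, aspect, score in records:
        month = datetime.strptime(date_str, "%Y-%m-%d").strftime("%Y-%m")
        buckets[(aspect, month)].append(score)
    return {key: mean(scores) for key, scores in sorted(buckets.items())}

for (aspect, month), avg in aspect_sentiment_series(reviews).items():
    print(f"{aspect:8s} {month}  mean sentiment = {avg:+.2f}")
```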

    Weakly supervised aspect extraction for domain-specific texts

    Aspect extraction, i.e., identifying the aspects of text segments from a pre-defined set of aspects, is one of the keystones of text understanding. It benefits numerous applications, including sentiment analysis and product review summarization. Most existing aspect extraction methods rely heavily on human-curated aspect annotations of massive text segments, making them expensive to apply in specific domains. Recent attempts that leverage clustering methods can alleviate this annotation effort, but they require domain-specific knowledge and effort to further filter, aggregate, and align the clustering results to the desired aspects. In this paper, we therefore explore how to extract aspects from domain-specific raw texts with very limited supervision: only a few user-provided seed words per aspect. Specifically, our proposed neural model is equipped with multi-head attention and self-training. The multi-head attention is learned from the seed words to ensure that aspect-related words in text segments are weighted more heavily than unrelated ones. The self-training mechanism provides additional pseudo labels beyond the limited supervision. Extensive experiments on real-world datasets demonstrate the superior performance of our proposed framework, as well as the effectiveness of both the attention module and the self-training mechanism. Case studies on the attention weights further shed light on the interpretability of our aspect extraction results.
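    To make the weak-supervision idea concrete, here is a minimal sketch of seed-word labelling followed by one round of self-training. It is not the paper's neural model: a TF-IDF bag-of-words classifier stands in for the multi-head-attention encoder, and the seed words, segments, and confidence threshold are illustrative.

```python
# Minimal sketch: seed words give weak labels, a classifier is trained on them,
# and confident predictions on unlabeled segments become pseudo labels (self-training).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_words = {"battery": {"battery", "charge", "power"},
              "screen": {"screen", "display", "resolution"}}
aspects = sorted(seed_words)

segments = [
    "the battery drains fast and takes hours to charge",
    "gorgeous display with sharp resolution",
    "power lasts two days on a single charge",
    "colors on the panel look washed out",   # contains no seed word: unlabeled
    "it barely holds a charge after a month",
]

def seed_label(text):
    """Weak label: the first aspect whose seed words occur in the segment, else None."""
    for aspect in aspects:
        if seed_words[aspect] & set(text.split()):
            return aspect
    return None

labels = [seed_label(s) for s in segments]
vec = TfidfVectorizer()
X = vec.fit_transform(segments)

labeled = [i for i, y in enumerate(labels) if y is not None]
unlabeled = [i for i, y in enumerate(labels) if y is None]

clf = LogisticRegression(max_iter=1000)
clf.fit(X[labeled], [labels[i] for i in labeled])

# Self-training step: adopt high-confidence predictions as pseudo labels.
for row, i in zip(clf.predict_proba(X[unlabeled]), unlabeled):
    if row.max() >= 0.6:                     # confidence threshold (illustrative)
        labels[i] = clf.classes_[row.argmax()]

keep = [i for i, y in enumerate(labels) if y is not None]
clf.fit(X[keep], [labels[i] for i in keep])  # retrain on seed plus pseudo labels
print(dict(zip(segments, clf.predict(X))))
```

    In the paper, the aspect classifier and the attention weights are learned jointly; this sketch only mirrors the labelling and self-training logic.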