
    Attentional Encoder Network for Targeted Sentiment Classification

    Targeted sentiment classification aims at determining the sentiment tendency towards specific targets. Most previous approaches model context and target words with RNNs and attention. However, RNNs are difficult to parallelize, and truncated backpropagation through time makes it hard to capture long-term patterns. To address this issue, this paper proposes an Attentional Encoder Network (AEN), which eschews recurrence and employs attention-based encoders to model the interaction between context and target. We raise the label unreliability issue and introduce label smoothing regularization. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate the effectiveness and light weight of our model. Comment: 7 pages
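    The label smoothing regularization mentioned in this abstract softens the one-hot training targets so the model is not forced to be fully confident in possibly unreliable labels. A minimal sketch of the idea, assuming a PyTorch setting (the function name and epsilon value are illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, target, num_classes, epsilon=0.1):
    """Cross-entropy against a smoothed target distribution.

    logits:  (batch, num_classes) raw model outputs
    target:  (batch,) gold class indices
    epsilon: probability mass redistributed uniformly over all classes
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Smoothed target: eps / C on every class, plus (1 - eps) on the gold class.
    smooth = torch.full_like(log_probs, epsilon / num_classes)
    smooth.scatter_(1, target.unsqueeze(1), 1.0 - epsilon + epsilon / num_classes)
    return -(smooth * log_probs).sum(dim=-1).mean()
```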

    Modeling Multi-Targets Sentiment Classification via Graph Convolutional Networks and Auxiliary Relation

    Existing solutions do not work well when multiple targets coexist in a sentence. The reason is that existing solutions usually separate the targets and process them individually: if the original sentence has N targets, it is repeated N times, and only one target is processed each time. To some extent, this approach degenerates the fine-grained sentiment classification task into a sentence-level sentiment classification task, and processing each target separately ignores the internal relations and interactions between targets. Based on these considerations, we propose to use a Graph Convolutional Network (GCN) to jointly model all targets appearing in a sentence based on their positional relationships, and to construct a graph of sentiment relations between targets based on the differences in sentiment polarity between target words. In addition to the standard target-dependent sentiment classification task, an auxiliary node relation classification task is constructed. Experiments demonstrate that our model achieves comparable performance on the benchmark datasets of SemEval-2014 Task 4, i.e., reviews for restaurants and laptops. Furthermore, treating target words as isolated individuals has disadvantages, and the multi-task learning setup helps enhance the feature extraction and representation ability of the model.
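    The core operation described here is a graph convolution over target nodes connected by positional relations. A minimal sketch of one such layer, assuming a PyTorch setting with a per-sentence adjacency matrix over the targets (names and shapes are illustrative, not the authors' implementation):

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution step over target nodes."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        """node_feats: (batch, num_targets, in_dim) target representations
        adj:        (batch, num_targets, num_targets) positional adjacency"""
        # Row-normalize the adjacency so each target averages its neighbours,
        # then transform and apply a nonlinearity.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        h = torch.bmm(adj / deg, node_feats)
        return torch.relu(self.linear(h))
```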

    Joint Learning of Local and Global Features for Aspect-based Sentiment Classification

    Aspect-based sentiment classification (ASC) aims to judge the sentiment polarity conveyed by a given aspect term in a sentence. The sentiment polarity is not only determined by the local context but is also related to words far away from the given aspect term. Most recent attention-based models cannot sufficiently distinguish which words they should pay more attention to in some cases. Meanwhile, graph-based models have been introduced into ASC to encode syntactic dependency tree information. But these models do not fully leverage syntactic dependency trees, as they neglect to effectively incorporate dependency relation tag information into representation learning. In this paper, we address these problems by effectively modeling both local and global features. First, we design a local encoder containing a Gaussian mask layer and a covariance self-attention layer. The Gaussian mask layer adaptively adjusts the receptive field around aspect terms to de-emphasize unrelated words and pay more attention to local information. The covariance self-attention layer can distinguish the attention weights of different words more clearly. Furthermore, we propose a dual-level graph attention network as a global encoder that fully employs dependency tag information to capture long-distance information effectively. Our model achieves state-of-the-art performance on both the SemEval 2014 and Twitter datasets. Comment: under review
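    One way to read the Gaussian mask layer described above is as a distance-based weighting of token representations centered on the aspect term. A minimal sketch under that assumption, with a learnable width parameter (the class name and initialization are illustrative, not the authors' exact layer):

```python
import torch
import torch.nn as nn

class GaussianMask(nn.Module):
    """Down-weights tokens according to their distance from the aspect term."""

    def __init__(self, init_sigma=3.0):
        super().__init__()
        # Learnable width of the receptive field around the aspect term.
        self.sigma = nn.Parameter(torch.tensor(init_sigma))

    def forward(self, hidden, aspect_pos):
        """hidden:     (batch, seq_len, dim) contextual token representations
        aspect_pos: (batch,) index of the aspect term in each sentence"""
        _, seq_len, _ = hidden.shape
        pos = torch.arange(seq_len, device=hidden.device).unsqueeze(0)   # (1, seq_len)
        dist = (pos - aspect_pos.unsqueeze(1)).float()                   # (batch, seq_len)
        weights = torch.exp(-dist.pow(2) / (2 * self.sigma.pow(2)))      # Gaussian falloff
        return hidden * weights.unsqueeze(-1)                            # de-emphasize distant words
```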