Aspect-oriented Opinion Alignment Network for Aspect-Based Sentiment Classification
Aspect-based sentiment classification is a crucial problem in fine-grained
sentiment analysis, which aims to predict the sentiment polarity of the given
aspect according to its context. Previous works have made remarkable progress
in leveraging attention mechanism to extract opinion words for different
aspects. However, a persistent challenge remains: attention mechanisms often
fall short in aligning opinion words with their corresponding aspect in
multi-aspect sentences, which produces semantic mismatches. To address this
issue, we propose a novel
Aspect-oriented Opinion Alignment Network (AOAN) to capture the contextual
association between opinion words and the corresponding aspect. Specifically,
we first introduce a neighboring span enhanced module which highlights various
compositions of neighboring words and given aspects. In addition, we design a
multi-perspective attention mechanism that aligns relevant opinion information
with respect to the given aspect. Extensive experiments on three benchmark
datasets demonstrate that our model achieves state-of-the-art results. The
source code is available at https://github.com/AONE-NLP/ABSA-AOAN.
Comment: 8 pages, 5 figures, ECAI 202
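The aspect-oriented attention that this abstract builds on can be illustrated with a minimal sketch. This is generic dot-product attention over context words with respect to an aspect vector, not the AOAN architecture itself; all names and the toy vectors are illustrative:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aspect_attention(context_vecs, aspect_vec):
    """Score each context word against the aspect (dot product), then
    return attention weights and the attended context summary."""
    scores = [sum(c * a for c, a in zip(vec, aspect_vec))
              for vec in context_vecs]
    weights = softmax(scores)
    dim = len(aspect_vec)
    summary = [sum(w * vec[i] for w, vec in zip(weights, context_vecs))
               for i in range(dim)]
    return weights, summary

# Toy example: three context words; the second is most similar to the aspect,
# so it should receive the largest attention weight.
context = [[1.0, 0.0], [0.9, 0.9], [0.0, 1.0]]
aspect = [1.0, 1.0]
weights, summary = aspect_attention(context, aspect)
```

The semantic-mismatch problem the abstract describes arises exactly here: in a multi-aspect sentence, such dot-product scores can attend to opinion words that belong to a different aspect.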
Deep Memory Networks for Attitude Identification
We consider the task of identifying attitudes towards a given set of entities
from text. Conventionally, this task is decomposed into two separate subtasks:
target detection that identifies whether each entity is mentioned in the text,
either explicitly or implicitly, and polarity classification that classifies
the exact sentiment towards an identified entity (the target) into positive,
negative, or neutral.
Instead, we show that attitude identification can be solved with an
end-to-end machine learning architecture, in which the two subtasks are
interleaved by a deep memory network. In this way, signals produced in target
detection provide clues for polarity classification, and conversely, the
predicted polarity provides feedback to the identification of targets.
Moreover, the treatment of each target also influences the others -- the
learned representations may share the same semantics for some targets but
vary for others. The proposed deep memory network, the AttNet, outperforms
methods that do not consider the interactions between the subtasks or those
among the targets, including conventional machine learning methods and the
state-of-the-art deep learning models.
Comment: Accepted to WSDM'1
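The interleaving of the two subtasks can be sketched roughly as follows. Gating the polarity logits by the mention probability is a deliberate simplification for illustration; it is not the AttNet memory-network design, and all names and weights are hypothetical:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def interleaved_prediction(shared_rep, w_target, w_polarity):
    """Both heads read the same shared representation. The target-detection
    head produces a mention probability; the polarity head's logits are then
    scaled by that probability, so the two signals interact instead of being
    computed in isolation."""
    target_score = sum(r * w for r, w in zip(shared_rep, w_target))
    p_mentioned = sigmoid(target_score)
    polarity_logits = [sum(r * w for r, w in zip(shared_rep, col))
                       for col in w_polarity]
    gated = [p_mentioned * logit for logit in polarity_logits]
    return p_mentioned, softmax(gated)

# Toy weights: 3-dim shared representation, 3 polarity classes
# (positive / negative / neutral).
rep = [0.5, -0.2, 0.8]
w_target = [1.0, 0.0, 1.0]
w_polarity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
p_mentioned, polarity_probs = interleaved_prediction(rep, w_target, w_polarity)
```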
Attentional Encoder Network for Targeted Sentiment Classification
Targeted sentiment classification aims at determining the sentimental
tendency towards specific targets. Most of the previous approaches model
context and target words with RNN and attention. However, RNNs are difficult to
parallelize and truncated backpropagation through time brings difficulty in
remembering long-term patterns. To address this issue, this paper proposes an
Attentional Encoder Network (AEN) which eschews recurrence and employs
attention-based encoders to model the interaction between context and target. We raise
the label unreliability issue and introduce label smoothing regularization. We
also apply pre-trained BERT to this task and obtain new state-of-the-art
results. Experiments and analysis demonstrate the effectiveness and light
weight of our model.
Comment: 7 page
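The label smoothing regularization this abstract introduces against label unreliability has a standard form: mix the one-hot target with a uniform distribution so the model is not pushed toward fully confident labels. A minimal sketch (illustrative, not the paper's code):

```python
def smooth_labels(true_index, num_classes, eps=0.1):
    """Label smoothing: each class receives eps / num_classes probability
    mass, and the true class receives the remaining (1 - eps) on top."""
    uniform = eps / num_classes
    return [uniform + (1.0 - eps) if i == true_index else uniform
            for i in range(num_classes)]

# Smoothed target for class 0 of 3 sentiment classes:
# most mass stays on class 0, the rest is spread uniformly.
dist = smooth_labels(0, 3, eps=0.1)
```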
A Multi-modal Approach to Fine-grained Opinion Mining on Video Reviews
Despite the recent advances in opinion mining for written reviews, few works
have tackled the problem on other sources of reviews. In light of this issue,
we propose a multi-modal approach for mining fine-grained opinions from video
reviews that is able to determine the aspects of the item under review that are
being discussed and the sentiment orientation towards them. Our approach works
at the sentence level without the need for time annotations and uses features
derived from the audio, video and language transcriptions of its contents. We
evaluate our approach on two datasets and show that leveraging the video and
audio modalities consistently provides increased performance over text-only
baselines, providing evidence these extra modalities are key in better
understanding video reviews.
Comment: Second Grand Challenge and Workshop on Multimodal Language, ACL 202
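The sentence-level combination of modalities can be sketched as simple feature concatenation followed by a linear scorer. This is an illustrative late-fusion stand-in, not the paper's model; the feature vectors and weights are toy values:

```python
def fuse_and_score(text_feats, audio_feats, video_feats, weights, bias=0.0):
    """Concatenate per-sentence features from each modality, then apply a
    linear scorer as a stand-in for the sentiment classifier. A text-only
    baseline would see only text_feats; the fused vector lets the classifier
    also use acoustic and visual cues."""
    fused = text_feats + audio_feats + video_feats
    assert len(fused) == len(weights), "one weight per fused feature"
    return sum(f * w for f, w in zip(fused, weights)) + bias

# Toy sentence: 2 text features, 1 audio feature, 1 video feature.
score = fuse_and_score([1.0, 2.0], [3.0], [4.0], [1.0, 1.0, 1.0, 1.0])
```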