4,828 research outputs found
ATP: A holistic attention integrated approach to enhance ABSA
Aspect-based sentiment analysis (ABSA) deals with identifying the sentiment
polarity of a review sentence towards a given aspect. Deep learning
sequential models such as RNNs, LSTMs, and GRUs are current state-of-the-art
methods for inferring sentiment polarity. These methods capture the
contextual relationship between the words of a review sentence well, but
they fall short in capturing long-term dependencies. The attention mechanism
helps by focusing only on the most crucial parts of the sentence. In ABSA,
aspect position plays a vital role: words near the aspect contribute more
when determining the sentiment towards it. Therefore, we propose a method
that captures position-based information using the dependency parse tree and
supplies it to the attention mechanism. Using this type of position
information instead of a simple word-distance-based position enhances the
deep learning model's performance. We performed experiments on the
SemEval'14 dataset to demonstrate the effect of dependency-parse
relation-based attention for ABSA.
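As a concrete illustration of the paper's central idea, the sketch below
weights words by their distance to the aspect in the dependency parse tree
rather than by surface word distance. The toy parse, the 1/(1+d) weighting,
and the function names are illustrative assumptions, not the paper's exact
formulation.

```python
# A minimal sketch: weight words for attention by dependency-tree distance
# to the aspect, assuming a pre-parsed sentence given as (head, dependent)
# edges. All names and the weighting formula are illustrative assumptions.
from collections import deque

def tree_distances(num_words, edges, aspect_idx):
    """BFS distance from the aspect word to every word over the
    (undirected) dependency tree."""
    adj = [[] for _ in range(num_words)]
    for head, dep in edges:
        adj[head].append(dep)
        adj[dep].append(head)
    dist = [float("inf")] * num_words
    dist[aspect_idx] = 0
    queue = deque([aspect_idx])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if dist[nxt] == float("inf"):
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

def position_weights(distances):
    """Words closer to the aspect in the parse tree get larger weights."""
    return [1.0 / (1.0 + d) for d in distances]

# "The battery life is great" with aspect "battery" (index 1);
# the (head, dependent) edges come from a hypothetical parse.
edges = [(2, 0), (2, 1), (4, 2), (4, 3)]
weights = position_weights(tree_distances(5, edges, aspect_idx=1))
print(weights)  # [0.33, 1.0, 0.5, 0.25, 0.33]
```

These weights would then scale the attention scores before the softmax, so
that parse-tree neighbors of the aspect dominate the sentiment decision.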
Important Token Capture and Sequence Encoder Design Methods for Token-Level Classification Models
Thesis (Ph.D.) -- Graduate School of Seoul National University: Department of Electrical and Computer Engineering, College of Engineering, August 2022. Advisor: Kyomin Jung.
With the development of the internet, a great volume of data has accumulated over time.
Therefore, dealing with long sequential data has become a core problem in web services.
For example, streaming services such as YouTube, Netflix, and TikTok have used the user's viewing-history sequence to recommend videos that users may like.
Such systems represent each video the user has viewed as an item or token in order to predict which item or token will be viewed next.
These tasks have been defined as Token-Level Classification (TLC) tasks.
Given a sequence of tokens, TLC identifies the labels of the tokens in the required portion of that sequence. As mentioned above, TLC can be applied to various recommendation systems.
In addition, most Natural Language Processing (NLP) tasks can also be formulated as TLC problems.
For example, a sentence and each word within it can be expressed as a token-level sequence.
In particular, information extraction can be cast as a TLC task that determines whether a specific word span in the sentence carries the sought information.
TLC datasets are characteristically very sparse and long.
Therefore, extracting only the important information from a sequence and encoding it properly is a crucial problem.
In this thesis, we propose methods to address two research questions for TLC in recommendation systems and information extraction: 1) how to capture important tokens from a token sequence, and 2) how to encode a token sequence into a model.
As deep neural networks (DNNs) have shown outstanding performance on various web application tasks, we design RNN- and Transformer-based models for recommendation systems and information extraction.
In this dissertation, we propose novel models that can extract important tokens for recommendation systems and information extraction systems.
In recommendation systems, we design a BART-based system that can capture the important portions of a token sequence through self-attention mechanisms and consider both bidirectional and left-to-right directional information.
In information extraction systems, we present relation network-based models that focus on important parts such as opinion targets and neighboring words.
1. Introduction 1
2. Token-level Classification in Recommendation Systems 8
2.1 Overview 8
2.2 Hierarchical RNN-based Recommendation Systems 19
2.3 Entangled Bidirectional Encoder to Auto-regressive Decoder for Sequential Recommendation 27
3. Token-level Classification in Information Extraction 39
3.1 Overview 39
3.2 RABERT: Relation-Aware BERT for Target-Oriented Opinion Words Extraction 49
3.3 Gated Relational Target-aware Encoder and Local Context-aware Decoder for Target-oriented Opinion Words Extraction 58
4. Conclusion 79
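The TLC framing this thesis builds on (predicting a label for every token of
a long sequence) can be made concrete with a small encoder. Below is a
minimal sketch assuming PyTorch; the sizes, layer choices, and class name
are illustrative, not those of the thesis models.

```python
# A minimal token-level classification (TLC) sketch: every element of a
# sequence (a viewed item or a word) is a token, and the model emits a
# label distribution per token. Sizes and layers are illustrative.
import torch
import torch.nn as nn

class TLCModel(nn.Module):
    def __init__(self, vocab_size, num_labels, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_labels)  # one label per token

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))
        return self.head(hidden)  # (batch, seq_len, num_labels)

# Two viewing-history sequences of 8 item ids each.
logits = TLCModel(vocab_size=1000, num_labels=5)(torch.randint(0, 1000, (2, 8)))
print(logits.shape)  # torch.Size([2, 8, 5])
```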
Context-Guided BERT for Targeted Aspect-Based Sentiment Analysis
Aspect-based sentiment analysis (ABSA) and Targeted ABSA (TABSA) allow
finer-grained inferences about sentiment to be drawn from the same text,
depending on context. For example, a given text can have different targets
(e.g., neighborhoods) and different aspects (e.g., price or safety), with
different sentiment associated with each target-aspect pair. In this paper, we
investigate whether adding context to self-attention models improves
performance on (T)ABSA. We propose two variants of Context-Guided BERT
(CG-BERT) that learn to distribute attention under different contexts. We first
adapt a context-aware Transformer to produce a CG-BERT that uses context-guided
softmax-attention. Next, we propose an improved Quasi-Attention CG-BERT model
that learns a compositional attention that supports subtractive attention. We
train both models with pretrained BERT on two (T)ABSA datasets: SentiHood and
SemEval-2014 (Task 4). Both models achieve new state-of-the-art results,
with our QACG-BERT model having the best performance. Furthermore, we
provide analyses of the impact of context in our proposed models. Our work
provides more evidence for the utility of adding context-dependencies to
pretrained self-attention-based language models for context-based natural
language tasks.
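The subtractive-attention idea, attention weights that can dip below zero
where softmax attention is always non-negative, can be illustrated with a
context-conditioned gate in [-1, 1]. This is a hedged toy sketch, not the
paper's exact quasi-attention equations; the gate parameterization in
particular is an assumption.

```python
# Toy illustration of subtractive attention: a tanh gate in [-1, 1] mixes
# a signed, normalized score matrix into the usual softmax attention, so
# some positions can receive effectively negative weight. Not the paper's
# exact formulation; the context gate below is an assumption.
import torch
import torch.nn.functional as F

def quasi_attention(q, k, v, context):
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    softmax_attn = F.softmax(scores, dim=-1)             # non-negative
    gate = torch.tanh(q @ context.unsqueeze(-1))         # (..., seq, 1) in [-1, 1]
    quasi = scores / (scores.abs().sum(-1, keepdim=True) + 1e-9)  # signed
    attn = softmax_attn + gate * quasi                   # may dip below zero
    return attn @ v

q = k = v = torch.randn(1, 6, 32)
out = quasi_attention(q, k, v, context=torch.randn(32))
print(out.shape)  # torch.Size([1, 6, 32])
```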
Aspect-oriented Opinion Alignment Network for Aspect-Based Sentiment Classification
Aspect-based sentiment classification is a crucial problem in fine-grained
sentiment analysis, which aims to predict the sentiment polarity of the given
aspect according to its context. Previous works have made remarkable progress
in leveraging attention mechanism to extract opinion words for different
aspects. However, a persistent challenge is the effective management of
semantic mismatches, which stem from attention mechanisms that fall short in
adequately aligning opinion words with their corresponding aspects in
multi-aspect sentences. To address this issue, we propose a novel
Aspect-oriented Opinion Alignment Network (AOAN) to capture the contextual
association between opinion words and the corresponding aspect. Specifically,
we first introduce a neighboring span enhanced module which highlights various
compositions of neighboring words and given aspects. In addition, we design a
multi-perspective attention mechanism that aligns relevant opinion information
with respect to the given aspect. Extensive experiments on three benchmark
datasets demonstrate that our model achieves state-of-the-art results. The
source code is available at https://github.com/AONE-NLP/ABSA-AOAN.
Comment: 8 pages, 5 figures, ECAI 2023
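The neighboring-span intuition can be sketched as attention over mean-pooled
windows of context words around the aspect, so that attention scores each
aspect-span composition. The window sizes, pooling, and scoring below are
assumptions for illustration, not AOAN's published architecture.

```python
# Rough sketch: enumerate spans of neighbors around the aspect, pool each
# span, and attend over the candidate spans. All choices are assumptions.
import torch
import torch.nn.functional as F

def neighboring_span_attention(hidden, aspect_idx, max_window=3):
    """hidden: (seq_len, d) word states; returns a span-weighted summary."""
    seq_len, d = hidden.shape
    aspect = hidden[aspect_idx]
    spans = []
    for w in range(1, max_window + 1):
        lo, hi = max(0, aspect_idx - w), min(seq_len, aspect_idx + w + 1)
        spans.append(hidden[lo:hi].mean(dim=0))  # mean-pooled span of width w
    spans = torch.stack(spans)                   # (max_window, d)
    weights = F.softmax(spans @ aspect / d ** 0.5, dim=0)
    return weights @ spans                       # attention over span sizes

summary = neighboring_span_attention(torch.randn(10, 16), aspect_idx=4)
print(summary.shape)  # torch.Size([16])
```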
- β¦