A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep Contextual Word Embeddings and Hierarchical Attention
The Web has become the main platform where people express their opinions
about entities of interest and their associated aspects. Aspect-Based Sentiment
Analysis (ABSA) aims to automatically compute the sentiment towards these
aspects from opinionated text. In this paper we extend the state-of-the-art
Hybrid Approach for Aspect-Based Sentiment Analysis (HAABSA) method in two
directions. First, we replace the non-contextual word embeddings with deep
contextual word embeddings in order to better cope with the word semantics in a
given text. Second, we use hierarchical attention by adding an extra attention
layer to the HAABSA high-level representations in order to increase the method
flexibility in modeling the input data. Using two standard datasets (SemEval
2015 and SemEval 2016) we show that the proposed extensions improve the
accuracy of the built model for ABSA.

Comment: Accepted for publication in the 20th International Conference on Web
Engineering (ICWE 2020), Helsinki, Finland, 9-12 June 2020
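The extra attention layer described above can be illustrated generically. The following is a toy numpy sketch of attention pooling over a sequence of high-level hidden states, not the authors' HAABSA implementation; the score vector `w` and the toy dimensions are assumptions made for illustration:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Score each time step's hidden state with a learned vector w,
    normalize the scores with softmax, and return the weighted sum
    (context vector) plus the attention weights."""
    scores = H @ w            # one score per time step, shape (T,)
    alpha = softmax(scores)   # attention weights, sum to 1
    return alpha @ H, alpha   # context vector (d,), weights (T,)

# toy example: 4 time steps, 3-dim hidden states
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
w = rng.normal(size=3)
ctx, alpha = attention_pool(H, w)
```

Hierarchical attention amounts to applying another such pooling layer on top of representations that were themselves produced with attention, which is what gives the model extra flexibility in weighting the input.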
Does BERT Understand Sentiment? Leveraging Comparisons Between Contextual and Non-Contextual Embeddings to Improve Aspect-Based Sentiment Models
When performing Polarity Detection for different words in a sentence, we need
to look at the surrounding words to understand the sentiment. Massively
pretrained language models like BERT can encode not only the words in a
document but also the context around those words. This begs the questions:
"Does a pretrained language model also automatically encode sentiment
information about each word?" and "Can it be used to infer polarity towards
different aspects?". In this work we try to answer these questions by showing
that training on a comparison of a contextual embedding from BERT and a
generic word embedding can be used to infer sentiment. We also show that if we
fine-tune a subset of weights of the model built on the comparison of BERT and
generic word embeddings, it can achieve state-of-the-art results for Polarity
Detection on Aspect-Based Sentiment Classification datasets.
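The comparison idea above can be sketched with a toy feature constructor: pair a context-dependent vector for a token with its static (generic) vector and feed the combination to a downstream classifier. This is an illustrative sketch under assumed toy dimensions, not the paper's actual model; the concatenate-plus-difference feature layout is an assumption:

```python
import numpy as np

def comparison_features(contextual_vec, generic_vec):
    """Combine a context-dependent vector (e.g. from BERT) with a static
    word vector (e.g. GloVe) for the same token. The difference term
    captures what the context added on top of the generic meaning; the
    concatenation keeps both views for the classifier."""
    return np.concatenate([contextual_vec,
                           generic_vec,
                           contextual_vec - generic_vec])

# toy 4-dim embeddings for one token (hypothetical values)
ctx = np.array([0.2, -0.1, 0.5, 0.3])   # contextual (in-sentence) vector
gen = np.array([0.1,  0.0, 0.4, 0.3])   # generic (static) vector
feats = comparison_features(ctx, gen)    # 12-dim feature vector
```

A small polarity classifier (e.g. logistic regression) trained on such per-token features is one concrete way to "train a comparison" of the two embedding spaces.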
A Bi-Directional GRU Architecture for the Self-Attention Mechanism: An Adaptable, Multi-Layered Approach with Blend of Word Embedding
Sentiment analysis (SA) has become an essential component of natural language processing (NLP) with numerous practical applications for understanding “what other people think”. Various techniques have been developed to tackle SA using deep learning (DL); however, current research lacks comprehensive strategies incorporating multiple-word embeddings. This study proposes a self-attention mechanism that leverages DL and involves the contextual integration of word embedding with a time-dispersed bidirectional gated recurrent unit (Bi-GRU). This work employs the word embedding approaches GloVe, word2vec, and fastText to achieve better predictive capabilities. By integrating these techniques, the study aims to improve the classifier’s capability to precisely analyze and categorize sentiments in textual data from the domain of movies. The investigation seeks to enhance the classifier’s performance in NLP tasks by addressing the challenges of underfitting and overfitting in DL. To evaluate the model’s effectiveness, an openly available IMDb dataset was utilized, achieving a remarkable testing accuracy of 99.70%.
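The blend of word embeddings described above can be sketched as a per-token concatenation of several lookup tables before the sequence enters the Bi-GRU. The following toy numpy sketch uses randomly initialized tables and assumed dimensions purely for illustration; it is not the study's actual pipeline:

```python
import numpy as np

def blend_embeddings(token_ids, tables):
    """Look each token up in several embedding tables (e.g. GloVe,
    word2vec, fastText) and concatenate the vectors per token, so the
    downstream Bi-GRU sees all the embedding views at once."""
    return np.concatenate([tab[token_ids] for tab in tables], axis=-1)

# toy setup: vocabulary of 10 words, 4-dim vectors per table
rng = np.random.default_rng(1)
glove    = rng.normal(size=(10, 4))
word2vec = rng.normal(size=(10, 4))
fasttext = rng.normal(size=(10, 4))

sent = np.array([1, 5, 2])   # a 3-token sentence as vocabulary indices
X = blend_embeddings(sent, [glove, word2vec, fasttext])  # shape (3, 12)
```

Each token's 12-dim blended vector would then be consumed time step by time step by the forward and backward GRU passes, with self-attention applied over the resulting hidden states.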