Lifelong Learning CRF for Supervised Aspect Extraction
This paper makes a focused contribution to supervised aspect extraction. It
shows that if the system has performed aspect extraction in many past domains
and retained the results as knowledge, a Conditional Random Field (CRF) can
leverage this knowledge in a lifelong-learning manner to extract aspects in a
new domain markedly better than a traditional CRF that does not use this prior
knowledge. The key innovation is that even after CRF training, the model can
still improve its extraction with experience from its applications.

Comment: Accepted at ACL 2017. arXiv admin note: text overlap with
arXiv:1612.0794
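The core idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: aspect terms extracted from past domains are retained in a knowledge base, and membership in that base becomes an extra feature fed to the CRF. The names (`KnowledgeBase`, `lifelong_features`) and the reliability threshold are hypothetical.

```python
from collections import Counter

class KnowledgeBase:
    """Retains aspect terms extracted from past domains, with counts."""
    def __init__(self):
        self.aspect_counts = Counter()

    def add_domain_results(self, extracted_aspects):
        # Called after extraction finishes in each domain.
        self.aspect_counts.update(a.lower() for a in extracted_aspects)

    def is_reliable_aspect(self, word, min_domains=2):
        # A term extracted in several past domains is likely a true aspect.
        return self.aspect_counts[word.lower()] >= min_domains

def lifelong_features(sentence, i, kb):
    """Feature dict for token i, augmented with a knowledge-base feature."""
    word = sentence[i]
    return {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),
        "in_knowledge_base": kb.is_reliable_aspect(word),
    }

kb = KnowledgeBase()
kb.add_domain_results(["battery", "screen"])  # e.g., a phone-review domain
kb.add_domain_results(["battery", "lens"])    # e.g., a camera-review domain

feats = lifelong_features(["The", "battery", "drains"], 1, kb)
print(feats["in_knowledge_base"])  # True: "battery" seen in two past domains
```

Because the knowledge base grows as the system is applied to new domains, the feature values can change after training, which is how extraction keeps improving post-training.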
A Survey of Cross-Lingual Sentiment Analysis Based on Pre-Trained Models
With the development of natural language processing technology, researchers have widely studied monolingual Sentiment Analysis (SA) with Machine Learning (ML) and Deep Learning (DL). However, there has been much less work on Cross-Lingual SA (CLSA), although it is beneficial when dealing with low-resource languages (e.g., Tamil, Malayalam, Hindi, and Arabic). This paper surveys the main challenges and issues of CLSA based on pre-trained language models and reviews the leading methods for coping with CLSA. In particular, we compare and analyze their pros and cons. Moreover, we summarize valuable cross-lingual resources and point out the main problems researchers need to solve in the future.
AELA-DLSTMs: Attention-enabled and location-aware double LSTMs for aspect-level sentiment classification
Aspect-level sentiment classification, a fine-grained sentiment classification task that aims to extract the sentiment polarity of opinions towards a specific aspect word, has seen tremendous improvement in recent years. Three factors are key for aspect-level sentiment classification: the contextual semantic information around aspect words, the correlations between aspect words and their context words, and the location of context words relative to aspect words. In this paper, two models, AE-DLSTMs (Attention-Enabled Double LSTMs) and AELA-DLSTMs (Attention-Enabled and Location-Aware Double LSTMs), are proposed for aspect-level sentiment classification. AE-DLSTMs take full advantage of DLSTMs (Double LSTMs), which capture the contextual semantic information around aspect words in both the forward and backward directions. Meanwhile, a novel attention-weight generation method that combines aspect words with their contextual semantic information is designed, so that the weights can better exploit the correlations between aspect words and their context words. In addition, we observe that context words at different distances or in different directions from aspect words contribute differently to sentiment polarity. Building on AE-DLSTMs, AELA-DLSTMs incorporate the location information of context words by assigning them different weights, which improves accuracy. Experiments are conducted on two English datasets and one Chinese dataset. The results confirm that our models make remarkable improvements and outperform all baseline models on all datasets, improving accuracy by 1.67 to 4.77 percent across datasets compared with the baselines.
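The location-weighting idea can be illustrated with a small sketch: context words closer to the aspect word receive larger weights before attention is applied. The linear decay used here (1 minus distance over sentence length) is an illustrative choice, not necessarily the paper's exact scheme.

```python
def location_weights(num_tokens, aspect_index):
    """Weight each token by its proximity to the aspect word.

    Tokens farther from the aspect word get smaller weights, so distant
    context contributes less to the sentence representation.
    """
    return [1.0 - abs(i - aspect_index) / num_tokens
            for i in range(num_tokens)]

tokens = ["the", "screen", "is", "bright", "but", "small"]
w = location_weights(len(tokens), aspect_index=1)  # aspect word: "screen"
# The aspect word itself gets weight 1.0; weights fall off with distance.
```

In a full model, these weights would scale each context word's hidden state (in both LSTM directions) before the attention weights are computed.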