InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis
In this paper, we present InstructABSA, Aspect Based Sentiment Analysis
(ABSA) using the instruction learning paradigm for all ABSA subtasks: Aspect
Term Extraction (ATE), Aspect Term Sentiment Classification (ATSC), and Joint
Task modeling. Our method introduces positive, negative, and neutral examples
to each training sample, and instruction tunes the model (Tk-Instruct) for each
ABSA subtask, yielding significant performance improvements. Experimental
results on the Sem Eval 2014, 15, and 16 datasets demonstrate that InstructABSA
outperforms the previous state-of-the-art (SOTA) approaches on all three ABSA
subtasks (ATE, ATSC, and Joint Task) by a significant margin, outperforming 7x
larger models. In particular, InstructABSA surpasses the SOTA on the Rest14 ATE
subtask by 7.31% points, Rest15 ATSC subtask by and on the Lapt14 Joint Task by
8.63% points. Our results also suggest a strong generalization ability to new
domains across all three subtasksComment: 4 pages, 2 figures, 5 tables, 5 appendix page
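The instruction-learning setup described above (prepending a task definition plus positive, negative, and neutral in-context examples to each training sample) can be sketched roughly as follows. The instruction wording and example sentences are our own illustration, not the authors' released prompts.

```python
# Hypothetical sketch of building an instruction-tuned training prompt for
# the ATE subtask, in the spirit of InstructABSA. The seq2seq model
# (e.g. Tk-Instruct) would be fine-tuned on prompt/target pairs like this.

def build_ate_prompt(sentence: str) -> str:
    """Prepend a task definition plus positive, negative, and neutral
    in-context examples to a training sample."""
    definition = (
        "Definition: Extract the aspect terms from the given review sentence."
    )
    examples = [
        # (input sentence, expected aspect terms) -- illustrative only
        ("The pizza was great but the service was slow.", "pizza, service"),
        ("I hated the noisy atmosphere.", "atmosphere"),
        ("We sat by the window.", "none"),
    ]
    shots = "\n".join(
        f"Example input: {s}\nExample output: {a}" for s, a in examples
    )
    return (
        f"{definition}\n{shots}\n"
        f"Now complete the following:\ninput: {sentence}\noutput:"
    )

prompt = build_ate_prompt("The battery life is amazing.")
```

The target for this training pair would be the gold aspect terms (here, "battery life"); the same template idea carries over to the ATSC and Joint Task subtasks with different definitions and outputs.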
Towards Autoencoding Variational Inference for Aspect-based Opinion Summary
Aspect-based Opinion Summary (AOS), consisting of aspect discovery and
sentiment classification steps, has recently been emerging as one of the most
crucial data mining tasks in e-commerce systems. Along this direction, the
LDA-based models are considered a notably suitable approach, since they offer
both topic modeling and sentiment classification. However, unlike traditional
topic modeling, aspect discovery often requires some initial seed words, and
this prior knowledge is not easy to incorporate into LDA models. Moreover, LDA
approaches rely on sampling methods, which need to load the whole corpus into
memory, making them hard to scale. In this research, we study an alternative
approach to the AOS problem,
based on Autoencoding Variational Inference (AVI). Firstly, we introduce the
Autoencoding Variational Inference for Aspect Discovery (AVIAD) model, which
extends the previous work of Autoencoding Variational Inference for Topic
Models (AVITM) to embed prior knowledge of seed words. This work includes
enhancement of the previous AVI architecture and also modification of the loss
function. Ultimately, we present the Autoencoding Variational Inference for
Joint Sentiment/Topic (AVIJST) model. In this model, we substantially extend
the AVI model to support the JST model, which performs topic modeling for
corresponding sentiment. The experimental results show that our proposed models
enjoy higher topic coherence, faster convergence, and better accuracy on
sentiment classification than their LDA-based counterparts.
Comment: 20 pages, 11 figures
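The loss-function modification mentioned above can be sketched as an AVITM-style ELBO plus a penalty that pushes each topic's word distribution to place mass on its seed words. The term structure, names (`beta`, `seed_mask`, `lam`), and weighting are our own assumptions, not the AVIAD implementation.

```python
import numpy as np

# Illustrative sketch: reconstruction + KL terms of a variational topic
# model, plus a seed-word prior penalty of our own devising.

def aviad_style_loss(x, x_recon, mu, log_var, beta, seed_mask, lam=1.0):
    """x: (V,) bag-of-words counts; x_recon: (V,) reconstructed word probs;
    mu, log_var: (K,) variational posterior parameters;
    beta: (K, V) topic-word distributions; seed_mask: (K, V) 1 for seeds."""
    recon = -np.sum(x * np.log(x_recon + 1e-10))               # reconstruction
    kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))  # KL to N(0, I)
    seed = -lam * np.sum(seed_mask * np.log(beta + 1e-10))     # seed penalty
    return recon + kl + seed
```

Under this sketch, a `beta` that concentrates probability on the seed words yields a strictly lower loss than one that spreads it elsewhere, which is the intended effect of embedding seed-word prior knowledge.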
Explainable and Accurate Natural Language Understanding for Voice Assistants and Beyond
Joint intent detection and slot filling, also termed joint NLU (Natural
Language Understanding), is invaluable for smart voice assistants. Recent
advancements in this area have focused heavily on improving accuracy using
various techniques. Explainability is undoubtedly an important aspect for
deep learning-based models, including joint NLU models. Without
explainability, their decisions are opaque to the outside world and hence
tend to lack user trust. Therefore, to bridge this gap, we transform the
full joint NLU model to be `inherently' explainable at granular levels
without compromising accuracy. Further, having made the full joint NLU model
explainable, we show that our extension can be successfully applied to other
general classification tasks. We demonstrate this using sentiment analysis
and named entity recognition.
Comment: Accepted at CIKM 202
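For readers unfamiliar with the task setup, a joint NLU sample pairs one utterance-level intent label with token-level slot labels (typically a BIO scheme). The toy utterance and label names below are illustrative, not drawn from the paper's datasets.

```python
# Toy joint NLU sample: one intent per utterance, one slot tag per token.

def joint_nlu_example():
    tokens = ["play", "jazz", "by", "Miles", "Davis"]
    slots = ["O", "B-genre", "O", "B-artist", "I-artist"]  # BIO slot tags
    intent = "PlayMusic"                                   # utterance intent
    assert len(tokens) == len(slots)  # slot tags align one-to-one with tokens
    return tokens, slots, intent
```

A joint model predicts both outputs from a shared encoding of the utterance, which is what makes per-token (granular) explanations meaningful alongside the utterance-level decision.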
Inferring sentiment-based priors in topic models
© Springer International Publishing Switzerland 2015. Over recent years,
several topic models have appeared that are specifically tailored for
sentiment analysis, including the Joint Sentiment/Topic model, the Aspect and
Sentiment Unification Model, and the User-Sentiment Topic Model. Most of these
models incorporate sentiment knowledge in the β priors; however, these priors
are usually set from a dictionary and rely entirely on prior domain knowledge
to identify positive and negative words. In this work, we show a new approach,
based on the EM algorithm, to automatically infer sentiment-based β priors in
topic models for sentiment analysis and opinion mining. We show that this
method leads to significant improvements for sentiment analysis in known topic
models and can also be used to update sentiment dictionaries with new positive
and negative words.
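The EM idea above can be illustrated on a toy scale: treat each document as drawn from one of two latent sentiment classes, each with its own multinomial over words, and run EM on unlabeled word counts; the recovered per-sentiment word distributions could then seed the β priors of a sentiment topic model. All names and the initialization scheme are our own illustration, not the paper's implementation.

```python
import numpy as np

# Toy two-class multinomial-mixture EM over document-word counts.

def em_sentiment_priors(counts, n_iter=50):
    """counts: (D, V) document-word count matrix.
    Returns a (2, V) array: one word distribution per latent sentiment."""
    D, V = counts.shape
    # Deterministic, slightly asymmetric init to break class symmetry.
    beta = np.full((2, V), 1.0 / V)
    beta[0, 0] += 0.1
    beta[1, -1] += 0.1
    beta /= beta.sum(axis=1, keepdims=True)
    pi = np.array([0.5, 0.5])  # class mixing weights
    for _ in range(n_iter):
        # E-step: per-document class responsibilities, in log space.
        log_r = np.log(pi)[None, :] + counts @ np.log(beta).T  # (D, 2)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights and smoothed word distributions.
        pi = r.mean(axis=0)
        beta = r.T @ counts + 1e-3
        beta /= beta.sum(axis=1, keepdims=True)
    return beta
```

On counts where two groups of documents use disjoint vocabulary (e.g. "good"/"great" vs "bad"/"awful"), the two recovered rows of `beta` separate onto the two word groups, which is the behavior a sentiment-based prior needs.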