Syntactic Fusion: Enhancing Aspect-Level Sentiment Analysis Through Multi-Tree Graph Integration
Recent progress in aspect-level sentiment classification has been propelled
by the incorporation of graph neural networks (GNNs) leveraging syntactic
structures, particularly dependency trees. Nevertheless, the performance of
these models is often hampered by the innate inaccuracies of parsing
algorithms. To mitigate this challenge, we introduce SynthFusion, an innovative
graph ensemble method that amalgamates predictions from multiple parsers. This
strategy blends diverse dependency relations prior to the application of GNNs,
enhancing robustness against parsing errors while avoiding extra computational
burdens. SynthFusion circumvents the pitfalls of overparameterization and
diminishes the risk of overfitting, prevalent in models with stacked GNN
layers, by optimizing graph connectivity. Our empirical evaluations on the
SemEval14 and Twitter14 datasets affirm that SynthFusion not only outshines
models reliant on single dependency trees but also eclipses alternative
ensemble techniques, achieving this without an escalation in model complexity.
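The multi-parser fusion idea above can be sketched in a few lines: merge the adjacency matrices produced by several parsers by averaging their edge votes, then run an ordinary graph-convolution step over the fused graph. This is a minimal numpy illustration of the general technique, not the authors' SynthFusion implementation; the function names and the toy parses are invented.

```python
import numpy as np

def fuse_dependency_graphs(adjs, threshold=0.0):
    """Fuse adjacency matrices from several parsers by averaging edge
    votes; edges kept by more parsers receive higher weight."""
    fused = np.mean(np.stack(adjs), axis=0)
    return np.where(fused > threshold, fused, 0.0)

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: add self-loops, row-normalise the
    adjacency, aggregate neighbour features, apply ReLU."""
    a_hat = adj + np.eye(adj.shape[0])
    a_norm = a_hat / a_hat.sum(axis=1, keepdims=True)
    return np.maximum(a_norm @ feats @ weight, 0.0)

# Toy 3-token sentence parsed by two hypothetical parsers that disagree
# on one edge; the fused graph down-weights the disputed edge.
parser_a = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
parser_b = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
fused = fuse_dependency_graphs([parser_a, parser_b])
feats = np.eye(3)                      # one-hot token features
w = np.full((3, 2), 0.5)               # toy projection weights
out = gcn_layer(fused, feats, w)
```

Averaging before the GNN, as here, is what keeps the ensemble's cost close to a single-parser model: only one graph is ever convolved.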
Text Classification: A Review, Empirical, and Experimental Evaluation
The explosive and widespread growth of data necessitates the use of text
classification to extract crucial information from vast amounts of data.
Consequently, there has been a surge of research in both classical and deep
learning text classification methods. Despite the numerous methods proposed in
the literature, there is still a pressing need for a comprehensive and
up-to-date survey. Existing survey papers categorize algorithms for text
classification into broad classes, which can lead to the misclassification of
unrelated algorithms and incorrect assessments of their qualities and behaviors
using the same metrics. To address these limitations, our paper introduces a
novel methodological taxonomy that classifies algorithms hierarchically into
fine-grained classes and specific techniques. The taxonomy includes methodology
categories, methodology techniques, and methodology sub-techniques. Our study
is the first survey to utilize this methodological taxonomy for classifying
algorithms for text classification. Furthermore, our study also conducts
empirical evaluation and experimental comparisons and rankings of different
algorithms that employ the same specific sub-technique, different
sub-techniques within the same technique, different techniques within the same
category, and categories.
A Framework of Customer Review Analysis Using the Aspect-Based Opinion Mining Approach
Opinion mining is the branch of computation that deals with opinions,
appraisals, attitudes, and emotions of people and their different aspects. This
field has attracted substantial research interest in recent years. Aspect-level
analysis (also called aspect-based opinion mining) is often desired in practical applications
as it provides detailed opinions or sentiments about different aspects of
entities and entities themselves, which are usually required for action. Aspect
extraction and entity extraction are thus two core tasks of aspect-based
opinion mining. This paper presents a framework of aspect-based opinion
mining based on the concept of transfer learning, evaluated on real-world customer
reviews available on the Amazon website. The model has yielded quite
satisfactory results in its task of aspect-based opinion mining.
Comment: This is the accepted version of the paper that has been presented and
published in the 20th IEEE Conference, OCIT'22. The final published version
is copyright-protected by the IEEE. The paper consists of 5 pages, and it
includes 5 figures and 1 table.
On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training
Aspect-based sentiment analysis (ABSA) aims to automatically infer the
specific sentiment polarities toward certain aspects of products or services
expressed in social media texts or reviews, and has become a fundamental
real-world application. Since the early 2010s, ABSA has achieved
extraordinarily high accuracy with various deep neural models. However,
existing ABSA models with strong in-domain performance may fail to generalize
to challenging cases where the contexts are variable, i.e., they show low
robustness in real-world environments. In this study, we propose to enhance
ABSA robustness by systematically rethinking the bottlenecks from all possible
angles, including model, data, and training. First, we strengthen the current
most robust syntax-aware models by simultaneously incorporating rich external
syntactic dependencies and aspect labels with a universal-syntax graph
convolutional network. From the data perspective, we propose to automatically
induce high-quality synthetic training data of various types, allowing models
to learn sufficient inductive bias for better robustness. Last, based on the
rich pseudo data, we perform adversarial training to enhance resistance to
context perturbation and meanwhile employ contrastive learning to reinforce
the representations of instances with contrasting sentiments. Extensive
robustness evaluations are conducted. The results demonstrate that our
enhanced syntax-aware model achieves better robustness than all
state-of-the-art baselines. Additionally incorporating our synthetic corpus
improves the robust testing results by around 10% accuracy, which is further
improved by the advanced training strategies. In-depth analyses reveal the
factors influencing ABSA robustness.
Comment: Accepted in ACM Transactions on Information Systems
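The contrastive-learning component described above can be illustrated with a standard supervised contrastive loss, which pulls together instances sharing a sentiment label and pushes apart the rest. This is a generic numpy sketch of the loss family the abstract refers to, not the paper's exact objective; the embeddings and labels are toy data.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, positives are the
    other instances with the same label; the denominator sums over all
    other instances."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        others = np.arange(n) != i
        log_denom = np.log(np.exp(sim[i][others]).sum())
        for p in range(n):
            if p != i and labels[p] == labels[i]:
                total += log_denom - sim[i][p]   # -log softmax over others
                count += 1
    return total / max(count, 1)

# Same four points, two labelings: when same-label points are close
# together the loss is low; when positives sit on opposite sides it is high.
points = np.array([[1.0, 0.0], [0.99, 0.1], [-1.0, 0.0], [-0.99, 0.1]])
loss_aligned = supervised_contrastive_loss(points, [0, 0, 1, 1])
loss_shuffled = supervised_contrastive_loss(points, [0, 1, 0, 1])
```

Minimising this loss drives representations of same-sentiment instances together, which is the effect the abstract credits for improved robustness.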
BiSyn-GAT+: Bi-Syntax Aware Graph Attention Network for Aspect-based Sentiment Analysis
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis
task that aims to align aspects and corresponding sentiments for
aspect-specific sentiment polarity inference. It is challenging because a
sentence may contain multiple aspects or complicated (e.g., conditional,
coordinating, or adversative) relations. Recently, exploiting dependency syntax
information with graph neural networks has been the most popular trend. Despite
its success, methods that heavily rely on the dependency tree pose challenges
in accurately modeling the alignment of the aspects and their words indicative
of sentiment, since the dependency tree may provide noisy signals of unrelated
associations (e.g., the "conj" relation between "great" and "dreadful" in
Figure 2). In this paper, to alleviate this problem, we propose a Bi-Syntax
aware Graph Attention Network (BiSyn-GAT+). Specifically, BiSyn-GAT+ fully
exploits the syntax information (e.g., phrase segmentation and hierarchical
structure) of the constituent tree of a sentence to model the sentiment-aware
context of every single aspect (called intra-context) and the sentiment
relations across aspects (called inter-context) for learning. Experiments on
four benchmark datasets demonstrate that BiSyn-GAT+ outperforms the
state-of-the-art methods consistently.
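The graph-attention machinery underlying models like BiSyn-GAT+ can be illustrated with a single-head attention layer over a syntax graph: each node scores its neighbours, normalises the scores with a softmax, and aggregates their features. This is a minimal numpy sketch of a generic GAT layer, not the paper's bi-syntax architecture; the adjacency, features, and attention vector are toy values.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_layer(adj, feats, proj, attn):
    """Single-head graph attention: project features, score each edge
    with a shared attention vector over the concatenated endpoint
    features, softmax within the neighbourhood, then aggregate."""
    h = feats @ proj
    out = np.zeros_like(h)
    for i in range(len(h)):
        nbrs = np.where(adj[i] > 0)[0]
        scores = np.array([attn @ np.concatenate([h[i], h[j]]) for j in nbrs])
        alpha = softmax(scores)        # attention over syntactic neighbours
        out[i] = alpha @ h[nbrs]
    return out

# Toy 3-node syntax graph with self-loops; identity projection and a
# uniform attention vector make the result easy to inspect.
adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)
feats = np.eye(3)
attended = gat_layer(adj, feats, np.eye(3), np.ones(6))
```

Because attention weights are learned per edge, such a layer can in principle down-weight noisy relations (like the "conj" edge the abstract mentions) instead of treating all dependency edges equally.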
Syntax-Informed Interactive Model for Comprehensive Aspect-Based Sentiment Analysis
Aspect-based sentiment analysis (ABSA), a nuanced task in text analysis,
seeks to discern sentiment orientation linked to specific aspect terms in text.
Traditional approaches often overlook or inadequately model the explicit
syntactic structures of sentences, crucial for effective aspect term
identification and sentiment determination. Addressing this gap, we introduce
an innovative model: Syntactic Dependency Enhanced Multi-Task Interaction
Architecture (SDEMTIA) for comprehensive ABSA. Our approach innovatively
exploits syntactic knowledge (dependency relations and types) using a
specialized Syntactic Dependency Embedded Interactive Network (SDEIN). We also
incorporate a novel and efficient message-passing mechanism within a multi-task
learning framework to bolster learning efficacy. Our extensive experiments on
benchmark datasets showcase our model's superiority, significantly surpassing
existing methods. Additionally, incorporating BERT as an auxiliary feature
extractor further enhances our model's performance.
Improving Image Captioning via Predicting Structured Concepts
Facing the difficulty of bridging the semantic gap between images and texts
in image captioning, conventional studies in this area have treated semantic
concepts as a bridge between the two modalities and improved captioning
performance accordingly. Although promising results on concept prediction
were obtained, these studies normally ignore the relationships among
concepts, which depend not only on objects in the image but also on word
dependencies in the text, and thus offer considerable potential for improving
the generation of good descriptions. In this paper, we propose a structured
concept predictor (SCP) to predict concepts and their structures, and then
integrate them into captioning, so as to enhance the contribution of visual
signals in this task via concepts and further use their relations to
distinguish cross-modal semantics for better description generation. In
particular, we design weighted graph convolutional networks (W-GCN) to depict
concept relations driven by word dependencies, and then learn differentiated
contributions from these concepts for the subsequent decoding process. Our
approach thus captures potential relations among concepts and learns
different concepts discriminatively, effectively facilitating image
captioning with information inherited across modalities. Extensive
experiments and their results demonstrate the effectiveness of our approach
as well as each proposed module in this work.
Comment: Accepted by EMNLP 2023 (Main Conference, Oral)
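A weighted graph convolution of the kind W-GCN denotes can be sketched as follows: edge weights derived from word-dependency counts scale the messages passed between concept nodes. This is a hypothetical illustration, not the paper's implementation; the weighting scheme, concept names, and dependency pairs are invented.

```python
import numpy as np

def dependency_edge_weights(dep_pairs, concepts):
    """Hypothetical weighting: count word-dependency links between
    concept pairs, then normalise the counts into [0, 1] edge weights."""
    idx = {c: i for i, c in enumerate(concepts)}
    n = len(concepts)
    w = np.zeros((n, n))
    for a, b in dep_pairs:
        if a in idx and b in idx:
            w[idx[a], idx[b]] += 1.0
            w[idx[b], idx[a]] += 1.0
    return w / max(w.max(), 1.0)

def weighted_gcn_layer(edge_weights, feats, proj):
    """One weighted graph convolution: stronger edges pass stronger
    messages between concept nodes."""
    deg = edge_weights.sum(axis=1, keepdims=True) + 1e-8
    return np.tanh((edge_weights / deg) @ feats @ proj)

# Toy concept graph: "dog"-"frisbee" co-occurs in two dependencies,
# "dog"-"park" in one, so the former edge carries double weight.
concepts = ["dog", "frisbee", "park"]
pairs = [("dog", "frisbee"), ("dog", "frisbee"), ("dog", "park")]
ew = dependency_edge_weights(pairs, concepts)
feats = np.eye(3)
out = weighted_gcn_layer(ew, feats, np.eye(3))
```

The decoder could then attend over these concept representations, letting frequently co-dependent concepts reinforce each other during caption generation.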
Graph Neural Networks for Natural Language Processing: A Survey
Deep learning has become the dominant approach to coping with various tasks
in Natural Language Processing (NLP). Although text inputs are typically
represented as a sequence of tokens, there is a rich variety of NLP problems
that can be best expressed with a graph structure. As a result, there is a
surge of interest in developing new deep learning techniques on graphs for a
large number of NLP tasks. In this survey, we present a comprehensive
overview of Graph Neural Networks (GNNs) for Natural Language Processing. We
propose a new taxonomy of GNNs for NLP, which systematically organizes
existing research on GNNs for NLP along three axes: graph construction, graph
representation learning, and graph-based encoder-decoder models. We further
introduce a large number of NLP applications that exploit the power of GNNs
and summarize the corresponding benchmark datasets, evaluation metrics, and
open-source codes. Finally, we discuss various outstanding challenges for
making full use of GNNs for NLP as well as future research directions. To the
best of our knowledge, this is the first comprehensive overview of Graph
Neural Networks for Natural Language Processing.
Comment: 127 pages
Improving Implicit Sentiment Learning via Local Sentiment Aggregation
Recent well-known works demonstrate encouraging progress in aspect-based
sentiment classification (ABSC), while implicit aspect sentiment modeling
remains an open problem. Our preliminary study shows that
implicit aspect sentiments usually depend on adjacent aspects' sentiments,
which indicates we can extract implicit sentiment via local sentiment
dependency modeling. We formulate a local sentiment aggregation paradigm (LSA)
based on empirical sentiment patterns (SP) to address sentiment dependency
modeling. Compared to existing methods, LSA is an efficient approach that
learns the implicit sentiments in a local sentiment aggregation window, which
tackles the efficiency problem and avoids the token-node alignment problem of
syntax-based methods. Furthermore, we refine a differential weighting method
based on gradient descent that guides the construction of the sentiment
aggregation window. According to experimental results, LSA is effective for all
objective ABSC models, attaining state-of-the-art performance on three public
datasets. LSA is an adaptive paradigm and is ready to be adapted to existing
models, and we release the code to offer insight to improve existing ABSC
models.
Comment: Source Code: https://github.com/yangheng95/PyABS
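The local sentiment aggregation idea can be sketched as a windowed blend of neighbouring aspects' sentiment logits, so an implicit (near-zero) aspect inherits cues from adjacent aspects. This is a minimal numpy sketch under assumed inputs, not the authors' LSA implementation; the uniform window weights here stand in for the gradient-derived weighting the abstract mentions.

```python
import numpy as np

def aggregate_local_sentiment(aspect_logits, window=1, weights=None):
    """Blend each aspect's sentiment logits with its neighbours inside a
    local window; out-of-range positions are simply skipped."""
    n = len(aspect_logits)
    if weights is None:
        weights = np.ones(2 * window + 1)   # uniform stand-in weights
    out = []
    for i in range(n):
        acc = np.zeros_like(aspect_logits[0], dtype=float)
        norm = 0.0
        for k, j in enumerate(range(i - window, i + window + 1)):
            if 0 <= j < n:
                acc += weights[k] * aspect_logits[j]
                norm += weights[k]
        out.append(acc / norm)
    return np.array(out)

# Three aspects with [positive, negative] logits: the middle aspect has
# no explicit sentiment and drifts toward its positive neighbours.
logits = np.array([[2.0, 0.0], [0.0, 0.0], [2.0, 0.0]])
agg = aggregate_local_sentiment(logits)
```

Because the window operates on aspect positions rather than syntax trees, no token-to-node alignment is needed, which is the efficiency point the abstract makes against syntax-based methods.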