
    Syntax-Informed Interactive Model for Comprehensive Aspect-Based Sentiment Analysis

    Aspect-based sentiment analysis (ABSA), a nuanced task in text analysis, seeks to discern the sentiment orientation linked to specific aspect terms in text. Traditional approaches often overlook or inadequately model the explicit syntactic structure of sentences, which is crucial for effective aspect term identification and sentiment determination. Addressing this gap, we introduce the Syntactic Dependency Enhanced Multi-Task Interaction Architecture (SDEMTIA) for comprehensive ABSA. Our approach exploits syntactic knowledge (dependency relations and types) through a specialized Syntactic Dependency Embedded Interactive Network (SDEIN) and incorporates a novel, efficient message-passing mechanism within a multi-task learning framework to bolster learning efficacy. Extensive experiments on benchmark datasets show that our model significantly surpasses existing methods, and incorporating BERT as an auxiliary feature extractor further improves its performance.
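    The abstract does not give implementation details, but the core idea of embedding dependency relation types into message passing can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch layer, not the authors' SDEIN code; the class name RelationAwareGCNLayer, the input shapes, and the additive combination of neighbour states with relation embeddings are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class RelationAwareGCNLayer(nn.Module):
    """One message-passing step in which each edge contributes a learned
    dependency-relation embedding alongside the neighbour's hidden state."""

    def __init__(self, hidden_dim: int, num_relation_types: int):
        super().__init__()
        self.rel_embed = nn.Embedding(num_relation_types, hidden_dim)
        self.linear = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h, adj, rel_ids):
        # h:       (batch, seq_len, hidden) token states, e.g. from BERT
        # adj:     (batch, seq_len, seq_len) 0/1 dependency adjacency matrix
        # rel_ids: (batch, seq_len, seq_len) dependency-type id per edge (long)
        rel = self.rel_embed(rel_ids)                      # (B, L, L, H)
        msgs = h.unsqueeze(1) + rel                        # neighbour state + relation
        msgs = msgs * adj.unsqueeze(-1)                    # zero out non-edges
        agg = msgs.sum(dim=2) / adj.sum(dim=2, keepdim=True).clamp(min=1)
        return torch.relu(self.linear(agg))                # (B, L, H)
```

    Stacking a few such layers over encoder token states and pooling over the aspect span would yield aspect representations informed by both the dependency structure and the relation types.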

    On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training

    Aspect-based sentiment analysis (ABSA) aims to automatically infer the sentiment polarities toward specific aspects of products or services expressed in social media texts or reviews, and has become a fundamental real-world application. Since the early 2010s, ABSA has achieved extraordinarily high accuracy with various deep neural models. However, existing ABSA models with strong in-house performance may fail to generalize to challenging cases where the contexts are variable, i.e., they show low robustness to real-world environments. In this study, we propose to enhance ABSA robustness by systematically rethinking the bottlenecks from all possible angles: model, data, and training. First, we strengthen the currently most robust syntax-aware models by further incorporating rich external syntactic dependencies and dependency labels, together with the aspect, via a universal-syntax graph convolutional network. From the data perspective, we automatically induce high-quality synthetic training data of various types, allowing models to learn sufficient inductive bias for better robustness. Finally, building on the rich pseudo data, we perform adversarial training to enhance resistance to context perturbation and employ contrastive learning to reinforce the representations of instances with contrasting sentiments. Extensive robustness evaluations show that our enhanced syntax-aware model achieves better robustness than all state-of-the-art baselines. Additionally incorporating our synthetic corpus improves the robustness test results by around 10% accuracy, and the advanced training strategies improve them further. In-depth analyses reveal the factors influencing ABSA robustness. Comment: Accepted in ACM Transactions on Information Systems
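    The abstract does not specify the training recipe, so the sketch below only illustrates one generic form that adversarial training of an ABSA classifier can take: an FGM-style perturbation of the embedding weights in the direction of the loss gradient. The attribute name model.embedding, the epsilon value, and the loop structure are assumptions for illustration, not the authors' implementation.

```python
import torch


def fgm_adversarial_step(model, loss_fn, batch, labels, optimizer, epsilon=1.0):
    """One training step combining the clean loss with the loss under a small
    adversarial perturbation of the word-embedding weights (FGM-style)."""
    optimizer.zero_grad()
    clean_loss = loss_fn(model(batch), labels)
    clean_loss.backward()                        # gradients w.r.t. the embeddings

    emb = model.embedding.weight                 # assumed attribute name
    grad = emb.grad.detach()
    delta = epsilon * grad / (grad.norm() + 1e-12)

    emb.data.add_(delta)                         # move embeddings toward higher loss
    adv_loss = loss_fn(model(batch), labels)
    adv_loss.backward()                          # accumulate adversarial gradients
    emb.data.sub_(delta)                         # restore the original embeddings

    optimizer.step()
    return clean_loss.item(), adv_loss.item()
```

    A contrastive term over instances with contrasting sentiments could be added to either loss; the abstract does not state how the two objectives are weighted.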

    Bridging the Gap in Multilingual Semantic Role Labeling: A Language-Agnostic Approach

    Recent research indicates that taking advantage of complex syntactic features leads to favorable results in Semantic Role Labeling. Nonetheless, an analysis of the latest state-of-the-art multilingual systems reveals the difficulty of bridging the wide performance gap between high-resource (e.g., English) and low-resource (e.g., German) settings. To overcome this issue, we propose a fully language-agnostic model that does away with morphological and syntactic features to achieve robustness across languages. Our approach outperforms the state of the art in all the languages of the CoNLL-2009 benchmark dataset, especially when only a scarce amount of training data is available. Our objective is not to reject approaches that rely on syntax, but rather to set a strong and consistent language-independent baseline for future innovations in Semantic Role Labeling. We release our model code and checkpoints at https://github.com/SapienzaNLP/multi-srl.
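    As a rough illustration only: a syntax-free, language-agnostic SRL model of the kind described typically reduces to a per-token role classifier over multilingual contextual embeddings plus a predicate indicator, with no morphological or syntactic inputs. The module below is a hypothetical sketch in that spirit, not the released multi-srl code; the class name, shapes, and the simple predicate-flag conditioning are assumptions.

```python
import torch
import torch.nn as nn


class SyntaxFreeSRLHead(nn.Module):
    """Predicts a semantic-role label per token from contextual embeddings
    alone, conditioned on the predicate position, with no syntactic features."""

    def __init__(self, hidden_dim: int, num_roles: int):
        super().__init__()
        self.predicate_embed = nn.Embedding(2, hidden_dim)  # is-predicate flag
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_roles),
        )

    def forward(self, token_states, predicate_mask):
        # token_states:   (batch, seq_len, hidden) from a multilingual encoder
        # predicate_mask: (batch, seq_len) 1 at the predicate token, else 0
        pred_flag = self.predicate_embed(predicate_mask.long())
        return self.classifier(torch.cat([token_states, pred_flag], dim=-1))
```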

    Syntactic Fusion: Enhancing Aspect-Level Sentiment Analysis Through Multi-Tree Graph Integration

    Recent progress in aspect-level sentiment classification has been propelled by graph neural networks (GNNs) that leverage syntactic structures, particularly dependency trees. Nevertheless, the performance of these models is often hampered by the inherent inaccuracies of parsing algorithms. To mitigate this challenge, we introduce SynthFusion, a graph ensemble method that combines predictions from multiple parsers. This strategy merges diverse dependency relations before applying the GNN, enhancing robustness against parsing errors while avoiding extra computational burden. By optimizing graph connectivity, SynthFusion sidesteps the overparameterization and overfitting risks prevalent in models with stacked GNN layers. Our empirical evaluations on the SemEval14 and Twitter14 datasets confirm that SynthFusion not only outperforms models relying on a single dependency tree but also surpasses alternative ensemble techniques, without any increase in model complexity.
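    The abstract describes merging dependency relations from several parsers before running a GNN. A minimal sketch of that idea, assuming each parser's output is reduced to a 0/1 adjacency matrix over the same tokenization, is given below; the edge-wise union and the plain mean-aggregation GCN layer are illustrative choices, not the SynthFusion implementation.

```python
import torch


def fuse_dependency_graphs(adjacency_list):
    """Merge 0/1 adjacency matrices from multiple parsers into a single graph
    by taking the edge-wise union (a weighted sum would also be possible)."""
    stacked = torch.stack(adjacency_list)         # (num_parsers, seq_len, seq_len)
    return (stacked.sum(dim=0) > 0).float()       # keep any edge proposed by a parser


def gcn_layer(h, fused_adj, weight):
    """One mean-aggregation graph convolution over the fused dependency graph."""
    # h: (seq_len, hidden), fused_adj: (seq_len, seq_len), weight: (hidden, hidden)
    deg = fused_adj.sum(dim=-1, keepdim=True).clamp(min=1)
    return torch.relu(((fused_adj @ h) / deg) @ weight)
```

    Because the parsers are combined at the graph level, a single GNN stack suffices, which is consistent with the abstract's claim that the ensemble adds no model complexity.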