Existing works on Aspect Sentiment Triplet Extraction (ASTE) explicitly focus
on developing more efficient fine-tuning techniques for the task. Instead, our
motivation is to develop a generic approach that can simultaneously improve the
downstream performance of multiple ABSA tasks. To this end, we present
CONTRASTE, a novel pre-training strategy that uses CONTRastive learning to
enhance ASTE performance. While we primarily focus on ASTE, we also
demonstrate the advantage of our proposed technique on other ABSA tasks such as
ACOS, TASD, and AESC. Given a sentence and its associated (aspect, opinion,
sentiment) triplets, we first design aspect-based prompts with the corresponding
sentiments masked. We then (pre)train an encoder-decoder model by applying
contrastive learning on the decoder-generated aspect-aware sentiment
representations of the masked terms. For fine-tuning the model weights thus
obtained, we then propose a novel multi-task approach where the base
encoder-decoder model is combined with two complementary modules: a
tagging-based Opinion Term Detector and a regression-based Triplet Count
Estimator. Exhaustive experiments on four benchmark datasets and a detailed
ablation study establish the importance of each of our proposed components as
we achieve new state-of-the-art ASTE results.

Comment: Accepted as a Long Paper at EMNLP 2023 (Findings); 16 pages; Code: https://github.com/nitkannen/CONTRASTE
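
To make the pre-training step concrete, below is a minimal, self-contained sketch (not the authors' released code; see the repository above for that) of the two ideas the abstract describes: building an aspect-based prompt whose sentiment slot is masked, and applying a supervised contrastive (SupCon-style) loss over the decoder-generated representations at the masked sentiment positions. The prompt template, the mask token, and the exact loss formulation here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def build_masked_prompt(sentence: str, aspect: str, mask_token: str = "<mask>") -> str:
    """Hypothetical aspect-based prompt with the sentiment term masked."""
    return f"{sentence} aspect: {aspect} sentiment: {mask_token}"


def supervised_contrastive_loss(reps: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """SupCon-style objective: pull together masked-sentiment representations
    that share a polarity label, push apart those that do not."""
    reps = F.normalize(reps, dim=-1)                      # (B, d) unit vectors
    sim = reps @ reps.T / temperature                     # pairwise similarities
    self_mask = torch.eye(len(reps), dtype=torch.bool, device=reps.device)
    sim = sim.masked_fill(self_mask, float("-inf"))       # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = positives.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob.masked_fill(~positives, 0.0)).sum(dim=1) / pos_counts
    return per_anchor[positives.sum(dim=1) > 0].mean()    # anchors with >=1 positive


# Example: stand-ins for decoder states at the <mask> position, labelled with
# the gold sentiment polarity (0=POS, 1=NEG, 2=NEU) of the prompted aspect.
print(build_masked_prompt("The pasta was great but the service was slow", "service"))
batch_reps = torch.randn(8, 768)
polarity = torch.tensor([0, 1, 2, 0, 1, 2, 0, 1])
print(supervised_contrastive_loss(batch_reps, polarity).item())
```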
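Similarly, the following is a hedged sketch of the multi-task fine-tuning objective: the triplet-generation loss of the base encoder-decoder is combined with a tagging loss from an Opinion Term Detector head and a regression loss from a Triplet Count Estimator head. The head designs, the BIO tag set, and the loss weights alpha and beta are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AuxiliaryHeads(nn.Module):
    """Illustrative auxiliary modules over encoder token states of shape (B, T, d)."""
    def __init__(self, hidden: int = 768, num_tags: int = 3):
        super().__init__()
        self.opinion_tagger = nn.Linear(hidden, num_tags)    # e.g. BIO tags over opinion terms
        self.count_regressor = nn.Linear(hidden, 1)           # triplet count from mean-pooled state

    def forward(self, enc_states: torch.Tensor):
        tag_logits = self.opinion_tagger(enc_states)            # (B, T, num_tags)
        count_pred = self.count_regressor(enc_states.mean(1))   # (B, 1)
        return tag_logits, count_pred


def multitask_loss(gen_loss, tag_logits, tag_labels, count_pred, count_gold,
                   alpha: float = 0.5, beta: float = 0.5):
    """Weighted sum of generation, opinion-tagging, and count-regression losses."""
    tag_loss = F.cross_entropy(tag_logits.flatten(0, 1), tag_labels.flatten())
    count_loss = F.mse_loss(count_pred.squeeze(-1), count_gold)
    return gen_loss + alpha * tag_loss + beta * count_loss


# Example with random tensors standing in for real model outputs.
heads = AuxiliaryHeads()
enc_states = torch.randn(2, 10, 768)                      # encoder token states
tag_logits, count_pred = heads(enc_states)
tag_labels = torch.randint(0, 3, (2, 10))                 # gold BIO tags
count_gold = torch.tensor([2.0, 1.0])                     # gold triplet counts
gen_loss = torch.tensor(1.3)                              # stand-in seq2seq loss
print(multitask_loss(gen_loss, tag_logits, tag_labels, count_pred, count_gold).item())
```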