Aspect Sentiment Triplet Extraction (ASTE) is a burgeoning subtask of
fine-grained sentiment analysis, aiming to extract structured sentiment
triplets from unstructured textual data. Existing approaches to ASTE often
complicate the task with additional structures or external data. In this
research, we propose a novel tagging scheme and employ a contrastive learning
approach to mitigate these challenges. The proposed approach demonstrates
comparable or superior performance in comparison to state-of-the-art
techniques, while featuring a more compact design and reduced computational
overhead. Notably, even in the era of Large Language Models (LLMs), our method
exhibits superior efficacy compared to GPT 3.5 and GPT 4 in a few-shot learning
scenarios. This study also provides valuable insights for the advancement of
ASTE techniques within the paradigm of large language models