27 research outputs found

    Exploiting BERT for End-to-End Aspect-based Sentiment Analysis

    In this paper, we investigate the modeling power of contextualized embeddings from pre-trained language models, e.g. BERT, on the E2E-ABSA task. Specifically, we build a series of simple yet insightful neural baselines for E2E-ABSA. The experimental results show that, even with a simple linear classification layer, our BERT-based architecture can outperform state-of-the-art works. In addition, we standardize the comparative study by consistently using a hold-out validation dataset for model selection, a practice largely ignored by previous works. Our work can therefore serve as a BERT-based benchmark for E2E-ABSA. (Comment: NUT workshop @ EMNLP-IJCNLP 2019)
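    The simplest baseline described in this abstract amounts to a pre-trained BERT encoder with one linear layer that tags every token with a joint aspect/sentiment label. Below is a minimal sketch of that idea, not the authors' released code; the Hugging Face `transformers` API usage, the `bert-base-uncased` checkpoint, the dropout rate, and the tag-set size are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): BERT encoder + a single
# linear classification layer that tags each token with a joint
# aspect/sentiment label (e.g. B-POS, I-NEG, O).
import torch.nn as nn
from transformers import BertModel


class BertLinearE2EABSA(nn.Module):
    def __init__(self, num_labels: int, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)  # contextualized embeddings
        self.dropout = nn.Dropout(0.1)                     # assumed dropout rate
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        return self.classifier(self.dropout(hidden))       # (batch, seq_len, num_labels)
```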

    Modeling Multi-Targets Sentiment Classification via Graph Convolutional Networks and Auxiliary Relation

    Existing solutions do not work well when multiple targets coexist in a sentence, because they usually separate the targets and process them individually: if the original sentence contains N targets, it is repeated N times and only one target is handled in each copy. To some extent, this approach degenerates the fine-grained sentiment classification task into a sentence-level sentiment classification task, and processing each target separately ignores the internal relations and interactions between targets. Based on these considerations, we propose to use a Graph Convolutional Network (GCN) to model and process all targets appearing in a sentence simultaneously, based on their positional relationships, and then to construct a graph of the sentiment relations between targets based on the differences in sentiment polarity between the target words. In addition to the standard target-dependent sentiment classification task, we construct an auxiliary node relation classification task. Experiments demonstrate that our model achieves comparable performance on the benchmark datasets of SemEval-2014 Task 4, i.e., reviews for restaurants and laptops. They also show that treating target words as isolated individuals has disadvantages, and that multi-task learning enhances the feature extraction and expressive ability of the model.
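    As a rough illustration of the architecture sketched in this abstract, the code below combines one graph-convolution step over target nodes (with an adjacency matrix assumed to encode the positional relations between targets) with two output heads: the main per-target sentiment classifier and the auxiliary node relation classifier. This is a hedged sketch, not the paper's implementation; the class names, tensor shapes, and the three-polarity/two-relation label counts are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TargetGCNLayer(nn.Module):
    """One graph-convolution step over target nodes."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (batch, num_targets, dim); adj: (batch, num_targets, num_targets)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)   # simple degree normalization
        return F.relu(self.linear(adj @ x) / deg)


class MultiTargetGCN(nn.Module):
    def __init__(self, dim, num_polarities=3, num_relations=2):
        super().__init__()
        self.gcn = TargetGCNLayer(dim)
        self.sentiment_head = nn.Linear(dim, num_polarities)    # main task
        self.relation_head = nn.Linear(2 * dim, num_relations)  # auxiliary task

    def forward(self, target_feats, adj):
        h = self.gcn(target_feats, adj)
        sentiment_logits = self.sentiment_head(h)                # per-target polarity
        # Auxiliary task: classify the relation between every pair of target nodes.
        n = h.size(1)
        pairs = torch.cat([h.unsqueeze(2).expand(-1, -1, n, -1),
                           h.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)
        relation_logits = self.relation_head(pairs)              # (batch, n, n, relations)
        return sentiment_logits, relation_logits
```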

    Zero-shot stance detection based on cross-domain feature enhancement by contrastive learning

    Zero-shot stance detection is challenging because it requires detecting the stance of previously unseen targets in the inference phase. The ability to learn transferable target-invariant features is critical for zero-shot stance detection. In this work, we propose a stance detection approach that can efficiently adapt to unseen targets, the core of which is to capture target-invariant syntactic expression patterns as transferable knowledge. Specifically, we first augment the data by masking the topic words of sentences, and then feed the augmented data to an unsupervised contrastive learning module to capture transferable features. Then, to fit a specific target, we encode the raw texts as target-specific features. Finally, we adopt an attention mechanism, which combines syntactic expression patterns with target-specific features to obtain enhanced features for predicting previously unseen targets. Experiments demonstrate that our model outperforms competitive baselines on four benchmark datasets.
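    The two main ingredients named in this abstract, topic-word masking as data augmentation and unsupervised contrastive learning over the two views, can be sketched roughly as below. This is a minimal sketch under assumptions: the helper names, the NT-Xent-style loss, and the temperature value are illustrative, and the paper's exact objective and attention-based fusion may differ.

```python
import torch
import torch.nn.functional as F


def mask_topic_words(tokens, topic_words, mask_token="[MASK]"):
    # Augmentation: hide the (lower-cased) topic/target words so the encoder has
    # to rely on target-invariant syntactic expression patterns.
    return [mask_token if t.lower() in topic_words else t for t in tokens]


def nt_xent_loss(z1, z2, temperature=0.1):
    # A standard NT-Xent contrastive loss between embeddings of the original
    # sentences (z1) and their topic-masked views (z2): each row's positive is
    # the other view of the same sentence, all remaining rows are negatives.
    z = torch.cat([F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)], dim=0)
    sim = z @ z.t() / temperature                  # (2B, 2B) cosine similarities
    sim.fill_diagonal_(float("-inf"))              # exclude self-similarity
    n = z.size(0)
    targets = (torch.arange(n, device=z.device) + n // 2) % n
    return F.cross_entropy(sim, targets)
```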