10 research outputs found

    Analysis of User’s Opinion using Deep Neural Network Techniques

    Get PDF
    It has been widely accepted that aspect-level sentiment classification is handled effectively by a Long Short-Term Memory (LSTM) network combined with an attention mechanism and a memory module. However, existing approaches rely heavily on modelling the semantic relatedness of an aspect with its context words, while ignoring the syntactic dependencies already present within the sentence. As a result, an aspect may undesirably attend to textual words that are descriptive of other aspects. In this paper, to offer a syntax-aware, aspect-specific representation of contexts, we propose a proximity-weighted convolution network. More precisely, we use one way of determining proximity weight, namely dependency proximity. The model is built from a bidirectional LSTM architecture together with a proximity-weighted convolution neural network.

    An End-to-End Multi-Task Learning to Link Framework for Emotion-Cause Pair Extraction

    Full text link
    Emotion-cause pair extraction (ECPE), as an emergent natural language processing task, aims at jointly investigating emotions and their underlying causes in documents. It extends the previous emotion cause extraction (ECE) task, yet without requiring a set of pre-given emotion clauses as in ECE. Existing approaches to ECPE generally adopt a two-stage method, i.e., (1) emotion and cause detection, and then (2) pairing the detected emotions and causes. Such a pipeline method, while intuitive, suffers from two critical issues: error propagation across stages, which may hinder effectiveness, and high computational cost, which limits the practical application of the method. To tackle these issues, we propose a multi-task learning model that can extract emotions, causes and emotion-cause pairs simultaneously in an end-to-end manner. Specifically, our model regards pair extraction as a link prediction task, and learns to link from emotion clauses to cause clauses, i.e., the links are directional. Emotion extraction and cause extraction are incorporated into the model as auxiliary tasks, which further boost pair extraction. Experiments are conducted on an ECPE benchmarking dataset. The results show that our proposed model outperforms a range of state-of-the-art approaches. Comment: 7 pages, 3 figures, 5 tables.
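    The "pair extraction as link prediction" idea above can be sketched minimally. In this hedged illustration, clause representations are fixed toy vectors (in the paper they would come from the shared encoder), and the bilinear scoring matrix `W` and the function names are illustrative assumptions, not the paper's definitions. A directional link emotion_i -> cause_j is kept when its sigmoid score exceeds a threshold.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def extract_pairs(clauses, W, threshold=0.5):
        """Return directed (emotion_idx, cause_idx) links scoring above threshold."""
        pairs = []
        for i, e in enumerate(clauses):        # candidate emotion clause
            for j, c in enumerate(clauses):    # candidate cause clause
                score = sigmoid(e @ W @ c)     # bilinear score for link e -> c
                if score > threshold:
                    pairs.append((i, j))
        return pairs

    # Toy clause vectors; a clause may be linked to itself (an emotion clause
    # can be its own cause in ECPE).
    clauses = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [1.0, 1.0]])
    W = 2.0 * np.eye(2)                        # hypothetical scoring matrix
    pairs = extract_pairs(clauses, W)
    ```

    Because the score is computed for ordered pairs (i, j), the predicted links are directional, matching the abstract's emotion-to-cause formulation.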

    Syntax-aware Hybrid prompt model for Few-shot multi-modal sentiment analysis

    Full text link
    Multimodal Sentiment Analysis (MSA) has become a popular topic in natural language processing, at both the sentence and aspect levels. However, existing approaches mostly require large labeled datasets, which consume considerable time and resources to build. It is therefore practical to explore methods for few-shot sentiment analysis across modalities. Previous work generally operates on the textual modality using prompt-based methods of two main types: hand-crafted prompts and learnable prompts. The existing approach to the few-shot multi-modal sentiment analysis task has utilized both methods, but separately. We further design a hybrid pattern that combines one or more fixed hand-crafted prompts with learnable prompts, and utilizes attention mechanisms to optimize the prompt encoder. Experiments on both sentence-level and aspect-level datasets show that our method achieves significant improvements.

    The impact of indirect machine translation on sentiment classification

    Get PDF
    Sentiment classification has been crucial for many natural language processing (NLP) applications, such as the analysis of movie reviews, tweets, or customer feedback. A sufficiently large amount of data is required to build a robust sentiment classification system. However, such resources are not always available for all domains or for all languages. In this work, we propose employing a machine translation (MT) system to translate customer feedback into another language to investigate in which cases translated sentences can have a positive or negative impact on an automatic sentiment classifier. Furthermore, as performing a direct translation is not always possible, we explore the performance of automatic classifiers on sentences that have been translated using a pivot MT system. We conduct several experiments using the above approaches to analyse the performance of our proposed sentiment classification system and discuss the advantages and drawbacks of classifying translated sentences.

    Syntax-aware aspect-level sentiment classification with proximity-weighted convolution network

    No full text
    It has been widely accepted that Long Short-Term Memory (LSTM) network, coupled with attention mechanism and memory module, is useful for aspect-level sentiment classification. However, existing approaches largely rely on the modelling of semantic relatedness of an aspect with its context words, while to some extent ignore their syntactic dependencies within sentences. Consequently, this may lead to an undesirable result that the aspect attends on contextual words that are descriptive of other aspects. In this paper, we propose a proximity-weighted convolution network to offer an aspect-specific syntax-aware representation of contexts. In particular, two ways of determining proximity weight are explored, namely position proximity and dependency proximity. The representation is primarily abstracted by a bidirectional LSTM architecture and further enhanced by a proximity-weighted convolution. Experiments conducted on the SemEval 2014 benchmark demonstrate the effectiveness of our proposed approach compared with a range of state-of-the-art models.
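    Of the two proximity schemes the abstract names, position proximity is the easier to illustrate. The sketch below is a hedged toy: token representations would come from the BiLSTM in the paper but are constant placeholders here, the linear decay formula is an illustrative assumption rather than the paper's exact definition, and the convolution is a plain unweighted 1-D sum over a small kernel.

    ```python
    import numpy as np

    def position_proximity(n_tokens, aspect_start, aspect_end):
        """Weight each token by linear decay of its distance to the aspect span."""
        weights = np.zeros(n_tokens)
        for t in range(n_tokens):
            if aspect_start <= t <= aspect_end:
                dist = 0                      # inside the aspect term
            elif t < aspect_start:
                dist = aspect_start - t       # tokens before the aspect
            else:
                dist = t - aspect_end         # tokens after the aspect
            weights[t] = max(0.0, 1.0 - dist / n_tokens)
        return weights

    def proximity_weighted_conv(hidden, weights, kernel):
        """Scale token vectors by proximity, then run a 1-D convolution."""
        weighted = hidden * weights[:, None]  # per-token scaling of BiLSTM states
        k = len(kernel)
        return np.array([
            sum(kernel[m] * weighted[t + m] for m in range(k))
            for t in range(len(weighted) - k + 1)
        ])

    hidden = np.ones((6, 4))                  # 6 tokens, 4-dim hidden states
    w = position_proximity(6, 2, 2)           # aspect term is token 2
    out = proximity_weighted_conv(hidden, w, kernel=[0.5, 0.5])
    ```

    The effect is that context words near the aspect dominate the convolved features, which is the intuition behind keeping the aspect from attending to words that describe other aspects.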