
    Research on Text Emotion Analysis Based on Deep Learning (深層学習に基づくテキスト感情分析に関する研究)

    Textual emotion recognition (TER) is the process of automatically identifying emotional states in textual expressions. It is a more in-depth analysis than sentiment analysis. Owing to its significant academic and commercial potential, TER has become an essential topic in NLP. Although considerable progress has been made in TER over the past few years, difficulties and challenges remain because of the complexity of human emotion. This thesis explores emotional information by incorporating external knowledge, learning emotion correlation, and building effective TER architectures. The main contributions of this thesis are summarized as follows: (1) To compensate for the limitations of imbalanced training data, this thesis proposes a multi-stream neural network that incorporates background knowledge for text classification. To better fuse background knowledge into the basal network, different fusion strategies are employed across the streams. The experimental results demonstrate that, as a knowledge supplement, background knowledge-based features can make up for information that is neglected or absent in the basal text classification network, especially on imbalanced corpora. (2) To realize contextual emotion learning, this thesis proposes a hierarchical network with label embedding. This network hierarchically encodes the given sentence based on its contextual information. In addition, an auxiliary label embedding matrix is trained for emotion correlation learning with an assembled training objective, contributing to the final correlation-based prediction. The experimental results show that the proposed method improves emotional feature learning and contextual emotion recognition. (3) To realize multi-label emotion recognition and emotion correlation learning, this thesis proposes a Multiple-label Emotion Detection Architecture (MEDA). MEDA comprises two modules: a Multi-Channel Emotion-Specified Feature Extractor (MC-ESFE) and an Emotion Correlation Learner (ECorL). MEDA first captures underlying emotion-specified features with the MC-ESFE module; emotion correlation learning is then implemented through an emotion sequence predictor in the ECorL module. Furthermore, to incorporate emotion correlation information into model training, a multi-label focal loss is proposed for multi-label learning. The proposed model achieves satisfactory performance and outperforms state-of-the-art models on both the RenCECps and NLPCC2018 datasets, demonstrating the effectiveness of the proposed method for multi-label emotion detection.
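    The abstract names a multi-label focal loss but does not give its form. Below is a minimal sketch, assuming the common sigmoid-per-label extension of focal loss in which each emotion is treated as an independent binary decision and well-classified labels are down-weighted; the function name and hyperparameter values are illustrative, not taken from the thesis.

```python
import torch

def multi_label_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Sketch of a focal loss applied independently to each emotion label.

    logits:  (batch, num_labels) raw scores from the model
    targets: (batch, num_labels) binary ground-truth indicators (float)
    """
    probs = torch.sigmoid(logits)
    # p_t is the probability the model assigns to the true class of each label
    p_t = targets * probs + (1 - targets) * (1 - probs)
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    # Per-label binary cross-entropy, modulated by (1 - p_t)^gamma so that
    # easy (well-classified) labels contribute less to the total loss.
    bce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    loss = alpha_t * (1 - p_t) ** gamma * bce
    return loss.sum(dim=1).mean()

# Example: 2 samples, 4 emotion labels
logits = torch.randn(2, 4)
targets = torch.tensor([[1., 0., 1., 0.], [0., 0., 1., 1.]])
print(multi_label_focal_loss(logits, targets))
```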

    Hyperbolic Interaction Model For Hierarchical Multi-Label Classification

    Unlike traditional classification tasks, which assume mutual exclusion of labels, hierarchical multi-label classification (HMLC) aims to assign multiple labels to every instance, with the labels organized under hierarchical relations. Beyond the labels, since linguistic ontologies are intrinsically hierarchical, the conceptual relations between words can also form hierarchical structures, so learning mappings from word hierarchies to label hierarchies is a challenge. We propose to model the word and label hierarchies by embedding them jointly in hyperbolic space. The main reason is that the tree-likeness of hyperbolic space matches the complexity of symbolic data with hierarchical structures. A new Hyperbolic Interaction Model (HyperIM) is designed to learn label-aware document representations and make predictions for HMLC. Extensive experiments on three benchmark datasets demonstrate that the new model can realistically capture complex data structures and further improve performance for HMLC compared with state-of-the-art methods. To facilitate future research, our code is publicly available.
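    The abstract does not spell out the geometry, but hyperbolic embedding models of this kind typically measure word-label interaction with the Poincaré-ball geodesic distance, d(u, v) = arccosh(1 + 2·‖u − v‖² / ((1 − ‖u‖²)(1 − ‖v‖²))). The sketch below computes that standard distance as an illustration of the underlying geometry; it is not HyperIM's actual implementation.

```python
import torch

def poincare_distance(u, v, eps=1e-7):
    """Geodesic distance between points u, v inside the unit Poincare ball.

    u, v: tensors of shape (..., dim) with norms strictly less than 1.
    """
    sq_u = torch.sum(u * u, dim=-1)
    sq_v = torch.sum(v * v, dim=-1)
    sq_diff = torch.sum((u - v) ** 2, dim=-1)
    # arccosh argument; clamp to stay numerically safe near the ball boundary
    x = 1 + 2 * sq_diff / ((1 - sq_u).clamp_min(eps) * (1 - sq_v).clamp_min(eps))
    return torch.acosh(x.clamp_min(1 + eps))

# Example: distance between a word embedding and a label embedding (dim=2)
word = torch.tensor([0.1, 0.2])
label = torch.tensor([0.5, -0.3])
print(poincare_distance(word, label))
```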

    Dialogue Act Recognition via CRF-Attentive Structured Network

    Dialogue Act Recognition (DAR) is a challenging problem in dialogue interpretation that aims to attach semantic labels to utterances and characterize the speaker's intention. Many existing approaches formulate DAR as anything from multi-class classification to structured prediction, and they suffer from handcrafted feature extensions and fail to fully capture contextual structural dependencies. In this paper, we consider DAR from the viewpoint of extending richer Conditional Random Field (CRF) structural dependencies without abandoning end-to-end training. We incorporate hierarchical semantic inference with a memory mechanism into the utterance modeling, and then extend the structured attention network to a linear-chain conditional random field layer that takes into account both contextual utterances and the corresponding dialogue acts. Extensive experiments on two major benchmark datasets, Switchboard Dialogue Act (SWDA) and Meeting Recorder Dialogue Act (MRDA), show that our method achieves better performance than other state-of-the-art solutions. Remarkably, our method comes within 2% of human annotator performance on SWDA.
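    To make the linear-chain CRF layer concrete, here is a minimal sketch of the standard CRF training objective over per-utterance dialogue-act scores: the forward algorithm computes the log-partition function, and the negative log-likelihood of a gold act sequence is that partition minus the gold sequence score. The emission and transition tensors below are placeholders, not the paper's actual model.

```python
import torch

def crf_log_partition(emissions, transitions):
    """Forward algorithm for a linear-chain CRF, computed in log space.

    emissions:   (seq_len, num_tags) per-utterance scores for each dialogue act
    transitions: (num_tags, num_tags) score of moving from tag i to tag j
    Returns log-sum-exp of the scores of all possible tag sequences.
    """
    alpha = emissions[0]                                  # (num_tags,)
    for t in range(1, emissions.size(0)):
        # alpha[i] + transitions[i, j] + emissions[t, j], summed over i in log space
        scores = alpha.unsqueeze(1) + transitions + emissions[t].unsqueeze(0)
        alpha = torch.logsumexp(scores, dim=0)
    return torch.logsumexp(alpha, dim=0)

def crf_sequence_score(emissions, transitions, tags):
    """Unnormalised score of one particular dialogue-act sequence."""
    score = emissions[0, tags[0]]
    for t in range(1, emissions.size(0)):
        score = score + transitions[tags[t - 1], tags[t]] + emissions[t, tags[t]]
    return score

# Training objective: negative log-likelihood of the gold act sequence
emissions = torch.randn(5, 4)      # 5 utterances, 4 dialogue-act tags
transitions = torch.randn(4, 4)
gold = torch.tensor([0, 2, 2, 1, 3])
nll = crf_log_partition(emissions, transitions) - crf_sequence_score(emissions, transitions, gold)
print(nll)
```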