
    A hybrid representation based simile component extraction

    Simile, a special type of metaphor, helps people express their ideas more clearly. Simile component extraction is the task of extracting tenors and vehicles from sentences. The task has practical significance, since it is useful for building cognitive knowledge bases. With the development of deep neural networks, researchers have begun to apply neural models to component extraction. Simile components should come from different domains, and according to our observations, cross-domain words usually carry different concepts. Thus, concept information is important when deciding whether two words are simile components. However, existing models do not integrate concepts, so it is difficult for them to identify the concept of a word. Moreover, corpora for simile component extraction are limited: they contain many rare or unseen words whose representations are often inadequate, so existing models can hardly extract simile components accurately when sentences contain low-frequency words. To solve these problems, we propose a hybrid representation-based component extraction (HRCE) model. Each word in HRCE is represented at three different levels: word level, concept level and character level. Concept representations (representations at the concept level) help HRCE identify cross-domain words more accurately. Moreover, with the help of character representations (representations at the character level), HRCE can represent the meaning of a word more properly, since words consist of characters and these characters partly convey the meaning of words. We conduct experiments to compare the performance of HRCE with existing models. The results show that HRCE significantly outperforms current models.
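
    The hybrid representation described above can be pictured as a simple concatenation of word-, concept- and character-level vectors per token. The PyTorch sketch below is only an illustration of that idea; the vocabulary sizes, embedding dimensions and the choice of a character BiLSTM encoder are assumptions, not HRCE's published configuration.

    import torch
    import torch.nn as nn

    class HybridRepresentation(nn.Module):
        # Concatenates word-, concept- and character-level vectors for each token.
        # Vocabulary sizes, dimensions and the character BiLSTM are illustrative assumptions.
        def __init__(self, word_vocab=10000, concept_vocab=500, char_vocab=100,
                     word_dim=100, concept_dim=50, char_dim=30, char_hidden=25):
            super().__init__()
            self.word_emb = nn.Embedding(word_vocab, word_dim)
            self.concept_emb = nn.Embedding(concept_vocab, concept_dim)
            self.char_emb = nn.Embedding(char_vocab, char_dim)
            # A small character BiLSTM summarises the characters of one word,
            # which helps with rare or unseen words.
            self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                     batch_first=True, bidirectional=True)

        def forward(self, word_ids, concept_ids, char_ids):
            # word_ids, concept_ids: (batch, seq_len); char_ids: (batch, seq_len, word_len)
            b, t, c = char_ids.shape
            chars = self.char_emb(char_ids).view(b * t, c, -1)
            _, (h, _) = self.char_lstm(chars)                  # h: (2, b*t, char_hidden)
            char_repr = torch.cat([h[0], h[1]], dim=-1).view(b, t, -1)
            return torch.cat([self.word_emb(word_ids),
                              self.concept_emb(concept_ids),
                              char_repr], dim=-1)              # (batch, seq_len, 200)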

    A Unified Model for Opinion Target Extraction and Target Sentiment Prediction

    Target-based sentiment analysis involves opinion target extraction and target sentiment classification. However, most existing works study only one of these two sub-tasks, which hinders their practical use. This paper aims to solve the complete task of target-based sentiment analysis in an end-to-end fashion, and presents a novel unified model which applies a unified tagging scheme. Our framework involves two stacked recurrent neural networks: the upper one predicts the unified tags to produce the final output of the primary target-based sentiment analysis task; the lower one performs an auxiliary target boundary prediction aimed at guiding the upper network to improve the performance of the primary task. To explore the inter-task dependency, we propose to explicitly model the constrained transitions from target boundaries to target sentiment polarities. We also propose to maintain sentiment consistency within an opinion target via a gate mechanism which models the relation between the features of the current word and the previous word. We conduct extensive experiments on three benchmark datasets, and our framework achieves consistently superior results. Comment: AAAI 2019
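
    A unified tagging scheme of this kind can be illustrated by crossing boundary labels with sentiment polarities, so that a single tag sequence encodes both sub-tasks. The sketch below is a hypothetical illustration: the exact tag inventory (BIEOS here) and the helper function are assumptions, not the paper's released code.

    # Build the unified tag set by crossing boundary tags with sentiment polarities.
    BOUNDARY = ["B", "I", "E", "S"]           # begin / inside / end / single-token target
    POLARITY = ["POS", "NEG", "NEU"]
    UNIFIED_TAGS = ["O"] + [f"{b}-{p}" for b in BOUNDARY for p in POLARITY]

    def to_unified(tokens, targets):
        # targets: list of (start, end, polarity) spans over tokens, end exclusive.
        tags = ["O"] * len(tokens)
        for start, end, pol in targets:
            if end - start == 1:
                tags[start] = f"S-{pol}"
            else:
                tags[start] = f"B-{pol}"
                tags[end - 1] = f"E-{pol}"
                for i in range(start + 1, end - 1):
                    tags[i] = f"I-{pol}"
        return tags

    # "The battery life is great" with the positive target span "battery life":
    print(to_unified(["The", "battery", "life", "is", "great"], [(1, 3, "POS")]))
    # -> ['O', 'B-POS', 'E-POS', 'O', 'O']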

    A novel deep learning architecture for drug named entity recognition

    Drug named entity recognition (DNER) is a prerequisite for other medical relation extraction systems. Existing approaches to automatically recognizing drug names include rule-based, machine learning (ML) and deep learning (DL) techniques. DL techniques have been shown to be the state of the art, as they do not depend on handcrafted features. Previous DL methods based on word-embedding input representations use the same vector for an entity irrespective of its context in different sentences, and hence cannot capture the context properly. Identification of n-gram entities is a further challenge. In this paper, a novel architecture is proposed that includes a sentence embedding layer operating on the entire sentence to efficiently capture the context of an entity. A hybrid model comprising a stacked bidirectional long short-term memory (Bi-LSTM) with a residual LSTM has been designed to overcome these limitations and improve performance. We have compared our proposed approach with other DNER models; the percentage improvements of the proposed model over LSTM-conditional random field (CRF), LIU and WBI in micro-average F1-score are 11.17, 8.8 and 17.64 respectively. The proposed model has also shown promising results in recognizing 2- and 3-gram entities.
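
    A stacked Bi-LSTM with a residual connection can be sketched as two bidirectional LSTM layers whose outputs are summed before a per-token classifier. The PyTorch snippet below is a minimal sketch under assumed dimensions; the sentence embedding layer and the exact placement of the residual connection in the published model may differ.

    import torch
    import torch.nn as nn

    class StackedResidualBiLSTM(nn.Module):
        # Two stacked Bi-LSTM layers with a residual (skip) connection between them,
        # followed by a per-token tag classifier. Dimensions are illustrative assumptions.
        def __init__(self, input_dim=100, hidden=100, num_tags=3):
            super().__init__()
            self.lstm1 = nn.LSTM(input_dim, hidden, batch_first=True, bidirectional=True)
            self.lstm2 = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * hidden, num_tags)

        def forward(self, x):
            # x: (batch, seq_len, input_dim), e.g. sentence-level contextual embeddings
            h1, _ = self.lstm1(x)        # (batch, seq_len, 2*hidden)
            h2, _ = self.lstm2(h1)       # (batch, seq_len, 2*hidden)
            h = h1 + h2                  # residual connection between the stacked layers
            return self.classifier(h)    # per-token logits over the tag set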