
    Empirical co-occurrence rate networks for sequence labeling

    Sequence labeling has wide applications in many areas. For example, most named entity recognition tasks, which extract named entities or events from unstructured data, can be formalized as sequence labeling problems. Sequence labeling has been studied extensively in different communities, such as data mining, natural language processing and machine learning. Many powerful and popular models have been developed, such as hidden Markov models (HMMs) [4], conditional Markov models (CMMs) [3], and conditional random fields (CRFs) [2]. Despite their successes, they suffer from some known problems: (i) HMMs are generative models which suffer from the mismatch problem, and it is also difficult to incorporate overlapping, non-independent features into an HMM explicitly; (ii) CMMs suffer from the label bias problem; (iii) CRFs overcome the problems of HMMs and CMMs, but the global normalization of CRFs can be very expensive, which prevents CRFs from being applied to big datasets (e.g. Tweets). In this paper, we propose empirical Co-occurrence Rate Networks (ECRNs) [5] for sequence labeling. CRNs avoid the problems of the existing models mentioned above. To make the training of CRNs as efficient as possible, we simply use the empirical distribution as the parameter estimation. This results in ECRNs, which can be trained orders of magnitude faster than the existing models while achieving competitive accuracy. ECRNs have been applied as a component of the University of Twente system [1] for the concept extraction challenge at #MSM2013, which won the best challenge submission award. ECRNs can be very useful for practitioners working on big data.
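The core idea of using the empirical distribution as the parameter estimate can be sketched in a few lines: every quantity is a plain relative frequency obtained in a single counting pass, which is why training is so cheap. The helper below is a hypothetical illustration of an empirical co-occurrence rate, not the paper's actual estimator.

```python
from collections import Counter

def empirical_cooccurrence_rate(pairs):
    """Estimate CR(a, b) = p(a, b) / (p(a) * p(b)) from observed pairs.

    All probabilities are empirical relative frequencies, so "training"
    is a single counting pass over the labeled data.
    """
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    return {
        (a, b): (c / n) / ((left[a] / n) * (right[b] / n))
        for (a, b), c in joint.items()
    }

# Toy (word, tag) pairs from a labeled sequence.
pairs = [("London", "LOC"), ("in", "O"), ("London", "LOC"), ("in", "O")]
rates = empirical_cooccurrence_rate(pairs)
# rates[("London", "LOC")] == 2.0: "London" and "LOC" co-occur twice as
# often as independence would predict.
```

A rate above 1 indicates positive association between a word and a tag; counting is the only work done at training time.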

    Knowledge Base Population using Semantic Label Propagation

    A crucial aspect of a knowledge base population system that extracts new facts from text corpora is the generation of training data for its relation extractors. In this paper, we present a method that maximizes the effectiveness of newly trained relation extractors at a minimal annotation cost. Manual labeling can be significantly reduced by Distant Supervision, which is a method to construct training data automatically by aligning a large text corpus with an existing knowledge base of known facts. For example, all sentences mentioning both 'Barack Obama' and 'US' may serve as positive training instances for the relation born_in(subject,object). However, distant supervision typically results in a highly noisy training set: many training sentences do not really express the intended relation. We propose to combine distant supervision with minimal manual supervision in a technique called feature labeling, to eliminate noise from the large and noisy initial training set, resulting in a significant increase of precision. We further improve on this approach by introducing the Semantic Label Propagation method, which uses the similarity between low-dimensional representations of candidate training instances, to extend the training set in order to increase recall while maintaining high precision. Our proposed strategy for generating training data is studied and evaluated on an established test collection designed for knowledge base population tasks. The experimental results show that the Semantic Label Propagation strategy leads to substantial performance gains when compared to existing approaches, while requiring an almost negligible manual annotation effort. Comment: Submitted to Knowledge Based Systems, special issue on Knowledge Bases for Natural Language Processing
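The distant-supervision alignment described above (and its noise problem) can be sketched directly from the abstract's own 'Barack Obama' / 'US' example. The function name and simple substring matching are illustrative assumptions, not the paper's implementation.

```python
def distant_supervision(sentences, kb):
    """Label sentences by aligning them with known facts.

    kb: iterable of (subject, relation, object) triples. Any sentence
    mentioning both the subject and the object of a fact becomes a
    (possibly noisy) positive training instance for that relation.
    """
    training = []
    for sent in sentences:
        for subj, rel, obj in kb:
            if subj in sent and obj in sent:
                training.append((sent, subj, rel, obj))
    return training

kb = [("Barack Obama", "born_in", "US")]
sentences = [
    "Barack Obama was born in the US.",
    "Barack Obama visited the US last week.",  # matches, but is noise
    "The US held elections.",
]
data = distant_supervision(sentences, kb)
# Two sentences match born_in, including the noisy second one --
# exactly the kind of false positive the paper's feature labeling
# and Semantic Label Propagation steps aim to filter out.
```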

    DivGraphPointer: A Graph Pointer Network for Extracting Diverse Keyphrases

    Keyphrase extraction from documents is useful to a variety of applications such as information retrieval and document summarization. This paper presents an end-to-end method called DivGraphPointer for extracting a set of diversified keyphrases from a document. DivGraphPointer combines the advantages of traditional graph-based ranking methods and recent neural network-based approaches. Specifically, given a document, a word graph is constructed from the document based on word proximity and is encoded with graph convolutional networks, which effectively capture document-level word salience by modeling long-range dependency between words in the document and aggregating multiple appearances of identical words into one node. Furthermore, we propose a diversified pointer network to generate a set of diverse keyphrases out of the word graph in the decoding process. Experimental results on five benchmark data sets show that our proposed method significantly outperforms the existing state-of-the-art approaches. Comment: Accepted to SIGIR 201
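The graph-construction step described above, one node per distinct word with edges from proximity within a sliding window, can be sketched as follows. This is a simplified illustration under assumed parameters (a window size of 2), not the paper's exact formulation, and it omits the GCN encoding and pointer decoding entirely.

```python
from collections import defaultdict

def build_word_graph(tokens, window=2):
    """Build an undirected, edge-weighted word graph from token proximity.

    Edges are keyed by the unordered word pair, so repeated appearances
    of the same word contribute to a single node, mirroring the
    node-aggregation idea in the abstract.
    """
    edges = defaultdict(int)
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if tokens[j] != w:
                edges[frozenset((w, tokens[j]))] += 1
    return edges

tokens = "graph pointer network extracts diverse keyphrases from graph".split()
g = build_word_graph(tokens)
# "graph" appears twice in the text but is a single node: its edges to
# "pointer" (start of text) and "from" (end of text) live in one graph.
```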

    Joint Learning of Correlated Sequence Labelling Tasks Using Bidirectional Recurrent Neural Networks

    The stream of words produced by Automatic Speech Recognition (ASR) systems is typically devoid of punctuation and formatting. Most natural language processing applications expect segmented and well-formatted texts as input, which is not available in ASR output. This paper proposes a novel technique of jointly modeling multiple correlated tasks such as punctuation and capitalization using bidirectional recurrent neural networks, which leads to improved performance for each of these tasks. This method could be extended for joint modeling of any other correlated sequence labeling tasks. Comment: Accepted in Interspeech 201
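The joint-labeling setup can be made concrete with a small sketch: a single sequence model would predict a combined (punctuation, capitalization) label per token, and the pair maps back to formatted text. The label names (PERIOD, QMARK, CAP) and function below are illustrative assumptions, not the paper's tag set or code.

```python
def apply_joint_labels(tokens, labels):
    """Render raw ASR tokens using joint (punctuation, capitalization) labels.

    Each token carries one combined label pair; correlations between the
    two tasks (e.g. capitalization after sentence-final punctuation) are
    what joint training lets the model exploit.
    """
    out = []
    for tok, (punct, cap) in zip(tokens, labels):
        word = tok.capitalize() if cap == "CAP" else tok
        if punct == "PERIOD":
            word += "."
        elif punct == "QMARK":
            word += "?"
        out.append(word)
    return " ".join(out)

tokens = ["hello", "world", "how", "are", "you"]
labels = [("O", "CAP"), ("PERIOD", "O"), ("O", "CAP"), ("O", "O"), ("QMARK", "O")]
formatted = apply_joint_labels(tokens, labels)
# formatted == "Hello world. How are you?"
```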