45,229 research outputs found

    Research On Text Classification Based On Deep Neural Network

    Get PDF
    Text classification is one of the classic tasks in natural language processing: the goal is to identify the category to which a text belongs. Text categorization is widely used in email detection, sentiment analysis, topic labelling and other fields. Good text representation is key to improving the performance of natural language processing tasks such as text classification. Traditional text representations adopt the bag-of-words or vector space model, which not only lose the context information of the text but also suffer from high dimensionality and high sparsity. In recent years, with the growth of available data and improvements in computing performance, using deep learning to represent and classify text has attracted great attention. Convolutional neural networks, recurrent neural networks and recurrent neural networks with attention mechanisms have been used to represent text and then to perform text classification and other natural language processing tasks, all with better performance than traditional methods. In this paper, we design two sentence-level text representation and classification models based on deep networks. The details are as follows:
    (1) BRCNN, a text representation and classification model based on bidirectional recurrent and convolutional neural networks. BRCNN's input is the word vector corresponding to each word in the sentence. A recurrent neural network first extracts word-order information from the sentence, and a convolutional neural network then extracts higher-level sentence features. After convolution, a max-pooling operation produces the sentence vector, and a softmax classifier performs the final classification. The recurrent network captures the word-order information in the sentence, while the convolutional network extracts useful features. Experiments on eight text classification tasks show that the BRCNN model obtains better text feature representations, with classification accuracy equal to or higher than the state of the art.
    (2) ACNN, a text representation and classification model based on an attention mechanism and a convolutional neural network. The ACNN model uses a recurrent neural network with an attention mechanism to obtain context vectors; a convolutional neural network then extracts higher-level features, a max-pooling operation produces the sentence vector, and a softmax classifier classifies the text. Experiments on eight text classification benchmark datasets show that ACNN improves the stability of model convergence and converges to an optimal or locally optimal solution better than BRCNN does.
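    The BRCNN pipeline described in this abstract (bidirectional RNN for word order, convolution plus max-pooling over time for sentence features, softmax for classification) can be summarized in a minimal PyTorch sketch. This is an illustration of the described architecture, not the authors' code: the BRCNN class name, the choice of an LSTM cell, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class BRCNN(nn.Module):
    # Minimal sketch of the BRCNN idea; hyperparameters are illustrative.
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 num_filters=100, kernel_size=3, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional recurrent layer captures word-order context.
        self.birnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                             bidirectional=True)
        # Convolution over the RNN outputs extracts higher-level features.
        self.conv = nn.Conv1d(2 * hidden_dim, num_filters, kernel_size,
                              padding=kernel_size // 2)
        self.fc = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        x = self.embed(token_ids)            # (batch, seq_len, embed_dim)
        h, _ = self.birnn(x)                 # (batch, seq_len, 2*hidden_dim)
        h = h.transpose(1, 2)                # Conv1d expects channels first
        f = torch.relu(self.conv(h))         # (batch, num_filters, seq_len)
        s = f.max(dim=2).values              # max-pool over time -> sentence vector
        return self.fc(s)                    # class logits; softmax applied at loss time
```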

    Generating Sentences Using a Dynamic Canvas

    Get PDF
    We introduce the Attentive Unsupervised Text (W)riter (AUTR), a word-level generative model for natural language. It uses a recurrent neural network with a dynamic attention and canvas memory mechanism to iteratively construct sentences. By viewing the state of the memory at intermediate stages and where the model is placing its attention, we gain insight into how it constructs sentences. We demonstrate that AUTR learns a meaningful latent representation for each sentence and achieves competitive log-likelihood lower bounds whilst being computationally efficient. It is effective at generating and reconstructing sentences, as well as imputing missing words. Comment: AAAI 2018
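    As a rough illustration of the iterative attention-plus-canvas idea (at each step the network decides where in a fixed-length sentence canvas to write and what to write there), here is a heavily simplified PyTorch sketch. AUTR itself is a latent-variable model trained on a log-likelihood lower bound; the CanvasWriter class, all dimensions, and the additive write rule below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class CanvasWriter(nn.Module):
    # Simplified canvas-writing loop, not AUTR's actual generative model.
    def __init__(self, latent_dim=64, hidden_dim=128, canvas_len=20,
                 canvas_dim=128, steps=8):
        super().__init__()
        self.steps = steps
        self.rnn = nn.GRUCell(latent_dim, hidden_dim)
        self.attn = nn.Linear(hidden_dim, canvas_len)    # where to write
        self.write = nn.Linear(hidden_dim, canvas_dim)   # what to write
        self.canvas_len, self.canvas_dim = canvas_len, canvas_dim

    def forward(self, z):                                # z: (batch, latent_dim)
        batch = z.size(0)
        h = z.new_zeros(batch, self.rnn.hidden_size)
        canvas = z.new_zeros(batch, self.canvas_len, self.canvas_dim)
        for _ in range(self.steps):
            h = self.rnn(z, h)                           # update writer state
            where = torch.softmax(self.attn(h), dim=1)   # (batch, canvas_len)
            what = self.write(h)                         # (batch, canvas_dim)
            # Additively write `what` into the attended canvas positions.
            canvas = canvas + where.unsqueeze(2) * what.unsqueeze(1)
        return canvas  # downstream, each position would be decoded into word logits
```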

    End-to-End Attention-based Large Vocabulary Speech Recognition

    Full text link
    Many current state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) systems are hybrids of neural networks and Hidden Markov Models (HMMs). Most of these systems contain separate components that deal with acoustic modelling, language modelling and sequence decoding. We investigate a more direct approach in which the HMM is replaced with a Recurrent Neural Network (RNN) that performs sequence prediction directly at the character level. Alignment between the input features and the desired character sequence is learned automatically by an attention mechanism built into the RNN. For each predicted character, the attention mechanism scans the input sequence and chooses relevant frames. We propose two methods to speed up this operation: limiting the scan to a subset of the most promising frames, and pooling the information contained in neighbouring frames over time, thereby reducing the source sequence length. Integrating an n-gram language model into the decoding process yields recognition accuracies similar to other HMM-free RNN-based approaches.
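    The two speed-ups described above lend themselves to a small PyTorch sketch: averaging neighbouring frames shortens the sequence the attention must scan, and restricting attention to the highest-scoring frames limits the scan itself. Both function names and the top-k selection rule are simplified assumptions for illustration, not the paper's exact mechanism.

```python
import torch
import torch.nn.functional as F

def pool_frames(feats, stride=2):
    """Average neighbouring frames, reducing the source length the
    attention has to scan ('pooling over time')."""
    # feats: (batch, time, dim) -> (batch, time // stride, dim)
    return F.avg_pool1d(feats.transpose(1, 2), stride, stride).transpose(1, 2)

def topk_attention(query, keys, values, k=32):
    """Score all frames but attend only over the k most promising ones
    (a simplified stand-in for limiting the scan to a subset of frames)."""
    scores = torch.einsum('bd,btd->bt', query, keys)     # (batch, time)
    vals, idx = scores.topk(k, dim=1)                    # keep the k best frames
    weights = torch.softmax(vals, dim=1)                 # (batch, k)
    picked = values.gather(1, idx.unsqueeze(2).expand(-1, -1, values.size(2)))
    return torch.einsum('bk,bkd->bd', weights, picked)   # context vector (batch, dim)
```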

    Author profiling with bidirectional RNNs using attention with GRUs : notebook for PAN at CLEF 2017

    Get PDF
    This paper describes our approach for the Author Profiling Shared Task at PAN 2017. The goal was to classify the gender and language variety of a Twitter user solely from their tweets. Author profiling can be applied in various fields such as marketing, security and forensics; Twitter already uses similar techniques to deliver personalized advertisements to its users. PAN 2017 provided a corpus for this purpose in four languages: English, Spanish, Portuguese and Arabic. To solve the problem we used a deep learning approach, which has recently shown success in Natural Language Processing. Our submitted model consists of a bidirectional Recurrent Neural Network implemented with a Gated Recurrent Unit (GRU) combined with an attention mechanism. We achieved an average accuracy over all languages of 75.31% in gender classification and 85.22% in language variety classification.
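    A minimal PyTorch sketch of the kind of model the abstract describes: a bidirectional GRU whose token outputs are combined by an additive attention layer into a single document vector, followed by a linear classifier. All names and sizes are illustrative assumptions; the actual PAN submission may differ in tokenization, depth and training details.

```python
import torch
import torch.nn as nn

class AttentiveBiGRU(nn.Module):
    # Sketch of a BiGRU + attention text classifier (e.g. gender or variety).
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)   # additive score per token
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                       # (batch, seq_len)
        h, _ = self.gru(self.embed(token_ids))          # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.attn(torch.tanh(h)), 1)  # attention weights (batch, seq_len, 1)
        doc = (w * h).sum(dim=1)                        # attention-weighted document vector
        return self.fc(doc)                             # class logits
```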