Sequential Learning of Convolutional Features for Effective Text Classification
Text classification has long been one of the major problems in natural language
processing. With the advent of deep learning, the convolutional neural network
(CNN) has become a popular solution to this task. However, CNNs, which were
first proposed for images, face several crucial challenges in the context of
text processing, namely in their elementary blocks: convolution filters and max
pooling. These challenges have largely been overlooked by most existing CNN
models proposed for text classification. In this paper, we present an
experimental study on the fundamental blocks of CNNs in text categorization.
Based on this critique, we propose the Sequential Convolutional Attentive
Recurrent Network (SCARN). The proposed SCARN model exploits the advantages of
both recurrent and convolutional structures more efficiently than previously
proposed recurrent convolutional models. We test our model on different text
classification datasets across tasks like sentiment analysis and question
classification. Extensive experiments establish that SCARN outperforms other
recurrent convolutional architectures with significantly fewer parameters.
Furthermore, SCARN achieves better performance than various equally large deep
CNN and LSTM architectures.

Comment: Accepted Long Paper at EMNLP-IJCNLP 2019, Hong Kong, China
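The elementary CNN blocks the abstract critiques can be illustrated with a minimal sketch: a 1-D convolution slides filters over windows of token embeddings, and max-over-time pooling keeps only each filter's strongest response. All names and sizes below are illustrative assumptions, not taken from the paper; this is not the SCARN model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative sizes, not from the paper):
seq_len, embed_dim = 7, 4       # a "sentence" of 7 token embeddings
num_filters, width = 3, 2       # 3 convolution filters spanning 2 tokens each

X = rng.standard_normal((seq_len, embed_dim))             # token embeddings
W = rng.standard_normal((num_filters, width, embed_dim))  # convolution filters

# Convolution filters: each filter responds to every window of `width` tokens.
feature_maps = np.array([
    [np.sum(W[f] * X[t:t + width]) for t in range(seq_len - width + 1)]
    for f in range(num_filters)
])  # shape: (num_filters, seq_len - width + 1)

# Max-over-time pooling: keep only the strongest response per filter,
# discarding where in the sentence it occurred -- the kind of information
# loss such critiques of CNN text models point to.
pooled = feature_maps.max(axis=1)  # shape: (num_filters,)

print(feature_maps.shape, pooled.shape)
```

Note how `pooled` reduces each feature map to a single scalar: the positional structure of the sentence is gone after pooling, which is one reason recurrent components are often combined with convolutional ones for text.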