A Practitioners' Guide to Transfer Learning for Text Classification using Convolutional Neural Networks
Transfer Learning (TL) plays a crucial role when a given dataset has
insufficient labeled examples to train an accurate model. In such scenarios,
the knowledge accumulated within a model pre-trained on a source dataset can be
transferred to a target dataset, resulting in the improvement of the target
model. Although TL has proven successful in image-based applications, its
impact and practical use in Natural Language Processing (NLP) applications are
still a subject of research. Due to their hierarchical architecture, Deep
Neural Networks (DNNs) offer flexibility in adjusting their parameters and
layer depth, making them a natural fit for TL. In this paper, we report the
results and conclusions of extensive empirical experiments using a
Convolutional Neural Network (CNN) and try to uncover rules of thumb for
ensuring a successful positive transfer. We also highlight practices that can
lead to negative transfer. We explore the transferability of the various
layers and describe the effect of varying hyper-parameters on transfer
performance. We also compare accuracy and model size against state-of-the-art
methods. Finally, we derive inferences from the empirical results and provide
best practices for achieving a successful positive transfer.
Comment: 9 pages, 2 figures, accepted in SDM 201
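As an illustration of the kind of transfer discussed above, the following is a
minimal PyTorch sketch (not the paper's implementation; layer sizes, filter
counts, and names are illustrative assumptions) of copying the embedding and
convolutional layers of a source-task text CNN into a target-task model and
fine-tuning only the task-specific classifier:

    # Hypothetical sketch: transfer lower CNN layers from a source-task model
    # to a target-task model and train only the upper layers (PyTorch).
    import torch
    import torch.nn as nn

    class TextCNN(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, num_classes=2):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim)
            self.convs = nn.ModuleList(
                [nn.Conv1d(embed_dim, 100, k) for k in (3, 4, 5)]
            )
            self.fc = nn.Linear(300, num_classes)

        def forward(self, x):
            e = self.embedding(x).transpose(1, 2)  # (batch, embed, seq)
            feats = [torch.relu(c(e)).max(dim=2).values for c in self.convs]
            return self.fc(torch.cat(feats, dim=1))

    source = TextCNN(vocab_size=30000, num_classes=5)
    # source.load_state_dict(torch.load("source_model.pt"))  # pretrained weights

    target = TextCNN(vocab_size=30000, num_classes=2)
    # Transfer the embedding and convolutional layers from the source model.
    target.embedding.load_state_dict(source.embedding.state_dict())
    target.convs.load_state_dict(source.convs.state_dict())

    # Freeze the transferred layers; only the classifier is trained.
    for p in list(target.embedding.parameters()) + list(target.convs.parameters()):
        p.requires_grad = False

    optimizer = torch.optim.Adam(
        filter(lambda p: p.requires_grad, target.parameters()), lr=1e-3
    )

Whether the transferred layers should be frozen or merely initialized and then
fine-tuned is exactly the kind of choice the paper's experiments examine.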
Do Convolutional Networks need to be Deep for Text Classification?
We study in this work the importance of depth in convolutional models for
text classification, with either character or word inputs. We show on 5
standard text classification and sentiment analysis tasks that deep models
indeed give better performance than shallow networks when the text input is
represented as a sequence of characters. However, a simple shallow-and-wide
network outperforms deep models such as DenseNet with word inputs. Our shallow
word model further establishes new state-of-the-art results on two datasets:
Yelp Binary (95.9%) and Yelp Full (64.9%).
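To make the shallow-and-wide design concrete, here is a brief sketch of the
general idea (an assumption about the architecture family, not the authors'
exact model; kernel widths and filter counts are illustrative): a single
convolutional layer with several kernel widths applied in parallel over word
embeddings, followed directly by the classifier.

    # Illustrative shallow-and-wide word-level CNN (PyTorch).
    import torch
    import torch.nn as nn

    class ShallowWideCNN(nn.Module):
        def __init__(self, vocab_size, embed_dim=300, filters=256, num_classes=2):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim)
            # One convolutional layer, several kernel widths in parallel ("wide").
            self.convs = nn.ModuleList(
                [nn.Conv1d(embed_dim, filters, k) for k in (1, 2, 3, 4, 5)]
            )
            self.classifier = nn.Linear(filters * 5, num_classes)

        def forward(self, x):
            e = self.embedding(x).transpose(1, 2)  # (batch, embed, seq)
            pooled = [torch.relu(c(e)).max(dim=2).values for c in self.convs]
            return self.classifier(torch.cat(pooled, dim=1))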
Combination of Domain Knowledge and Deep Learning for Sentiment Analysis of Short and Informal Messages on Social Media
Sentiment analysis has recently emerged as one of the major natural language
processing (NLP) tasks in many applications. In particular, as social media
channels (e.g. social networks or forums) have become significant sources for
brands to observe user opinions about their products, this task has become
increasingly crucial. However, real data obtained from social media contains a
high volume of short and informal messages posted by users on those channels.
This kind of data is difficult for existing approaches to handle, especially
those based on deep learning. In this paper, we propose an approach to this
problem. This work extends our previous work, in which we combined the typical
deep learning technique of Convolutional Neural Networks with domain
knowledge; the combination is used to obtain additional training data through
augmentation and a more reasonable loss function. In this work, we further
improve our architecture with several substantial enhancements, including
negation-based data augmentation, transfer learning for word embeddings, the
combination of word-level embeddings and character-level embeddings, and the
use of multitask learning to attach domain knowledge rules to the learning
process. These enhancements, aimed specifically at short and informal
messages, yield significant performance improvements in experiments on real
datasets.
Comment: A preprint of an article accepted for publication by Inderscience in
IJCVR on September 201
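One of the enhancements mentioned above, the combination of word-level and
character-level embeddings, can be sketched roughly as follows (a generic
illustration under assumed layer sizes and a per-word character CNN, not the
authors' exact architecture):

    # Illustrative word + character embedding encoder (PyTorch).
    import torch
    import torch.nn as nn

    class WordCharEncoder(nn.Module):
        def __init__(self, word_vocab, char_vocab, word_dim=300, char_dim=50):
            super().__init__()
            self.word_emb = nn.Embedding(word_vocab, word_dim)
            self.char_emb = nn.Embedding(char_vocab, char_dim)
            # Per-word character CNN: convolve over each word's characters
            # and max-pool to a fixed-size character feature vector.
            self.char_conv = nn.Conv1d(char_dim, 50, kernel_size=3, padding=1)

        def forward(self, words, chars):
            # words: (batch, seq)   chars: (batch, seq, max_word_len)
            w = self.word_emb(words)                             # (B, S, word_dim)
            B, S, L = chars.shape
            c = self.char_emb(chars.view(B * S, L)).transpose(1, 2)
            c = torch.relu(self.char_conv(c)).max(dim=2).values  # (B*S, 50)
            c = c.view(B, S, -1)
            # Concatenate word and character features for downstream CNN layers.
            return torch.cat([w, c], dim=-1)                     # (B, S, word_dim+50)

Character-level features of this kind are one common way to cope with the
misspellings and out-of-vocabulary tokens typical of short, informal messages.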
Learning Robust Representations of Text
Deep neural networks have achieved remarkable results across many language
processing tasks; however, these methods are highly sensitive to noise and
adversarial attacks. We present a regularization-based method, inspired by
ideas from computer vision, for limiting a network's sensitivity to its
inputs, thus learning models that are more robust. Empirical evaluation over a
range of sentiment datasets with a convolutional neural network shows that,
compared to a baseline model and the dropout method, our method achieves
superior performance on noisy inputs and out-of-domain data.
Comment: 5 pages with 2 pages of references, 2 tables, 1 figure
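A rough sketch of input-sensitivity regularization of this kind is shown below
(the exact penalty used in the paper may differ; the gradient-norm formulation,
the weight lam, and a model that accepts pre-embedded inputs are assumptions):

    # Illustrative input-sensitivity penalty: discourage the task loss from
    # changing sharply under small perturbations of the input embeddings.
    import torch
    import torch.nn.functional as F

    def robust_loss(model, embedded_inputs, labels, lam=0.1):
        embedded_inputs = embedded_inputs.detach().requires_grad_(True)
        logits = model(embedded_inputs)
        ce = F.cross_entropy(logits, labels)
        # Gradient of the task loss w.r.t. the inputs; create_graph=True so the
        # penalty itself can be back-propagated through during training.
        grads, = torch.autograd.grad(ce, embedded_inputs, create_graph=True)
        penalty = grads.pow(2).sum(dim=tuple(range(1, grads.dim()))).mean()
        return ce + lam * penalty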
Challenges in Representation Learning: A report on three machine learning contests
The ICML 2013 Workshop on Challenges in Representation Learning focused on
three challenges: the black box learning challenge, the facial expression
recognition challenge, and the multimodal learning challenge. We describe the
datasets created for these challenges and summarize the results of the
competitions. We provide suggestions for organizers of future challenges and
some comments on what kind of knowledge can be gained from machine learning
competitions.
Comment: 8 pages, 2 figures