Weakly-Supervised Neural Text Classification
Deep neural networks are gaining popularity for the classic text
classification task, owing to their strong expressive power and reduced need
for feature engineering. Despite this appeal, neural text classification
models suffer from a lack of training data in many real-world applications.
Although many semi-supervised and weakly-supervised text classification
models exist, they cannot be easily applied to deep neural models and support
only limited types of supervision. In this paper, we
propose a weakly-supervised method that addresses the lack of training data in
neural text classification. Our method consists of two modules: (1) a
pseudo-document generator that leverages seed information to generate
pseudo-labeled documents for model pre-training, and (2) a self-training module
that bootstraps on real unlabeled data for model refinement. Our method has the
flexibility to handle different types of weak supervision and can be easily
integrated into existing deep neural models for text classification. We have
performed extensive experiments on three real-world datasets from different
domains. The results demonstrate that our proposed method achieves strong
performance without requiring excessive training data and significantly
outperforms baseline methods.
Comment: CIKM 2018 Full Paper
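A minimal sketch of the two-module pipeline the abstract describes, under simplifying assumptions: the seed keywords, the bag-of-words pseudo-document generator, the confidence threshold, and the use of a simple linear classifier in place of the paper's deep neural model are all illustrative choices, not the paper's exact design (which uses soft pseudo-labels and neural classifiers).

```python
# Sketch: (1) pre-train on pseudo-labeled documents generated from seed
# keywords, (2) refine by self-training on real unlabeled documents.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical class seeds (one form of weak supervision).
seed_keywords = {
    "sports":   ["game", "team", "season", "coach"],
    "politics": ["election", "senate", "policy", "vote"],
}

def generate_pseudo_documents(seeds, docs_per_class=100, words_per_doc=30):
    """Create crude pseudo-documents by sampling seed words with repetition."""
    rng = np.random.default_rng(0)
    texts, labels = [], []
    for label, words in seeds.items():
        for _ in range(docs_per_class):
            texts.append(" ".join(rng.choice(words, size=words_per_doc)))
            labels.append(label)
    return texts, labels

def weakly_supervised_train(unlabeled_docs, seeds=seed_keywords,
                            rounds=5, threshold=0.9):
    # Module 1: pre-train on generated pseudo-labeled documents.
    pseudo_texts, pseudo_labels = generate_pseudo_documents(seeds)
    vectorizer = TfidfVectorizer()
    X_pseudo = vectorizer.fit_transform(pseudo_texts)
    clf = LogisticRegression(max_iter=1000).fit(X_pseudo, pseudo_labels)

    # Module 2: self-training on real unlabeled documents, keeping only
    # high-confidence predictions as new training examples each round.
    X_unlabeled = vectorizer.transform(unlabeled_docs)
    for _ in range(rounds):
        probs = clf.predict_proba(X_unlabeled)
        confident = probs.max(axis=1) >= threshold
        if not confident.any():
            break
        predicted = clf.classes_[probs.argmax(axis=1)]
        texts = pseudo_texts + [d for d, keep in zip(unlabeled_docs, confident) if keep]
        labels = pseudo_labels + [p for p, keep in zip(predicted, confident) if keep]
        clf = LogisticRegression(max_iter=1000).fit(vectorizer.transform(texts), labels)
    return vectorizer, clf
```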
Unsupervised User Stance Detection on Twitter
We present a highly effective unsupervised framework for detecting the stance
of prolific Twitter users with respect to controversial topics. In particular,
we use dimensionality reduction to project users onto a low-dimensional space,
followed by clustering, which allows us to find core users that are
representative of the different stances. Our framework has three major
advantages over pre-existing methods, which are based on supervised or
semi-supervised classification. First, we do not require any prior labeling of
users: instead, we create clusters, which are much easier to label manually
afterwards, e.g., in a matter of seconds or minutes instead of hours. Second,
there is no need for domain- or topic-level knowledge either to specify the
relevant stances (labels) or to conduct the actual labeling. Third, our
framework is robust in the face of data skewness, e.g., when some users or some
stances have greater representation in the data. We experiment with different
combinations of user similarity features, dataset sizes, dimensionality
reduction methods, and clustering algorithms to ascertain the most effective
and most computationally efficient combinations across three different datasets
(in English and Turkish). We further verify our results on additional tweet
sets covering six different controversial topics. Our best combination in terms
of effectiveness and efficiency uses retweeted accounts as features, UMAP for
dimensionality reduction, and Mean Shift for clustering, and yields a small
number of high-quality user clusters, typically just 2–3, with more than 98%
purity. The resulting user clusters can be used to train downstream
classifiers. Moreover, our framework is robust to variations in
hyper-parameter values and to random initialization.
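A minimal sketch of the best-performing combination named in the abstract (retweeted accounts as features, UMAP for dimensionality reduction, Mean Shift for clustering). The input format, the binary count features, and the UMAP settings below are illustrative assumptions; the paper tunes feature and hyper-parameter choices per dataset.

```python
# Represent each user by the accounts they retweet, embed with UMAP,
# then cluster the embedding with Mean Shift.
import umap                                       # pip install umap-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import MeanShift

def cluster_users_by_stance(user_retweets):
    # user_retweets: one whitespace-separated string of retweeted account
    # handles per user, e.g. "acct_a acct_b acct_a" (hypothetical format).
    X = CountVectorizer(binary=True).fit_transform(user_retweets)

    # Project users onto a low-dimensional space.
    embedding = umap.UMAP(n_components=2, metric="cosine",
                          random_state=42).fit_transform(X)

    # Mean Shift finds a small number of dense user clusters (typically
    # 2-3 per topic in the paper's experiments) without fixing k upfront.
    labels = MeanShift().fit_predict(embedding)
    return embedding, labels
```

The resulting cluster labels can then be inspected and named manually, and used to train downstream stance classifiers as the abstract suggests.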
Unsupervised feature learning with discriminative encoder
In recent years, deep discriminative models have achieved extraordinary
performance on supervised learning tasks, significantly outperforming their
generative counterparts. However, their success relies on the presence of a
large amount of labeled data. How can one use the same discriminative models
for learning useful features in the absence of labels? We address this question
in this paper, by jointly modeling the distribution of data and latent features
in a manner that explicitly assigns zero probability to unobserved data. Rather
than maximizing the marginal probability of observed data, we maximize the
joint probability of the data and the latent features using a two step EM-like
procedure. To prevent the model from overfitting to our initial selection of
latent features, we use adversarial regularization. Depending on the task, we
allow the latent features to be one-hot or real-valued vectors and define a
suitable prior on the features. For instance, one-hot features correspond to
class labels and are directly used for the unsupervised and semi-supervised
classification task, whereas real-valued feature vectors are fed as input to
simple classifiers for auxiliary supervised discrimination tasks. The proposed
model, which we dub discriminative encoder (or DisCoder), is flexible in the
type of latent features that it can capture. The proposed model achieves
state-of-the-art performance on several challenging tasks.
Comment: 10 pages, 4 figures, International Conference on Data Mining, 2017
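A hedged sketch (not the paper's exact formulation) of the objective contrast the abstract describes: instead of maximizing the marginal likelihood of the observed data, the model maximizes the joint probability of data and latent features with a two-step EM-like alternation; the adversarial regularizer is only indicated, not derived.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
\text{marginal (generative): } & \max_{\theta} \sum_{i} \log p_{\theta}(x_i)
   = \max_{\theta} \sum_{i} \log \sum_{z} p_{\theta}(x_i, z) \\
\text{joint (DisCoder-style): } & \max_{\theta,\,\{z_i\}} \sum_{i} \log p_{\theta}(x_i, z_i)
\end{align*}
Alternation: fix $\theta$ and set
$z_i \leftarrow \arg\max_{z} \log p_{\theta}(x_i, z)$
(with an adversarial regularizer discouraging degenerate assignments that
overfit the initial latent features), then fix $\{z_i\}$ and update $\theta$
by gradient ascent on $\sum_{i} \log p_{\theta}(x_i, z_i)$.
\end{document}
```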