32 research outputs found
Dataless text classification with descriptive LDA
Manually labeling documents for training a text classifier is expensive and time-consuming. Moreover, a classifier trained on labeled documents may suffer from overfitting and adaptability problems. Dataless text classification (DLTC) has been proposed as a solution to these problems, since it does not require labeled documents. Previous research in DLTC has used explicit semantic analysis of Wikipedia content to measure semantic distance between documents, which is in turn used to classify test documents based on nearest neighbours. The semantic-based DLTC method has a major drawback in that it relies on a large-scale, finely-compiled semantic knowledge base, which is difficult to obtain in many scenarios. In this paper we propose a novel kind of model, descriptive LDA (DescLDA), which performs DLTC with only category description words and unlabeled documents. In DescLDA, the LDA model is assembled with a describing device to infer Dirichlet priors from prior descriptive documents created with category description words. The Dirichlet priors are then used by LDA to induce category-aware latent topics from unlabeled documents. Experimental results with the 20Newsgroups and RCV1 datasets show that: (1) our DLTC method is more effective than the semantic-based DLTC baseline method; and (2) the accuracy of our DLTC method is very close to state-of-the-art supervised text classification methods. As neither external knowledge resources nor labeled documents are required, our DLTC method is applicable to a wider range of scenarios.
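The prior-seeding idea at the heart of DescLDA can be illustrated with a small sketch. The snippet below shows only how category description words might be turned into an asymmetric Dirichlet prior over topic-word distributions; the vocabulary, seed words, and prior values are invented, and the paper's actual describing device and LDA inference are not reproduced here.

```python
import numpy as np

# Invented toy vocabulary and category description words (not from the paper).
vocab = ["goal", "match", "team", "stock", "market", "profit"]
word_id = {w: i for i, w in enumerate(vocab)}
seed_words = {
    0: ["goal", "match", "team"],     # topic 0 <-> category "sports"
    1: ["stock", "market", "profit"], # topic 1 <-> category "economy"
}

base_prior, boost = 0.01, 1.0
num_topics = len(seed_words)

# Asymmetric Dirichlet prior over topic-word distributions: each category's
# description words receive a boosted pseudo-count in the topic tied to that
# category, biasing LDA inference toward category-aware latent topics.
eta = np.full((num_topics, len(vocab)), base_prior)
for topic, words in seed_words.items():
    for w in words:
        eta[topic, word_id[w]] += boost

print(eta)
```

In a full pipeline, such a prior could be passed to an LDA implementation that accepts per-topic word priors (for instance, the `eta` matrix of gensim's `LdaModel`), and each test document would then be assigned to the category whose topic dominates its inferred topic mixture.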
Weakly-Supervised Neural Text Classification
Deep neural networks are gaining increasing popularity for the classic text
classification task, owing to their strong expressive power and reduced need
for feature engineering. Despite such attractiveness, neural text
classification models suffer from the lack of training data in many real-world
applications. Although many semi-supervised and weakly-supervised text
classification models exist, they cannot be easily applied to deep neural
models and support only limited types of supervision. In this paper, we
propose a weakly-supervised method that addresses the lack of training data in
neural text classification. Our method consists of two modules: (1) a
pseudo-document generator that leverages seed information to generate
pseudo-labeled documents for model pre-training, and (2) a self-training module
that bootstraps on real unlabeled data for model refinement. Our method has the
flexibility to handle different types of weak supervision and can be easily
integrated into existing deep neural models for text classification. We have
performed extensive experiments on three real-world datasets from different
domains. The results demonstrate that our proposed method achieves impressive
performance without requiring excessive training data and significantly
outperforms baseline methods.
Comment: CIKM 2018 Full Paper
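The two-module design described above can be sketched with standard scikit-learn components. This is a minimal illustration, not the paper's method: the seed words, documents, and confidence threshold are invented, TF-IDF plus logistic regression stands in for a deep neural model, and only a single self-training step is shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented seed words per class; the paper's generator builds richer pseudo-documents.
pseudo_docs = ["game match team goal", "stock market profit trade"]
pseudo_labels = ["sports", "economy"]

unlabeled = [
    "the team scored a late goal to win the match",
    "the stock market rallied as profit forecasts rose",
]

vec = TfidfVectorizer().fit(pseudo_docs + unlabeled)

# Module 1: pre-train on pseudo-labeled documents.
clf = LogisticRegression().fit(vec.transform(pseudo_docs), pseudo_labels)

# Module 2: one self-training step -- keep confident predictions on the
# real unlabeled data and refit (the 0.5 threshold is purely illustrative).
proba = clf.predict_proba(vec.transform(unlabeled))
labels = clf.classes_[proba.argmax(axis=1)]
keep = proba.max(axis=1) >= 0.5
docs = pseudo_docs + [d for d, k in zip(unlabeled, keep) if k]
ys = pseudo_labels + [l for l, k in zip(labels, keep) if k]
clf.fit(vec.transform(docs), ys)

print(clf.predict(vec.transform(["profit and market news"])))
```

A real weakly-supervised setup would iterate the self-training step and use a much larger pool of unlabeled documents.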
Evaluating Unsupervised Text Classification: Zero-shot and Similarity-based Approaches
Text classification of unseen classes is a challenging Natural Language
Processing task and is mainly attempted using two different types of
approaches. Similarity-based approaches attempt to classify instances based on
similarities between text document representations and class description
representations. Zero-shot text classification approaches aim to generalize
knowledge gained from a training task by assigning appropriate labels of
unknown classes to text documents. Although existing studies have already
investigated individual approaches within these categories, the experiments in
the literature do not provide a consistent comparison. This paper addresses this
gap by conducting a systematic evaluation of different similarity-based and
zero-shot approaches for text classification of unseen classes. Different
state-of-the-art approaches are benchmarked on four text classification
datasets, including a new dataset from the medical domain. Additionally, novel
SimCSE and SBERT-based baselines are proposed, as other baselines used in
existing work yield weak classification results and are easily outperformed.
Finally, the novel similarity-based Lbl2TransformerVec approach is presented,
which outperforms previous state-of-the-art approaches in unsupervised text
classification. Our experiments show that similarity-based approaches
significantly outperform zero-shot approaches in most cases. Additionally,
using SimCSE or SBERT embeddings instead of simpler text representations
increases similarity-based classification results even further.
Comment: Accepted to the 6th International Conference on Natural Language
Processing and Information Retrieval (NLPIR '22)
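A minimal version of the similarity-based approach can be sketched as follows. TF-IDF vectors stand in here for the SimCSE/SBERT sentence embeddings used in the paper (those would come from an external embedding model), and the class descriptions and documents are invented examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented class descriptions and test documents.
class_names = ["sports", "economy"]
class_descriptions = [
    "sports game match team player score",
    "economy stock market trade profit inflation",
]
documents = [
    "the player scored twice and the team won the match",
    "inflation fears pushed the stock market down",
]

# In the paper, SimCSE/SBERT sentence embeddings replace these TF-IDF vectors.
vec = TfidfVectorizer().fit(class_descriptions + documents)
sims = cosine_similarity(vec.transform(documents), vec.transform(class_descriptions))

# Each document gets the label of its most similar class description.
predictions = [class_names[row.argmax()] for row in sims]
print(predictions)
```

Swapping the vectorizer for a sentence-embedding model changes only the representation step; the nearest-class decision rule stays the same, which is what makes these approaches easy to compare.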
Short Text Categorization using World Knowledge
The content of the World Wide Web is growing rapidly, and thus the amount of available online text data increases every day.
Today, many users contribute to this massive global network via online platforms by sharing information in the form of short texts. Such an immense amount of data covers subjects from all existing domains (e.g., sports, economy, biology, etc.). Furthermore, manually processing such data is beyond human capabilities. As a result, Natural Language Processing (NLP) tasks, which aim to automatically analyze and process natural language documents, have gained significant attention. Among these tasks, text categorization has become one of the most fundamental and crucial, owing to its applications across various domains.
However, standard text categorization models face major challenges when performing short text categorization, due to the unique characteristics of short texts, i.e., insufficient text length, sparsity, ambiguity, etc. In other words, conventional approaches provide substandard performance when directly applied to the short text categorization task. Furthermore, in the case of short texts, standard feature extraction techniques such as bag-of-words suffer from limited contextual information. Hence, it is essential to enhance the text representations with an external knowledge source. Moreover, traditional models require a significant amount of manually labeled data, and obtaining labeled data is a costly and time-consuming task. Therefore, although recently proposed supervised methods, especially deep neural network approaches, have demonstrated notable performance, the requirement for labeled data remains the main bottleneck of these approaches.
In this thesis, we investigate the main research question of how to perform short text categorization effectively without requiring any labeled data, using knowledge bases as an external source. In this regard, novel short text categorization models, namely Knowledge-Based Short Text Categorization (KBSTC) and Weakly Supervised Short Text Categorization using World Knowledge (WESSTEC), have been introduced and evaluated in this thesis. The models do not require any hand-labeled data to perform short text categorization; instead, they leverage the semantic similarity between the short texts and the predefined categories. To quantify this semantic similarity, low-dimensional representations of entities and categories are learned by exploiting a large knowledge base. To achieve this, a novel entity and category embedding model has also been proposed in this thesis. Extensive experiments have been conducted to assess the performance of the proposed short text categorization models and the embedding model on several standard benchmark datasets.
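The nearest-category decision rule described above can be illustrated with a toy sketch. The 2-D vectors below are hand-set stand-ins for the entity and category embeddings that the thesis learns from a large knowledge base; all names and values are invented.

```python
import numpy as np

# Toy embeddings standing in for KB-learned entity/category vectors.
entity_vecs = {
    "messi": np.array([0.9, 0.1]),
    "goal": np.array([0.8, 0.2]),
    "nasdaq": np.array([0.1, 0.9]),
    "stock": np.array([0.2, 0.8]),
}
category_vecs = {"sports": np.array([1.0, 0.0]), "economy": np.array([0.0, 1.0])}

def categorize(entities):
    """Average the embeddings of the entities found in a short text, then
    pick the nearest category by cosine similarity -- no labeled data used."""
    v = np.mean([entity_vecs[e] for e in entities], axis=0)
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(category_vecs, key=lambda c: cos(v, category_vecs[c]))

print(categorize(["messi", "goal"]))
```

In KBSTC/WESSTEC the entity linking and the embedding training are the substantive components; this sketch only shows the final similarity-based assignment step.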
MotifClass: Weakly Supervised Text Classification with Higher-order Metadata Information
We study the problem of weakly supervised text classification, which aims to
classify text documents into a set of pre-defined categories with category
surface names only and without any annotated training document provided. Most
existing classifiers leverage textual information in each document. However, in
many domains, documents are accompanied by various types of metadata (e.g.,
authors, venue, and year of a research paper). These metadata and their
combinations may serve as strong category indicators in addition to textual
contents. In this paper, we explore the potential of using metadata to help
weakly supervised text classification. To be specific, we model the
relationships between documents and metadata via a heterogeneous information
network. To effectively capture higher-order structures in the network, we use
motifs to describe metadata combinations. We propose a novel framework, named
MotifClass, which (1) selects category-indicative motif instances, (2)
retrieves and generates pseudo-labeled training samples based on category names
and indicative motif instances, and (3) trains a text classifier using the
pseudo training data. Extensive experiments on real-world datasets demonstrate
the superior performance of MotifClass over existing weakly supervised text
classification approaches. Further analysis shows the benefit of considering
higher-order metadata information in our framework.
Comment: 11 pages; Accepted to WSDM 202
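A toy sketch of the motif idea, under invented data: here a motif instance is an (author, venue) pair, motif instances are scored by co-occurrence with category surface names, and documents are pseudo-labeled through their motif instance. MotifClass's actual heterogeneous-network modeling and classifier training are not shown.

```python
from collections import defaultdict

# Invented toy corpus with metadata; a motif instance is the higher-order
# combination of an author node and a venue node.
docs = [
    {"text": "deep learning for vision", "author": "alice", "venue": "CVPR"},
    {"text": "convolutional networks", "author": "alice", "venue": "CVPR"},
    {"text": "query optimization in databases", "author": "bob", "venue": "SIGMOD"},
    {"text": "transaction processing", "author": "bob", "venue": "SIGMOD"},
]
category_names = {
    "vision": ["vision", "convolutional"],
    "databases": ["databases", "transaction"],
}

# Step 1: score each motif instance by how often its documents mention a
# category's surface names (selecting category-indicative motif instances).
scores = defaultdict(lambda: defaultdict(int))
for d in docs:
    motif = (d["author"], d["venue"])
    for cat, names in category_names.items():
        if any(n in d["text"] for n in names):
            scores[motif][cat] += 1

# Step 2: pseudo-label each document via its motif instance's dominant
# category; these pseudo labels would then train a text classifier.
pseudo = {}
for d in docs:
    cat_scores = scores[(d["author"], d["venue"])]
    if cat_scores:
        pseudo[d["text"]] = max(cat_scores, key=cat_scores.get)

print(pseudo)
```

Note how the motif lets a document with no category name in its text ("convolutional networks" aside, imagine an untitled paper) inherit a label from metadata it shares with category-indicative documents.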