161 research outputs found
CEIL: A General Classification-Enhanced Iterative Learning Framework for Text Clustering
Text clustering, as one of the most fundamental challenges in unsupervised
learning, aims at grouping semantically similar text segments without relying
on human annotations. With the rapid development of deep learning, deep
clustering has achieved significant advantages over traditional clustering
methods. Despite their effectiveness, most existing deep text clustering methods rely heavily on representations pre-trained in general domains, which may not be well suited to clustering in specific target domains. To address this issue, we propose CEIL, a novel Classification-Enhanced Iterative Learning framework for short text clustering, which aims to broadly improve clustering performance by introducing a classification objective to
iteratively improve feature representations. In each iteration, we first use a language model to obtain the initial text representations, from which the
clustering results are collected using our proposed Category Disentangled
Contrastive Clustering (CDCC) algorithm. After strict data filtering and
aggregation processes, samples with clean category labels are retrieved, which
serve as supervision information to update the language model with the
classification objective via a prompt learning approach. Finally, the updated
language model with improved representation ability is used to enhance
clustering in the next iteration. Extensive experiments demonstrate that the
CEIL framework significantly improves the clustering performance over
iterations, and is generally effective on various clustering algorithms.
Moreover, by incorporating CEIL into CDCC, we achieve state-of-the-art clustering performance on a wide range of short text clustering benchmarks, outperforming other strong baseline methods.
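A minimal sketch of this iterative loop, assuming k-means and a centroid-distance confidence filter as stand-ins for the paper's CDCC algorithm and data filtering, with the prompt-learning update abstracted behind a caller-supplied `finetune` callable (all names below are illustrative, not the authors' API):

```python
import numpy as np
from sklearn.cluster import KMeans

def ceil_loop(texts, encode, finetune, n_clusters, n_iters=3, keep_frac=0.5):
    """One possible rendering of the CEIL-style loop described above.

    encode   : callable mapping list[str] -> np.ndarray of shape (n, d)
    finetune : callable taking (texts, labels) and returning an updated encode
    The paper pairs a language model with its CDCC algorithm and a
    prompt-learning classification objective; everything here is a stand-in.
    """
    labels = None
    for _ in range(n_iters):
        X = encode(texts)                                  # 1. text representations
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
        labels = km.labels_                                # 2. clustering step
        # 3. data filtering: keep samples closest to their centroid as
        #    pseudo-labelled supervision with "clean" category labels
        dists = np.linalg.norm(X - km.cluster_centers_[labels], axis=1)
        keep = dists <= np.quantile(dists, keep_frac)
        clean_texts = [t for t, k in zip(texts, keep) if k]
        clean_labels = labels[keep]
        # 4. classification objective updates the representation model
        encode = finetune(clean_texts, clean_labels)
    return labels
```

The key design point the abstract describes is the feedback cycle: each round's cleanest pseudo-labels supervise a classification objective that sharpens the representations used by the next round's clustering.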
A Gamma-Poisson topic model for short text
Most topic models are constructed under the assumption that documents follow a multinomial distribution. The Poisson distribution is an alternative for describing count data. For topic modelling, the Poisson distribution describes the number of occurrences of a word in documents of fixed length. The Poisson distribution has been successfully applied in text classification, but its application to topic modelling is not well documented, specifically in the context of a generative probabilistic model. Furthermore, the few Poisson topic models in the literature are admixture models, making the assumption that a document is generated from a mixture of topics.
In this study, we focus on short text. Many studies have shown that the simpler assumption of a mixture model fits short text better. With mixture models, as opposed to admixture models, the generative assumption is that a document is generated from a single topic. One topic model that makes this one-topic-per-document assumption is the Dirichlet-multinomial mixture model. The main contributions of this work are a new Gamma-Poisson mixture model (GPM), as well as a collapsed Gibbs sampler for the model. The benefit of the collapsed Gibbs sampler derivation is that the model is able to automatically select the number of topics contained in the corpus. The results show that the Gamma-Poisson mixture model performs better than the Dirichlet-multinomial mixture model at selecting the number of topics in labelled corpora. Furthermore, the Gamma-Poisson mixture produces better topic coherence scores than the Dirichlet-multinomial mixture model, thus making it a viable option for the challenging task of topic modelling of short text.
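The one-topic-per-document generative process can be sketched as follows; the concrete prior parameterization is an assumption here, so the thesis's exact formulation may differ in detail:

```latex
\begin{align*}
\theta &\sim \mathrm{Dirichlet}(\alpha) && \text{topic proportions} \\
\lambda_{k,v} &\sim \mathrm{Gamma}(\beta, \gamma) && \text{rate of word } v \text{ under topic } k \\
z_d \mid \theta &\sim \mathrm{Categorical}(\theta) && \text{the single topic of document } d \\
n_{d,v} \mid z_d, \lambda &\sim \mathrm{Poisson}(\lambda_{z_d, v}) && \text{count of word } v \text{ in document } d
\end{align*}
```

Integrating out the Gamma rates gives each document a negative-binomial marginal likelihood under its topic, which is what allows a collapsed Gibbs sampler to resample only the assignments z_d.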
The application of GPM was then extended to a further real-world task: distinguishing between semantically similar and dissimilar texts. The objective was to determine whether GPM could produce semantic representations that allow the user to assess the relevance of new, unseen documents to a corpus of interest. The challenge of addressing this problem in short text from small corpora was of key interest. Corpora of small size are not uncommon; for example, at the start of the Coronavirus pandemic, limited research was available on the topic. Handling short text is challenging not only due to the sparsity of such text, but also because some corpora, such as chats between people, tend to be noisy. The performance of GPM was compared to that of word2vec under these challenging conditions on labelled corpora. GPM was found to produce better results in terms of accuracy, precision and recall in most cases. In addition, unlike word2vec, GPM was shown to be applicable to unlabelled datasets, and a methodology for this was also presented. Finally, a relevance index metric was introduced. This relevance index translates the similarity distance between a corpus of interest and a test document into the probability that the test document is semantically similar to the corpus of interest.
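The abstract does not give the definition of the relevance index; purely as an illustration of the idea of mapping a distance to a probability, one could calibrate against held-out labelled documents (everything below is a hypothetical stand-in, not the thesis's formula):

```python
import numpy as np

def relevance_index(distance, similar_dists, dissimilar_dists):
    """Hypothetical stand-in for a relevance index: convert the distance
    between a test document and the corpus of interest into a probability
    of semantic similarity by empirical calibration against distances
    observed for known-similar and known-dissimilar documents."""
    p_sim = np.mean(np.asarray(similar_dists) >= distance)     # P(d >= x | similar)
    p_dis = np.mean(np.asarray(dissimilar_dists) >= distance)  # P(d >= x | dissimilar)
    total = p_sim + p_dis
    return float(p_sim / total) if total > 0 else 0.5          # flat prior over classes
```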
A Survey of Source Code Search: A 3-Dimensional Perspective
Source code search has attracted broad attention from software engineering researchers because it can improve the productivity and quality of software development.
Given a functionality requirement usually described in a natural language
sentence, a code search system can retrieve code snippets that satisfy the
requirement from a large-scale code corpus, e.g., GitHub. To realize effective
and efficient code search, many techniques have been proposed successively.
These techniques improve code search performance mainly by optimizing three core components: the query understanding component, the code understanding component, and the query-code matching component. In this paper, we provide a
3-dimensional perspective survey for code search. Specifically, we categorize
existing code search studies into query-end optimization techniques, code-end
optimization techniques, and match-end optimization techniques according to the
specific components they optimize. Considering that each end can be optimized
independently and contributes to the code search performance, we treat each end
as a dimension. Therefore, this survey is 3-dimensional in nature, and it
provides a comprehensive summary of each dimension in detail. To understand the
research trends of the three dimensions in existing code search studies, we
systematically review 68 relevant studies. Unlike existing code search surveys that focus only on the query end or the code end, or that cover various aspects only shallowly (codebase, evaluation metrics, modeling techniques, etc.), our survey provides a more nuanced analysis and review of the evolution and development of the underlying techniques used at the three ends.
Based on a systematic review and summary of existing work, we outline several
open challenges and opportunities at the three ends that remain to be addressed
in future work.
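To make the three-component decomposition concrete, here is a deliberately tiny bag-of-words sketch of a retrieval pipeline (the systems the survey covers use far richer query and code encoders; everything below is illustrative):

```python
import numpy as np
from collections import Counter

def embed(tokens, vocab):
    """Toy understanding component: bag-of-words vector over a fixed vocab.
    Real systems use neural encoders at both the query end and the code end."""
    counts = Counter(tokens)
    return np.array([counts[w] for w in vocab], dtype=float)

def code_search(query, corpus, top_k=3):
    """Query end + code end + match end in miniature: tokenize, embed, and
    rank code snippets by cosine similarity to the query."""
    vocab = sorted({w for text in [query, *corpus] for w in text.lower().split()})
    q = embed(query.lower().split(), vocab)
    scores = []
    for snippet in corpus:
        c = embed(snippet.lower().split(), vocab)
        denom = np.linalg.norm(q) * np.linalg.norm(c)
        scores.append(float(c @ q) / denom if denom else 0.0)
    order = np.argsort(scores)[::-1][:top_k]
    return [(corpus[i], scores[i]) for i in order]
```

Each of the survey's three dimensions corresponds to swapping out one piece: a better expander at the query end, a structure-aware encoder at the code end, or a learned matcher instead of cosine similarity at the match end.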
Improving average ranking precision in user searches for biomedical research datasets
The availability of research datasets is a keystone of study reproducibility and scientific progress in the health and life sciences. Due to the heterogeneity and
complexity of these data, a main challenge to be overcome by research data
management systems is to provide users with the best answers for their search
queries. In the context of the 2016 bioCADDIE Dataset Retrieval Challenge, we
investigate a novel ranking pipeline to improve the search of datasets used in
biomedical experiments. Our system comprises a query expansion model based on
word embeddings, a similarity measure algorithm that takes into consideration
the relevance of the query terms, and a dataset categorisation method that
boosts the rank of datasets matching query constraints. The system was
evaluated using a corpus with 800k datasets and 21 annotated user queries. Our
system provides competitive results when compared to the other challenge
participants. In the official run, it achieved the highest infAP among the participants, +22.3% higher than the median infAP of the participants' best submissions. Overall, it ranks in the top two when an aggregated metric using the best official measures per participant is considered. The query expansion method had a positive impact on the system's performance, improving our baseline by up to +5.0% and +3.4% on the infAP and infNDCG metrics, respectively.
Our similarity measure algorithm appears robust, in particular compared to the Divergence From Randomness framework, showing smaller performance variations under different training conditions. Finally, the result categorization did not have a significant impact on the system's performance. We believe that our solution could be used to enhance biomedical dataset management systems. In particular, data-driven query expansion methods could be an alternative to the complexity of biomedical terminologies.
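A minimal sketch of the embedding-based query expansion step described above (the paper's exact neighbour selection, weighting, and filtering rules are not given here, so the recipe below is an illustrative assumption):

```python
import numpy as np

def expand_query(terms, vectors, topn=2):
    """Expand query terms with their nearest neighbours in embedding space.
    `vectors` maps each vocabulary word to a unit-norm embedding vector;
    the paper's term weighting and filtering are not reproduced here."""
    vocab = list(vectors)
    M = np.stack([vectors[w] for w in vocab])          # (V, d) embedding matrix
    expanded = list(terms)
    for t in terms:
        if t not in vectors:
            continue                                   # skip out-of-vocabulary terms
        sims = M @ vectors[t]                          # cosine (unit-norm vectors)
        neighbours = [vocab[i] for i in np.argsort(-sims) if vocab[i] != t]
        expanded.extend(w for w in neighbours[:topn] if w not in expanded)
    return expanded
```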
Sparse distributed representations as word embeddings for language understanding
Word embeddings are vector representations of words that capture semantic and syntactic
similarities between them. Similar words tend to have closer vector representations in an N-dimensional space considering, for instance, the Euclidean distance between the points associated with the word vector representations in a continuous vector space. This property makes word embeddings valuable in several Natural Language Processing tasks, from word analogy and
similarity evaluation to the more complex text categorization, summarization or translation tasks.
Typically, state-of-the-art word embeddings are dense vector representations with low dimensionality, varying from tens to hundreds of floating-point dimensions, usually obtained
from unsupervised learning on considerable amounts of text data by training and optimizing an
objective function of a neural network.
This work presents a methodology to derive word embeddings as binary sparse vectors, or word
vector representations with high dimensionality, sparse representation and binary features (i.e. composed only of ones and zeros). The proposed methodology tries to overcome some disadvantages associated with state-of-the-art approaches, namely the size of the corpus needed for training the model, while presenting comparable results in several Natural Language
Processing tasks.
Results show that high-dimensional sparse binary vector representations, obtained from a very limited amount of training data, achieve comparable performance in intrinsic similarity and categorization tasks, whereas in analogy tasks good results are obtained only for noun categories. Our embeddings outperformed eight state-of-the-art word embeddings in word similarity tasks, and two word embeddings in categorization tasks.
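One common recipe for deriving high-dimensional sparse binary word vectors, shown purely as an illustration (the abstract does not specify the thesis's method, so the PPMI-plus-top-k construction below is an assumption):

```python
import numpy as np

def sparse_binary_embeddings(corpus, window=2, top_k=20):
    """Count co-occurrences in a window, weight them by positive PMI, and
    keep the top-k contexts per word as binary features. `corpus` is a list
    of tokenized sentences (lists of strings)."""
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    counts = np.zeros((V, V))
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    counts[idx[w], idx[sent[j]]] += 1
    total = counts.sum()
    if total == 0:
        return vocab, np.zeros((V, V), dtype=np.uint8)
    p_w = counts.sum(axis=1, keepdims=True) / total    # word marginals
    p_c = counts.sum(axis=0, keepdims=True) / total    # context marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts / total) / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = 0.0
    ppmi = np.maximum(pmi, 0.0)
    # binarize: one for the top-k PPMI contexts of each word, zero elsewhere
    binary = np.zeros_like(ppmi, dtype=np.uint8)
    for r in range(V):
        binary[r, np.argsort(-ppmi[r])[:top_k]] = 1
    return vocab, binary
```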
Short Text Categorization using World Knowledge
The content of the World Wide Web is growing drastically, and thus the amount of available online text data is increasing every day.
Today, many users contribute to this massive global network via online platforms by sharing information in the form of short texts. Such an immense amount of data covers subjects from all existing domains (e.g., sports, economy, biology). Further, manually processing such data is beyond human capabilities. As a result, Natural Language Processing (NLP) tasks, which aim to automatically analyze and process natural language documents, have gained significant attention. Among these tasks, due to its application in various domains, text categorization has become one of the most fundamental and crucial tasks.
However, standard text categorization models face major challenges in short text categorization due to the unique characteristics of short texts, i.e., insufficient text length, sparsity, ambiguity, etc. In other words, conventional approaches deliver substandard performance when applied directly to the short text categorization task. Furthermore, in the case of short text, standard feature extraction techniques such as bag-of-words suffer from limited contextual information. Hence, it is essential to enhance the text representations with an external knowledge source. Moreover, traditional models require a significant amount of manually labeled data, and obtaining labeled data is a costly and time-consuming task. Therefore, although recently proposed supervised methods, especially deep neural network approaches, have demonstrated notable performance, the requirement for labeled data remains the main bottleneck of these approaches.
In this thesis, we investigate the main research question of how to perform short text categorization effectively, without requiring any labeled data, using knowledge bases as an external source. In this regard, novel short text categorization models, namely Knowledge-Based Short Text Categorization (KBSTC) and Weakly Supervised Short Text Categorization using World Knowledge (WESSTEC), are introduced and evaluated in this thesis. The models do not require any hand-labeled data to perform short text categorization; instead, they leverage the semantic similarity between the short texts and the predefined categories. To quantify such semantic similarity, low-dimensional representations of entities and categories are learned by exploiting a large knowledge base. To achieve this, a novel entity and category embedding model is also proposed in this thesis. Extensive experiments have been conducted to assess the performance of the proposed short text categorization models and the embedding model on several standard benchmark datasets.
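A minimal sketch of the similarity-based assignment idea, assuming averaged entity vectors and cosine similarity (the actual KBSTC/WESSTEC models learn joint entity and category embeddings from a knowledge base; the aggregation below is an illustrative assumption):

```python
import numpy as np

def categorize(text_entities, entity_vecs, category_vecs):
    """Label-free categorization sketch: embed the short text as the mean of
    its entity vectors and pick the category with the highest cosine
    similarity. Returns None if no entity has a known embedding."""
    vecs = [entity_vecs[e] for e in text_entities if e in entity_vecs]
    if not vecs:
        return None
    t = np.mean(vecs, axis=0)
    t /= np.linalg.norm(t)
    best, best_sim = None, -1.0
    for cat, c in category_vecs.items():
        sim = float(t @ (c / np.linalg.norm(c)))
        if sim > best_sim:
            best, best_sim = cat, sim
    return best
```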
Discovering core terms for effective short text clustering
This thesis aims to address the current limitations in short text clustering and provides a systematic framework that includes three novel methods to effectively measure the similarity of two short texts, efficiently group short texts, and dynamically cluster short text streams.
Semantic vector representations of senses, concepts and entities and their applications in natural language processing
Representation learning lies at the core of Artificial Intelligence (AI) and Natural Language Processing (NLP). Most recent research has focused on developing representations at the word level. In particular, the representation of words in a vector space has been viewed as one of the most important successes of lexical semantics and NLP in recent years. The generalization power and flexibility of these representations have enabled their integration into a wide variety of text-based applications, where they have proved extremely beneficial. However, these representations are hampered by an important limitation, as they are unable to model different meanings of the same word.
In order to deal with this issue, in this thesis we analyze and develop flexible semantic representations of meanings, i.e. senses, concepts and entities. This finer distinction enables us to model semantic information at a deeper level, which in turn is essential for dealing with ambiguity.
In addition, we view these (vector) representations as a connecting bridge between lexical resources and textual data, encoding knowledge from both sources. We argue that these sense-level representations, much like word embeddings, constitute a first step toward seamlessly integrating explicit knowledge into NLP applications, while focusing on the deeper sense level. Their use not only aims at resolving the inherent lexical ambiguity of language, but also represents a first step toward the integration of background knowledge into NLP applications. Multilinguality is another key feature of these representations, as we explore the construction of language-independent and multilingual techniques that can be applied to arbitrary languages, and also across languages.
We propose simple unsupervised and supervised frameworks that make use of these vector representations for word sense disambiguation, a key application in natural language understanding, and for other downstream applications such as text categorization and sentiment analysis. Given the nature of the vectors, we also investigate their effectiveness for improving and enriching knowledge bases, by reducing the sense granularity of their sense inventories and extending them with domain labels, hypernyms and collocations.
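As a hedged sketch of the kind of disambiguation these sense vectors enable (a single nearest-neighbour rule; the thesis's frameworks are richer, and all names below are illustrative):

```python
import numpy as np

def disambiguate(context_vec, sense_vecs):
    """Pick the sense whose embedding is closest (by cosine similarity) to
    the context vector. `sense_vecs` maps sense identifiers, e.g. WordNet
    sense keys, to embedding vectors."""
    c = context_vec / np.linalg.norm(context_vec)
    sims = {s: float(c @ (v / np.linalg.norm(v))) for s, v in sense_vecs.items()}
    return max(sims, key=sims.get)
```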