From Word to Sense Embeddings: A Survey on Vector Representations of Meaning
Over the past years, distributed semantic representations have proved to be
effective and flexible keepers of prior knowledge to be integrated into
downstream applications. This survey focuses on the representation of meaning.
We start from the theoretical background behind word vector space models and
highlight one of their major limitations: the meaning conflation deficiency,
which arises from representing a word with all its possible meanings as a
single vector. Then, we explain how this deficiency can be addressed through a
transition from the word level to the more fine-grained level of word senses
(in its broader acceptation) as a method for modelling unambiguous lexical
meaning. We present a comprehensive overview of the wide range of techniques in
the two main branches of sense representation, i.e., unsupervised and
knowledge-based. Finally, this survey covers the main evaluation procedures and
applications for this type of representation, and provides an analysis of four
of its important aspects: interpretability, sense granularity, adaptability to
different domains and compositionality.
Comment: 46 pages, 8 figures. Published in Journal of Artificial Intelligence Research
No Pattern, No Recognition: a Survey about Reproducibility and Distortion Issues of Text Clustering and Topic Modeling
Extracting knowledge from unlabeled texts using machine learning algorithms
can be complex. Document categorization and information retrieval are two
applications that may benefit from unsupervised learning (e.g., text clustering
and topic modeling), including exploratory data analysis. However, the
unsupervised learning paradigm poses reproducibility issues: depending on the
machine learning algorithm, the initialization can lead to variability in the
results. Furthermore, distortions of the cluster geometry can be misleading.
Amongst the causes, the presence of outliers and anomalies can be a determining
factor. Despite the relevance of initialization and outlier issues for text
clustering and topic modeling, the authors did not find an in-depth analysis of
them. This survey provides a systematic literature review (2011-2022) of these
subareas and proposes a common terminology since similar procedures have
different terms. The authors describe research opportunities, trends, and open
issues. The appendices summarize the theoretical background of the text
vectorization, factorization, and clustering algorithms that are directly or
indirectly related to the reviewed works.
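The initialization variability this survey highlights is easy to demonstrate. The sketch below runs a minimal Lloyd's k-means (the data points and initial centroids are invented for illustration, not taken from any reviewed work): the same data clustered from two different initializations yields two different partitions, because an ambiguous point between two tight groups joins whichever cluster the initialization favours.

```python
def kmeans(points, init, iters=20):
    """Plain Lloyd's algorithm on 1-D points with explicit initial centroids."""
    centroids = list(init)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = [min(range(len(centroids)), key=lambda c: abs(p - centroids[c]))
                  for p in points]
        # Move each centroid to the mean of its members.
        for c in range(len(centroids)):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return labels

def partition(labels):
    """Label-permutation-invariant view of a clustering."""
    groups = {}
    for i, lab in enumerate(labels):
        groups.setdefault(lab, set()).add(i)
    return frozenset(frozenset(g) for g in groups.values())

# Two tight groups with one ambiguous point (5.0) between them: the point
# ends up in whichever cluster the initialization favours.
points = [0.0, 0.1, 0.2, 5.0, 10.0, 10.1, 10.2]
run_a = kmeans(points, init=[0.0, 0.1])    # centroids seeded near the low group
run_b = kmeans(points, init=[10.1, 10.2])  # centroids seeded near the high group
```

With the low-group seeds, the ambiguous point 5.0 lands with the high cluster; with the high-group seeds, it lands with the low cluster, so the two runs produce genuinely different partitions of identical data.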
A Case Study and Qualitative Analysis of Simple Cross-Lingual Opinion Mining
User-generated content from social media is produced in many languages,
making it technically challenging to compare the discussed themes from one
domain across different cultures and regions. It is relevant for domains in a
globalized world, such as market research, where people from two nations and
markets might have different requirements for a product. We propose a simple,
modern, and effective method for building a single topic model with sentiment
analysis capable of covering multiple languages simultaneously, based on a
pre-trained state-of-the-art deep neural network for natural language
understanding. To demonstrate its feasibility, we apply the model to newspaper
articles and user comments of a specific domain, i.e., organic food products
and related consumption behavior. The themes match across languages.
Additionally, we obtain a high proportion of stable and domain-relevant
topics, a meaningful relation between topics and their respective textual
contents, and an interpretable representation for social media documents.
Marketing can potentially benefit from our method, since it provides an
easy-to-use means of addressing specific customer interests from different
market regions around the globe. For reproducibility, we provide the code,
data, and results of our study.
Comment: 10 pages, 2 tables, 5 figures, full paper, peer-reviewed, published at KDIR/IC3K 2021 conference
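The core idea — a single topic model over documents in several languages — rests on a multilingual encoder mapping semantically similar texts close together regardless of language. The abstract does not give implementation details, so the sketch below is only an illustration of that principle: the documents and their embedding vectors are hand-invented stand-ins for what a pretrained multilingual sentence encoder would produce, and the greedy single-pass grouping stands in for a real topic model.

```python
import math

# Hypothetical stand-in for a pretrained multilingual encoder: documents
# about the same theme get nearby vectors, regardless of language.
TOY_EMBEDDINGS = {
    "organic food is healthy":      (0.90, 0.10),
    "Bio-Lebensmittel sind gesund": (0.88, 0.12),  # German, same theme
    "prices keep rising":           (0.10, 0.90),
    "die Preise steigen weiter":    (0.12, 0.88),  # German, same theme
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def group_by_similarity(docs, threshold=0.95):
    """Greedy single-pass grouping in the shared embedding space:
    each document joins the first group whose seed it resembles."""
    groups = []
    for doc in docs:
        for g in groups:
            if cosine(TOY_EMBEDDINGS[doc], TOY_EMBEDDINGS[g[0]]) >= threshold:
                g.append(doc)
                break
        else:
            groups.append([doc])
    return groups

topics = group_by_similarity(list(TOY_EMBEDDINGS))
```

Because the shared space aligns languages, the English and German documents about the same theme fall into the same group, which is what lets the resulting topics "match across languages".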
Topic Modelling Meets Deep Neural Networks: A Survey
Topic modelling has been a successful technique for text analysis for almost
twenty years. When topic modelling met deep neural networks, there emerged a
new and increasingly popular research area, neural topic models, with over a
hundred models developed and a wide range of applications in neural language
understanding such as text generation, summarisation and language models. There
is a need to summarise research developments and discuss open problems and
future directions. In this paper, we provide a focused yet comprehensive
overview of neural topic models for interested researchers in the AI community,
so as to facilitate them to navigate and innovate in this fast-growing research
area. To the best of our knowledge, ours is the first review focusing on this
specific topic.
Comment: A review on Neural Topic Models
G2T: A simple but versatile framework for topic modeling based on pretrained language model and community detection
It has been reported that clustering-based topic models, which cluster
high-quality sentence embeddings with an appropriate word selection method, can
generate better topics than generative probabilistic topic models. However,
these approaches suffer from the difficulty of selecting appropriate
parameters and from incomplete modelling that overlooks the quantitative
relations between words and topics and between topics and documents. To solve
these issues, we propose graph to topic
(G2T), a simple but effective framework for topic modelling. The framework is
composed of four modules. First, document representation is acquired using
pretrained language models. Second, a semantic graph is constructed according
to the similarity between document representations. Third, communities in
document semantic graphs are identified, and the relationship between topics
and documents is quantified accordingly. Fourth, the word-topic distribution
is computed based on a variant of TF-IDF. Automatic evaluation suggests that G2T
achieved state-of-the-art performance on both English and Chinese documents
with different lengths. Human judgements demonstrate that G2T can produce
topics with better interpretability and coverage than baselines. In addition,
G2T can not only determine the topic number automatically but also give the
probabilistic distribution of words in topics and topics in documents. Finally,
G2T is publicly available, and the distillation experiments provide instruction
on how it works.
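The four modules described above can be sketched end-to-end. This is a minimal illustration, not the authors' implementation: the documents and embedding vectors are invented stand-ins for pretrained-language-model output, connected components replace a proper community-detection algorithm, and the final step uses a simple class-based TF-IDF variant of my own choosing.

```python
import math
from collections import Counter

# Module 1 (stand-in): toy documents with hand-made "embeddings"; in G2T
# these vectors would come from a pretrained language model.
docs = {
    "cats purr and meow":           (1.00, 0.00),
    "my cat chases mice":           (0.95, 0.05),
    "stocks rallied on earnings":   (0.00, 1.00),
    "markets fell after earnings":  (0.05, 0.95),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# Module 2: semantic graph -- connect pairs of similar documents.
names = list(docs)
edges = {(i, j) for i in range(len(names)) for j in range(i + 1, len(names))
         if cosine(docs[names[i]], docs[names[j]]) >= 0.9}

# Module 3: communities. Connected components via union-find are a minimal
# stand-in for a real community-detection algorithm.
def components(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(i)
    return list(comps.values())

topics = components(len(names), edges)

# Module 4: word-topic weights via a class-based TF-IDF variant, treating
# each topic's concatenated members as one "document".
def top_words(topics, k=2):
    topic_counts = [Counter(w for i in t for w in names[i].split())
                    for t in topics]
    n_topics = len(topics)
    out = []
    for counts in topic_counts:
        scored = {}
        for w, tf in counts.items():
            df = sum(1 for c in topic_counts if w in c)  # topics containing w
            scored[w] = tf * math.log(1 + n_topics / df)
        out.append([w for w, _ in sorted(scored.items(),
                                         key=lambda kv: -kv[1])[:k]])
    return out
```

On this toy corpus the graph splits into a "cats" community and a "markets" community, and the TF-IDF step surfaces "earnings" as the strongest word of the second topic because it occurs twice there and nowhere else.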