752 research outputs found
Time Dynamic Topic Models
Information extraction from large corpora can be a useful tool for many applications in industry and academia. For instance, political communication science has only recently begun to exploit the opportunities that come with the massive amounts of information available through the Internet and the computational tools that natural language processing can provide. We give a linguistically motivated interpretation of topic modeling, a state-of-the-art algorithm for extracting latent semantic sets of words from large text corpora, and extend this interpretation to cover issues and issue-cycles as theoretical constructs from political communication science. We build on a dynamic topic model, a model whose semantic sets of words are allowed to evolve over time governed by a Brownian motion stochastic process, and apply a new form of analysis to its results. This analysis is based on the notion of volatility, known from econometrics as the rate of change of stocks or derivatives. We claim that the rate of change of sets of semantically related words can be interpreted as issue-cycles, and the word sets themselves as describing the underlying issues. Generalizing existing work, we introduce dynamic topic models driven by general Gaussian processes (of which Brownian motion is a special case), a family of stochastic processes defined by the function that determines their covariance structure. Under this assumption, we apply a certain class of covariance functions to allow for an appropriate rate of change in word sets while preserving the semantic relatedness among words. Applying our findings to a large newspaper data set, the New York Times Annotated Corpus (all articles between 1987 and 2007), we are able to identify sub-topics in time, which we call time-localized topics, and find patterns in their behavior over time. However, we have to drop the assumption of semantic relatedness over all available time for any one topic.
Time-localized topics are consistent in themselves but do not necessarily share semantic meaning with each other. They can, however, be interpreted to capture the notion of issues, and their behavior that of issue-cycles.
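The volatility analysis described above can be illustrated with a toy computation. This is a minimal sketch under an assumed definition: the mean L1 change between a topic's consecutive word distributions stands in for volatility, and the function name `topic_volatility` is hypothetical, not the paper's actual measure.

```python
def topic_volatility(word_dists):
    # Mean L1 change between consecutive epochs: an assumed, simplified
    # proxy for the "volatility" of a topic's word distribution over
    # time, loosely analogous to the rate of change of a stock price.
    diffs = [sum(abs(c - p) for p, c in zip(prev, cur))
             for prev, cur in zip(word_dists, word_dists[1:])]
    return sum(diffs) / len(diffs)

# Toy data: one topic over 4 epochs on a 3-word vocabulary.
stable = [[0.5, 0.3, 0.2]] * 4
drifting = [[0.8, 0.1, 0.1],
            [0.5, 0.4, 0.1],
            [0.2, 0.4, 0.4],
            [0.1, 0.1, 0.8]]
print(topic_volatility(stable))    # 0.0: the word set never changes
print(topic_volatility(drifting))  # positive: the word set is churning
```

In this reading, bursts of high volatility in a topic's word set would mark the active phase of an issue-cycle.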
Evaluating Dynamic Topic Models
There is a lack of quantitative measures to evaluate the progression of
topics through time in dynamic topic models (DTMs). Filling this gap, we
propose a novel evaluation measure for DTMs that analyzes the changes in the
quality of each topic over time. Additionally, we propose an extension
combining topic quality with the model's temporal consistency. We demonstrate
the utility of the proposed measure by applying it to synthetic data and data
from existing DTMs. We also conducted a human evaluation, which indicates that
the proposed measure correlates well with human judgment. Our findings may help
in identifying changing topics, evaluating different DTMs, and guiding future
research in this area.
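Tracking per-epoch topic quality, as the abstract proposes, can be sketched with a toy stand-in metric. The co-occurrence proxy below and both function names are assumptions for illustration; the paper's actual measure is not reproduced here.

```python
from itertools import combinations

def topic_quality(top_words, docs):
    # Toy quality proxy: fraction of top-word pairs that co-occur in at
    # least one document (a crude stand-in for coherence measures such
    # as NPMI).
    doc_sets = [set(d.split()) for d in docs]
    pairs = list(combinations(top_words, 2))
    hits = sum(1 for a, b in pairs if any({a, b} <= s for s in doc_sets))
    return hits / len(pairs)

def quality_over_time(top_words_per_epoch, docs_per_epoch):
    # A DTM evaluation of this kind tracks the quality of each topic
    # across epochs rather than reporting a single global score.
    return [topic_quality(w, d)
            for w, d in zip(top_words_per_epoch, docs_per_epoch)]

epochs_words = [["election", "vote", "party"], ["election", "vote", "party"]]
epochs_docs = [["election vote party today", "party vote counts"],
               ["election news", "party split", "vote delayed"]]
print(quality_over_time(epochs_words, epochs_docs))  # [1.0, 0.0]
```

A sharp drop in the trajectory, as from epoch one to epoch two here, is the kind of change such a measure is meant to surface.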
Online multiscale dynamic topic models
We propose an online topic model for sequentially analyzing the time evolution of topics in document collections. Topics naturally evolve with multiple timescales. For example, some words may be used consistently over one hundred years, while other words emerge and disappear over periods of a few days. Thus, in the proposed model, current topic-specific distributions over words are assumed to be generated based on the multiscale word distributions of the previous epoch. Considering both long-timescale and short-timescale dependencies yields a more robust model. We derive efficient online inference procedures based on a stochastic EM algorithm, in which the model is sequentially updated using newly obtained data; this means that past data are not required to make the inference. We demonstrate the effectiveness of the proposed method in terms of predictive performance and computational efficiency by examining collections of real documents with timestamps.
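The multiscale idea can be sketched as a prior that mixes averages of the word distribution over windows of several lengths. The scales, weights, and the function `multiscale_prior` are illustrative assumptions, not the paper's actual generative process.

```python
def multiscale_prior(history, scales=(1, 2, 4), weights=(0.5, 0.3, 0.2)):
    # Assumed sketch: the prior for the next epoch mixes averages of the
    # topic's word distribution over the last 1, 2, and 4 epochs, so both
    # short- and long-timescale dependencies shape the new distribution.
    vocab = len(history[0])
    prior = [0.0] * vocab
    for scale, weight in zip(scales, weights):
        window = history[-scale:]
        for v in range(vocab):
            prior[v] += weight * sum(dist[v] for dist in window) / len(window)
    total = sum(prior)  # renormalize to a proper distribution
    return [p / total for p in prior]

# Toy history: word 1 has been gaining probability in recent epochs.
history = [[0.6, 0.2, 0.2],
           [0.5, 0.3, 0.2],
           [0.3, 0.4, 0.3],
           [0.1, 0.5, 0.4]]
print(multiscale_prior(history))
```

Because only a handful of window averages need to be carried forward, an online update of this sort can discard past documents, as the abstract notes.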
ANTM: An Aligned Neural Topic Model for Exploring Evolving Topics
This paper presents an algorithmic family of dynamic topic models called
Aligned Neural Topic Models (ANTM), which combine novel data mining algorithms
to provide a modular framework for discovering evolving topics. ANTM maintains
the temporal continuity of evolving topics by extracting time-aware features
from documents using advanced pre-trained Large Language Models (LLMs) and
employing an overlapping sliding window algorithm for sequential document
clustering. This overlapping sliding window algorithm identifies a different
number of topics within each time frame and aligns semantically similar
document clusters across time periods. This process captures emerging and
fading trends across different periods and allows for a more interpretable
representation of evolving topics. Experiments on four distinct datasets show
that ANTM outperforms probabilistic dynamic topic models in terms of topic
coherence and diversity metrics. Moreover, it improves the scalability and
flexibility of dynamic topic models by being accessible and adaptable to
different types of algorithms. Additionally, a Python package is developed for
researchers and scientists who wish to study the trends and evolving patterns
of topics in large-scale textual data.
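The alignment step of such a sliding-window approach can be sketched as matching cluster centroids across overlapping windows by cosine similarity. The function `align_clusters` and the threshold value are illustrative assumptions, not ANTM's actual algorithm.

```python
def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

def align_clusters(prev_centroids, cur_centroids, threshold=0.8):
    # For each document cluster in the current window, link it to the
    # most similar cluster (by centroid cosine similarity) in the
    # previous, overlapping window. Clusters with no match above the
    # threshold count as newly emerging topics; the 0.8 cutoff is an
    # assumption for illustration.
    links = {}
    for j, cur_c in enumerate(cur_centroids):
        best, best_sim = None, threshold
        for i, prev_c in enumerate(prev_centroids):
            sim = cosine(prev_c, cur_c)
            if sim > best_sim:
                best, best_sim = i, sim
        links[j] = best  # None marks an emerging topic
    return links

prev = [[1.0, 0.0], [0.0, 1.0]]  # two cluster centroids in window t-1
cur = [[0.9, 0.1], [0.5, 0.5]]   # two cluster centroids in window t
print(align_clusters(prev, cur))  # {0: 0, 1: None}
```

Chains of linked clusters across windows would then form an evolving topic, while unlinked clusters capture the emerging and fading trends the abstract describes.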