Probing the topological properties of complex networks modeling short written texts
In recent years, graph theory has been widely employed to probe several
language properties. More specifically, the so-called word adjacency model has
been proven useful for tackling several practical problems, especially those
relying on textual stylistic analysis. The most common approach to treat texts
as networks has simply considered either large pieces of texts or entire books.
This approach has certainly worked well -- many informative discoveries have
been made this way -- but it raises an uncomfortable question: could there be
important topological patterns in small pieces of texts? To address this
problem, the topological properties of subtexts sampled from entire books were
probed. Statistical analyses performed on a dataset comprising 50 novels
revealed that most of the traditional topological measurements are stable for
short subtexts. When the performance of the authorship recognition task was
analyzed, it was found that a proper sampling yields a discriminability similar
to the one found with full texts. Surprisingly, the support vector machine
classification based on the characterization of short texts outperformed the
one performed with entire books. These findings suggest that a local
topological analysis of large documents might improve their global
characterization. Most importantly, it was verified, as a proof of principle,
that short texts can be analyzed with the methods and concepts of complex
networks. As a consequence, the techniques described here can be extended in a
straightforward fashion to analyze texts as time-varying complex networks.
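The word adjacency model described above can be sketched in a few lines: each distinct word becomes a node and consecutive words are linked, after which traditional topological measurements are computed. The toy sentence below is illustrative only and is not taken from the paper's dataset; this assumes the common undirected, unweighted variant of the model.

```python
# Minimal sketch of a word adjacency network (assumed undirected/unweighted),
# using networkx. The text is a toy example, not data from the study.
import networkx as nx

text = ("the quick brown fox jumps over the lazy dog "
        "the quick dog runs over the brown fox").split()

G = nx.Graph()
for w1, w2 in zip(text, text[1:]):
    G.add_edge(w1, w2)  # link each pair of adjacent words

# A few of the traditional topological measurements used in
# textual stylistic analysis.
avg_degree = 2 * G.number_of_edges() / G.number_of_nodes()
clustering = nx.average_clustering(G)
print(G.number_of_nodes(), G.number_of_edges())
```

For authorship analysis, such measurements would be computed per subtext and fed to a classifier such as a support vector machine, as in the experiments above.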
Graph-Sparse LDA: A Topic Model with Structured Sparsity
Originally designed to model text, topic modeling has become a powerful tool
for uncovering latent structure in domains including medicine, finance, and
vision. The goals for the model vary depending on the application: in some
cases, the discovered topics may be used for prediction or some other
downstream task. In other cases, the content of the topic itself may be of
intrinsic scientific interest.
Unfortunately, even using modern sparse techniques, the discovered topics are
often difficult to interpret due to the high dimensionality of the underlying
space. To improve topic interpretability, we introduce Graph-Sparse LDA, a
hierarchical topic model that leverages knowledge of relationships between
words (e.g., as encoded by an ontology). In our model, topics are summarized by
a few latent concept-words from the underlying graph that explain the observed
words. Graph-Sparse LDA recovers sparse, interpretable summaries on two
real-world biomedical datasets while matching state-of-the-art prediction
performance.
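For context, the base model that Graph-Sparse LDA extends can be sketched with ordinary LDA on a toy corpus. The graph-structured sparsity prior itself is not available in scikit-learn, so this hedged example shows only plain LDA; the documents and topic count are invented for illustration.

```python
# Plain LDA on a toy "biomedical" corpus, illustrating the base model.
# Graph-Sparse LDA would additionally summarize each topic by a few
# latent concept-words drawn from an ontology graph; that prior is
# not implemented here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "tumor biopsy tissue sample pathology",
    "heart rate blood pressure pulse",
    "tumor tissue pathology biopsy",
    "blood pressure heart pulse rate",
]

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each row of components_ holds per-topic word pseudo-counts; even with
# only 10 vocabulary words, interpreting topics means inspecting the
# full row -- the high-dimensionality problem the paper targets.
print(lda.components_.shape)
```

In Graph-Sparse LDA, a topic would instead be summarized by a handful of ancestor concept-words in the ontology that explain the observed words.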