Syntactic Topic Models
The syntactic topic model (STM) is a Bayesian nonparametric model of language
that discovers latent distributions of words (topics) that are both
semantically and syntactically coherent. The STM models dependency parsed
corpora where sentences are grouped into documents. It assumes that each word
is drawn from a latent topic chosen by combining document-level features and
the local syntactic context. Each document has a distribution over latent
topics, as in topic models, which provides the semantic consistency. Each
element in the dependency parse tree also has a distribution over the topics of
its children, as in latent-state syntax models, which provides the syntactic
consistency. These distributions are convolved so that the topic of each word
is likely under both its document and syntactic context. We derive a fast
posterior inference algorithm based on variational methods. We report
qualitative and quantitative studies on both synthetic data and hand-parsed
documents. We show that the STM is a more predictive model of language than
current models based only on syntax or only on topics.
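The key mechanism above is that a word's topic distribution is formed by combining (the abstract's "convolving") the document-level topic weights with the topic-transition weights of its parent in the dependency tree. A minimal sketch of that combination, using a hypothetical 4-topic example (the numbers are illustrative assumptions, not from the paper):

```python
import numpy as np

# Hypothetical 4-topic example: a word's topic distribution is the
# renormalized elementwise product of (a) the document's topic weights
# and (b) the transition weights of its parent in the dependency parse,
# so a topic must be plausible under BOTH to score highly.
doc_topics    = np.array([0.50, 0.30, 0.15, 0.05])  # document-level semantics
parent_topics = np.array([0.10, 0.40, 0.40, 0.10])  # local syntactic context

word_topics = doc_topics * parent_topics
word_topics /= word_topics.sum()   # renormalize to a proper distribution

print(word_topics.round(3))        # -> [0.213 0.511 0.255 0.021]
```

Note how topic 1, which is moderately likely under both distributions, ends up dominating topics that are strong under only one of them.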
A survey on Bayesian nonparametric learning
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM. Bayesian (machine) learning has been playing a significant role in machine learning for a long time due to its particular ability to embrace uncertainty, encode prior knowledge, and endow interpretability. On the back of Bayesian learning's great success, Bayesian nonparametric learning (BNL) has emerged as a force for further advances in this field due to its greater modelling flexibility and representation power. Instead of playing with the fixed-dimensional probabilistic distributions of Bayesian learning, BNL creates a new “game” with infinite-dimensional stochastic processes. BNL has long been recognised as a research subject in statistics, and, to date, several state-of-the-art pilot studies have demonstrated that BNL has a great deal of potential to solve real-world machine-learning tasks. However, despite these promising results, BNL has not created a huge wave in the machine-learning community. Esotericism may account for this: the books and surveys on BNL written by statisticians are complicated and filled with dense theories and proofs. Each is certainly meaningful but may scare away new researchers, especially those with computer science backgrounds. Hence, the aim of this article is to provide a plain-spoken, yet comprehensive, theoretical survey of BNL in terms that researchers in the machine-learning community can understand. It is hoped this survey will serve as a starting point for understanding and exploiting the benefits of BNL in our current scholarly endeavours. To achieve this goal, we have collated the extant studies in this field and aligned them with the steps of a standard BNL procedure: from selecting the appropriate stochastic processes, through manipulation, to executing the model inference algorithms. At each step, past efforts have been thoroughly summarised and discussed.
In addition, we have reviewed the common methods for implementing BNL in various machine-learning tasks, along with its diverse real-world applications, as examples to motivate future studies.
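The "infinite-dimensional stochastic processes" the survey refers to include objects like the Dirichlet process. A minimal stick-breaking sketch of a draw from DP(α, H) is below; the truncation level, concentration α, and Gaussian base measure are illustrative assumptions, not choices from the survey:

```python
import numpy as np

# Truncated stick-breaking construction of a Dirichlet process draw,
# G = sum_k pi_k * delta(atom_k), the canonical infinite-dimensional
# object behind much of Bayesian nonparametric learning.
rng = np.random.default_rng(0)
alpha, truncation = 2.0, 1000          # illustrative concentration / cutoff

betas = rng.beta(1.0, alpha, size=truncation)            # stick fractions
remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
weights = betas * remaining            # pi_k = beta_k * prod_{j<k}(1 - beta_j)
atoms = rng.normal(0.0, 1.0, size=truncation)            # draws from base H

print(weights.sum())                   # close to 1 for a deep truncation
```

Although the process is nominally infinite-dimensional, the stick-breaking weights decay geometrically, which is why a finite truncation captures essentially all of the mass.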
Double Articulation Analyzer with Prosody for Unsupervised Word and Phoneme Discovery
Infants acquire words and phonemes from unsegmented speech signals using
segmentation cues, such as distributional, prosodic, and co-occurrence cues.
Many pre-existing computational models that represent the process tend to focus
on distributional or prosodic cues. This paper proposes a nonparametric
Bayesian probabilistic generative model called the prosodic hierarchical
Dirichlet process-hidden language model (Prosodic HDP-HLM). Prosodic HDP-HLM,
an extension of HDP-HLM, considers both prosodic and distributional cues within
a single integrative generative model. We conducted three experiments on
different types of datasets and demonstrated the validity of the proposed
method. The results show that the Prosodic DAA (the double articulation
analyzer built on Prosodic HDP-HLM) successfully uses prosodic cues and
outperforms a method that uses distributional cues alone. The main
contributions of this study are as follows: 1) We develop a probabilistic
generative model for time series data including prosody that potentially has a
double articulation structure; 2) We propose the Prosodic DAA by deriving the
inference procedure for Prosodic HDP-HLM and show that Prosodic DAA can
discover words directly from continuous human speech signals using statistical
information and prosodic information in an unsupervised manner; 3) We show that
prosodic cues contribute more to word segmentation in the naturally
distributed case, i.e., when word frequencies follow Zipf's law.

Comment: 11 pages. Submitted to IEEE Transactions on Cognitive and
Developmental Systems.
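The Zipfian regime the third contribution refers to can be made concrete with a toy corpus. The sketch below draws word tokens from a rank-frequency distribution with exponent 1 and checks that the empirical rank-frequency curve has a log-log slope near -1; the vocabulary size, corpus size, and exponent are arbitrary assumptions for illustration, not values from the paper:

```python
import numpy as np

# Toy corpus whose word frequencies follow Zipf's law (p(rank) ~ 1/rank),
# the "naturally distributed" regime in which the paper reports prosodic
# cues helping segmentation most.
rng = np.random.default_rng(0)
vocab_size, corpus_size = 1000, 100_000

ranks = np.arange(1, vocab_size + 1)
probs = (1.0 / ranks) / (1.0 / ranks).sum()      # Zipf with exponent 1
corpus = rng.choice(vocab_size, size=corpus_size, p=probs)

counts = np.bincount(corpus, minlength=vocab_size)
freqs = np.sort(counts)[::-1]                    # empirical rank-frequency
mask = freqs > 0
slope = np.polyfit(np.log(ranks[mask]), np.log(freqs[mask]), 1)[0]
print(slope)                                     # close to -1
```

A straight line of slope -1 on the log-log rank-frequency plot is the standard diagnostic that a corpus is Zipf-distributed.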