Online but Accurate Inference for Latent Variable Models with Local Gibbs Sampling
We study parameter inference in large-scale latent variable models. We first
propose a unified treatment of online inference for latent variable models
from a non-canonical exponential family, and draw explicit links between
several previously proposed frequentist or Bayesian methods. We then propose a
novel inference method for the frequentist estimation of parameters that
adapts MCMC methods to online inference of latent variable models through the
proper use of local Gibbs sampling. Then, for latent Dirichlet allocation, we
provide an extensive set of experiments and comparisons with existing work,
where our new approach outperforms all previously proposed methods. In
particular, using Gibbs sampling for latent variable inference is superior to
variational inference in terms of test log-likelihoods. Moreover, Bayesian
inference through variational methods performs poorly, sometimes leading to
worse fits with latent variables of higher dimensionality.
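For intuition about the latent-variable inference step, here is a minimal collapsed Gibbs sampler for LDA. This is the classical batch sampler, not the paper's online variant; all function names, hyperparameter values, and the toy interface are illustrative assumptions:

```python
import numpy as np

def lda_gibbs(docs, V, K, iters=50, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA (batch sketch, not the paper's
    online method).  docs: list of documents, each a list of word ids in
    [0, V); K: number of topics.  Returns a point estimate of the
    topic-word matrix."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))          # document-topic counts
    nkw = np.zeros((K, V))                  # topic-word counts
    nk = np.zeros(K)                        # total words per topic
    # random initial topic assignment for every token
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                # remove the token's current assignment from the counts
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # conditional distribution over topics for this token
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    # smoothed estimate of topic-word probabilities (rows sum to 1)
    return (nkw + beta) / (nk[:, None] + V * beta)
```

The paper's contribution is to run such local Gibbs steps within an online (single-pass, streaming) parameter update rather than over the full corpus as above.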
Approximate Decentralized Bayesian Inference
This paper presents an approximate method for performing Bayesian inference
in models with conditional independence over a decentralized network of
learning agents. The method first employs variational inference on each
individual learning agent to generate a local approximate posterior, the agents
transmit their local posteriors to other agents in the network, and finally
each agent combines its set of received local posteriors. The key insight in
this work is that, for many Bayesian models, approximate inference schemes
destroy symmetry and dependencies in the model that are crucial to the correct
application of Bayes' rule when combining the local posteriors. The proposed
method addresses this issue by including an additional optimization step in the
combination procedure that accounts for these broken dependencies. Experiments
on synthetic and real data demonstrate that the decentralized method provides
advantages in computational performance and predictive test likelihood over
previous batch and distributed methods.
Comment: This paper was presented at UAI 2014. Please use the following BibTeX
citation: @inproceedings{Campbell14_UAI, Author = {Trevor Campbell and
Jonathan P. How}, Title = {Approximate Decentralized Bayesian Inference},
Booktitle = {Uncertainty in Artificial Intelligence (UAI)}, Year = {2014}}
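The uncorrected Bayes-rule combination that this paper improves upon can be written down exactly in the Gaussian case. The sketch below (one-dimensional, with hypothetical helper names) combines m local posteriors computed under a shared prior:

```python
def combine_gaussians(prior, locals_):
    """Naive Bayes-rule combination of local Gaussian posteriors.

    Under conditional independence of the agents' data, the combined
    posterior is proportional to p0(theta)^(1-m) * prod_i p_i(theta),
    where p0 is the shared prior and p_i the i-th local posterior.
    All densities are given as (mean, variance) pairs.
    """
    mu0, var0 = prior
    m = len(locals_)
    # precisions add; the prior is counted m times in the product of
    # local posteriors, so (m - 1) copies are divided back out
    prec = (1 - m) / var0 + sum(1 / v for _, v in locals_)
    mean = ((1 - m) * mu0 / var0 + sum(mu / v for mu, v in locals_)) / prec
    return mean, 1 / prec
```

For models whose posterior is exactly Gaussian this recovers the full-data posterior; the paper's key observation is that for models with latent symmetries, approximate local posteriors break the dependencies this rule relies on, motivating the additional optimization step in the combination procedure.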
Discovering conversational topics and emotions associated with Demonetization tweets in India
Social media platforms contain a great wealth of information, which gives us
opportunities to explore hidden patterns or unknown correlations and to
understand people's satisfaction with what they are discussing. As one showcase, in this
paper, we summarize the data set of Twitter messages related to recent
demonetization of all Rs. 500 and Rs. 1000 notes in India and explore insights
from Twitter's data. Our proposed system automatically extracts the popular
latent topics in conversations regarding demonetization discussed in Twitter
via the Latent Dirichlet Allocation (LDA) based topic model and also identifies
the correlated topics across different categories. Additionally, it
discovers people's opinions expressed through their tweets related to the event
under consideration via the emotion analyzer. The system also employs an
intuitive and informative visualization to show the uncovered insight.
Furthermore, we use an evaluation measure, Normalized Mutual Information (NMI),
to select the best LDA models. The obtained LDA results show that the tool can
be effectively used to extract discussion topics and summarize them for further
manual analysis.
Comment: 6 pages, 11 figures. arXiv admin note: substantial text overlap with
arXiv:1608.02519 by other authors; text overlap with arXiv:1705.08094 by
another author.
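The NMI score used here for selecting among LDA models can be sketched in a few lines of plain Python. This is the standard definition applied to two label sequences (for example, dominant-topic assignments of the same documents under two candidate models); the function names are illustrative:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in nats) of a label sequence."""
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

def nmi(a, b):
    """Normalized mutual information between two labelings of the same
    items: MI(a, b) / sqrt(H(a) * H(b)), in [0, 1]."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    # MI = sum over joint cells of p(x,y) * log(p(x,y) / (p(x) p(y)))
    mi = sum((c / n) * math.log(c * n / (pa[x] * pb[y]))
             for (x, y), c in pab.items())
    denom = math.sqrt(entropy(a) * entropy(b))
    return mi / denom if denom > 0 else 0.0
```

Identical (or merely relabeled) topic assignments score 1.0, while statistically independent assignments score 0.0, which is what makes NMI usable as a model-selection criterion across LDA runs.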