Joint Topic-Semantic-aware Social Recommendation for Online Voting
Online voting is an emerging feature in social networks, in which users can
express their attitudes toward various issues and show their unique interest.
Online voting imposes new challenges on recommendation, because the propagation
of votings heavily depends on the structure of social networks as well as the
content of votings. In this paper, we investigate how to utilize these two
factors in a comprehensive manner when doing voting recommendation. First, due
to the fact that existing text mining methods such as topic model and semantic
model cannot well process the content of votings that is typically short and
ambiguous, we propose a novel Topic-Enhanced Word Embedding (TEWE) method to
learn word and document representation by jointly considering their topics and
semantics. Then we propose our Joint Topic-Semantic-aware social Matrix
Factorization (JTS-MF) model for voting recommendation. JTS-MF model calculates
similarity among users and votings by combining their TEWE representation and
structural information of social networks, and preserves this
topic-semantic-social similarity during matrix factorization. To evaluate the
performance of TEWE representation and JTS-MF model, we conduct extensive
experiments on real online voting dataset. The results prove the efficacy of
our approach against several state-of-the-art baselines.
Comment: The 26th ACM International Conference on Information and Knowledge Management (CIKM 2017)
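The abstract describes JTS-MF only at a high level. As a rough illustration of the core idea, preserving a precomputed topic-semantic-social similarity during matrix factorization, here is a minimal sketch. All names, the plain gradient-descent loop, and the graph-regularization form are assumptions for illustration, not the paper's actual formulation; the similarity matrix `S` stands in for what the paper derives from TEWE representations plus social structure.

```python
import numpy as np

def similarity_regularized_mf(R, S, k=4, lam=0.1, alpha=0.05,
                              lr=0.02, epochs=500, seed=0):
    """Illustrative matrix factorization that preserves user similarity.

    R: (n_users, n_votings) binary interaction matrix.
    S: (n_users, n_users) precomputed user-user similarity (hypothetical
       stand-in for the paper's TEWE + social-structure similarity).
    """
    rng = np.random.default_rng(seed)
    n_users, n_votings = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_votings, k))
    for _ in range(epochs):
        E = R - U @ V.T  # reconstruction error
        # gradient of sum_ij S_ij * ||u_i - u_j||^2 w.r.t. U is 2*(D - S) @ U,
        # where D is the diagonal degree matrix of S
        sim_grad = S.sum(axis=1, keepdims=True) * U - S @ U
        U += lr * (E @ V - lam * U - alpha * sim_grad)
        V += lr * (E.T @ U - lam * V)
    return U, V
```

The `alpha` term pulls the latent factors of similar users together, which is the "preserves this topic-semantic-social similarity during matrix factorization" idea in the abstract.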
Efficient Correlated Topic Modeling with Topic Embedding
Correlated topic modeling has been limited to small model and problem sizes
due to their high computational cost and poor scaling. In this paper, we
propose a new model which learns compact topic embeddings and captures topic
correlations through the closeness between the topic vectors. Our method
enables efficient inference in the low-dimensional embedding space, reducing
previous cubic or quadratic time complexity to linear w.r.t. the topic size. We
further speedup variational inference with a fast sampler to exploit sparsity
of topic occurrence. Extensive experiments show that our approach is capable of
handling model and data scales which are several orders of magnitude larger
than existing correlation results, without sacrificing modeling quality by
providing competitive or superior performance in document classification and
retrieval.
Comment: KDD 2017 oral. The first two authors contributed equally.
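As a toy illustration of the central idea, not the paper's actual model, topic correlations can be read off from the closeness of low-dimensional topic vectors. Scoring any single pair then costs O(d) in the embedding dimension, independent of the topic count, which is what makes the approach cheap compared with maintaining a full K-by-K covariance. Cosine similarity below is one assumed choice of "closeness":

```python
import numpy as np

def topic_correlations(topic_vecs):
    """Topic correlations from closeness of topic embeddings.

    topic_vecs: (K, d) matrix of topic embeddings with d << K.
    Returns a (K, K) cosine-similarity matrix; any one entry costs O(d).
    """
    norms = np.linalg.norm(topic_vecs, axis=1, keepdims=True)
    unit = topic_vecs / np.clip(norms, 1e-12, None)  # guard zero vectors
    return unit @ unit.T
```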
Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses
Automatically evaluating the quality of dialogue responses for unstructured
domains is a challenging problem. Unfortunately, existing automatic evaluation
metrics are biased and correlate very poorly with human judgements of response
quality. Yet having an accurate automatic evaluation procedure is crucial for
dialogue research, as it allows rapid prototyping and testing of new models
with fewer expensive human evaluations. In response to this challenge, we
formulate automatic dialogue evaluation as a learning problem. We present an
evaluation model (ADEM) that learns to predict human-like scores to input
responses, using a new dataset of human response scores. We show that the ADEM
model's predictions correlate significantly, and at a level much higher than
word-overlap metrics such as BLEU, with human judgements at both the utterance
and system-level. We also show that ADEM can generalize to evaluating dialogue
models unseen during training, an important step for automatic dialogue
evaluation.
Comment: ACL 2017
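From the abstract one can sketch the general shape of such a learned metric: a score predicted from encoded representations of the dialogue context, a reference response, and the model's response. The bilinear form, parameter names, and scaling below are an assumed simplification for illustration, not necessarily ADEM's exact parameterization:

```python
import numpy as np

def learned_dialogue_score(c, r_ref, r_model, M, N, alpha=0.0, beta=1.0):
    """Score a model response against context and reference (sketch).

    c, r_ref, r_model: encoded vectors (e.g. from a sentence encoder).
    M, N: learned matrices comparing the model response with the context
          and with the reference; alpha, beta rescale to the human range.
    """
    return (c @ M @ r_model + r_ref @ N @ r_model - alpha) / beta
```

The key departure from word-overlap metrics such as BLEU is that `M` and `N` are trained on a dataset of human scores, so the metric can reward responses that are appropriate without sharing words with the reference.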
A Comparative Study of Pairwise Learning Methods based on Kernel Ridge Regression
Many machine learning problems can be formulated as predicting labels for a
pair of objects. Problems of that kind are often referred to as pairwise
learning, dyadic prediction or network inference problems. During the last
decade kernel methods have played a dominant role in pairwise learning. They
still obtain a state-of-the-art predictive performance, but a theoretical
analysis of their behavior has been underexplored in the machine learning
literature.
In this work we review and unify existing kernel-based algorithms that are
commonly used in different pairwise learning settings, ranging from matrix
filtering to zero-shot learning. To this end, we focus on closed-form efficient
instantiations of Kronecker kernel ridge regression. We show that independent
task kernel ridge regression, two-step kernel ridge regression and a linear
matrix filter arise naturally as a special case of Kronecker kernel ridge
regression, implying that all these methods implicitly minimize a squared loss.
In addition, we analyze universality, consistency and spectral filtering
properties. Our theoretical results provide valuable insights in assessing the
advantages and limitations of existing pairwise learning methods.
Comment: arXiv admin note: text overlap with arXiv:1606.0427
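The closed-form efficiency referred to above comes from exploiting the Kronecker structure of the pairwise kernel. A minimal sketch, assuming two precomputed symmetric PSD kernel matrices and a complete label matrix, solves the full Kronecker system through two small eigendecompositions instead of inverting the n·m-sized matrix directly:

```python
import numpy as np

def kronecker_krr(Ku, Kv, Y, lam=1.0):
    """Closed-form Kronecker kernel ridge regression (illustrative sketch).

    Solves (Kv (x) Ku + lam*I) vec(A) = vec(Y), i.e. Ku @ A @ Kv + lam*A = Y,
    via eigendecompositions of the two small kernels.
    Ku: (n, n) kernel over one object set, Kv: (m, m) kernel over the other,
    Y: (n, m) label matrix.
    """
    wu, Eu = np.linalg.eigh(Ku)   # Ku = Eu diag(wu) Eu^T
    wv, Ev = np.linalg.eigh(Kv)   # Kv = Ev diag(wv) Ev^T
    # spectral filtering: divide transformed labels by combined eigenvalues
    A = Eu @ ((Eu.T @ Y @ Ev) / (np.outer(wu, wv) + lam)) @ Ev.T
    return A  # dual coefficients; fitted values are Ku @ A @ Kv
```

The elementwise division by `outer(wu, wv) + lam` is the spectral-filtering view the abstract mentions: each Kronecker eigendirection is shrunk according to its combined eigenvalue, which is exactly squared-loss ridge regularization.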