Deep Multi-view Learning to Rank
We study the problem of learning to rank from multiple information sources.
Though multi-view learning and learning to rank have been studied extensively
leading to a wide range of applications, multi-view learning to rank as a
synergy of both topics has received little attention. The aim of this paper is
to propose a composite ranking method that simultaneously maintains a close
correlation with the individual rankings. We present a generic framework for
multi-view subspace learning to rank (MvSL2R) and introduce two novel solutions
under it. The first solution captures feature-mapping information both within
each view and across views using autoencoder-like networks. Novel feature
embedding methods are formulated in
the optimization of multi-view unsupervised and discriminant autoencoders.
Moreover, we introduce an end-to-end solution that learns towards both the
joint ranking objective and the individual rankings. The proposed solution
enhances the joint ranking with minimum view-specific ranking loss, so that it
can achieve maximum global view agreement in a single optimization
process. The proposed method is evaluated on three different ranking problems,
i.e., university ranking, multi-view lingual text ranking and image data
ranking, providing superior results compared to related methods.
Comment: Published in IEEE TKDE
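The abstract describes the end-to-end solution only at a high level. As a rough
illustration of the stated idea (a joint ranking objective combined with
per-view ranking losses in a single optimization), a minimal PyTorch sketch
might look as follows; the scorer networks, the pairwise hinge loss, and the
weight alpha are illustrative assumptions, not the authors' method.

    # Minimal sketch (not the authors' code): combine a joint ranking loss
    # with per-view ranking losses in one optimization, as the abstract
    # describes. Shapes, the hinge loss, and `alpha` are assumptions.
    import torch
    import torch.nn as nn

    class ViewScorer(nn.Module):
        """Maps one view's features to a scalar relevance score."""
        def __init__(self, in_dim, hidden=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))
        def forward(self, x):
            return self.net(x).squeeze(-1)

    def pairwise_hinge(pos, neg, margin=1.0):
        """Pairwise ranking loss: score(pos) should exceed score(neg)."""
        return torch.clamp(margin - (pos - neg), min=0).mean()

    view_dims = [32, 48]                # two views, illustrative dimensions
    scorers = nn.ModuleList(ViewScorer(d) for d in view_dims)
    joint = ViewScorer(sum(view_dims))  # joint scorer on concatenated views
    opt = torch.optim.Adam([*scorers.parameters(), *joint.parameters()],
                           lr=1e-3)
    alpha = 0.5                         # weight on view-specific losses

    # One training step on a batch of (preferred, non-preferred) item pairs.
    pos_views = [torch.randn(16, d) for d in view_dims]  # placeholder data
    neg_views = [torch.randn(16, d) for d in view_dims]
    loss = pairwise_hinge(joint(torch.cat(pos_views, -1)),
                          joint(torch.cat(neg_views, -1)))
    for s, pv, nv in zip(scorers, pos_views, neg_views):
        loss = loss + alpha * pairwise_hinge(s(pv), s(nv))
    opt.zero_grad(); loss.backward(); opt.step()

Minimizing the per-view losses alongside the joint loss is what keeps the
composite ranking correlated with each individual view's ranking.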
Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models
We investigate the task of building open-domain, conversational dialogue
systems based on large dialogue corpora using generative models. Generative
models produce system responses that are autonomously generated word-by-word,
opening up the possibility for realistic, flexible interactions. In support of
this goal, we extend the recently proposed hierarchical recurrent
encoder-decoder neural network to the dialogue domain, and demonstrate that
this model is competitive with state-of-the-art neural language models and
back-off n-gram models. We investigate the limitations of this and similar
approaches, and show how performance can be improved by bootstrapping the
learning from a larger question-answer pair corpus and from pretrained word
embeddings.
Comment: 8 pages with references; Published in AAAI 2016 (Special Track on
Cognitive Systems)
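The hierarchical recurrent encoder-decoder extended here has three parts: an
utterance-level encoder for each turn, a context-level encoder over the
utterance representations, and a decoder conditioned on the context state. A
minimal PyTorch sketch of that structure follows; the GRU cells, vocabulary
size, and hidden sizes are illustrative assumptions, not the paper's exact
configuration.

    # Minimal sketch (not the authors' code) of the hierarchical recurrent
    # encoder-decoder (HRED) idea: an utterance-level GRU encodes each turn,
    # a context-level GRU tracks the dialogue across turns, and a decoder
    # GRU generates the next response conditioned on the context state.
    import torch
    import torch.nn as nn

    class HRED(nn.Module):
        def __init__(self, vocab=10000, emb=128, hid=256):
            super().__init__()
            self.embed = nn.Embedding(vocab, emb)
            self.utt_enc = nn.GRU(emb, hid, batch_first=True)  # one turn
            self.ctx_enc = nn.GRU(hid, hid, batch_first=True)  # dialogue state
            self.decoder = nn.GRU(emb, hid, batch_first=True)  # response
            self.out = nn.Linear(hid, vocab)

        def forward(self, turns, response):
            # turns: (batch, n_turns, turn_len) ids; response: (batch, len)
            b, n, t = turns.shape
            _, h = self.utt_enc(self.embed(turns.view(b * n, t)))
            utt_reps = h.squeeze(0).view(b, n, -1)   # one vector per turn
            _, ctx = self.ctx_enc(utt_reps)          # ctx: (1, batch, hid)
            dec_out, _ = self.decoder(self.embed(response), ctx)
            return self.out(dec_out)                 # next-token logits

    # Usage: predict each response token from the previous ones
    # (teacher forcing), on placeholder data.
    model = HRED()
    turns = torch.randint(0, 10000, (4, 3, 12))   # 4 dialogues, 3 turns each
    response = torch.randint(0, 10000, (4, 15))
    logits = model(turns, response[:, :-1])
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10000),
                                 response[:, 1:].reshape(-1))

Conditioning the decoder on the context-level state, rather than on the last
utterance alone, is what lets responses reflect the whole dialogue history.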