Learning Distributed Representations for Multiple-Viewpoint Melodic Prediction
The analysis of sequences is important for extracting information from music owing to its fundamentally temporal nature. In this paper, we present a distributed model based on the Restricted Boltzmann Machine (RBM) for learning melodic sequences. The model is similar to a previous successful neural network model for natural language [2]. It is first trained to predict the next pitch in a given pitch sequence, and then extended to also make use of information in sequences of note-durations in monophonic melodies on the same task. In doing so, we also propose an efficient way of representing this additional information that takes advantage of the RBM's structure. Results show that this RBM-based prediction model performs at least as well as previously evaluated n-gram models, outperforming them in certain cases. It is able to make use of information present in longer sequences more effectively than n-gram models, while scaling linearly in the number of free parameters required.
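As a rough illustration of how an RBM can act as a next-pitch predictor (a minimal sketch, not the authors' implementation: the sizes below are assumptions, and training, e.g. by contrastive divergence, is omitted), each candidate next pitch can be scored by the free energy of the completed visible vector:

```python
import numpy as np

rng = np.random.default_rng(0)

N_PITCHES = 88   # assumed pitch-alphabet size
CONTEXT = 3      # assumed number of preceding pitches used as context
N_HIDDEN = 50    # assumed number of hidden units

# Visible layer: one-hot vectors for the context pitches concatenated
# with a one-hot vector for the candidate next pitch.
N_VISIBLE = (CONTEXT + 1) * N_PITCHES
W = 0.01 * rng.standard_normal((N_VISIBLE, N_HIDDEN))
b_vis = np.zeros(N_VISIBLE)
b_hid = np.zeros(N_HIDDEN)

def one_hot(pitches):
    v = np.zeros(len(pitches) * N_PITCHES)
    for i, p in enumerate(pitches):
        v[i * N_PITCHES + p] = 1.0
    return v

def free_energy(v):
    # F(v) = -v.b_vis - sum_j softplus(b_hid_j + (vW)_j)
    return -v @ b_vis - np.sum(np.logaddexp(0.0, b_hid + v @ W))

def next_pitch_distribution(context):
    # Score every candidate next pitch; lower free energy means higher
    # unnormalised probability under the RBM.
    energies = np.array([free_energy(one_hot(context + [p]))
                         for p in range(N_PITCHES)])
    logits = -energies - np.max(-energies)  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# With untrained (random) weights this is near-uniform; after training
# it becomes a predictive distribution over the next pitch.
print(next_pitch_distribution([60, 62, 64]).argmax())  # context C4, D4, E4
```

Normalising the free-energy scores over all candidates yields the predictive distribution that the melodic prediction task evaluates.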
Fast, Small and Exact: Infinite-order Language Modelling with Compressed Suffix Trees
Efficient methods for storing and querying are critical for scaling
high-order n-gram language models to large corpora. We propose a language model
based on compressed suffix trees, a representation that is highly compact and
can be easily held in memory, while supporting queries needed in computing
language model probabilities on-the-fly. We present several optimisations which
improve query runtimes up to 2500x, despite only incurring a modest increase in
construction time and memory usage. For large corpora and high Markov orders,
our method is highly competitive with the state-of-the-art KenLM package. It
imposes much lower memory requirements, often by orders of magnitude, and has
runtimes that are similar for training and comparable for querying.
Comment: 14 pages, in Transactions of the Association for Computational Linguistics (TACL) 2016.
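The core operation such a model needs is an on-the-fly occurrence count for arbitrary n-grams over the corpus. The following toy sketch shows that query interface using a plain, uncompressed suffix array with stupid backoff (the paper itself uses compressed suffix trees and modified Kneser-Ney smoothing; all names below are assumptions):

```python
def build_suffix_array(tokens):
    # Naive O(n^2 log n) construction; real systems use compressed
    # suffix trees/arrays, but the query interface is the same idea.
    return sorted(range(len(tokens)), key=lambda i: tokens[i:])

def count(tokens, sa, pattern):
    # Occurrences of `pattern` = width of the suffix-array interval
    # whose suffixes start with `pattern` (two binary searches).
    def prefix(i):
        return tokens[i:i + len(pattern)]
    lo, hi = 0, len(sa)
    while lo < hi:                       # lower bound
        mid = (lo + hi) // 2
        if prefix(sa[mid]) < pattern:
            lo = mid + 1
        else:
            hi = mid
    first = lo
    lo, hi = first, len(sa)
    while lo < hi:                       # upper bound
        mid = (lo + hi) // 2
        if prefix(sa[mid]) <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo - first

def stupid_backoff(tokens, sa, context, word, alpha=0.4):
    # Back off through shorter contexts, discounting by alpha each step.
    for k in range(len(context), -1, -1):
        ctx = context[len(context) - k:]
        denom = count(tokens, sa, ctx) if ctx else len(tokens)
        num = count(tokens, sa, ctx + [word])
        if num > 0:
            return (alpha ** (len(context) - k)) * num / denom
    return 0.0

corpus = "the cat sat on the mat the cat ran".split()
sa = build_suffix_array(corpus)
print(stupid_backoff(corpus, sa, ["the", "cat"], "sat"))  # 1/2 = 0.5
```

Because every conditional probability reduces to a pair of pattern-count queries, no n-gram table is ever materialised, which is what lets the compressed representation stay small while supporting arbitrary Markov orders.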
Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models
We investigate the task of building open domain, conversational dialogue
systems based on large dialogue corpora using generative models. Generative
models produce system responses that are autonomously generated word-by-word,
opening up the possibility for realistic, flexible interactions. In support of
this goal, we extend the recently proposed hierarchical recurrent
encoder-decoder neural network to the dialogue domain, and demonstrate that
this model is competitive with state-of-the-art neural language models and
back-off n-gram models. We investigate the limitations of this and similar
approaches, and show how its performance can be improved by bootstrapping the
learning from a larger question-answer pair corpus and from pretrained word
embeddings.
Comment: 8 pages with references; published in AAAI 2016 (Special Track on Cognitive Systems).
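For concreteness, here is a minimal sketch of the hierarchical recurrent encoder-decoder idea the abstract refers to (illustrative only, not the authors' code: class and parameter names are assumptions, and the bootstrapping from a question-answer corpus and pretrained embeddings is omitted):

```python
import torch
import torch.nn as nn

class HRED(nn.Module):
    """Hierarchical recurrent encoder-decoder sketch: a word-level
    encoder per utterance, a context RNN over utterance summaries,
    and a decoder conditioned on the dialogue context."""

    def __init__(self, vocab_size, emb=128, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.utt_enc = nn.GRU(emb, hid, batch_first=True)
        self.ctx_enc = nn.GRU(hid, hid, batch_first=True)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, utterances, response):
        # utterances: (batch, n_utts, utt_len); response: (batch, resp_len)
        b, n, t = utterances.shape
        # Encode each utterance independently; keep the final hidden state.
        _, h = self.utt_enc(self.embed(utterances.view(b * n, t)))
        utt_vecs = h[-1].view(b, n, -1)
        # The context RNN summarises the dialogue so far.
        _, ctx = self.ctx_enc(utt_vecs)
        # Decode the response word-by-word, conditioned on the context.
        dec_out, _ = self.decoder(self.embed(response), ctx)
        return self.out(dec_out)          # (batch, resp_len, vocab)

model = HRED(vocab_size=10000)
utts = torch.randint(0, 10000, (2, 3, 12))   # toy batch of 3-turn dialogues
resp = torch.randint(0, 10000, (2, 8))
logits = model(utts, resp)                   # (2, 8, 10000)
```

The hierarchy is the point: the utterance encoder captures within-turn word order, while the context RNN carries information across turns, which is what distinguishes this architecture from a flat sentence-level language model.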