Just Add Functions: A Neural-Symbolic Language Model
Neural network language models (NNLMs) have achieved ever-improving accuracy
due to more sophisticated architectures and increasing amounts of training
data. However, the inductive bias of these models (formed by the distributional
hypothesis of language), while ideally suited to modeling most running text,
results in key limitations for today's models. In particular, the models often
struggle to learn certain spatial, temporal, or quantitative relationships,
which are commonplace in text and second nature to human readers. Yet, in
many cases, these relationships can be encoded with simple mathematical or
logical expressions. How can we augment today's neural models with such
encodings?
In this paper, we propose a general methodology to enhance the inductive bias
of NNLMs by incorporating simple functions into a neural architecture to form a
hierarchical neural-symbolic language model (NSLM). These functions explicitly
encode symbolic deterministic relationships to form probability distributions
over words. We explore the effectiveness of this approach on numbers and
geographic locations, and show that NSLMs significantly reduce perplexity in
small-corpus language modeling, and that the performance improvement persists
for rare tokens even on much larger corpora. The approach is simple and
general, and we discuss how it can be applied to other word classes beyond
numbers and geography.
Comment: Preprint of paper accepted for AAAI-2020
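The hierarchical scheme the abstract describes can be sketched in miniature: a (here hand-set, stand-in) neural component assigns probability to a token class, and a simple symbolic function supplies the within-class distribution deterministically. All names and the distance-decay heuristic below are illustrative assumptions, not the paper's actual implementation.

```python
import math

def class_probs(context):
    """Stand-in for a neural model's class distribution over {NUM, WORD}.
    Hand-set here: after the word "costs", a number is likely."""
    if context and context[-1] == "costs":
        return {"NUM": 0.8, "WORD": 0.2}
    return {"NUM": 0.1, "WORD": 0.9}

def symbolic_num_probs(context, vocab_nums):
    """Symbolic within-class distribution: a deterministic (not learned)
    rule that prefers numbers close to the last number seen in context."""
    last = next((t for t in reversed(context)
                 if isinstance(t, (int, float))), None)
    if last is None:
        # No numeric anchor in context: fall back to a uniform distribution.
        return {n: 1.0 / len(vocab_nums) for n in vocab_nums}
    weights = {n: math.exp(-abs(n - last)) for n in vocab_nums}
    z = sum(weights.values())
    return {n: w / z for n, w in weights.items()}

def token_prob(token, context, vocab_nums, word_probs):
    """Hierarchical probability: p(class | context) * p(token | class, context)."""
    pc = class_probs(context)
    if isinstance(token, (int, float)):
        return pc["NUM"] * symbolic_num_probs(context, vocab_nums)[token]
    return pc["WORD"] * word_probs.get(token, 0.0)

vocab_nums = [1, 2, 3, 10, 100]
word_probs = {"dollars": 0.5, "euros": 0.5}
context = ["the", "first", "item", 2, "the", "second", "costs"]

# A number near the previously seen number 2 is assigned more mass than 100.
assert token_prob(3, context, vocab_nums, word_probs) > \
       token_prob(100, context, vocab_nums, word_probs)
```

The design point is the factorization: the neural part only has to learn *when* a class of token occurs, while the symbolic function encodes the deterministic in-class relationship, which is what the abstract credits for the perplexity reductions on numbers and locations.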
MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities
In this paper, we introduce the MLM (Multiple Languages and Modalities)
dataset - a new resource to train and evaluate multitask systems on samples in
multiple modalities and three languages. The generation process and inclusion
of semantic data provide a resource that further tests the ability of
multitask systems to learn relationships between entities. The dataset is
designed for researchers and developers who build applications that perform
multiple tasks on data encountered on the web and in digital archives. A second
version of MLM provides a geo-representative subset of the data with weighted
samples for countries of the European Union. We demonstrate the value of the
resource in developing novel applications in the digital humanities with a
motivating use case and specify a benchmark set of tasks to retrieve modalities
and locate entities in the dataset. Evaluation of baseline multitask and
single-task systems on the full and geo-representative versions of MLM
demonstrates the challenges of generalising on diverse data. In addition to
the digital humanities, we expect the resource to contribute to research in
multimodal representation learning, location estimation, and scene
understanding.