Despite their tremendous success, existing machine learning models still fall
short of human-like systematic generalization: learning compositional rules
from limited data and applying them to unseen combinations across domains.
We propose the Neural-Symbolic Recursive Machine (NSR) to address this deficiency.
The core representation of NSR is a Grounded Symbol System (GSS) with
combinatorial syntax and semantics, which emerges entirely from the training data.
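As a rough illustration (a minimal sketch, not NSR's actual implementation), a GSS can be pictured as a tree of symbols, each grounded in a perceptual input and paired with an executable semantic function; the names below, such as GroundedSymbol and GSSNode, are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class GroundedSymbol:
    """A symbol grounded in raw perceptual input (e.g., an image patch)."""
    percept: Any                    # raw input the symbol is grounded in
    name: str                       # discrete symbol, e.g. "3" or "+"
    semantics: Callable[..., Any]   # meaning as an executable function

@dataclass
class GSSNode:
    """A node in the GSS: syntax is the tree shape, semantics is
    recursive evaluation over that tree."""
    symbol: GroundedSymbol
    children: List["GSSNode"] = field(default_factory=list)

    def evaluate(self) -> Any:
        args = [child.evaluate() for child in self.children]
        return self.symbol.semantics(*args)

# Example: the arithmetic expression "2 + 3" as a grounded symbol tree.
plus = GroundedSymbol("img_plus", "+", lambda a, b: a + b)
two = GroundedSymbol("img_2", "2", lambda: 2)
three = GroundedSymbol("img_3", "3", lambda: 3)
tree = GSSNode(plus, [GSSNode(two), GSSNode(three)])
print(tree.evaluate())  # -> 5
```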
Akin to neuroscience findings that suggest separate brain systems for
perceptual, syntactic, and semantic processing, NSR implements analogous
modules for neural perception, syntactic parsing, and semantic
reasoning, which are jointly trained via a deduction-abduction algorithm.
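The following is a minimal, self-contained sketch of how deduction (a greedy forward pass through the three modules) and abduction (searching for a revised symbol sequence consistent with the supervised answer) might interact; the stand-ins perceive, parse, reason, abduce, and lexicon are hypothetical toys, not the learned neural components:

```python
from typing import Any, List, Optional

Symbol = str
Tree = Any  # (op, left, right) tuples with Symbol leaves

def perceive(tokens: List[str], lexicon: dict) -> List[Symbol]:
    """Neural-perception stand-in: ground raw tokens into symbols."""
    return [lexicon.get(t, t) for t in tokens]

def parse(symbols: List[Symbol]) -> Tree:
    """Syntactic-parsing stand-in: build a left-branching binary tree."""
    tree: Tree = symbols[0]
    for op, arg in zip(symbols[1::2], symbols[2::2]):
        tree = (op, tree, arg)
    return tree

def reason(tree: Tree) -> int:
    """Semantic-reasoning stand-in: recursively evaluate the tree."""
    if isinstance(tree, tuple):
        op, left, right = tree
        a, b = reason(left), reason(right)
        return a + b if op == "+" else a * b
    return int(tree)

def abduce(symbols: List[Symbol], y: int) -> Optional[List[Symbol]]:
    """Abduction stand-in: revise one operator symbol at a time until
    the deduced answer matches the supervised label y."""
    for i in range(1, len(symbols), 2):          # operator positions
        for op in ("+", "*"):
            candidate = symbols[:i] + [op] + symbols[i + 1:]
            if reason(parse(candidate)) == y:
                return candidate
    return None

# Deduction with a noisy lexicon ("plus" mis-grounded as "*"),
# then abduction to recover a grounding consistent with the label.
lexicon = {"two": "2", "three": "3", "plus": "*"}
tokens, y = ["two", "plus", "three"], 5
symbols = perceive(tokens, lexicon)
if reason(parse(symbols)) != y:
    symbols = abduce(symbols, y) or symbols
print(symbols, "->", reason(parse(symbols)))     # ['2', '+', '3'] -> 5
```

In such a scheme, the abduced symbols would serve as pseudo-labels for retraining each module; the paper's actual search and training procedures are more involved.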
We prove that NSR is expressive enough to model various sequence-to-sequence
tasks. NSR attains superior systematic generalization through the inductive
biases of equivariance and recursiveness embedded in its design.
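Concretely (a toy illustration, not the paper's formalism), equivariance requires that permuting the symbols of an input permute the output accordingly, f(sigma(x)) = sigma(f(x)), while recursiveness requires the meaning of an expression to be composed from the meanings of its parts:

```python
# Hypothetical check of the equivariance property f(sigma(x)) == sigma(f(x))
# on a toy task (sequence reversal); none of this is NSR's actual code.
def f(seq):                                # toy seq2seq task: reversal
    return seq[::-1]

sigma = {"a": "b", "b": "c", "c": "a"}     # a permutation of the symbols
permute = lambda s, seq: [s[t] for t in seq]

x = ["a", "b", "c"]
assert f(permute(sigma, x)) == permute(sigma, f(x))  # equivariance holds
```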
In experiments, NSR achieves state-of-the-art performance on three benchmarks from different domains: SCAN
for semantic parsing, PCFG for string manipulation, and HINT for arithmetic
reasoning. Specifically, NSR achieves 100% generalization accuracy on SCAN and
PCFG and outperforms state-of-the-art models on HINT by about 23%. NSR
demonstrates stronger generalization than pure neural networks thanks to its
symbolic representation and inductive biases, and better transferability than
existing neural-symbolic approaches because it requires less domain-specific
knowledge.