6 research outputs found
Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics
A recent strategy to circumvent the exploding and vanishing gradient problem
in RNNs, and to allow the stable propagation of signals over long time scales,
is to constrain recurrent connectivity matrices to be orthogonal or unitary.
This ensures eigenvalues with unit norm and thus stable dynamics and training.
However, this comes at the cost of reduced expressivity due to the limited
variety of orthogonal transformations. We propose a novel connectivity
structure based on the Schur decomposition and a splitting of the Schur form
into normal and non-normal parts. This makes it possible to parametrize
matrices with unit-norm eigenspectra without imposing orthogonality
constraints on their eigenbases. The
resulting architecture ensures access to a larger space of spectrally
constrained matrices, of which orthogonal matrices are a subset. This crucial
difference retains the stability advantages and training speed of orthogonal
RNNs while enhancing expressivity, especially on tasks that require
computations over ongoing input sequences.
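The parametrization described above can be sketched numerically: build a recurrent matrix from an orthogonal basis, a diagonal of unit-modulus eigenvalues (the normal part), and a strictly upper-triangular matrix (the non-normal part). This is a minimal illustration of the Schur-based construction, not the paper's training procedure; the choice of random factors and of real eigenvalues ±1 is an assumption made here for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Orthogonal eigenbasis Q (QR of a random matrix) -- in the nnRNN these
# factors are learned parameters; here they are random for illustration.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Normal part: diagonal of unit-modulus eigenvalues (+/-1 keeps things real).
Lam = np.diag(rng.choice([-1.0, 1.0], size=n))

# Non-normal part: strictly upper-triangular T, driving transient dynamics.
T = np.triu(rng.standard_normal((n, n)), k=1)

# Recurrent matrix: similar to the upper-triangular Lam + T, so its
# eigenvalues are the diagonal of Lam -- all on the unit circle -- even
# though W itself is generally not orthogonal.
W = Q @ (Lam + T) @ Q.T

print(np.allclose(np.abs(np.linalg.eigvals(W)), 1.0))  # True
```

Setting `T = 0` recovers a normal matrix with orthogonal eigenbasis; a nonzero `T` enlarges the family of reachable matrices while keeping the spectral constraint that stabilizes gradients.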
On Neural Architecture Inductive Biases for Relational Tasks
Current deep learning approaches have shown good in-distribution
generalization performance, but struggle with out-of-distribution
generalization. This is especially true in the case of tasks involving abstract
relations like recognizing rules in sequences, as we find in many intelligence
tests. Recent work has explored how forcing relational representations to
remain distinct from sensory representations, as appears to be the case in the
brain, can help artificial systems. Building on this work, we further explore
and formalize the advantages afforded by 'partitioned' representations of
relations and sensory details, and how this inductive bias can help recompose
learned relational structure in newly encountered settings. We introduce a
simple architecture based on similarity scores which we name Compositional
Relational Network (CoRelNet). Using this model, we investigate a series of
inductive biases that ensure abstract relations are learned and represented
distinctly from sensory data, and explore their effects on out-of-distribution
generalization for a series of relational psychophysics tasks. We find that
simple architectural choices can outperform existing models in
out-of-distribution generalization. Together, these results show that
partitioning relational representations from other information streams may be a
simple way to augment existing network architectures' robustness when
performing out-of-distribution relational computations.
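The core idea of a similarity-score architecture can be sketched as follows: relations between objects are represented only through the pairwise similarity matrix of their embeddings, kept partitioned from the sensory features themselves. This is a hypothetical, simplified illustration of that inductive bias, assuming cosine similarity as the score; it is not the full CoRelNet model.

```python
import numpy as np

def relation_matrix(objects):
    """Pairwise cosine-similarity matrix over object embeddings.

    The downstream task head would see only this matrix, never the raw
    sensory features -- the 'partitioned' representation of relations.
    """
    z = objects / np.linalg.norm(objects, axis=1, keepdims=True)
    return z @ z.T  # R[i, j] = similarity of object i and object j

# Two identical objects and one distinct object: "same vs. different"
# is readable directly from the relation matrix, independent of the
# sensory details of each object.
objs = np.array([[1.0, 0.0],
                 [1.0, 0.0],
                 [0.0, 1.0]])
R = relation_matrix(objs)
print(R[0, 1], R[0, 2])  # 1.0 0.0
```

Because the relation matrix is invariant to which particular sensory features instantiate the objects, a rule learned on one set of objects can transfer to novel ones, which is the out-of-distribution advantage the abstract describes.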