Equivariance with respect to geometric transformations in neural networks improves
data efficiency, parameter efficiency, and robustness to out-of-domain
perspective shifts. When equivariance is not designed into a neural network,
the network can still learn equivariant functions from the data. We quantify
this learned equivariance by proposing an improved measure of equivariance.
We find evidence for a correlation between learned translation equivariance and
validation accuracy on ImageNet. We therefore investigate what can increase the
learned equivariance in neural networks, and find that data augmentation,
reduced model capacity, and inductive bias in the form of convolutions all
induce higher learned equivariance.

Comment: Accepted at CVPR workshop L3D-IVU 202
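The abstract does not spell out the proposed measure. As a rough illustration only, a common way to quantify translation equivariance of a feature extractor f is to compare features of a shifted input, f(Tx), against shifted features of the original input, T f(x). The PyTorch sketch below follows this idea; the function name translation_equivariance_error, the cyclic-shift size, and the toy model are illustrative assumptions, not the paper's measure.

import torch
import torch.nn as nn

def translation_equivariance_error(f, x, shift=4):
    # f(T x): features of a cyclically shifted input
    fx_shifted_in = f(torch.roll(x, shifts=(shift, shift), dims=(-2, -1)))
    # T f(x): cyclically shifted features of the original input
    shifted_fx = torch.roll(f(x), shifts=(shift, shift), dims=(-2, -1))
    # Relative L2 discrepancy; 0 means f is exactly translation equivariant
    return ((fx_shifted_in - shifted_fx).norm() / shifted_fx.norm()).item()

if __name__ == "__main__":
    # Hypothetical toy model: stride-1 convolutions with circular padding
    # commute with cyclic shifts, so the error should be near zero.
    f = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1, padding_mode="circular"),
        nn.ReLU(),
        nn.Conv2d(8, 8, 3, padding=1, padding_mode="circular"),
    )
    x = torch.randn(1, 3, 32, 32)
    print(translation_equivariance_error(f, x))

For this toy model the printed error is close to machine precision; components such as pooling, strided convolutions, or zero padding break exact equivariance and yield larger values, which is why a graded measure of this kind is informative.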