Recent works have suggested that finite Bayesian neural networks may
sometimes outperform their infinite cousins because finite networks can
flexibly adapt their internal representations. However, our theoretical
understanding of how the learned hidden layer representations of finite
networks differ from the fixed representations of infinite networks remains
incomplete. Perturbative finite-width corrections to the network prior and
posterior have been studied, but the asymptotics of learned features have not
been fully characterized. Here, we argue that the leading finite-width
corrections to the average feature kernels for any Bayesian network with linear
readout and Gaussian likelihood have a largely universal form. We illustrate
this explicitly for three tractable network architectures: deep linear
fully-connected and convolutional networks, and networks with a single
nonlinear hidden layer. Our results begin to elucidate how task-relevant
learning signals shape the hidden layer representations of wide Bayesian neural
networks.
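As a concrete illustration of the central quantity, here is a minimal sketch (not from the paper; the architecture, widths, depth, and prior scale are illustrative assumptions) that Monte Carlo estimates the average hidden-layer feature kernel of a deep linear network under a Gaussian weight prior. For a linear network the prior-averaged kernel coincides with its infinite-width limit at any width; the paper's claim concerns the posterior average, which acquires O(1/width) task-dependent corrections.

```python
# Minimal sketch: Monte Carlo estimate of the prior-averaged hidden-layer
# feature kernel <K> = <(1/N) Phi Phi^T> for a deep linear network with
# i.i.d. Gaussian weight priors. All sizes below are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)

P, D, N, L = 8, 16, 512, 3        # examples, input dim, hidden width, depth
X = rng.standard_normal((P, D))   # toy inputs

def hidden_features(X, rng):
    """Propagate inputs through L linear layers with N(0, 1/fan_in) weights."""
    h = X
    for _ in range(L):
        W = rng.standard_normal((h.shape[1], N)) / np.sqrt(h.shape[1])
        h = h @ W
    return h

# Average the empirical feature kernel over draws from the weight prior.
n_samples = 2000
K_avg = np.zeros((P, P))
for _ in range(n_samples):
    Phi = hidden_features(X, rng)
    K_avg += Phi @ Phi.T / N
K_avg /= n_samples

# For a deep linear network the prior mean of the kernel equals the
# infinite-width (NNGP) kernel, here the input Gram matrix X X^T / D, so the
# residual below is pure Monte Carlo noise. The finite-width corrections the
# paper characterizes appear in the *posterior* average of this kernel once
# a linear readout and Gaussian likelihood are conditioned on training data.
K_inf = X @ X.T / D
print("max |K_avg - K_inf| =", np.abs(K_avg - K_inf).max())
```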