Recent work has revealed many intriguing empirical phenomena in neural network training, despite the fact that its loss landscapes and training dynamics remain highly complex and poorly understood. One of these phenomena, Linear Mode Connectivity (LMC), has gained considerable attention due to the striking observation that different solutions can be connected by a linear path in parameter space while maintaining near-constant training and test losses.
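Concretely, writing $\theta_A, \theta_B$ for the parameters of two trained networks and $\mathcal{L}$ for the training (or test) loss, in notation of our own choosing since the abstract fixes none, LMC states that for all $\alpha \in [0,1]$,
\[
\mathcal{L}\big((1-\alpha)\,\theta_A + \alpha\,\theta_B\big) \approx (1-\alpha)\,\mathcal{L}(\theta_A) + \alpha\,\mathcal{L}(\theta_B),
\]
so that when $\mathcal{L}(\theta_A) \approx \mathcal{L}(\theta_B)$, the loss is near-constant along the entire segment between the two solutions.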
In this work, we introduce a stronger notion of linear connectivity, Layerwise Linear Feature Connectivity (LLFC), which requires that the feature maps of every layer in different trained networks also be linearly connected.
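In the same (our own) notation, with $f^{(\ell)}(\theta; x)$ denoting the layer-$\ell$ feature map of the network with parameters $\theta$ on input $x$, LLFC states that for every layer $\ell$, every input $x$, and all $\alpha \in [0,1]$,
\[
f^{(\ell)}\big((1-\alpha)\,\theta_A + \alpha\,\theta_B;\, x\big) \propto (1-\alpha)\, f^{(\ell)}(\theta_A; x) + \alpha\, f^{(\ell)}(\theta_B; x),
\]
i.e., the interpolated network's internal features agree, up to a positive layerwise rescaling, with the interpolation of the two networks' features. Taking $\ell$ to be the output layer makes the connection to LMC apparent, which is the sense in which LLFC is the stronger notion.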
We provide comprehensive empirical evidence for LLFC across a wide range of settings, demonstrating that whenever two trained networks satisfy LMC (via either the spawning or permutation method), they also satisfy LLFC in nearly all the layers.
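As a concrete illustration of what such a test could look like, here is a minimal PyTorch sketch (all function names and the hook-based feature extraction are our own choices for illustration, not the paper's released code); it interpolates the weights of two identically structured models and compares, layer by layer, the interpolated model's features against the interpolation of the two endpoint models' features:

# Minimal sketch (PyTorch; all names are ours) of one way to test LLFC.
# Assumes model_a and model_b share the same architecture.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def interpolate_params(model_a, model_b, alpha):
    # Weights of the mixed model: (1 - alpha) * A + alpha * B.
    model_mix = copy.deepcopy(model_a)
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    model_mix.load_state_dict(
        {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a})
    return model_mix

def layer_features(model, x):
    # Collect post-activation feature maps with forward hooks.
    feats, hooks = [], []
    for m in model.modules():
        if isinstance(m, nn.ReLU):
            hooks.append(m.register_forward_hook(
                lambda mod, inp, out: feats.append(out.detach())))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return feats

def llfc_cosines(model_a, model_b, x, alpha=0.5):
    # Per-layer cosine similarity between the mixed model's features and
    # the interpolation of the endpoint features; values near 1.0 are
    # consistent with LLFC, which permits a positive scaling factor.
    f_mix = layer_features(interpolate_params(model_a, model_b, alpha), x)
    f_a, f_b = layer_features(model_a, x), layer_features(model_b, x)
    return [F.cosine_similarity(
                m.flatten(), ((1 - alpha) * a + alpha * b).flatten(),
                dim=0).item()
            for m, a, b in zip(f_mix, f_a, f_b)]

For two networks satisfying LMC, per-layer values near 1.0 across a range of $\alpha$ would mirror the finding reported here.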
Furthermore, we delve deeper into the underlying factors contributing to LLFC, revealing new insights into the spawning and permutation approaches. The study of LLFC transcends and advances our understanding of LMC
by adopting a feature-learning perspective.