Conventional scaling of neural networks typically involves designing a base
network and growing dimensions such as width and depth by predefined scaling
factors. We introduce an automated scaling approach
leveraging second-order loss landscape information. Our method is flexible
towards skip connections, a mainstay in modern vision transformers. Our
training-aware method jointly scales and trains transformers without additional
training iterations. Motivated by the hypothesis that not all neurons need
uniform depth complexity, our approach embraces depth heterogeneity. Extensive
evaluations on DeiT-S with ImageNet100 show a 2.5% accuracy gain and a 10%
improvement in parameter efficiency over conventional scaling. Scaled networks
demonstrate superior performance when trained from scratch on small-scale
datasets. We introduce the first intact scaling mechanism for vision
transformers, a step towards efficient model scaling.

Comment: Accepted at ICLR 2024 (Tiny Paper Track)
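The abstract only sketches how second-order loss landscape information could drive the scaling decision. As a purely illustrative aid, and not the authors' implementation, the snippet below shows one common way such curvature signals are obtained in PyTorch: a Hutchinson estimate of the Hessian trace per transformer block, which could be used to rank blocks as candidates for growth. The names `rank_blocks_by_curvature`, `block_hessian_trace`, and the `model.blocks`/`criterion` interface are assumptions made for the example.

```python
import torch

def block_hessian_trace(loss, params, n_samples=8):
    """Hutchinson estimate of tr(H) for one block's parameters (illustrative)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    trace = 0.0
    for _ in range(n_samples):
        # Rademacher probe vectors (+1/-1) matching each parameter tensor.
        vs = [torch.randint(0, 2, p.shape, device=p.device).to(p.dtype) * 2 - 1
              for p in params]
        # Hessian-vector products via a second backward pass.
        hv = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        trace += sum((h * v).sum() for h, v in zip(hv, vs)).item()
    return trace / n_samples

def rank_blocks_by_curvature(model, criterion, x, y):
    """Rank transformer blocks by estimated loss curvature (highest first)."""
    loss = criterion(model(x), y)
    scores = []
    for i, blk in enumerate(model.blocks):  # assumes a DeiT/timm-style `blocks` list
        params = [p for p in blk.parameters() if p.requires_grad]
        scores.append((i, block_hessian_trace(loss, params)))
    return sorted(scores, key=lambda s: s[1], reverse=True)
```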