Spectral-Bias and Kernel-Task Alignment in Physically Informed Neural Networks
Physically informed neural networks (PINNs) are a promising emerging method
for solving differential equations. As in many other deep learning approaches,
the choice of PINN design and training protocol requires careful craftsmanship.
Here, we suggest a comprehensive theoretical framework that sheds light on this
important problem. Leveraging an equivalence between infinitely
over-parameterized neural networks and Gaussian process regression (GPR), we
derive an integro-differential equation that governs PINN prediction in the
large data-set limit: the Neurally-Informed Equation (NIE). This equation
augments the original one with a kernel term reflecting architecture choices
and allows the implicit bias induced by the network to be quantified via a
spectral decomposition of the source term in the original differential
equation.
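As background for the infinite-width equivalence this abstract leverages (the
NIE itself is not reproduced in the abstract, and the notation below is ours,
not the paper's): the Gaussian process regression posterior mean, with
training inputs X, targets y, observation noise variance \sigma^2, and the
network-induced (NNGP/NTK-style) kernel K, reads

    \hat{f}(x_*) = k(x_*, X)\,\bigl[K(X, X) + \sigma^2 I\bigr]^{-1}\, y ,

where k(x_*, X) collects the kernel evaluations between a test point x_* and
the training inputs. Spectral bias then refers to how the eigendecomposition
of K filters components of the target; the specific kernel term the NIE adds
to the differential equation is given only in the paper itself.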
Self-supervised debiasing using low rank regularization
Spurious correlations can cause strong biases in deep neural networks,
impairing generalization ability. While most existing debiasing methods require
full supervision on either spurious attributes or target labels, training a
debiased model with limited amounts of both annotations remains an open
question. To address this issue, we investigate an intriguing phenomenon via
spectral analysis of latent representations: spuriously correlated attributes
make neural networks inductively biased towards encoding lower effective-rank
representations. We also show that rank regularization can
amplify this bias in a way that encourages highly correlated features.
Leveraging these findings, we propose a self-supervised debiasing framework
potentially compatible with unlabeled samples. Specifically, we first pretrain
a biased encoder in a self-supervised manner with the rank regularization,
which serves as a semantic bottleneck forcing the encoder to learn the
spuriously correlated attributes. This biased encoder is then used to discover
and upweight bias-conflicting samples in a downstream task, acting as a
boosting signal that effectively debiases the main model. Remarkably, the
proposed debiasing
framework significantly improves the generalization performance of
self-supervised learning baselines and, in some cases, even outperforms
state-of-the-art supervised debiasing approaches.
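The abstract does not spell out how the spectral analysis or the exact rank
regularizer is implemented, so the sketch below only illustrates one standard
effective-rank measure (the spectral-entropy definition of Roy and Vetterli,
2007) that such an analysis could be based on; the function name and the toy
data are illustrative assumptions, not the paper's code.

    import numpy as np

    def effective_rank(Z, eps=1e-12):
        # Z: (n_samples, n_features) matrix of latent representations.
        # Spectral-entropy effective rank (Roy & Vetterli, 2007): the
        # exponentiated Shannon entropy of the normalized singular values.
        Z = Z - Z.mean(axis=0, keepdims=True)   # center the features
        s = np.linalg.svd(Z, compute_uv=False)  # singular values
        p = s / (s.sum() + eps)                 # normalized spectrum
        return float(np.exp(-np.sum(p * np.log(p + eps))))

    # Toy check: representations dominated by a single direction (think of a
    # strong spuriously correlated attribute) should score a lower effective
    # rank than representations spread over many directions.
    rng = np.random.default_rng(0)
    rank_one = rng.normal(size=(512, 1)) @ rng.normal(size=(1, 64))
    diverse = rng.normal(size=(512, 64))
    print(effective_rank(rank_one + 0.1 * diverse))  # low effective rank
    print(effective_rank(diverse))                   # much higher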