Studies have shown that modern neural networks tend to be poorly calibrated
due to over-confident predictions. Traditionally, post-processing methods have
been used to calibrate the model after training. In recent years, various
trainable calibration measures have been proposed that incorporate calibration
directly into the training process. However, these methods all involve internal
hyperparameters, and the performance of these calibration objectives depends on
tuning them, incurring additional computational cost as neural networks and
datasets grow larger. As such, we present Expected
Squared Difference (ESD), a tuning-free (i.e., hyperparameter-free) trainable
calibration objective, where we view the calibration error as the squared
difference between two expectations.
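As a rough illustration of what a squared-difference calibration objective can look like (our own sketch; the function name esd_style_penalty and its exact form are assumptions, not the paper's estimator), a batch-level penalty could compare the mean confidence against the mean accuracy:

```python
import torch
import torch.nn.functional as F

def esd_style_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Illustrative squared-difference calibration penalty (hypothetical).

    Estimates (E[confidence] - E[accuracy])^2 on a mini-batch; this is a
    stand-in sketch, not the exact ESD objective from the paper.
    """
    probs = F.softmax(logits, dim=-1)
    conf, preds = probs.max(dim=-1)         # top-1 confidence and predicted class
    acc = (preds == labels).float()         # 1.0 where the prediction is correct
    return (conf.mean() - acc.mean()) ** 2  # squared gap between the two means

# Usage (assumed training loop): add the penalty to the task loss.
# loss = F.cross_entropy(logits, labels) + esd_style_penalty(logits, labels)
```

Being a single scalar per batch with no bins or temperature, a penalty of this shape has no internal hyperparameters to tune, which is the property the abstract emphasizes.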
With extensive experiments on several architectures (CNNs, Transformers) and
datasets, we demonstrate that (1) incorporating ESD into training improves
model calibration in various batch size settings without the need for internal
hyperparameter tuning, (2) ESD yields the best-calibrated results compared with
previous approaches, and (3) ESD drastically reduces the computational cost
required for calibration during training due to the absence of internal
hyperparameters. The code is publicly accessible at
https://github.com/hee-suk-yoon/ESD.