Binwise Variance Scaling (BVS) has recently been proposed as a post hoc
recalibration method for the prediction uncertainties of machine learning
regression problems, capable of more efficient corrections than uniform
variance (or temperature) scaling. The original version of BVS uses
uncertainty-based binning, which aims to improve calibration conditional
on uncertainty, i.e. consistency. I explore here several adaptations of BVS, in
particular with alternative loss functions and a binning scheme based on an
input feature (X), in order to improve adaptivity, i.e. calibration conditional
on X. The performance of BVS and its proposed variants is tested on a
benchmark dataset for the prediction of atomization energies and compared to
the results of isotonic regression.

Comment: This version corrects an error in the estimation of the Sx scores for
the test set, affecting Fig. 2 and Tables I-III of the initial version. The
main points of the discussion and the conclusions are unchanged.
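The core idea of binwise variance scaling, as described above, can be sketched in a few lines: partition the data into bins by predicted uncertainty and fit one multiplicative scaling factor per bin. The sketch below is a minimal illustration, not the paper's implementation; the function names, the use of quantile bins, and the choice of the RMS z-score as the per-bin factor are assumptions made here for concreteness.

```python
import numpy as np

def fit_bvs(sigma, err, n_bins=10):
    """Fit per-bin variance scaling factors (illustrative sketch).

    sigma : predicted uncertainties (standard deviations)
    err   : observed prediction errors on a calibration set
    Bins are equal-count quantile bins over sigma; each bin's factor is
    the RMS z-score, so rescaled z-scores have unit variance in-bin.
    """
    # Inner bin edges from quantiles of the predicted uncertainty.
    inner = np.quantile(sigma, np.linspace(0.0, 1.0, n_bins + 1))[1:-1]
    idx = np.digitize(sigma, inner)          # bin index in 0..n_bins-1
    scales = np.ones(n_bins)
    for k in range(n_bins):
        m = idx == k
        if m.any():
            scales[k] = np.sqrt(np.mean((err[m] / sigma[m]) ** 2))
    return inner, scales

def apply_bvs(sigma, inner, scales):
    """Rescale predicted uncertainties with the fitted per-bin factors."""
    return sigma * scales[np.digitize(sigma, inner)]
```

With n_bins = 1 this reduces to uniform variance scaling; larger n_bins lets the correction vary with the predicted uncertainty, which is what gives BVS its conditional (consistency-oriented) character. A binning scheme over an input feature X instead of sigma would only change how `idx` is computed.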