The estimation of the generalization error of classifiers often relies on a
validation set. Such a set is rarely available in few-shot learning scenarios,
a shortcoming that is largely overlooked in the field. In these scenarios, it is common
to rely on features extracted from pre-trained neural networks combined with
distance-based classifiers such as the nearest class mean classifier. In this work, we
introduce a Gaussian model of the feature distribution. By estimating the
parameters of this model, we are able to predict the generalization error on
new classification tasks with few samples. We observe that accurate estimates
of the distances between class-conditional densities are the key to accurately
predicting generalization performance. Therefore, we propose an unbiased estimator
for these distances and integrate it into our numerical analysis. We show that
our approach outperforms alternatives such as the leave-one-out
cross-validation strategy in few-shot settings.
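To illustrate why an unbiased distance estimator matters, the sketch below debiases the squared Euclidean distance between two class means under a Gaussian model: the naive plug-in estimate based on sample means is biased upward by tr(Σ_a)/n_a + tr(Σ_b)/n_b, which dominates when only a few shots per class are available. The function name and the exact correction are illustrative assumptions for this sketch, not necessarily the paper's estimator.

```python
import numpy as np

def unbiased_squared_distance(feats_a, feats_b):
    """Debiased estimate of ||mu_a - mu_b||^2 from few samples per class.

    Assumes the features of each class are i.i.d. draws from a Gaussian.
    E[||mean_a - mean_b||^2] = ||mu_a - mu_b||^2 + tr(Sigma_a)/n_a + tr(Sigma_b)/n_b,
    so we subtract sample estimates of the two trace terms.
    """
    n_a, n_b = len(feats_a), len(feats_b)
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    # Per-dimension variances with the unbiased (n-1) denominator;
    # their sum is an unbiased estimate of tr(Sigma).
    tr_a = feats_a.var(axis=0, ddof=1).sum()
    tr_b = feats_b.var(axis=0, ddof=1).sum()
    naive = np.sum((mu_a - mu_b) ** 2)
    return naive - tr_a / n_a - tr_b / n_b

# Toy 5-shot example: true squared distance is 64 * 0.5**2 = 16,
# while the naive bias alone would be 64/5 + 64/5 = 25.6.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(5, 64))
b = rng.normal(0.5, 1.0, size=(5, 64))
print(unbiased_squared_distance(a, b))
```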