Meta-learning has emerged as a successful approach for improving learning
performance by training over many similar tasks, especially with deep neural
networks (DNNs). However, the theoretical understanding of when and why
overparameterized models such as DNNs can generalize well in meta-learning is
still limited. As an initial step towards addressing this challenge, this paper
studies the generalization performance of overfitted meta-learning under a
linear regression model with Gaussian features. In contrast to a few recent
studies along the same line, our framework allows the number of model
parameters to be arbitrarily larger than the number of features in the ground
truth signal, and hence naturally captures the overparameterized regime in
practical deep meta-learning. We show that the overfitted min β2β-norm
solution of model-agnostic meta-learning (MAML) can be beneficial, which is
similar to the recent remarkable findings on ``benign overfitting'' and
``double descent'' phenomenon in the classical (single-task) linear regression.
However, due to features unique to meta-learning, such as the task-specific
inner-loop gradient descent training and the diversity/fluctuation of the
ground-truth signals across training tasks, we find new and interesting
properties that do not arise in single-task linear regression. We first provide
a high-probability upper bound (with reasonable tightness) on the
generalization error, where
certain terms decrease when the number of features increases. Our analysis
suggests that benign overfitting is more significant and easier to observe when
the noise and the diversity/fluctuation of the ground-truth signals of the
training tasks are large. In this regime, we show that the overfitted min
$\ell_2$-norm solution can achieve an even lower generalization error than the
underparameterized solution.
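To make the notion of an overfitted min $\ell_2$-norm solution concrete, the following is a minimal sketch in the classical single-task linear model; the symbols $X$, $y$, $\theta$, $n$, and $p$ are illustrative rather than the paper's notation, and the MAML solution studied here would instead interpolate the meta-objective obtained after the task-specific inner gradient step.
\[
\hat{\theta} \;=\; \arg\min_{\theta \in \mathbb{R}^p} \|\theta\|_2
\quad \text{subject to} \quad X\theta = y,
\qquad\text{with closed form}\qquad
\hat{\theta} \;=\; X^\top \left(X X^\top\right)^{-1} y \quad (p > n),
\]
where $X \in \mathbb{R}^{n \times p}$ is the feature matrix and $y \in \mathbb{R}^n$ the noisy responses. When $p > n$ there are infinitely many interpolating solutions, and gradient descent initialized at zero converges to this minimum-norm one, which is why it serves as the canonical overfitted solution in analyses of benign overfitting.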