Meta-learning is critical for a variety of practical ML systems, such as
personalized recommendation systems, that must generalize to new tasks despite
having only a small number of task-specific training points. Existing
meta-learning techniques follow one of two complementary approaches: learning a
low-dimensional representation of points shared across all tasks, or
task-specific fine-tuning of a global model trained on all tasks. In this work,
we propose a novel meta-learning framework that combines both techniques to
handle a large number of data-starved tasks. Our framework models network
weights as a sum of low-rank and sparse matrices. This allows us to capture
shared information from multiple domains in the low-rank part while still
allowing task-specific personalization via the sparse part.
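To make the decomposition concrete, here is a minimal sketch of the weight model; the dimensions and variable names are illustrative assumptions, not the paper's notation:

```python
import numpy as np

# Minimal sketch of the low-rank-plus-sparse weight model: each task t gets
# weights w_t = U @ v_t + s_t, where U is a shared low-rank factor and s_t
# is a k-sparse task-specific correction. Dimensions are illustrative.
d, r, k = 100, 5, 10
rng = np.random.default_rng(0)

U = rng.standard_normal((d, r))    # shared across all tasks (low-rank part)
v_t = rng.standard_normal(r)       # task-specific low-dimensional coefficients
s_t = np.zeros(d)                  # task-specific sparse part
s_t[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)

w_t = U @ v_t + s_t                # effective weights for task t
x = rng.standard_normal(d)
y_pred = w_t @ x                   # linear prediction for task t
```

Stacking the per-task weights w_t as columns yields a matrix that decomposes into a low-rank part plus a sparse part, which is the structure estimated in the linear instantiation below.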
We instantiate and study the framework in the linear setting, where the
problem reduces to estimating the sum of a rank-r matrix and a k-column-sparse
matrix from a small number of linear measurements. We propose an alternating
minimization method with hard thresholding, AMHT-LRS, to learn the low-rank
and sparse parts effectively and efficiently.
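The following is a minimal sketch of this alternating structure, assuming least-squares updates for the low-rank factors and iterative hard thresholding for the sparse parts; the initialization, step sizes, and update order are our assumptions, and the paper's exact AMHT-LRS procedure and guarantees are given in the main text:

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def amht_lrs_sketch(Xs, ys, r, k, n_iters=25):
    """Illustrative alternating-minimization loop (not the paper's exact
    algorithm). Xs[t] is the (m, d) design matrix for task t and ys[t] its
    (m,) responses; the model is y_t ~= X_t @ (U @ v_t + s_t)."""
    T, d = len(Xs), Xs[0].shape[1]
    rng = np.random.default_rng(0)
    U = np.linalg.qr(rng.standard_normal((d, r)))[0]  # orthonormal init
    vs = [np.zeros(r) for _ in range(T)]
    ss = [np.zeros(d) for _ in range(T)]
    for _ in range(n_iters):
        for t in range(T):
            # v_t: least squares against the residual left by the sparse part
            A = Xs[t] @ U
            vs[t] = np.linalg.lstsq(A, ys[t] - Xs[t] @ ss[t], rcond=None)[0]
            # s_t: one gradient step on the residual, then hard thresholding
            res = ys[t] - Xs[t] @ (U @ vs[t] + ss[t])
            ss[t] = hard_threshold(ss[t] + Xs[t].T @ res / len(ys[t]), k)
        # U: stacked least squares over all tasks, using the identity
        # X_t @ U @ v_t = kron(v_t, X_t) @ u, where u column-stacks U
        M = np.vstack([np.kron(vs[t], Xs[t]) for t in range(T)])
        b = np.concatenate([ys[t] - Xs[t] @ ss[t] for t in range(T)])
        U = np.linalg.lstsq(M, b, rcond=None)[0].reshape(r, d).T
    return U, vs, ss
```

Hard thresholding after each sparse update keeps every s_t exactly k-sparse throughout, which is what makes the per-task personalization cheap even when each task has few samples.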
For the realizable Gaussian-data setting, we show that AMHT-LRS solves the
problem efficiently with a nearly optimal number of samples. We further extend
AMHT-LRS to preserve the privacy of each individual user in the dataset while
still guaranteeing strong generalization with nearly optimal sample complexity.
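The abstract does not spell out the privacy mechanism; one standard way to make such iterative updates user-level differentially private, shown here purely as an assumed sketch, is to clip each user's contribution to the shared update and add calibrated Gaussian noise (clip_norm and noise_mult are hypothetical parameters):

```python
import numpy as np

def private_aggregate(per_user_updates, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Assumed sketch of a Gaussian-mechanism aggregation, not the paper's
    exact procedure: clip each user's update to bound its sensitivity,
    average the clipped updates, and add calibrated Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_user_updates]
    avg = np.mean(clipped, axis=0)
    # noise scale proportional to the per-user sensitivity of the average
    sigma = noise_mult * clip_norm / len(per_user_updates)
    return avg + rng.normal(scale=sigma, size=avg.shape)
```

Under these assumptions, one natural design choice is to privatize only the updates to the shared low-rank part, since each user's sparse part is fit locally on that user's own data.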
Finally, on multiple datasets, we demonstrate that our framework enables
personalized models to achieve superior performance in the data-scarce regime.