We introduce the framework of "social learning" in the context of large
language models (LLMs), whereby models share knowledge with each other in a
privacy-aware manner using natural language. We present and evaluate two
approaches for knowledge transfer between LLMs. In the first approach, we allow
the model to generate abstract prompts that aim to teach the task. In the
second, models transfer knowledge by generating synthetic examples. We
evaluate these methods across diverse datasets and quantify memorization as a
proxy for privacy loss. These techniques, inspired by social learning, yield
promising results with low memorization of the original data. In particular, we
show that performance with these methods is comparable to that achieved with the
original labels and prompts. Our work demonstrates the viability of social
learning for LLMs, establishes baseline approaches, and highlights several
unexplored areas for future work.
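To make the two knowledge-transfer approaches concrete, the following is a minimal sketch of how a "teacher" model might share a task with a "student" model either by writing an abstract instruction or by generating synthetic examples. The `generate` callable, the model handles, and the prompt wording are all illustrative assumptions, not the paper's actual implementation.

```python
def transfer_by_instruction(teacher_model, task_examples, generate):
    """Teacher writes an abstract instruction for the task, without sharing raw data."""
    prompt = (
        "Study these examples and write a concise instruction that would let "
        "another model solve the same task. Do not copy any example verbatim.\n"
        + "\n".join(f"Input: {x}\nOutput: {y}" for x, y in task_examples)
    )
    return generate(teacher_model, prompt)  # instruction string passed to the student


def transfer_by_synthetic_examples(teacher_model, task_examples, generate, k=8):
    """Teacher invents k new input/output pairs instead of sharing the originals."""
    prompt = (
        f"Based on the task shown below, invent {k} new examples with the same "
        "format but different content.\n"
        + "\n".join(f"Input: {x}\nOutput: {y}" for x, y in task_examples)
    )
    return generate(teacher_model, prompt)  # synthetic examples passed to the student


def student_answer(student_model, transferred_knowledge, query, generate):
    """Student answers using only the transferred knowledge, never the original data."""
    prompt = f"{transferred_knowledge}\n\nInput: {query}\nOutput:"
    return generate(student_model, prompt)
```

In both variants the student only ever sees teacher-generated text, which is why memorization of the original data can serve as a proxy for privacy loss.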