OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization
Recent work has shown that fine-tuning large pre-trained language models on a
collection of tasks described via instructions, a.k.a. instruction-tuning,
improves their zero- and few-shot generalization to unseen tasks. However, there
is a limited understanding of the performance trade-offs of different decisions
made during the instruction-tuning process. These decisions include the scale
and diversity of the instruction-tuning benchmark, different task sampling
strategies, fine-tuning with and without demonstrations, training using
specialized datasets for reasoning and dialogue, and finally, the fine-tuning
objectives themselves. In this paper, we characterize the effect of
instruction-tuning decisions on downstream task performance when scaling both
model and benchmark sizes. To this end, we create OPT-IML Bench: a large
benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated
into task categories from 8 existing benchmarks, and prepare an evaluation
framework to measure three types of model generalizations: to tasks from fully
held-out categories, to held-out tasks from seen categories, and to held-out
instances from seen tasks. Through the lens of this framework, we first present
insights about instruction-tuning decisions as applied to OPT-30B and further
exploit these insights to train OPT-IML 30B and 175B, which are
instruction-tuned versions of OPT. OPT-IML demonstrates all three
generalization abilities at both scales on four different evaluation benchmarks
with diverse tasks and input formats -- PromptSource, FLAN,
Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly
outperform OPT on all benchmarks, but it is also highly competitive with existing
models fine-tuned on each specific benchmark. We release OPT-IML at both
scales, together with the OPT-IML Bench evaluation framework.
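
For readers unfamiliar with instruction-tuning, the sketch below illustrates the general idea in Python: fine-tuning a causal OPT checkpoint on instruction-formatted (instruction, input, output) examples, computing the loss only on the target output. The checkpoint name, prompt format, toy example, and hyperparameters are illustrative assumptions, not the actual OPT-IML recipe or data.

# Minimal instruction-tuning sketch (assumed setup, not the OPT-IML recipe).
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # small stand-in for the 30B/175B models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = AdamW(model.parameters(), lr=1e-5)

# Toy instruction-formatted example (hypothetical; OPT-IML Bench consolidates
# ~2000 tasks from 8 existing benchmarks).
examples = [
    {"instruction": "Classify the sentiment of the sentence.",
     "input": "The movie was a delight from start to finish.",
     "output": "positive"},
]

model.train()
for ex in examples:
    prompt = f"{ex['instruction']}\n{ex['input']}\n"
    full = prompt + ex["output"] + tokenizer.eos_token
    enc = tokenizer(full, return_tensors="pt")
    labels = enc["input_ids"].clone()
    # Approximately mask the prompt tokens so the loss covers only the output.
    prompt_len = len(tokenizer(prompt)["input_ids"])
    labels[:, :prompt_len] = -100
    loss = model(**enc, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()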