Neural networks often learn task-specific latent representations that fail to
generalize to novel settings or tasks. In contrast, humans learn discrete
representations (i.e., concepts or words) at a variety of abstraction levels
(e.g., "bird" vs. "sparrow") and deploy the appropriate abstraction based on the
task. Inspired by this, we train neural models to generate a spectrum of
discrete representations, and control the complexity of the representations
(roughly, how many bits are allocated for encoding inputs) by tuning the
entropy of the distribution over representations. In finetuning experiments,
using only a small number of labeled examples for a new task, we show that (1)
tuning the representation to a task-appropriate complexity level supports the
highest finetuning performance, and (2) in a human-participant study, users
were able to identify the appropriate complexity level for a downstream task
using visualizations of discrete representations. Our results indicate a
promising direction for rapid model finetuning by leveraging human insight.
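
As a rough illustration of the entropy-based complexity control described above (a minimal sketch, not the authors' implementation: the Gumbel-softmax bottleneck, the encoder/decoder sizes, the code count K, and the coefficient beta are all assumptions), the following PyTorch snippet adds an entropy penalty on the distribution over K discrete codes to a reconstruction loss; raising beta lowers the entropy and thus, roughly, the number of bits allocated to the representation.

    # Hypothetical sketch: entropy-regularized discrete bottleneck (not the paper's code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    K = 32        # number of discrete codes (assumed)
    beta = 0.1    # entropy coefficient; larger beta -> lower-entropy (simpler) codes

    encoder = nn.Linear(64, K)   # maps an input to logits over K codes
    decoder = nn.Linear(K, 64)   # reconstructs the input from a (near) one-hot code

    x = torch.randn(16, 64)                               # batch of toy inputs
    logits = encoder(x)                                   # per-example code logits
    code = F.gumbel_softmax(logits, tau=1.0, hard=True)   # discrete representation

    # Marginal distribution over codes across the batch, and its entropy in bits.
    probs = logits.softmax(dim=-1).mean(dim=0)
    entropy_bits = -(probs * (probs + 1e-9).log2()).sum()

    # Task (here: reconstruction) loss plus the entropy term that tunes complexity.
    loss = F.mse_loss(decoder(code), x) + beta * entropy_bits
    loss.backward()
    print(f"entropy of code distribution: {entropy_bits.item():.2f} bits")

Sweeping beta (or, equivalently, targeting a desired entropy value) yields the spectrum of coarser-to-finer discrete representations that the abstract refers to; which point on that spectrum to use is then chosen per downstream task.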