Zero-shot inference is a powerful paradigm that enables the use of large
pretrained models for downstream classification tasks without further training.
However, these models are vulnerable to inherited biases that can degrade their
performance. The traditional remedy is fine-tuning, but this sacrifices the
key advantage of pretrained models: their ability to be used
out-of-the-box. We propose RoboShot, a method that improves the robustness of
pretrained model embeddings in a fully zero-shot fashion. First, we use
zero-shot language models (LMs) to obtain useful insights from task
descriptions. These insights are embedded and used to remove harmful components
and boost useful ones in embeddings, without any supervision. Theoretically, we
provide a simple and tractable model for biases in zero-shot embeddings and
present a result characterizing the conditions under which our approach can boost
performance. Empirically, we evaluate RoboShot on nine image and NLP
classification tasks and show an average improvement of 15.98% over several
zero-shot baselines. Additionally, we demonstrate that RoboShot is compatible
with a variety of pretrained models and language models.
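For intuition, here is a minimal sketch of the core robustification step described above: projecting a harmful concept direction out of an embedding and amplifying a useful one. All names, dimensions, and vectors below are hypothetical placeholders for illustration, not the authors' implementation or the exact RoboShot algorithm.

```python
import numpy as np

def reject(v, u):
    """Remove the component of v along direction u (vector rejection)."""
    u = u / np.linalg.norm(u)
    return v - (v @ u) * u

def boost(v, u, alpha=1.0):
    """Amplify the component of v along direction u."""
    u = u / np.linalg.norm(u)
    return v + alpha * (v @ u) * u

# Hypothetical embeddings: x is an input embedding; the concept directions
# would come from embedding LM-generated insights about the task.
rng = np.random.default_rng(0)
x = rng.normal(size=512)
harmful = rng.normal(size=512)   # e.g., embedding of a spurious concept
helpful = rng.normal(size=512)   # e.g., embedding of a task-relevant concept

x = reject(x, harmful)       # remove the harmful component
x = boost(x, helpful)        # boost the useful component
x = x / np.linalg.norm(x)    # renormalize for cosine-similarity classification
```

Because both steps are simple linear operations on precomputed embeddings, no gradient updates or labels are required, which is what keeps the procedure fully zero-shot.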