Instruction tuning large language models (LLMs) remains a challenging task,
owing to the complexity of hyperparameter selection and the difficulty involved
in evaluating the tuned models. To determine the optimal hyperparameters, an
automatic, robust, and reliable evaluation benchmark is essential. However,
establishing such a benchmark is not a trivial task due to the challenges
associated with evaluation accuracy and privacy protection. In response to
these challenges, we introduce a judge large language model, named PandaLM,
which is trained to identify the superior model among several LLMs. PandaLM's
evaluation extends beyond the objective correctness of responses, the primary
concern of traditional evaluation datasets. It also addresses vital
subjective factors such as relative conciseness, clarity, adherence to
instructions, comprehensiveness, and formality. To ensure the reliability of
PandaLM, we collect a diverse human-annotated test dataset, where all contexts
are generated by humans and labels are aligned with human preferences. Our
results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation
ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM
makes LLM evaluation fairer and less costly, as evidenced by the significant
improvements of models tuned with PandaLM over their counterparts trained with
Alpaca's default hyperparameters. In addition,
PandaLM does not depend on API-based evaluations, thus avoiding potential data
leakage. All resources of PandaLM are released at
https://github.com/WeOpenML/PandaLM