High-risk domains pose unique challenges that require language models to
provide accurate and safe responses. Despite the great success of large
language models (LLMs), such as ChatGPT and its variants, their performance in
high-risk domains remains unclear. We present an in-depth analysis of the performance of instruction-tuned LLMs, focusing on factual accuracy and
safety adherence. To comprehensively assess the capabilities of LLMs, we
conduct experiments on six NLP datasets, covering question answering and
summarization tasks within two high-risk domains: legal and medical. Further
qualitative analysis highlights the limitations of current LLMs when they are evaluated in high-risk domains. This underscores the need not only to improve LLM capabilities but also to refine domain-specific metrics and to embrace a more human-centric approach to enhancing safety and factual reliability. Our findings draw attention to the challenges of properly evaluating LLMs in high-risk domains, aiming to guide the adaptation of LLMs toward fulfilling societal obligations and
aligning with forthcoming regulations, such as the EU AI Act.