Automated log analysis is crucial in modern software-intensive systems for
ensuring reliability and resilience throughout software maintenance and
engineering life cycles. Existing methods perform tasks such as log parsing and
log anomaly detection but typically provide only a single prediction value
without interpretation. However, given the increasing volume of system events, the
limited interpretability of analysis results hinders analysts' trust and their
ability to take appropriate actions. Moreover, these methods require
substantial in-domain training data, and their performance declines sharply (by
up to 62.5%) in online scenarios involving unseen logs from new domains, a
common occurrence due to rapid software updates. In this paper, we propose
LogPrompt, a novel zero-shot and interpretable log analysis approach. LogPrompt
employs large language models (LLMs) to perform zero-shot log analysis via a
suite of advanced prompt strategies tailored for log tasks, which enhances
LLMs' performance by up to 107.5% compared with simple prompts.
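To make the idea of prompt-based zero-shot log analysis concrete, the sketch below shows one way a zero-shot log-parsing prompt could be assembled in Python; the prompt wording, the <*> placeholder convention, and the HDFS-style example logs are illustrative assumptions and do not reproduce LogPrompt's actual prompt strategies.

    # Minimal sketch (assumption: not LogPrompt's exact prompt design) of a
    # zero-shot log-parsing prompt, where variable parts of each raw log line
    # are abstracted into a template with <*> placeholders.
    def build_parsing_prompt(raw_logs):
        """Construct a zero-shot prompt asking an LLM to extract log templates."""
        header = (
            "You are a log parser. For each raw log message below, replace the "
            "variable parts (IDs, IP addresses, numbers, paths) with <*> and "
            "return the resulting template, one per line."
        )
        # Number the raw logs so the model's output can be aligned back to them.
        body = "\n".join(f"{i + 1}. {log}" for i, log in enumerate(raw_logs))
        return f"{header}\n\nLogs:\n{body}\n\nTemplates:"

    if __name__ == "__main__":
        # Illustrative HDFS-style log messages.
        logs = [
            "Received block blk_3587508140051953248 of size 67108864 from /10.251.42.84",
            "PacketResponder 1 for block blk_3587508140051953248 terminating",
        ]
        print(build_parsing_prompt(logs))

The returned string would then be sent to an LLM of choice; richer strategies (e.g., adding task-specific context or output-format constraints) refine this basic prompt.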
Experiments on nine publicly available evaluation datasets across two tasks
demonstrate that LogPrompt, despite using no training data, outperforms
existing approaches trained on thousands of logs by up to around 50%. We also
conduct a human evaluation of LogPrompt's interpretability, involving six
practitioners with over 10 years of experience, who rated the generated
content highly in terms of usefulness and readability (average score: 4.42/5).
LogPrompt also exhibits remarkable compatibility with open-source and
smaller-scale LLMs, making it flexible for practical deployment.