Log analysis is one of the main techniques engineers use to troubleshoot
faults in large-scale software systems. Over the past decades, many log
analysis approaches have been proposed to detect system anomalies reflected by
logs. They usually take log event counts or sequential log events as inputs and
utilize machine learning algorithms, including deep learning models, to detect
system anomalies. These anomalies are often identified as violations of
quantitative relational patterns or sequential patterns of log events in log
sequences. However, existing methods fail to leverage the spatial structural
relationships among log events, resulting in potential false alarms and
unstable performance. In this study, we propose a novel graph-based log anomaly
detection method, LogGD, to effectively address the issue by transforming log
sequences into graphs. We exploit the powerful capability of the Graph Transformer
Neural Network, which combines graph structure and node semantics, for log-based
anomaly detection. We evaluate the proposed method on four widely used public
log datasets. Experimental results show that LogGD can outperform
state-of-the-art quantitative-based and sequence-based methods and achieve
stable performance under different window size settings. The results confirm
that LogGD is effective in log-based anomaly detection.
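
To make the graph construction described above concrete, the following is a minimal sketch, not the authors' exact pipeline, of how a windowed sequence of parsed log event IDs could be turned into a directed, edge-weighted graph whose structure a graph neural network could consume together with node (event) semantics. The function name and the example event IDs are hypothetical.

    from collections import defaultdict

    def build_event_graph(event_sequence):
        """Turn a window of log event IDs into a directed graph.

        Nodes are the unique event types in the window; a weighted edge
        (u, v) counts how often event v directly follows event u.
        """
        nodes = sorted(set(event_sequence))
        edge_weights = defaultdict(int)
        for src, dst in zip(event_sequence, event_sequence[1:]):
            edge_weights[(src, dst)] += 1
        return nodes, dict(edge_weights)

    if __name__ == "__main__":
        # Hypothetical window of parsed log event IDs
        # (e.g., produced by a log parser such as Drain).
        window = ["E1", "E2", "E2", "E3", "E1", "E2"]
        nodes, edges = build_event_graph(window)
        print(nodes)   # ['E1', 'E2', 'E3']
        print(edges)   # {('E1', 'E2'): 2, ('E2', 'E2'): 1, ('E2', 'E3'): 1, ('E3', 'E1'): 1}

Such a graph captures spatial structural relationships among log events (which events co-occur and how they connect within a window), beyond the pure counts or linear orderings used by quantitative-based and sequence-based methods.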