System log detection model based on conformal prediction

Abstract

With the rapid development of the Internet of Things, combining the Internet of Things with machine learning, Hadoop, and related fields is a current development trend. The Hadoop Distributed File System (HDFS) is one of the core components of Hadoop; it stores files that are divided into data blocks distributed across the cluster. Once the distributed log data become abnormal, serious losses can result. When machine learning algorithms are applied to system-log anomaly detection, threshold-based classification models output only a simple normal-or-abnormal prediction. This paper uses the statistical learning method of the conformity measure to calculate the similarity between test data and past experience. Compared with detection methods based on a static threshold, the conformity-measure approach can dynamically adapt to changing log data. By adjusting the maximum fault tolerance, a system administrator can better manage and monitor the system logs. In addition, the computational efficiency of the statistical learning method for conformity measurement was improved. This paper implements an intranet anomaly detection model based on log analysis and conducts trial detection on HDFS data sets quickly and efficiently.

This research was funded by the Guangdong Province Key Area R&D Program of China under Grant No. 2019B010137004; the National Natural Science Foundation of China under Grant No. 61871140, No. U1636215, and No. 61972108; the National Key Research and Development Plan under Grant No. 2018YFB0803504; the Civil Aviation Safety Capacity Building Project; and the Guangdong Province Universities and Colleges Pearl River Scholar Funded Scheme (2019).
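The core mechanism the abstract describes can be illustrated with a minimal sketch of conformal anomaly detection: a nonconformity score compares a new observation with a calibration set of past data, and the resulting p-value is checked against a tolerance epsilon (the "maximum fault tolerance"). The distance-to-mean score and the epsilon value below are illustrative assumptions, not the paper's actual nonconformity measure.

```python
import numpy as np

def nonconformity(x, calibration):
    # Illustrative nonconformity score: distance of x from the
    # mean of the calibration (past-experience) data.
    return abs(x - calibration.mean())

def conformal_p_value(x, calibration):
    # Fraction of calibration scores at least as extreme as the
    # test score (with the +1 smoothing standard in conformal prediction).
    scores = np.array([nonconformity(c, calibration) for c in calibration])
    test_score = nonconformity(x, calibration)
    return (np.sum(scores >= test_score) + 1) / (len(calibration) + 1)

def is_anomalous(x, calibration, epsilon=0.05):
    # Flag x as anomalous when its p-value falls below the
    # administrator-chosen tolerance epsilon.
    return conformal_p_value(x, calibration) < epsilon
```

Unlike a static threshold on the raw score, the p-value is computed relative to the calibration data, so the effective decision boundary adapts as the calibration set is updated with new logs.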
