Privacy-Preserving Intrusion Detection over Network Data

Abstract

Effective protection against cyber-attacks requires constant monitoring and analysis of system data in an IT infrastructure, such as log files and network packets, which may contain sensitive information. To this end, security operations centers (SOCs) are established to detect, analyze, and respond to cyber-security incidents. Security officers at a SOC are not necessarily trusted to handle the content of this sensitive and private information, especially when SOC services are outsourced, as maintaining in-house cyber-security expertise and capability is expensive. Therefore, an end-to-end security solution is needed for the system data.

A SOC typically applies detection models, targeting either known attack types or anomalies, to the collected data in order to detect cyber-security incidents. These models are usually constructed, e.g., using machine learning techniques, from historical data that contains records of both attacks and the normal functioning of the monitored IT infrastructure. A SOC is also motivated to keep its models confidential, for three reasons: i) to capitalize on the models, which constitute its proprietary expertise; ii) to protect its detection strategies against adversarial machine learning, in which intelligent and adaptive adversaries carefully manipulate their attack strategy to avoid detection; and iii) because the models may have been trained on sensitive information, so revealing them could violate certain laws and regulations. Therefore, the detection models are also private.

In this dissertation, we propose a scenario in which the privacy of both system data and detection models is protected, and information leakage is either prevented altogether or quantifiably reduced. Our main approach is to provide end-to-end encryption for both the system data and the detection models, utilizing lattice-based cryptography that supports homomorphic operations over encrypted data. Assuming that the detection models have previously been obtained from training data by the SOC, we apply the models to the system data homomorphically, with the models themselves encrypted. We employ three different machine learning algorithms to extract intrusion detection models from historical training data. Using three data sets (two recent ones, and one that is outdated but still widely used in the intrusion detection literature), we evaluate the performance of each algorithm via the following metrics: i) the time it takes to extract the rules, ii) the time it takes to apply the rules to the data homomorphically, iii) the accuracy of the rules in detecting intrusions, and iv) the number of rules. Our experiments demonstrate that the proposed privacy-preserving intrusion detection system (IDS) is feasible in terms of execution times and reliable in terms of accuracy.
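To make the core idea concrete, the following is a minimal sketch of evaluating an encrypted detection model over encrypted system data. The abstract does not name the specific lattice-based scheme, model format, or library used in the dissertation; this sketch assumes the CKKS scheme as exposed by the open-source TenSEAL library, a simple linear scoring model, and illustrative weight values, feature values, and alert threshold, none of which are taken from the dissertation.

    # Sketch only: homomorphic evaluation of an encrypted linear detection
    # model over an encrypted feature vector, using TenSEAL's CKKS scheme
    # (a lattice-based, RLWE construction). Weights, features, and the
    # threshold below are hypothetical.
    import tenseal as ts

    # Lattice-based encryption context; parameters sized for a shallow circuit.
    context = ts.context(
        ts.SCHEME_TYPE.CKKS,
        poly_modulus_degree=8192,
        coeff_mod_bit_sizes=[60, 40, 40, 60],
    )
    context.global_scale = 2 ** 40
    context.generate_galois_keys()  # needed for the rotation-based dot product

    # SOC side: encrypt the (hypothetical) model weights so the model
    # itself stays confidential while it is being applied.
    weights = [0.8, -1.2, 0.5]
    enc_model = ts.ckks_vector(context, weights)

    # Data-owner side: encrypt one record's features (e.g., per-flow
    # packet statistics) under the same public key.
    features = [3.1, 0.7, 5.4]
    enc_record = ts.ckks_vector(context, features)

    # Homomorphic evaluation: the inner product is computed entirely over
    # ciphertexts; neither party sees the other's plaintext.
    enc_score = enc_record.dot(enc_model)

    # Only the holder of the secret key can decrypt the detection score
    # and compare it against an (assumed) alert threshold in the clear.
    score = enc_score.decrypt()[0]
    print("alert" if score > 1.0 else "normal")

Note that the threshold comparison happens after decryption: comparisons are expensive to perform homomorphically, so a common design, assumed here, is to compute the score under encryption and let the secret-key holder decrypt only that single value rather than the underlying data or model.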
