7 research outputs found

    No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems

    In recent years, a number of process-based anomaly detection schemes for Industrial Control Systems have been proposed. In this work, we provide the first systematic analysis of such schemes and introduce a taxonomy of properties that are verified by those detection systems. We then present a novel general framework to generate adversarial spoofing signals that violate physical properties of the system, and use the framework to analyze four anomaly detectors published at top security conferences. We find that three of those detectors are susceptible to a number of adversarial manipulations (e.g., spoofing with precomputed patterns), which we call Synthetic Sensor Spoofing, while one is resilient against our attacks. We investigate the root of its resilience and demonstrate that it stems from the properties we introduced. Our attacks reduce the Recall (True Positive Rate) of the attacked schemes, leaving them unable to correctly detect anomalies. The vulnerabilities we discovered thus show that, despite good original detection performance, those detectors are not able to reliably learn the physical properties of the system. Even attacks that prior work was expected to resist (based on the verified properties) were successful. We argue that our findings demonstrate the need both for more complete attacks in datasets and for more critical analysis of process-based anomaly detectors. We plan to release our implementation as open source, together with an extension of two public datasets with a set of Synthetic Sensor Spoofing attacks generated by our framework.
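A rough, hypothetical sketch of the kind of manipulation the abstract calls Synthetic Sensor Spoofing: during an attack window, the real sensor trace is overwritten with a precomputed benign-looking pattern so that a process-based detector observes plausible values. The function name, window parameters, and sinusoidal trace below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def synthetic_sensor_spoof(readings, start, length, pattern):
    """Replace a window of sensor readings with a precomputed spoof pattern.

    Hypothetical illustration: the pattern is repeated cyclically to fill
    the attack window, mimicking normal process behaviour while the real
    process state may be under attack.
    """
    spoofed = readings.copy()
    spoofed[start:start + length] = np.resize(pattern, length)  # tile pattern
    return spoofed

# Benign-looking sensor trace (illustrative)
t = np.linspace(0, 10, 1000)
trace = np.sin(t)

# "Precomputed" pattern, e.g. captured during normal operation
pattern = np.sin(t[:100])

spoofed = synthetic_sensor_spoof(trace, start=400, length=300, pattern=pattern)
```

A detector that only checks pointwise plausibility of the spoofed segment would see values statistically indistinguishable from normal operation; detecting this requires properties that tie sensor readings to the wider process context.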

    Towards Secure Deep Neural Networks for Cyber-Physical Systems

    In recent years, deep neural networks (DNNs) have been increasingly investigated for use in cyber-physical systems (CPSs). DNNs have inherent advantages in identifying complex patterns and achieve state-of-the-art performance in many important CPS applications. However, DNN-based systems usually require large datasets for model training, which introduces new data management issues. Meanwhile, research in the computer vision domain has demonstrated that DNNs are highly vulnerable to adversarial examples. The security risks of employing DNNs in CPS applications are therefore of concern. In this dissertation, we study the security of employing DNNs in CPSs from both the data domain and the learning domain. For the data domain, we study the data privacy issues of outsourcing CPS data to cloud service providers (CSPs). We design a space-efficient searchable symmetric encryption scheme that allows the user to query keywords over the encrypted CPS data stored in the cloud. We then study the security risks that adversarial machine learning (AML) can bring to CPSs. Based on the attacker's capabilities, we further separate AML in CPSs into the customer domain and the control domain. We analyze DNN-based energy theft detection in advanced metering infrastructure as an example of customer domain attacks. Adversarial attacks on control domain CPS applications are more challenging and subject to stricter constraints. We then propose ConAML, a general AML framework that enables the attacker to generate adversarial examples under practical constraints. We evaluate the framework with three CPS applications in transportation systems, power grids, and water systems. To mitigate the threat of adversarial attacks, more robust DNNs are required for critical CPSs. We summarize the defense requirements for CPS applications and evaluate several typical defense mechanisms. For control domain adversarial attacks, we demonstrate that defensive methods such as adversarial detection are not applicable due to the practical attack requirements. We propose a random padding framework that can significantly increase DNN robustness under adversarial attacks. The evaluation results show that our padding framework reduces the effectiveness of adversarial examples in both customer domain and control domain applications.
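A minimal sketch of constraint-aware adversarial perturbation, loosely in the spirit of the ConAML idea described above: take a gradient-sign step (as in FGSM) and then project the result back onto box constraints that model physical limits of CPS measurements. The function name, constraint choice, and numbers are illustrative assumptions, not the dissertation's actual algorithm.

```python
import numpy as np

def constrained_fgsm(x, grad, eps, lower, upper):
    """One gradient-sign perturbation step followed by projection.

    Assumed setup: `grad` is the loss gradient w.r.t. the input `x`,
    and [lower, upper] are box constraints standing in for the
    "practical constraints" (e.g. physical measurement limits) that a
    CPS attacker must respect.
    """
    x_adv = x + eps * np.sign(grad)       # move along the loss gradient sign
    x_adv = np.clip(x_adv, lower, upper)  # project onto feasible region
    return x_adv

x = np.array([0.2, 0.5, 0.9])             # clean measurements
grad = np.array([1.0, -1.0, 1.0])         # hypothetical loss gradient
x_adv = constrained_fgsm(x, grad, eps=0.3, lower=0.0, upper=1.0)
# Every perturbed value stays within the physically feasible range
```

Real CPS constraints are typically richer than a box (e.g. conservation laws linking several measurements), which is what makes control domain attacks, and defenses against them, harder than their computer-vision counterparts.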