145 research outputs found
Improving SIEM for critical SCADA water infrastructures using machine learning
Networked Control Systems (NCS) are used in many industrial processes. They aim to reduce the burden on human operators and to handle the complex processes and communication of those systems efficiently. Supervisory control and data acquisition (SCADA) systems are used in industrial, infrastructure and facility processes (e.g. manufacturing, fabrication, oil and water pipelines, building ventilation). Like other Internet of Things (IoT) implementations, SCADA systems are vulnerable to cyber-attacks; robust anomaly detection is therefore a major requirement. However, building an accurate anomaly detection system is not an easy task, due to the difficulty of differentiating between cyber-attacks and internal system failures (e.g. hardware failures). In this paper, we present a model that detects anomalous events in a water system controlled by SCADA. Six machine learning techniques have been used in building and evaluating the model. The model classifies different anomalous events, including hardware failures (e.g. sensor failures), sabotage and cyber-attacks (e.g. DoS and spoofing). Unlike other detection systems, our proposed work helps accelerate the mitigation process by notifying the operator with additional information when an anomaly occurs: the probability and confidence level of the event(s) occurring. The model is trained and tested using a real-world dataset.
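The core idea above (a multi-class classifier that reports not just the anomaly type but also a probability the operator can act on) can be sketched as follows. This is a minimal illustration using scikit-learn's random forest on synthetic sensor readings; the feature names, clusters, and event labels are invented stand-ins, not the paper's dataset or its six techniques.

```python
# Hypothetical sketch: a multi-class anomaly-event classifier that reports
# class probabilities alongside its prediction, mirroring the paper's idea
# of notifying the operator with confidence information.
# The features and synthetic data below are illustrative, not the real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic sensor readings: [flow_rate, pressure, tank_level]
X_normal = rng.normal([5.0, 2.0, 0.7], 0.1, size=(200, 3))
X_dos    = rng.normal([0.0, 2.0, 0.7], 0.1, size=(200, 3))   # traffic stalls
X_sensor = rng.normal([5.0, 2.0, -1.0], 0.1, size=(200, 3))  # failed sensor

X = np.vstack([X_normal, X_dos, X_sensor])
y = ["normal"] * 200 + ["dos_attack"] * 200 + ["sensor_failure"] * 200

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# At detection time, report the most likely event and its probability.
sample = np.array([[0.05, 2.0, 0.7]])        # resembles the DoS pattern
proba = dict(zip(clf.classes_, clf.predict_proba(sample)[0]))
event = max(proba, key=proba.get)
print(event, round(proba[event], 2))
```

The operator-facing output pairs the event label with its class probability, which is the "additional information" the abstract describes.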
Learning-guided network fuzzing for testing cyber-physical system defences
The threat of attack faced by cyber-physical systems (CPSs), especially when
they play a critical role in automating public infrastructure, has motivated
research into a wide variety of attack defence mechanisms. Assessing their
effectiveness is challenging, however, as realistic sets of attacks to test
them against are not always available. In this paper, we propose smart fuzzing,
an automated, machine learning guided technique for systematically finding
'test suites' of CPS network attacks, without requiring any knowledge of the
system's control programs or physical processes. Our approach uses predictive
machine learning models and metaheuristic search algorithms to guide the
fuzzing of actuators so as to drive the CPS into different unsafe physical
states. We demonstrate the efficacy of smart fuzzing by implementing it for two
real-world CPS testbeds---a water purification plant and a water distribution
system---finding attacks that drive them into 27 different unsafe states
involving water flow, pressure, and tank levels, including six that were not
covered by an established attack benchmark. Finally, we use our approach to
test the effectiveness of an invariant-based defence system for the water
treatment plant, finding two attacks that were not detected by its physical
invariant checks, highlighting a potential weakness that could be exploited in
certain conditions.
Comment: Accepted by ASE 201
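The search loop behind smart fuzzing (mutate actuator commands, then use a predictive model of the plant to steer toward unsafe physical states) can be sketched with a toy metaheuristic. Here simple random-restart-free hill climbing stands in for the paper's search algorithms, and the plant surrogate and safety bound are invented, not the real testbeds.

```python
# Illustrative sketch of ML-guided actuator fuzzing: a metaheuristic (here,
# plain hill climbing) mutates boolean actuator commands and consults a
# predictive plant model to drive the system toward an unsafe state.
# The actuator set, plant surrogate, and safety limit are toy assumptions.
import random

random.seed(1)

ACTUATORS = ["pump_on", "inlet_open", "drain_open"]  # hypothetical actuators
SAFE_MAX_LEVEL = 1.0                                 # assumed tank-level limit

def predicted_level(cmds):
    """Toy surrogate for the learned plant model: predicts the tank level
    resulting from a dict of boolean actuator commands."""
    level = 0.5
    if cmds["inlet_open"]:
        level += 0.4
    if cmds["pump_on"]:
        level += 0.3
    if cmds["drain_open"]:
        level -= 0.5
    return level

def mutate(cmds):
    """Flip one randomly chosen actuator command."""
    flipped = dict(cmds)
    key = random.choice(ACTUATORS)
    flipped[key] = not flipped[key]
    return flipped

def fuzz(iterations=100):
    """Hill-climb toward commands the model predicts to be most unsafe."""
    best = {a: random.random() < 0.5 for a in ACTUATORS}
    for _ in range(iterations):
        cand = mutate(best)
        if predicted_level(cand) > predicted_level(best):
            best = cand
    return best

attack = fuzz()
print(attack, predicted_level(attack))
```

Note that no knowledge of the control programs is used: the search only queries the learned model, which matches the black-box framing of the abstract.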
Anomaly detection for a water treatment system using unsupervised machine learning
National Research Foundation (NRF) Singapor
Control Behavior Integrity for Distributed Cyber-Physical Systems
Cyber-physical control systems, such as industrial control systems (ICS), are
increasingly targeted by cyberattacks. Such attacks can potentially cause
tremendous damage, affect critical infrastructure or even jeopardize human life
when the system does not behave as intended. Cyberattacks, however, are not new
and decades of security research have developed plenty of solutions to thwart
them. Unfortunately, many of these solutions cannot be easily applied to
safety-critical cyber-physical systems. Further, the attack surface of ICS is
quite different from what can be commonly assumed in classical IT systems.
We present Scadman, a system with the goal to preserve the Control Behavior
Integrity (CBI) of distributed cyber-physical systems. By observing the
system-wide behavior, the correctness of individual controllers in the system
can be verified. This allows Scadman to detect a wide range of attacks against
controllers, such as programmable logic controllers (PLCs), including malware
attacks, code-reuse and data-only attacks. We implemented and evaluated Scadman
based on a real-world water treatment testbed for research and training on ICS
security. Our results show that we can detect a wide range of
attacks--including attacks that have previously been undetectable by typical
state estimation techniques--while causing no false-positive warning for
nominal threshold values.
Comment: 15 pages, 8 figures
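The Control Behavior Integrity idea (verify each controller against the system-wide expected behavior rather than inspecting its code) can be illustrated with a minimal consistency check. The control rules and controller names below are hypothetical, not the testbed's real logic.

```python
# Minimal sketch of the CBI concept: re-derive each controller's expected
# output from a system-wide reference model of the shared sensor state, and
# flag any PLC whose reported command deviates from it.
# The two-controller rules here are invented for illustration.

def expected_outputs(sensors):
    """System-wide reference model: expected command of each controller
    given the shared sensor state."""
    return {
        "plc_inlet": sensors["tank_level"] < 0.8,   # open inlet when low
        "plc_drain": sensors["tank_level"] > 0.9,   # drain when near full
    }

def check_cbi(sensors, reported):
    """Return the controllers whose reported behavior deviates from the
    system-wide expected behavior."""
    expected = expected_outputs(sensors)
    return [plc for plc, cmd in reported.items() if cmd != expected[plc]]

sensors = {"tank_level": 0.95}
# plc_inlet reports 'open inlet' at a near-full tank: integrity violated.
reported = {"plc_inlet": True, "plc_drain": True}
print(check_cbi(sensors, reported))
```

Because the check is on observable behavior rather than code, it covers malware, code-reuse, and data-only attacks alike, as the abstract argues.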
Code Integrity Attestation for PLCs using Black Box Neural Network Predictions
Cyber-physical systems (CPSs) are widespread in critical domains, and
significant damage can be caused if an attacker is able to modify the code of
their programmable logic controllers (PLCs). Unfortunately, traditional
techniques for attesting code integrity (i.e. verifying that it has not been
modified) rely on firmware access or roots-of-trust, neither of which
proprietary or legacy PLCs are likely to provide. In this paper, we propose a
practical code integrity checking solution based on privacy-preserving black
box models that instead attest the input/output behaviour of PLC programs.
Using faithful offline copies of the PLC programs, we identify their most
important inputs through an information flow analysis, execute them on multiple
combinations to collect data, then train neural networks able to predict PLC
outputs (i.e. actuator commands) from their inputs. By exploiting the black box
nature of the model, our solution maintains the privacy of the original PLC
code and does not assume that attackers are unaware of its presence. The trust
instead comes from the fact that it is extremely hard to attack the PLC code
and neural networks at the same time and with consistent outcomes. We evaluated
our approach on a modern six-stage water treatment plant testbed, finding that
it could predict actuator states from PLC inputs with near-100% accuracy, and
thus could detect all 120 effective code mutations that we subjected the PLCs
to. Finally, we found that it is not practically possible to simultaneously
modify the PLC code and apply discreet adversarial noise to our attesters in a
way that leads to consistent (mis-)predictions.
Comment: Accepted by the 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2021)
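The attestation pipeline described above (execute a faithful offline copy of the PLC program on input combinations, train a model to predict its outputs, then compare live actuator commands against the model's predictions) can be sketched as follows. The two-input PLC rule is an invented stand-in, and a decision tree substitutes for the paper's neural networks to keep the example small.

```python
# Hedged sketch of black-box code attestation: train a model offline on the
# genuine PLC program's input/output behavior, then at runtime compare
# observed actuator commands to the model's predictions; mismatches suggest
# modified code. The PLC logic here is an invented rule, not the testbed's.
from itertools import product
from sklearn.tree import DecisionTreeClassifier

def genuine_plc(level_high, flow_ok):
    """Stand-in for the faithful offline copy of the PLC program."""
    return int(level_high and flow_ok)   # command pump only when both hold

# Collect I/O data by executing the offline copy on input combinations.
X = list(product([0, 1], repeat=2))
y = [genuine_plc(a, b) for a, b in X]
attester = DecisionTreeClassifier().fit(X, y)

def attest(inputs, observed_cmd):
    """True if the observed actuator command matches the prediction."""
    return attester.predict([inputs])[0] == observed_cmd

# A code mutation that inverts the pump condition mismatches on every input.
mutated_plc = lambda a, b: int(not (a and b))
results = [attest((a, b), mutated_plc(a, b)) for a, b in X]
print(results)
```

Since the attester only sees inputs and outputs, it never exposes the proprietary PLC code, which is the privacy property the abstract emphasizes.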