Detection of Non-Technical Losses in Smart Distribution Networks: a Review
With the advent of smart grids, distribution utilities have
initiated a large deployment of smart meters on the premises of the
consumers. The enormous amount of data obtained from the consumers
and communicated to the utility give new perspectives and possibilities
for various analytics-based applications. In this paper the current
smart metering-based energy-theft detection schemes are reviewed and
discussed according to two main distinctive categories: A) system state-based,
and B) artificial intelligence-based.
Comisión Europea FP7-PEOPLE-2013-IT
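The "system state-based" category can be illustrated with a toy energy-balance check: if the energy injected at the feeder head exceeds the sum of consumer smart-meter readings by more than the expected technical losses, the excess suggests non-technical losses such as theft. This is a sketch of our own, not a scheme from the reviewed papers; the function name and the 5% technical-loss allowance are illustrative assumptions.

```python
def flag_non_technical_loss(feeder_kwh, consumer_kwh, technical_loss_rate=0.05):
    """Flag possible non-technical losses on a feeder.

    feeder_kwh: energy measured at the feeder head.
    consumer_kwh: list of consumer smart-meter readings on that feeder.
    Returns True if the unexplained loss exceeds the technical-loss allowance.
    """
    delivered = sum(consumer_kwh)
    unexplained = feeder_kwh - delivered
    allowance = technical_loss_rate * feeder_kwh  # expected technical losses
    return unexplained > allowance

# Example: 1000 kWh injected, consumers report 900 kWh in total, so 100 kWh is
# unexplained against a 50 kWh allowance -> flagged.
```

Real schemes must of course account for measurement noise, load profiles, and meter time-alignment, which this sketch ignores.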
Smart Grid Security: Threats, Challenges, and Solutions
The cyber-physical nature of the smart grid has rendered it vulnerable to a
multitude of attacks that can occur at its communication, networking, and
physical entry points. Such cyber-physical attacks can have detrimental effects
on the operation of the grid as exemplified by the recent attack which caused a
blackout of the Ukrainian power grid. Thus, to properly secure the smart grid,
it is of utmost importance to: a) understand its underlying vulnerabilities and
associated threats, b) quantify their effects, and c) devise appropriate
security solutions. In this paper, the key threats targeting the smart grid are
first exposed while assessing their effects on the operation and stability of
the grid. Then, the challenges involved in understanding these attacks and
devising defense strategies against them are identified. Potential solution
approaches that can help mitigate these threats are then discussed. Last, a
number of mathematical tools that can help in analyzing and implementing
security solutions are introduced. As such, this paper will provide the first
comprehensive overview of smart grid security.
Machine Learning in Adversarial Environments
Machine Learning, especially Deep Neural Nets (DNNs), has achieved great success in a variety of applications. Unlike classical algorithms, which can be formally analyzed, neural network-based learning algorithms are far less well understood. This lack of understanding, through either formal methods or empirical observation, results in potential vulnerabilities that could be exploited by adversaries. It also hinders the deployment and adoption of learning methods in security-critical systems.
Recent works have demonstrated that DNNs are vulnerable to carefully crafted adversarial perturbations. We refer to data instances with added adversarial perturbations as “adversarial examples”. Such adversarial examples can mislead DNNs into producing adversary-selected results. Furthermore, they can cause a DNN system to misbehave in unexpected and potentially dangerous ways. In this context, this thesis focuses on the security problem of current DNNs from the viewpoints of both attack and defense.
First, we explore the space of attacks against DNNs at test time. We revisit the integrity of the Lp-norm regime and propose a new, rigorous threat model for adversarial examples. Based on this threat model, we present techniques to generate adversarial examples in the digital space.
Second, we study the physical consequences of adversarial examples in 3D and physical spaces. We first study the vulnerabilities of various vision systems by simulating the photo-taking process with a physical renderer. To further explore the consequences in the real world, we select the safety-critical application of autonomous driving as the target system and study the vulnerability of its LiDAR perception module. These studies show the potentially severe consequences of adversarial examples and raise awareness of their risks.
Last but not least, we develop solutions to defend against adversarial examples. We propose a consistency-check-based method to detect adversarial examples by leveraging properties of either the learning model or the data. We show two examples, in the segmentation task (leveraging the learning model) and on video data (leveraging the data), respectively.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/162944/1/xiaocw_1.pd
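Test-time adversarial perturbations of the kind this thesis studies are often introduced via the fast gradient sign method (FGSM): perturb the input by a small step in the direction of the sign of the loss gradient. The sketch below is our own illustration on a toy logistic model, not the thesis's method; all weights and the step size are illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step on a logistic model p = sigmoid(w . x + b).

    For cross-entropy loss, the gradient w.r.t. the input x is (p - y) * w,
    so the adversarial input is x + eps * sign(grad).
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]
```

Because the step follows the gradient of the loss, the model's confidence in the true label drops after the perturbation, which is exactly the failure mode adversarial training tries to harden against.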
Malicious data detection and localization in state estimation leveraging system losses
In power systems, economic dispatch, contingency analysis, and the detection of faulty equipment rely on the output of the state estimator. Typically, state estimates are computed from the network topology information and the measurements from a set of sensors within the network. The state estimates must remain accurate even in the presence of corrupted measurements. Traditional techniques used to detect and identify bad sensor measurements in state estimation cannot thwart malicious sensor measurement modifications, such as malicious data injection attacks. Recent work by Niemira (2013) has compared real and reactive injection and flow measurements as indicators of attacks. In this work, we improve upon the method used in that work to further enhance the detectability of malicious data injection attacks, and we incorporate PMU measurements to detect and locate previously undetectable attacks.
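The traditional bad-data detection that the abstract notes is defeated by coordinated injections is, in its simplest form, a largest-normalized-residual test on the state estimate. Below is a toy sketch of our own, on a one-state model where every sensor measures the same quantity; the 2.5 threshold and all names are illustrative, and a real estimator would use the full measurement Jacobian.

```python
def largest_residual_index(z, threshold=2.5):
    """Largest-normalized-residual test on a one-state toy model.

    Estimate the state as the mean of the sensor readings z, then flag the
    sensor whose standardized residual exceeds the threshold.
    Returns the index of the suspect sensor, or None if all pass.
    """
    x_hat = sum(z) / len(z)                      # least-squares state estimate
    residuals = [zi - x_hat for zi in z]
    var = sum(r * r for r in residuals) / (len(z) - 1)
    sigma = var ** 0.5 or 1.0                    # guard against zero variance
    i, r = max(enumerate(residuals), key=lambda t: abs(t[1]))
    return i if abs(r) / sigma > threshold else None
```

A single corrupted meter stands out in the residuals, but an attacker who modifies several measurements consistently with the network model leaves the residuals unchanged, which is why such attacks are "previously undetectable" without extra information such as PMU measurements or, as in this work, system losses.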
Anomaly detection through User Behaviour Analysis
The rise in cyber-attacks and cyber-crime is causing more and more organisations and individuals to consider the correct implementation of their security systems. The consequences of a security breach can be devastating, ranging from loss of public confidence to bankruptcy. Traditional techniques for detecting and stopping malware rely on building a database of known signatures from known samples of malware. However, these techniques are not very effective at detecting zero-day exploits, because there are no corresponding samples in the malware signature databases. This limitation leaves organisations vulnerable to new and evolving malware threats. To address this challenge, this thesis proposes a novel approach to malware detection using machine learning techniques. The proposed approach creates a user profile by training a machine learning model using only normal user behaviour data, and detects malware by identifying deviations from this profile. In this way, the proposed approach can detect zero-day malware and other previously unknown threats without requiring a database of malware signatures. The proposed approach is evaluated on real-world datasets, and different machine learning algorithms are compared to evaluate their performance in detecting unknown threats. The results show that the proposed approach is effective in detecting malware, achieving high accuracy and low false positive rates. This thesis contributes to the field of malware detection by providing a new perspective and approach that complements existing methods, and it has the potential to improve the overall security of organisations and individuals in the face of evolving cybersecurity threats.
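The profile-then-deviate idea can be sketched with a simple per-feature z-score detector: fit a profile on normal-behaviour feature vectors only, then flag any session that deviates beyond k standard deviations in some feature. This is our own minimal illustration, not the thesis's model; the class name, feature layout, and k=3 threshold are assumptions.

```python
class BehaviourProfile:
    """Profile of normal user behaviour as per-feature mean and std."""

    def __init__(self, normal_sessions):
        n = len(normal_sessions)
        dims = len(normal_sessions[0])
        self.mean = [sum(s[d] for s in normal_sessions) / n for d in range(dims)]
        self.std = []
        for d in range(dims):
            var = sum((s[d] - self.mean[d]) ** 2 for s in normal_sessions) / n
            self.std.append(var ** 0.5 or 1.0)  # guard against constant features

    def is_anomalous(self, session, k=3.0):
        """Flag the session if any feature is more than k std-devs off-profile."""
        return any(abs(x - m) / s > k
                   for x, m, s in zip(session, self.mean, self.std))
```

Since training uses only normal data, nothing malware-specific is needed; any sufficiently unusual behaviour, including a zero-day payload's activity, can trip the detector, at the cost of false positives when legitimate behaviour shifts.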
Malicious Agent Detection for Robust Multi-Agent Collaborative Perception
Recently, multi-agent collaborative (MAC) perception has been proposed and
outperformed the traditional single-agent perception in many applications, such
as autonomous driving. However, MAC perception is more vulnerable to
adversarial attacks than single-agent perception due to the information
exchange. The attacker can easily degrade the performance of a victim agent by
sending harmful information from a malicious agent nearby. In this paper, we
extend adversarial attacks to an important perception task -- MAC object
detection, where generic defenses such as adversarial training are no longer
effective against these attacks. More importantly, we propose Malicious Agent
Detection (MADE), a reactive defense specific to MAC perception that can be
deployed by each agent to accurately detect and then remove any potential
malicious agent in its local collaboration network. In particular, MADE
inspects each agent in the network independently using a semi-supervised
anomaly detector based on a double-hypothesis test with the Benjamini-Hochberg
procedure to control the false positive rate of the inference. For the two
hypothesis tests, we propose a match loss statistic and a collaborative
reconstruction loss statistic, respectively, both based on the consistency
between the agent to be inspected and the ego agent where our detector is
deployed. We conduct comprehensive evaluations on a benchmark 3D dataset
V2X-sim and a real-road dataset DAIR-V2X and show that with the protection of
MADE, the drops in the average precision compared with the best-case "oracle"
defender against our attack are merely 1.28% and 0.34%, respectively, far
lower than the 8.92% and 10.00% drops for adversarial training.
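The Benjamini-Hochberg step-up procedure that MADE uses to keep its per-agent tests' false positive rate in check is a standard multiple-testing tool: sort the p-values, find the largest rank i with p_(i) <= (i/m) * alpha, and reject every hypothesis up to that rank. A minimal sketch of the standard procedure (our own code, not the authors'):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns the set of indices whose hypotheses are rejected while
    controlling the false discovery rate at level alpha.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ascending p-values
    k = 0  # largest rank i with p_(i) <= (i / m) * alpha
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k = rank
    return set(order[:k])  # reject the k smallest p-values
```

In MADE's setting, each agent in the local collaboration network contributes p-values from the two test statistics, and agents surviving the procedure are kept as benign collaborators.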