
    Performance Analysis Of Data-Driven Algorithms In Detecting Intrusions On Smart Grid

    The traditional power grid is no longer a practical solution for power delivery due to several shortcomings, including chronic blackouts, energy storage issues, high cost of assets, and high carbon emissions. There is therefore a serious need for better, cheaper, and cleaner power grid technology that addresses these limitations. A smart grid is a holistic solution to these issues, consisting of a variety of operations and energy measures. This technology delivers energy to end-users through a two-way flow of communication and is expected to generate reliable, efficient, and clean power by integrating multiple technologies. It promises reliability, improved functionality, and economical power transmission and distribution, and it decreases greenhouse emissions by delivering clean, affordable, and efficient energy to users. The smart grid provides several further benefits, such as increased grid resilience, self-healing, and improved system performance. Despite these benefits, this network has been the target of a number of cyber-attacks that violate the availability, integrity, confidentiality, and accountability of the network. For instance, in 2021, a cyber-attack on a U.S. power system shut down the power grid, leaving approximately 100,000 people without power. Another attack on U.S. smart grids, in March 2018, targeted multiple nuclear power plants and water equipment. These incidents make clear why strong security approaches are needed in smart grids to detect and mitigate sophisticated cyber-attacks. For this purpose, the US National Electric Sector Cybersecurity Organization and the Department of Energy have joined efforts with other federal bodies, including the Cybersecurity for Energy Delivery Systems program and the Federal Energy Regulatory Commission, to investigate the security risks of smart grid networks.
Their investigation shows that the smart grid requires reliable solutions to defend against and prevent cyber-attacks and vulnerability issues. It also shows that, with emerging technologies such as 5G and 6G, the smart grid may become even more vulnerable to multistage cyber-attacks. A number of studies have been conducted to identify, detect, and investigate the vulnerabilities of smart grid networks. However, the existing techniques have fundamental limitations, such as low detection rates, high rates of false positives and misdetection, data poisoning, data quality and processing issues, lack of scalability, and difficulty handling huge volumes of data. These techniques therefore cannot ensure safe, efficient, and dependable communication for smart grid networks. The goal of this dissertation is thus to investigate the efficiency of machine learning in detecting cyber-attacks on smart grids. The proposed methods are based on supervised and unsupervised machine and deep learning, reinforcement learning, and online learning models. These models must be trained, tested, and validated using a reliable dataset; in this dissertation, CICDDoS 2019 was used for this purpose. The results show that, among supervised machine learning models, ensemble models outperform the traditional models. Among deep learning models, the dense neural network family provides satisfactory results for detecting and classifying intrusions on the smart grid. Among unsupervised models, the variational auto-encoder provides the highest performance. In reinforcement learning, the proposed Capsule Q-learning provides higher detection and lower misdetection rates than the other models in the literature.
In online learning, the Online Sequential Euclidean Distance Routing Capsule Network model provides significantly better results in detecting intrusion attacks on the smart grid than the other deep online models.
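The finding that ensembles beat individual supervised models rests on aggregating votes over weak learners. A minimal pure-Python sketch of the majority-voting idea follows; the threshold "stumps" and toy flow features are hypothetical illustrations, not the CICDDoS 2019 feature set or the dissertation's actual models:

```python
from collections import Counter

def make_stump(feature_idx, threshold):
    """A weak base classifier: vote 'attack' (1) when one feature exceeds a threshold."""
    return lambda x: 1 if x[feature_idx] > threshold else 0

def ensemble_predict(classifiers, x):
    """Majority vote over the base classifiers, as in voting/bagging ensembles."""
    return Counter(clf(x) for clf in classifiers).most_common(1)[0][0]

# Hypothetical flow features: (packet_rate, avg_packet_size, distinct_ports)
benign = (120.0, 800.0, 3.0)
flood  = (9000.0, 60.0, 900.0)

clfs = [
    make_stump(0, 5000.0),  # very high packet rate -> attack
    make_stump(1, 500.0),   # a deliberately poor stump; the vote absorbs its errors
    make_stump(2, 100.0),   # many distinct destination ports -> attack
]
print(ensemble_predict(clfs, flood))   # 2 of 3 stumps vote 1 -> 1 (attack)
print(ensemble_predict(clfs, benign))  # 1 of 3 stumps votes 1 -> 0 (benign)
```

The ensemble tolerates the second stump's errors on both samples, which is the intuition behind the reported robustness of ensemble detectors.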

    Topological changes in data-driven dynamic security assessment for power system control

    The integration of renewable energy sources into the power system requires new operating paradigms. The higher uncertainty in generation and demand makes operations much more dynamic than in the past. Novel operating approaches that consider these new dynamics are needed to operate the system close to its physical limits and fully utilise the existing grid assets; otherwise, expensive investments in redundant grid infrastructure become necessary. This thesis reviews the key role of digitalisation in the shift toward a decarbonised and decentralised power system. Algorithms based on advanced data analytics and machine learning are investigated to operate the system assets at full capacity while continuously assessing and controlling security. The impact of topological changes on the performance of these data-driven approaches is studied, and algorithms to mitigate this impact are proposed. The relevance of this study resides in the increasingly frequent topological changes in modern power systems and in the need to improve the reliability of digitalised approaches against such changes, reducing the risks of relying on them. A novel physics-informed approach to select the variables (or features) most relevant to the dynamic security of the system is first proposed and then used in two different three-stage workflows. In the first workflow, the proposed feature selection approach makes it possible to train classification models from machine learning (or classifiers) close to real-time operation, improving their accuracy and robustness against uncertainty. In the second workflow, the selected features are used to define a new metric that detects high-impact topological changes and to train new classifiers in response to such changes. Subsequently, the potential of corrective control for a dynamically secure operation is investigated.
By using a neural network to learn safety certificates for the post-fault system, corrective control is combined with preventive control strategies to maintain system security while reducing operational costs and carbon emissions. Finally, exemplary changes in the assumptions behind data-driven dynamic security assessment when moving from high-inertia to low-inertia systems are questioned, confirming that machine-learning-based models will make significantly more sense in future systems. Future research directions in terms of data generation and model reliability of advanced digitalised approaches for dynamic security assessment and control are finally indicated.
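The abstract does not spell out the physics-informed selection rule. A generic stand-in for ranking candidate features for a security classifier is absolute correlation with the security label; the sketch below uses that assumption, and the operating-point features and data are hypothetical:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_features(samples, labels, names):
    """Rank features by |correlation| with the binary security label."""
    scores = []
    for j, name in enumerate(names):
        col = [s[j] for s in samples]
        scores.append((abs(pearson(col, labels)), name))
    return [name for _, name in sorted(scores, reverse=True)]

# Hypothetical operating points: (loading_margin, inertia, line_flow)
samples = [(0.9, 5.0, 0.4), (0.2, 2.0, 0.9), (0.8, 4.5, 0.5), (0.3, 2.5, 0.8)]
labels = [1, 0, 1, 0]  # 1 = dynamically secure
print(rank_features(samples, labels, ["loading_margin", "inertia", "line_flow"]))
```

A physics-informed approach would replace the bare correlation score with rankings grounded in system dynamics, but the ranking-and-selection mechanics stay the same.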

    Exploring the challenges and opportunities of image processing and sensor fusion in autonomous vehicles: A comprehensive review

    Autonomous vehicles are at the forefront of future transportation solutions, but their success hinges on reliable perception. This review surveys the image processing and sensor fusion techniques vital for vehicle safety and efficiency, focusing on object detection, recognition, tracking, and scene comprehension via computer vision and machine learning methodologies. The paper also explores challenges within the field, such as robustness in adverse weather conditions, the demand for real-time processing, and the integration of complex sensor data, and examines localization techniques specific to autonomous vehicles. The results show that, while substantial progress has been made in each subfield, persistent limitations remain: a shortage of comprehensive large-scale testing, the absence of diverse and robust datasets, and occasional inaccuracies in certain studies. These issues impede the seamless deployment of this technology in real-world scenarios. This comprehensive literature review contributes to a deeper understanding of the current state and future directions of image processing and sensor fusion in autonomous vehicles, aiding researchers and practitioners in advancing the development of reliable autonomous driving systems.
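At its simplest, much of the surveyed fusion work reduces to inverse-variance (Kalman-style) weighting of independent sensor estimates. A minimal sketch follows; the lidar/camera range numbers are purely illustrative:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance fusion of two independent estimates of the same quantity.
    The more certain sensor gets the larger weight; the fused variance shrinks
    below both input variances."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Illustrative: lidar range 10.2 m (var 0.04), camera-derived range 10.8 m (var 0.16)
dist, var = fuse(10.2, 0.04, 10.8, 0.16)
print(round(dist, 3), round(var, 3))  # 10.32 0.032
```

The fused estimate lands closer to the lower-variance lidar reading, and the fused variance (0.032) is smaller than either sensor's alone, which is the core benefit fusion brings to perception.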

    3D Object Recognition Based On Constrained 2D Views

    The aim of the present work was to build a novel 3D object recognition system capable of classifying man-made and natural objects based on single 2D views. The approach to this problem was motivated by recent theories on biological vision and by multiresolution analysis. The project's objective was the implementation of a system able to deal with simple 3D scenes, constituting an engineering solution to the problem of 3D object recognition that operates in a practically acceptable time frame. The developed system takes further the work on automatic classification of marine phytoplanktons carried out at the Centre for Intelligent Systems, University of Plymouth. The thesis discusses the main theoretical issues that prompted the fundamental system design options. The principles and the implementation of the coarse data channels used in the system are described. A new multiresolution representation of 2D views is presented, which provides the classifier module of the system with coarse-coded descriptions of the scale-space distribution of potentially interesting features. A multiresolution analysis-based mechanism is proposed, which directs the system's attention towards potentially salient features. Unsupervised similarity-based feature grouping is introduced; it is used in the coarse data channels to yield feature signatures that are not spatially coherent and provide the classifier module with salient descriptions of object views. A simple texture descriptor, based on properties of a special wavelet transform, is also described. The system has been tested on computer-generated and natural image data sets, in conditions where inter-object similarity was monitored and quantitatively assessed by human subjects, or where the analysed objects were very similar and their discrimination constituted a difficult task even for human experts. The validity of the above approaches has been proven.
Studies conducted with various statistical and artificial neural network-based classifiers have shown that the system performs well in all of the above situations. These investigations also made it possible to extend and generalise a number of important conclusions drawn during previous work in the field of 2D shape (plankton) recognition, regarding the behaviour of pattern recognition systems based on multiple coarse data channels and of various classifier architectures. The system can deal with difficult field-collected images of objects, and the techniques employed by its component modules make its extension to the domain of complex multiple-object 3D scene recognition possible. The system is expected to find immediate applicability in the field of marine biota classification.
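The multiresolution representation builds on wavelet analysis. The one-level Haar transform below, the simplest wavelet and not necessarily the "special" transform the thesis uses, illustrates the coarse-approximation/detail split on an illustrative signal:

```python
import math

def haar_step(signal):
    """One level of the Haar wavelet transform: scaled pairwise sums give the
    coarse approximation; scaled pairwise differences give the detail
    coefficients (large where the signal changes sharply)."""
    assert len(signal) % 2 == 0
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_step(sig)
print([round(a, 3) for a in approx])  # half-length coarse view of the signal
print([round(d, 3) for d in detail])  # zero where adjacent samples are equal
```

Applying `haar_step` recursively to the approximation yields the scale-space pyramid that multiresolution descriptors are built from.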

    Machine Learning Methods to Exploit the Predictive Power of Open, High, Low, Close (OHLC) Data

    Novel machine learning techniques are developed for the prediction of financial markets, with a combination of supervised, unsupervised, and Bayesian-optimisation machine learning methods shown to give a predictive power rarely previously observed. A new data mining technique named Deep Candlestick Mining (DCM) is proposed that discovers highly predictive, dataset-specific candlestick patterns (arrangements of open, high, low, close (OHLC) aggregated price data structures) which significantly outperform traditional candlestick patterns. The power that OHLC features can provide is further investigated, using LSTM RNNs and XGBoost trees, in the prediction of a mid-price directional change, the mid-price being defined here as the mid-point between either the open and close or the high and low of an OHLC bar. This target variable has been overlooked in the literature, which is surprising given the relative ease of predicting it, significantly in excess of noisier financial quantities. However, the true value of this quantity is only known upon the period's ending, i.e. it is an after-the-fact observation. To make use of and enhance the remarkable predictability of the mid-price directional change, multi-period predictions are investigated by training many LSTM RNNs (with XGBoost trees used to identify powerful OHLC input feature combinations) over different time horizons to construct a Bayesian-optimised trend prediction ensemble. This fusion of long-, medium-, and short-term information results in a model capable of predicting market trend direction more than 70% better than random. A trading strategy is constructed to demonstrate how this predictive power can be used, exploiting an artefact of the LSTM RNN training process which allows the trading system to size and place trades in accordance with the ensemble's predictive certainty.
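The target variable can be sketched directly from its definition: the mid-price of a bar is the mid-point of either open/close or high/low, and the directional change compares consecutive bars. The OHLC bars below are illustrative:

```python
def mid_price(bar, mode="oc"):
    """Mid-price of an OHLC bar: mid-point of open/close ('oc') or high/low ('hl')."""
    o, h, l, c = bar
    return (o + c) / 2.0 if mode == "oc" else (h + l) / 2.0

def directional_changes(bars, mode="oc"):
    """Label each consecutive pair of bars: +1 if the mid-price rose, -1 if it
    fell, 0 if unchanged. Known only once each bar closes (after the fact)."""
    mids = [mid_price(b, mode) for b in bars]
    return [(m2 > m1) - (m2 < m1) for m1, m2 in zip(mids, mids[1:])]

# Illustrative (open, high, low, close) bars
bars = [(100, 104, 99, 102), (102, 105, 101, 104), (104, 104, 100, 101)]
print(directional_changes(bars, "oc"))  # mids 101.0, 103.0, 102.5 -> [1, -1]
```

Averaging two prices per bar smooths out much of the tick-level noise, which is the intuition behind the thesis's observation that this target is markedly easier to predict than raw price moves.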

    Microgrid Formation-based Service Restoration Using Deep Reinforcement Learning and Optimal Switch Placement in Distribution Networks

    A power distribution network that demonstrates resilience has the ability to minimize the duration and severity of power outages, ensure uninterrupted service delivery, and enhance overall reliability. Resilience in this context refers to the network's capacity to withstand and quickly recover from disruptive events, such as equipment failures, natural disasters, or cyber attacks. By effectively mitigating the effects of such incidents, a resilient power distribution network can contribute to enhanced operational performance, customer satisfaction, and economic productivity. The implementation of microgrids as a response to power outages constitutes a viable approach for enhancing the resilience of the system. In this work, a novel method for service restoration based on dynamic microgrid formation and deep reinforcement learning is proposed. To this end, microgrid formation-based service restoration is formulated as a Markov decision process. Then, by utilizing the node cell and route model concept, every distributed generation unit equipped with the black-start capability traverses the power system, thereby restoring power to the lines and nodes it visits. The deep Q-network is employed to achieve optimal policy control, guiding agents in the selection of node cells that result in maximum load pick-up while adhering to operational constraints. Next, a solution is proposed for the switch placement problem in distribution networks, which substantially improves service restoration. Accordingly, an effective algorithm, utilizing binary particle swarm optimization, is employed to optimize the placement of switches in distribution networks. The input data necessary for the proposed algorithm comprises information related to the power system topology and load point data. The fitness of the solution is assessed by minimizing the unsupplied loads and the number of switches placed in distribution networks.
The proposed methods are validated using a large-scale unbalanced distribution system of 404 nodes operated by Saskatoon Light and Power, a local utility in Saskatoon, Canada. A balanced IEEE 33-node test system is also used for validation.
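The deep Q-network generalizes the tabular Q-learning update. A toy sketch of that update on a hypothetical restoration-style state space follows; the node and cell names are invented for illustration and do not come from the thesis:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    A deep Q-network replaces the table Q with a neural network approximator."""
    best_next = max(Q[next_state].values()) if Q.get(next_state) else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy: the agent at a node chooses which adjacent node cell to energize next.
Q = {"node_A": {"pick_cell_1": 0.0, "pick_cell_2": 0.0},
     "node_B": {"pick_cell_3": 2.0}}
# Energizing cell 1 restores 5 units of load and moves the agent to node_B.
q_update(Q, "node_A", "pick_cell_1", reward=5.0, next_state="node_B")
print(round(Q["node_A"]["pick_cell_1"], 3))  # 0.1 * (5.0 + 0.9 * 2.0) = 0.68
```

Rewards proportional to restored load steer the learned policy toward maximum load pick-up, mirroring the objective described above.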

    Data mining using intelligent systems : an optimized weighted fuzzy decision tree approach

    Data mining aims to analyze observational datasets to find relationships and to present the data in ways that are both understandable and useful. In this thesis, existing intelligent-systems techniques such as the Self-Organizing Map, Fuzzy C-Means, and decision trees are used to analyze several datasets, providing flexible information processing capability for handling real-life situations. The thesis is concerned with the design, implementation, testing, and application of these techniques to those datasets. It also introduces a hybrid intelligent-systems technique, the Optimized Weighted Fuzzy Decision Tree (OWFDT), with the aim of improving Fuzzy Decision Trees (FDT) and solving practical problems. The thesis first proposes an optimized weighted fuzzy decision tree, incorporating Fuzzy C-Means to fuzzify the input instances while keeping the expected labels crisp. This leads to a different output-layer activation function and weight connection in the neural network (NN) structure obtained by mapping the FDT to the NN. A momentum term is also introduced into the learning process that trains the weight connections, to avoid oscillation or divergence. A new reasoning mechanism is proposed to combine the constructed tree with the weights optimized during learning. The thesis also compares the OWFDT with two benchmark algorithms, Fuzzy ID3 and the weighted FDT. Six datasets ranging from material science to medical and civil engineering were introduced as case-study applications. These datasets involve classification of composite-material failure mechanisms, classification of electrocorticography (ECoG)/electroencephalogram (EEG) signals, eye bacteria prediction, and wave overtopping prediction.
Different intelligent-systems techniques were used to cluster the patterns and predict the classes, although OWFDT was used to design classifiers for all the datasets. For the material dataset, the Self-Organizing Map and Fuzzy C-Means were used to cluster the acoustic event signals and classify those events into different failure mechanisms; after this clustering, OWFDT was introduced to design a classifier for the acoustic event signals. For the eye bacteria dataset, the bagging technique was used to improve the classification accuracy of Multilayer Perceptrons and Decision Trees. Applying bootstrap aggregating (bagging) to Decision Trees also helped to select the most important sensors (features) so that the dimension of the data could be reduced. The most important features were used to grow the OWFDT, solving the curse-of-dimensionality problem. The last dataset, concerned with wave overtopping, was used to benchmark OWFDT against other intelligent-systems techniques, such as the Adaptive Neuro-Fuzzy Inference System (ANFIS), the Evolving Fuzzy Neural Network (EFuNN), the Genetic Neural Mathematical Method (GNMM), and Fuzzy ARTMAP. Analyzing these datasets with these techniques has shown that patterns can be found and classes classified by combining the techniques. OWFDT has also demonstrated its efficiency and effectiveness compared with a conventional fuzzy Decision Tree and a weighted fuzzy Decision Tree.
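Bagging, as used on the eye bacteria dataset, trains each base model on a bootstrap resample and aggregates by majority vote. A minimal sketch follows, with one-feature threshold stumps standing in for the thesis's Multilayer Perceptrons and Decision Trees; the data are illustrative:

```python
import random
from collections import Counter

def bootstrap(data, rng):
    """Sample len(data) points with replacement: a bootstrap resample."""
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    """Fit a threshold 'stump' at the midpoint of the per-class feature means."""
    mean0 = sum(x for x, y in sample if y == 0) / max(1, sum(1 for _, y in sample if y == 0))
    mean1 = sum(x for x, y in sample if y == 1) / max(1, sum(1 for _, y in sample if y == 1))
    thresh = (mean0 + mean1) / 2.0
    return lambda x: 1 if x > thresh else 0

def bagged_predict(stumps, x):
    """Aggregate the base learners' votes by majority."""
    return Counter(s(x) for s in stumps).most_common(1)[0][0]

rng = random.Random(42)  # fixed seed so the sketch is reproducible
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.95, 1)]
stumps = [train_stump(bootstrap(data, rng)) for _ in range(25)]
print(bagged_predict(stumps, 0.2), bagged_predict(stumps, 0.9))
```

Because each resample sees a slightly different view of the data, the aggregated vote is more stable than any single base learner, which is the accuracy gain the abstract reports.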

    Security in Distributed, Grid, Mobile, and Pervasive Computing

    This book addresses the increasing demand to guarantee privacy, integrity, and availability of resources in networks and distributed systems. It first reviews security issues and challenges in content distribution networks, describes key agreement protocols based on the Diffie-Hellman key exchange and key management protocols for complex distributed systems like the Internet, and discusses securing design patterns for distributed systems. The next section focuses on security in mobile computing and wireless networks. After a section on grid computing security, the book presents an overview of security solutions for pervasive healthcare systems and surveys wireless sensor network security.
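The Diffie-Hellman exchange underlying the book's key-agreement protocols fits in a few lines. A toy sketch with a small prime follows; real deployments use standardized 2048-bit-plus groups or elliptic curves, and the secret exponents here are fixed only for reproducibility:

```python
p, g = 23, 5          # public parameters: prime modulus and generator
a, b = 6, 15          # Alice's and Bob's secret exponents (normally random)

A = pow(g, a, p)      # Alice sends Bob g^a mod p
B = pow(g, b, p)      # Bob sends Alice g^b mod p

shared_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
shared_bob   = pow(A, b, p)   # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob  # both arrive at g^(ab) mod p
print(shared_alice)   # 2
```

Security rests on the discrete logarithm problem: an eavesdropper sees p, g, A, and B but cannot feasibly recover a or b for properly sized groups.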

    Resilience-Based Asset Management Framework for Pavement Maintenance and Rehabilitation

    Infrastructure systems play a pivotal role in developing the economy and public services, which positively affects communities' quality of life. It is therefore of paramount importance to investigate the current infrastructure capacity, assess its capability to sustain anticipated disruptions, and plan the necessary recovery strategies to reduce their detrimental significance and increase resilience. The growing decline in road condition has recently drawn the attention of numerous researchers and practitioners to road resiliency over the life-cycle. According to the Canada Infrastructure Report (2016), 62.6% of roads in Canada are in good condition; nevertheless, at current investment rates, significant road networks will suffer a decline in condition and will be vulnerable to sudden failure (FCM 2016). The situation in the U.S. is worse, where roads are in poor condition, classified as grade D, with insufficient investment to maintain road networks (ASCE, 2017). Accordingly, this research tackles pavement resilience from an asset management perspective: it highlights the fact that infrastructure should maintain its resiliency during its life-cycle so as to maintain a minimum acceptable Level of Service (LOS). The main objective of this research is to develop a resilience-based asset management framework for pavement maintenance and rehabilitation (M&R). The proposed methodology involves a set of sequential steps: 1) define infrastructure resilience, 2) investigate resilience-related indicators in the same dimension as the resilience definition, 3) develop a resilience-based asset management model for M&R decisions, 4) optimize the attained M&R plan for short- and long-term decisions, and 5) formulate a resilience index.
First, resilience is defined based on a comprehensive review of the previous literature, targeting an integrated definition that combines both asset management and resilience concepts. Then, resilience-associated indicators are investigated based on the predefined resilience definition, and the different indicators are later classified and modeled for a pavement network. The resilience-based asset management model is carried out through the development of five components: 1) a central database of asset inventory that serves as input for the proposed model; 2) pavement condition and level of service (LOS) assessment models that encompass the effects of climatic conditions on pavement surface and structural conditions and on LOS; 3) regression modeling of the effect of freeze-thaw on pavement and investigation of the effect of flooding on both pavement surface and structural conditions; 4) financial and temporal models for recovery/intervention actions, formulated through computational models that account for intervention costs and time and link them to the optimization model; and 5) an optimization model that formulates the mathematical problem for the proposed resilience assessment approach and integrates the aforementioned components. The optimization model employs a single objective that relies on a combination of meta-heuristic rules; genetic algorithms are utilized to formulate the mathematical denotation of the proposed resilience definition. Principal Component Analysis (PCA) is used as a novel method to establish the resilience indicators' weights and compute the resilience index. A PCA framework is developed based on the optimization model's output to generate the required weights for the desired resilience index. This model offers dynamic resilience indicator weights and, therefore, a dynamic resilience index.
Resiliency is a dynamic feature of infrastructure systems: it varies over their life-cycle with changes in maintenance and rehabilitation plans, system retrofits, and the disruptive events that occur throughout the life-cycle. The proposed model serves as an initial step toward more resilient municipal infrastructure. It emphasizes that recovery plans should follow proactive measures that adapt to sudden or unforeseen events, rather than adopting a purely reactive approach that deals with sudden events only after their occurrence. This pavement resilience assessment framework is also beneficial for asset management experts: M&R plans would not only target enhancing or restoring pavement condition or LOS but also incorporate proper recovery strategies for both regular and extreme events while taking natural deterioration and aging effects into account. Two case studies were undertaken to demonstrate the effectiveness of the proposed methodology.
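One way PCA-based indicator weighting can work is to extract the first principal component of the indicator covariance matrix (here via power iteration) and normalize its loadings into weights. The indicators and per-segment data below are hypothetical, not the thesis's actual inputs:

```python
import math

def first_pc(data, iters=200):
    """First principal component of column-centred data via power iteration
    on the sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Hypothetical pavement indicators per segment: (condition, redundancy, recovery_speed)
data = [[0.9, 0.7, 0.8], [0.4, 0.3, 0.5], [0.8, 0.6, 0.7], [0.3, 0.2, 0.4]]
pc = first_pc(data)
weights = [abs(x) / sum(abs(y) for y in pc) for x in pc]   # normalize loadings to sum to 1
index = sum(w * x for w, x in zip(weights, data[0]))       # resilience index of segment 0
print([round(w, 3) for w in weights], round(index, 3))
```

Because the weights are recomputed whenever the underlying data change, the resulting index is dynamic, matching the abstract's description of dynamic indicator weights.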