20 research outputs found

    Line overload alleviations in wind energy integrated power systems using automatic generation control

    Get PDF
    Modern power systems increasingly rely on renewable energy sources, especially wind power. However, wind power, due to its intermittent nature and associated forecasting errors, requires additional balancing power provided through the automatic generation control (AGC) system. In normal operation, AGC dispatch is based on fixed participation factors that account only for the economic operation of generating units. However, a large-scale injection of additional reserves causes large fluctuations in line power flows, which may overload lines and reduce system security if AGC follows the fixed-participation-factor criterion. Therefore, to prevent transmission line overloading, a dynamic dispatch strategy is required for the AGC system that considers the capacities of the transmission lines along with the economic operation of generating units. This paper proposes a real-time dynamic AGC dispatch strategy that protects transmission lines from overloading during the power dispatch process in active power balancing operation. The proposed method optimizes the AGC dispatch order to prevent power overflows in the transmission lines by considering how the output change of each generating unit affects the power flow in the associated bus system. Simulations are performed in DIgSILENT software on a 5-machine, 8-bus model of Pakistan's power system integrating thermal power plant units, gas turbines, and wind power plants. Results show that the proposed AGC design efficiently avoids transmission line congestion in highly wind-integrated power systems while preserving the economic operation of generating units.
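
    The core idea of capacity-aware AGC dispatch can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: the PTDF-based flow model, the redistribution rule, and all names are assumptions.

```python
import numpy as np

def dispatch_agc(delta_p, pf, ptdf, base_flow, limit, max_iter=20):
    """Split a regulation demand delta_p among units using participation
    factors pf, shifting share away from the unit whose injection most
    loads an overloaded line (hypothetical simplified model)."""
    pf = np.asarray(pf, float).copy()
    for _ in range(max_iter):
        dp = pf * delta_p                      # per-unit dispatch order
        flow = base_flow + ptdf @ dp           # predicted line flows
        over = np.abs(flow) > limit
        if not over.any():
            return dp
        # unit contributing most to the worst overloaded line
        worst = np.argmax(np.abs(flow) - limit)
        culprit = np.argmax(ptdf[worst] * np.sign(flow[worst]) * pf)
        # move half of its participation to the remaining units
        shed = 0.5 * pf[culprit]
        pf[culprit] -= shed
        others = np.arange(len(pf)) != culprit
        pf[others] += shed * pf[others] / pf[others].sum()
    return pf * delta_p
```

    The total dispatched power stays equal to the demand; only its split among units changes, mirroring the paper's goal of congestion-free balancing.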

    A Hybrid Deep Learning-Based Model for Detection of Electricity Losses Using Big Data in Power Systems

    No full text
    Electricity theft harms smart grids and results in huge revenue losses for electric companies. Deep learning (DL), machine learning (ML), and statistical methods have been used in recent research to detect anomalies and illegal patterns in electricity consumption (EC) data collected by smart meters. In this paper, we propose a hybrid DL model for detecting theft activity in EC data. The model combines a gated recurrent unit (GRU) and a convolutional neural network (CNN) to distinguish between legitimate and malicious EC patterns: GRU layers extract temporal patterns, while the CNN retrieves abstract or latent patterns from the EC data. Moreover, class imbalance negatively affects the performance of ML and DL models, so the adaptive synthetic (ADASYN) oversampling method and Tomek links are used to rebalance the data classes. The performance of the hybrid model is evaluated using a real-time EC dataset from the State Grid Corporation of China (SGCC). The proposed algorithm is computationally expensive but provides higher accuracy than the algorithms used for comparison; with more computational resources available nowadays, researchers increasingly favor algorithms that trade cost for effectiveness on large-scale data. Performance metrics such as F1-score, precision, recall, accuracy, and false positive rate are used to investigate the effectiveness of the hybrid DL model. The proposed model outperforms its counterparts with a 0.985 Precision–Recall Area Under Curve (PR-AUC) and a 0.987 Receiver Operating Characteristic Area Under Curve (ROC-AUC) on the EC data.
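
    The Tomek-links cleaning step used above can be sketched in plain numpy. This is a generic illustration of the technique (a Tomek link is a mutual-nearest-neighbour pair with opposite labels), not the paper's implementation, which pairs it with ADASYN oversampling.

```python
import numpy as np

def tomek_links(X, y):
    """Return indices of majority-class samples that form Tomek links;
    removing them cleans the class boundary after oversampling."""
    X, y = np.asarray(X, float), np.asarray(y)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                # nearest neighbour of each sample
    maj = np.bincount(y).argmax()        # majority class label
    drop = set()
    for i, j in enumerate(nn):
        if nn[j] == i and y[i] != y[j]:  # mutual NN, opposite classes
            if y[i] == maj:
                drop.add(i)
            if y[j] == maj:
                drop.add(int(j))
    return sorted(drop)
```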

    Modified Red Fox Optimizer With Deep Learning Enabled False Data Injection Attack Detection

    No full text
    Power systems have recently developed rapidly and shifted towards cyber-physical power systems (CPPS). CPPS involve numerous sensor devices that generate enormous quantities of data. The data gathered from each sensing component must be reliable, yet it is highly prone to attacks. Among the various kinds of attacks, the false data injection attack (FDIA) can seriously affect the energy efficiency of CPPS. Current data-driven approaches for FDIA detection often depend on specific environmental conditions and assumptions, making them unrealistic and ineffective. In this paper, we present a modified Red Fox Optimizer with Deep Learning enabled FDIA detection (MRFODL-FDIAD) in the CPPS environment. The presented MRFODL-FDIAD technique mainly detects and classifies FDIAs in the CPPS environment. It encompasses a three-stage process, namely pre-processing, detection, and parameter tuning. For FDIA detection, the MRFODL-FDIAD technique uses a multihead attention-based long short-term memory (MBALSTM) model. To improve the detection performance of the MBALSTM model, the MRFO technique is exploited in this study. The experimental evaluation of the MRFODL-FDIAD approach was performed on a standard IEEE bus system. An extensive set of experiments highlighted the superiority of the MRFODL-FDIAD technique.
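
    For background, the classical baseline that learned detectors like this improve on is the state-estimation residual test, which a structured FDIA of the form a = Hc can bypass because it lies in the column space of H. A small numpy sketch with toy matrices (not the paper's IEEE bus system):

```python
import numpy as np

def residual_norm(H, z):
    """Least-squares state estimate followed by the bad-data residual test."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 2))            # measurement matrix (toy system)
x = np.array([1.0, -2.0])              # true state
z = H @ x                              # clean measurements (noise-free for clarity)

random_attack = z + rng.normal(size=6)          # raises the residual: detectable
stealth_attack = z + H @ np.array([0.5, 0.5])   # a = H c lies in col(H): invisible
```

    The stealthy attack leaves the residual at zero, which is why detectors must look at the data itself rather than the residual alone.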

    A Classy Multifacet Clustering and Fused Optimization Based Classification Methodologies for SCADA Security

    No full text
    Detecting intrusions in supervisory control and data acquisition (SCADA) systems is one of the most essential and challenging tasks in recent times. Most conventional works aim to develop an efficient intrusion detection system (IDS) framework to increase the security of SCADA against network attacks. Nonetheless, they face the problems of classification complexity, long training and testing times, and increased misprediction and error rates. Hence, this research work develops a novel IDS framework by combining clustering, optimization, and classification methodologies. The most popular and extensively used SCADA attack datasets are taken for the implementation and validation of the proposed IDS framework. The main contribution of this work is to accurately detect intrusions in the given SCADA datasets with fewer computational operations and increased classification accuracy. Additionally, the proposed work aims to develop a simple and efficient classification technique for improving the security of SCADA systems. Initially, dataset preprocessing and clustering are performed using the multifacet data clustering model (MDCM) to simplify the classification process. Then, the hybrid gradient descent spider monkey optimization (GDSMO) mechanism is implemented to select the optimal parameters from the clustered datasets based on the global best solution. The main purpose of the optimization methodology is to train the classifier with optimized features to increase accuracy and reduce processing time. Moreover, deep sequential long short-term memory (DS-LSTM) is employed to identify intrusions in the clustered datasets with efficient model training. Finally, the performance of the proposed optimization-based classification methodology is validated and compared using various evaluation metrics.
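
    The cluster-before-classify stage can be illustrated with plain k-means. The paper's MDCM is not specified in this abstract, so this is a generic stand-in showing how traffic records are partitioned before per-cluster classification.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means as a stand-in for the clustering stage: partition
    records so each cluster can be handled by the classifier separately."""
    X = np.asarray(X, float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.linalg.norm(X[:, None] - centers[None], axis=-1).argmin(axis=1)
        # recompute centers; keep old center if a cluster goes empty
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```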

    Optimal synergic deep learning for COVID-19 classification using chest x-ray images

    Get PDF
    A chest radiology scan can significantly aid the early diagnosis and management of COVID-19 since the virus attacks the lungs. Chest X-ray (CXR) imaging gained much interest after the COVID-19 outbreak thanks to its rapid imaging time, widespread availability, low cost, and portability. In radiological investigations, computer-aided diagnostic tools are implemented to reduce intra- and inter-observer variability. Using recently developed Artificial Intelligence (AI) algorithms together with radiological techniques to diagnose and classify disease is advantageous. The current study develops an automatic identification and classification model for CXR images using Gaussian Filtering based Optimized Synergic Deep Learning with the Remora Optimization Algorithm (GF-OSDL-ROA). The method comprises preprocessing and optimization-based classification. The data are preprocessed using Gaussian filtering (GF) to remove extraneous noise from the images. Then, the OSDL model is applied to classify the CXRs into different severity levels based on the CXR data. The learning rate of OSDL is optimized with the help of ROA for COVID-19 diagnosis, which constitutes the novelty of the work. The OSDL model applied in this study was validated using a COVID-19 dataset. In the experiments, the proposed OSDL model achieved a classification accuracy of 99.83%, while a conventional convolutional neural network achieved a lower accuracy of 98.14%.
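
    The Gaussian-filtering preprocessing step is standard and can be sketched with a separable kernel. The sigma and kernel radius below are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalised to sum to 1."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_filter(img, sigma=1.0):
    """Separable Gaussian smoothing: filter rows, then columns.
    A stand-in for the GF preprocessing step described above."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    # reflect-pad so the output keeps the input shape
    img = np.pad(np.asarray(img, float), pad, mode="reflect")
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, img)
    return img
```

    Because the Gaussian is separable, two 1-D passes give the same result as one 2-D convolution at a fraction of the cost.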

    Wavelet Mutation with Aquila Optimization-Based Routing Protocol for Energy-Aware Wireless Communication

    No full text
    Wireless sensor networks (WSNs) have recently been developed to support several applications, including environmental monitoring, traffic control, smart battlefields, home automation, etc. WSNs include numerous sensors that can be dispersed over an area of interest to carry out sensing and computation. In WSNs, routing is a very significant task that should be managed prudently. The main purpose of a routing algorithm is to send data between sensor nodes (SNs) and base stations (BSs) to accomplish communication. A good routing protocol should be adaptive and scalable to variations in network topology; a scalable protocol must perform well when the workload increases or the network grows larger. Routing involves many complexities, including security, energy consumption, scalability, connectivity, node deployment, and coverage. This article introduces a wavelet mutation with Aquila optimization-based energy-aware routing (WMAO-EAR) protocol for wireless communication. The presented WMAO-EAR technique aims to accomplish an energy-aware routing process in WSNs. To do this, the WMAO-EAR technique first derives the WMAO algorithm by integrating wavelet mutation with the Aquila optimization (AO) algorithm. A fitness function is derived using distinct constraints, such as delay, energy, distance, and security. By setting a mutation probability P, every individual undergoes mutation with that probability after the exploitation and exploration phases, using the wavelet mutation process. To demonstrate the enhanced performance of the WMAO-EAR technique, a comprehensive simulation analysis is carried out. The experimental outcomes demonstrate the superiority of the WMAO-EAR method over other recent approaches.
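
    Wavelet mutation of the kind described, where the wavelet's dilation grows with the generation count so mutations shrink from coarse search to fine-tuning, can be sketched as follows. The Morlet wavelet, the dilation schedule, and the constants are assumptions in the style of published wavelet-mutation operators, not the paper's exact settings.

```python
import numpy as np

def morlet(phi, a):
    """Morlet wavelet value used as the mutation scale."""
    return (1 / np.sqrt(a)) * np.exp(-(phi / a) ** 2 / 2) * np.cos(5 * phi / a)

def wavelet_mutation(x, lb, ub, t, t_max, p_mut=0.3, rng=None):
    """Mutate a candidate element-wise with probability p_mut. The
    dilation a grows with generation t, so mutation magnitude decays
    over the run (illustrative schedule)."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, float).copy()
    a = np.exp(10.0 * (t / t_max))            # dilation grows with progress
    for i in range(len(x)):
        if rng.random() < p_mut:
            phi = rng.uniform(-2.5 * a, 2.5 * a)
            s = morlet(phi, a)
            # positive s pushes towards the upper bound, negative towards lower
            x[i] += s * (ub[i] - x[i]) if s > 0 else s * (x[i] - lb[i])
    return np.clip(x, lb, ub)
```

    In a full WMAO-style routing optimizer, each mutated candidate would then be scored by the fitness function over delay, energy, distance, and security.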

    Modeling of Botnet Detection Using Barnacles Mating Optimizer with Machine Learning Model for Internet of Things Environment

    No full text
    Owing to the development and expansion of energy-aware sensing devices and autonomous and intelligent systems, the Internet of Things (IoT) has grown remarkably and found uses in several day-to-day applications. However, IoT devices are highly prone to botnet attacks. To mitigate this threat, a lightweight, anomaly-based detection mechanism that can create profiles for malicious and normal actions on IoT networks is needed; additionally, the massive volume of data generated by IoT devices can be analyzed by machine learning (ML) methods. Recently, several deep learning (DL)-related mechanisms have been modeled to detect attacks on the IoT. This article designs a botnet detection model using the barnacles mating optimizer with machine learning (BND-BMOML) for the IoT environment. The presented BND-BMOML model focuses on the identification and recognition of botnets in the IoT environment. To accomplish this, the BND-BMOML model initially follows a data standardization approach. The BMO algorithm is then employed to select a useful set of features. For botnet detection, the BND-BMOML model employs an Elman neural network (ENN). Finally, the presented BND-BMOML model uses a chicken swarm optimization (CSO) algorithm for parameter tuning, demonstrating the novelty of the work. The BND-BMOML method was experimentally validated on a benchmark dataset, and the outcomes indicated significant performance improvements over existing methods.
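
    The Elman network at the core of the classifier, a simple recurrent network whose previous hidden state is fed back through context units alongside the current input, can be sketched in numpy. Layer sizes and initialization here are illustrative, not the paper's.

```python
import numpy as np

class ElmanNN:
    """Minimal Elman recurrent network: hidden state at step t-1 is fed
    back (context units) together with the input at step t."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0, 0.5, (n_hidden, n_in))
        self.Wh = rng.normal(0, 0.5, (n_hidden, n_hidden))  # context weights
        self.Wo = rng.normal(0, 0.5, (n_out, n_hidden))

    def forward(self, seq):
        h = np.zeros(self.Wh.shape[0])
        for x in seq:                       # one step per feature vector
            h = np.tanh(self.Wx @ x + self.Wh @ h)
        logits = self.Wo @ h
        e = np.exp(logits - logits.max())
        return e / e.sum()                  # softmax class probabilities
```

    In the full pipeline, the BMO-selected features would form `seq` and the CSO step would tune these weight matrices.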

    Road Damage Detection Using the Hunger Games Search with Elman Neural Network on High-Resolution Remote Sensing Images

    No full text
    Roads are significant traffic lifelines that can be damaged by collapsed tree branches, landslide rubble, and building debris. Thus, road damage detection and evaluation using high-resolution remote sensing images (RSIs) are highly important for maintaining routes in optimal condition and executing rescue operations. Detecting damaged road areas in high-resolution aerial images enables faster and more effective disaster management and decision making. Several techniques are available for predicting and detecting road damage caused by earthquakes. Recently, computer vision (CV) techniques have emerged as an optimal solution for automated road damage inspection. This article presents a new road damage detection modality using the Hunger Games Search with Elman Neural Network (RDD–HGSENN) on high-resolution RSIs. The presented RDD–HGSENN technique mainly aims to determine road damage using RSIs. In the RDD–HGSENN technique, the RetinaNet model is applied for damage detection on roads, and road damage classification is performed with the ENN model. To tune the ENN parameters automatically, the HGS algorithm is exploited in this work. To examine the outcomes of the presented RDD–HGSENN technique, a comprehensive set of simulations was conducted. The experimental outcomes demonstrated the improved performance of the RDD–HGSENN technique with respect to recent approaches across several measures.
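
    When matching predicted damage boxes from a detector such as RetinaNet against ground truth, the standard overlap measure is intersection-over-union. A minimal helper, generic evaluation code rather than anything from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    Used to decide whether a predicted damage box matches a ground-truth box."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter else 0.0
```

    A common convention counts a prediction as a true positive when its IoU with a ground-truth box exceeds 0.5.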