22 research outputs found

    Electroencephalogram Based Causality Graph Analysis in Behavior Tasks of Parkinson’s Disease Patients

    Electroencephalographic (EEG) signals of the human brain represent electrical activity recorded over a number of channels on the scalp. The main purpose of this thesis is to investigate the interactions and causality between different parts of the brain using EEG signals recorded while subjects performed verbal fluency tasks. Subjects who have Parkinson's Disease (PD) have difficulty with mental tasks such as switching between one behavioral task and another. The behavioral tasks include phonemic fluency, semantic fluency, category semantic fluency, and reading fluency. These tasks use verbal generation skills, activating Broca's area (Brodmann areas BA44 and BA45). Advanced signal processing techniques are used to determine the activated frequency bands in the Granger causality analysis of the verbal fluency tasks. A graph learning technique for channel strength is used to characterize the complex graph of Granger causality. In addition, the support vector machine (SVM) method is used to train a classifier between two subjects with PD and two healthy controls. Neural data for the study were recorded at the Colorado Neurological Institute (CNI). The study reveals significant differences between PD subjects and healthy controls in terms of brain connectivity in Broca's area (BA44 and BA45) at the corresponding EEG electrodes. The results of this thesis also demonstrate the possibility of classification based on the flow of information and causality in the brain during verbal fluency tasks. These methods have the potential to be applied in the future to identify pathological information flow and causality in neurological diseases.
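The pairwise test underlying a Granger-causality graph can be sketched in a few lines: channel y "Granger-causes" channel x if adding y's past samples to an autoregressive model of x reduces the residual error. The following is a minimal illustrative sketch on synthetic two-channel data, not the thesis's actual pipeline (which operates on multichannel EEG and feeds a graph learner and an SVM):

```python
import numpy as np

rng = np.random.default_rng(0)

def granger_strength(x, y, lag=2):
    """Log ratio of residual sums of squares: > 0 when y's past improves
    the autoregressive prediction of x (a raw Granger-causality score)."""
    n = len(x)
    target = x[lag:]
    own_past = [x[lag - k : n - k] for k in range(1, lag + 1)]
    other_past = [y[lag - k : n - k] for k in range(1, lag + 1)]
    def rss(cols):
        A = np.column_stack([np.ones(len(target))] + cols)
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        resid = target - A @ beta
        return float(resid @ resid)
    return float(np.log(rss(own_past) / rss(own_past + other_past)))

# Synthetic two-channel example where y drives x with a one-step delay.
n = 2000
e1, e2 = rng.normal(size=(2, n))
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + e1[t]
    x[t] = 0.3 * x[t - 1] + 0.8 * y[t - 1] + e2[t]
g_xy = granger_strength(x, y)   # y -> x: should be clearly positive
g_yx = granger_strength(y, x)   # x -> y: should be near zero
```

In a full analysis these pairwise scores become the weighted edges of the causality graph over all electrode pairs.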

    Distribution Level Building Load Prediction Using Deep Learning

    Load prediction in distribution grids is an important means to improve energy supply scheduling, reduce production cost, and support emission reduction. Determining accurate load predictions has become more crucial than ever, as electrical load patterns are becoming increasingly complicated due to the versatility of load profiles, the heterogeneity of individual load consumption, and the variability of consumer-owned energy resources. However, despite the growth of smart grid technologies and energy conservation research, many challenges remain for accurate load prediction using existing methods. This dissertation investigates how to improve the accuracy of load predictions at the distribution level using artificial intelligence (AI), and in particular deep learning (DL), which has already shown significant progress in various other disciplines. Existing research that applies DL to load prediction has shown improved performance compared to traditional models, but current research using conventional DL tends to be modeled based on the developer's knowledge. There is little evidence that researchers have yet addressed the issue of optimizing DL parameters using evolutionary computation to find more accurate predictions. There are also open questions about hybridizing different DL methods, applying parallel computation techniques, investigating them on complex smart buildings, and disaggregating net-metered load data into load and behind-the-meter generation associated with solar and electric vehicles (EV). The focus of this dissertation is to improve distribution-level load predictions using DL. Five approaches are investigated to find more accurate load predictions.
The first approach investigates the prediction performance of different DL methods applied to energy consumption in buildings using univariate time series datasets, where the numerical results show the effectiveness of recurrent neural networks (RNNs). The second approach optimizes the time-window lags and hidden neurons of an RNN variant, the long short-term memory (LSTM) network, using genetic algorithms to find more accurate energy consumption forecasts for buildings on univariate time series datasets. The third approach considers multivariate time series and operational parameters of practical data to train a hybrid DL model. The fourth approach investigates parallel computing and big data analysis of different practical buildings on the DU campus to improve energy forecasting accuracy. Lastly, a hybrid DL model is used to disaggregate residential building load and behind-the-meter energy loads, including solar and EV.
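The second approach's idea, searching the time-window lag with a genetic algorithm, can be illustrated with a toy sketch. A linear lag model stands in for the LSTM so the example stays self-contained; the population size, mutation range, and fitness function are illustrative choices, not the dissertation's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def forecast_error(series, window):
    """MSE of a linear model predicting the next value from the previous
    `window` values (a cheap stand-in for the LSTM in the dissertation)."""
    X = np.array([series[i : i + window] for i in range(len(series) - window)])
    y = series[window:]
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((y - A @ beta) ** 2))

def ga_best_window(series, pop=8, gens=10, max_w=24):
    """Tiny genetic algorithm over one integer gene: the window lag."""
    genes = rng.integers(1, max_w + 1, size=pop)
    for _ in range(gens):
        fitness = np.array([forecast_error(series, g) for g in genes])
        parents = genes[np.argsort(fitness)[: pop // 2]]             # selection
        children = parents + rng.integers(-2, 3, size=len(parents))  # mutation
        genes = np.clip(np.concatenate([parents, children]), 1, max_w)
    return int(min(genes, key=lambda g: forecast_error(series, g)))

# Hourly-like toy series with a 12-step cycle plus noise.
series = np.sin(2 * np.pi * np.arange(300) / 12) + 0.1 * rng.normal(size=300)
best = ga_best_window(series)
```

The same loop generalizes to multiple genes (window lag plus hidden-neuron count) by making each gene a vector component.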

    A Demand-Supply Matching-Based Approach for Mapping Renewable Resources towards 100% Renewable Grids in 2050

    Recently, many renewable energy (RE) initiatives around the world have been based on general frameworks that accommodate regional assessment, taking into account the mismatch of supply and demand, with pre-set goals to reduce energy costs and harmful emissions. Hence, relying entirely on individual assessment and RE deployment scenarios may not be effective. Instead, developing a multi-faceted RE assessment framework is vital to achieving these goals. In this study, a regional RE assessment approach is presented that takes into account the mismatch of supply and demand, with an emphasis on photovoltaic (PV) and wind turbine systems. The study incorporates mapping of optimized renewable resource capacities for different configurations of PV and wind systems at multiple sites via a test case. This approach not only optimizes system size but also identifies the size at which the RE fraction in the regional power generation mix is maximized while reducing energy costs, using MATLAB's paretosearch algorithm. The performance of the proposed approach is tested on a realistic test site, and the results demonstrate the potential for maximizing the RE share compared to previously reported achievable fractions. The results indicate the importance of resource mapping based on energy-demand matching rather than a quantitative assessment of anchorage sites. In the examined case study, the new assessment approach identified the best location for installing a hybrid PV/wind system with a storage system capable of achieving a nearly 100% autonomous RE system with a levelized cost of electricity of 0.05 USD/kWh.
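At its core, the matching-based assessment is a multi-objective search for configurations that trade cost against renewable fraction. A minimal sketch of the Pareto-front extraction step, using random stand-in candidates (MATLAB's paretosearch performs the real search in the study):

```python
import numpy as np

rng = np.random.default_rng(2)

def pareto_front(points):
    """Indices of non-dominated rows when minimizing every column."""
    front = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

# Toy candidate sizings: columns are (LCOE in USD/kWh, 1 - RE fraction),
# both to be minimized. Real candidates would come from simulating each
# PV/wind/storage configuration against the regional demand profile.
candidates = rng.uniform([0.03, 0.0], [0.15, 0.6], size=(50, 2))
front = pareto_front(candidates)
```

Repeating this per candidate site and comparing the resulting fronts is what turns sizing into a resource *map*.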

    Towards Increasing Hosting Capacity of Modern Power Systems through Generation and Transmission Expansion Planning

    The use of renewable and sustainable energy sources (RSESs) has become urgent to counter growing electricity demand and reduce carbon dioxide emissions. However, current studies still lack a planning model that measures to what extent networks can host RSESs in the planning phase. In this paper, a stochastic power system planning model is proposed to increase the hosting capacity (HC) of networks and satisfy future load demands. The model is formulated to prioritize the number and size of installed generation and transmission expansion projects over investment costs, without violating operating and reliability constraints. A load forecasting technique built on an adaptive neuro-fuzzy system was employed and incorporated into the planning model to predict annual load growth. The problem is a non-linear, large-scale optimization problem, and a hybrid of two meta-heuristic algorithms, namely the weighted mean of vectors optimization technique and the sine cosine algorithm, was investigated to solve it. A benchmark system and a realistic network were used to verify the proposed strategy. The results demonstrated the effectiveness of the proposed model in enhancing the HC, and they proved the efficiency of the hybrid optimizer for solving the problem.
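One half of the hybrid optimizer, the sine cosine algorithm (SCA), is simple to sketch: candidate solutions oscillate around the best-known solution with a shrinking amplitude. The toy below minimizes a sphere function rather than the paper's planning objective, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def sca_minimize(f, lo, hi, agents=20, iters=200):
    """Bare-bones sine cosine algorithm: agents take sine/cosine steps
    around the best solution, with amplitude r1 shrinking to zero."""
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(agents, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        r1 = 2.0 * (1 - t / iters)          # exploration -> exploitation
        for i in range(agents):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.uniform(size=dim)
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * best - X[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - X[i]))
            X[i] = np.clip(X[i] + step, lo, hi)
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand.copy()
    return best

def sphere(v):                              # stand-in for the planning objective
    return float(np.sum(v ** 2))

best = sca_minimize(sphere, np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```

In the hybrid of the paper, a second meta-heuristic supplies stronger candidate solutions that this oscillation then refines.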

    Hybrid Deep Learning Applied on Saudi Smart Grids for Short-Term Load Forecasting

    Despite advancements in smart grid (SG) technology, effective load forecasting utilizing big data or large-scale datasets remains a complex task for energy management, planning, and control. The Saudi SGs, in alignment with the Saudi Vision 2030, have been envisioned as future electrical grids with a bidirectional flow of power and data. To that end, data analysis and predictive models can enhance Saudi SG planning and control via artificial intelligence (AI). Recently, many AI methods, including deep learning (DL) algorithms for SG applications, have been published in the literature and have shown superior time series predictions compared with conventional prediction models. Current load-prediction research for the Saudi grid focuses on identifying anticipated loads and consumption, on utilizing limited historical data and load consumption behavior, and on shallow forecasting models. However, little scientific proof on complex DL models or real-life applications has been produced; few articles have studied sophisticated large-scale prediction models for Saudi grids. This paper proposes hybrid DL methods to enhance the outcomes of Saudi SG load forecasting, to improve problem-relevant features, and to accurately predict complicated power consumption, with the goal of developing reliable forecasting models and of obtaining knowledge of the relationships between the various features and attributes in the Saudi SGs. The model in this paper utilizes a real dataset from the Jeddah and Medinah grids in Saudi Arabia for the full year of 2021, with a one-hour time resolution. A benchmark strategy using different conventional DL methods, including artificial neural networks, recurrent neural networks (RNN), convolutional neural networks (CNN), long short-term memory (LSTM), and gated recurrent units (GRU), together with different real datasets, is used to verify the proposed models.
The prediction results demonstrate the effectiveness of the proposed hybrid DL models, with CNN–GRU and CNN–RNN achieving NRMSE improvements of 1.4673% and 1.222%, respectively, in load forecasting accuracy.
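Since several NRMSE conventions exist, the metric behind those percentage figures is worth pinning down. A range-normalized variant is one common convention (an assumption here; the paper may normalize differently, e.g. by the mean):

```python
import numpy as np

def nrmse(actual, predicted):
    """Root-mean-square error normalized by the range of the actual
    series, in percent (one common NRMSE convention)."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((a - p) ** 2))
    return float(100.0 * rmse / (a.max() - a.min()))

# Toy hourly loads: a 1-unit miss on a 10-unit range is 10% NRMSE.
score = nrmse([0.0, 10.0], [1.0, 9.0])
```

Range normalization makes scores comparable across feeders with very different absolute load levels, which matters when benchmarking across the Jeddah and Medinah datasets.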

    Deep Machine Learning Model-Based Cyber-Attacks Detection in Smart Power Systems

    Modern intelligent energy grids enable energy supply and consumption to be managed efficiently while avoiding a variety of security risks. System disturbances can be caused by both naturally occurring and human-made events, and operators should be aware of the different kinds and causes of disturbances in energy systems to make informed decisions and respond accordingly. This study addresses this problem by proposing a deep-learning-based attack detection model for energy systems, trained on data and logs gathered from phasor measurement units (PMUs). Features are constructed from system properties and specifications, and the data are passed to various machine learning methods, among which random forest is selected as the base classifier for AdaBoost. Open-source simulated energy system data containing 37 energy system event case studies are used to test the model. Finally, the suggested model is compared with other designs according to various assessment metrics. The simulation outcomes show that this model achieves a detection rate of 93.6% and an accuracy of 93.91%, which is greater than existing methods.
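The classifier structure, boosting over a base learner, can be sketched compactly. Decision stumps stand in for the paper's random-forest base classifier to keep the example dependency-free, and synthetic blobs replace the PMU-derived features:

```python
import numpy as np

rng = np.random.default_rng(4)

def best_stump(X, y, w):
    """Lowest weighted-error decision stump (error, feature, threshold, polarity)."""
    best = (1.0, 0, 0.0, 1)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
                err = float(w[pred != y].sum())
                if err < best[0]:
                    best = (err, f, thr, pol)
    return best

def adaboost_fit(X, y, rounds=10):
    """Classic AdaBoost: reweight samples toward the current mistakes."""
    w = np.full(len(y), 1.0 / len(y))
    model = []
    for _ in range(rounds):
        err, f, thr, pol = best_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        model.append((alpha, f, thr, pol))
    return model

def adaboost_predict(model, X):
    score = sum(a * np.where(p * (X[:, f] - t) > 0, 1, -1)
                for a, f, t, p in model)
    return np.where(score >= 0, 1, -1)

# Synthetic "normal vs. attack" feature blobs in place of PMU features.
X = np.vstack([rng.normal(2.0, 1.0, (40, 2)), rng.normal(-2.0, 1.0, (40, 2))])
y = np.concatenate([np.ones(40, int), -np.ones(40, int)])
accuracy = float(np.mean(adaboost_predict(adaboost_fit(X, y), X) == y))
```

Swapping the stump for a small random forest, as the paper does, gives each boosting round a stronger, lower-variance base hypothesis.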

    An Adoptive Miner-Misuse Based Online Anomaly Detection Approach in the Power System: An Optimum Reinforcement Learning Method

    Over the past few years, the Bitcoin-based financial trading system (BFTS) has created new challenges for the power system due to the high-risk consumption of mining devices. Such a problem is a compelling incentive for cyber-attackers who intend to inflict significant damage on a power system: an effort to falsify the consumption data of mining devices disrupts optimal energy management within the power system. Hence, this paper introduces a new cyber-attack, named miner-misuse, for power systems equipped with transaction technology. To counter this threat, this article also presents an online anomaly detection approach based on the reinforcement learning (RL) concept. Because the system is not fully observable, the observable Markov decision process (OMDP) idea is applied in the RL mechanism to block the miner attack. The proposed method performs optimally and punctually when the setting parameters are properly established in the learning procedure; a hybrid mechanism of an optimization approach and a learning structure not only guarantees finding the best far-sighted solution but also improves convergence time. To this end, this paper proposes an Intelligent Priority Selection (IPS) algorithm, merged with the suggested RL method, to make the detection of miner attacks more punctual and optimal. Additionally, to demonstrate the proposed detection approach's effectiveness, a mathematical model of the energy consumption of mining devices, based on the hashing rate within the BFTS, is provided. The uncertain fluctuation of miners' energy needs makes energy management unpredictable and must be dealt with; the unscented transformation (UT) method offers a high chance of precisely modeling the uncertain parameters within the system.
The F-score and the probability of successful attack inferred from the results reveal that the proposed anomaly detection method can identify miner attacks in near real time compared to other approaches.
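The RL detection idea can be illustrated with a toy tabular learner: discretize a consumption reading into a state, choose flag or ignore, and reward correct decisions. This is a bandit-style simplification on made-up consumption distributions, not the paper's OMDP/IPS formulation:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_reading():
    """Toy telemetry: miner-misuse spoofing pushes consumption high."""
    attack = rng.random() < 0.2
    reading = rng.normal(8.0 if attack else 1.0, 1.0)
    return reading, attack

def bucket(reading, n_states=10):
    """Discretize a reading into one of n_states consumption bins."""
    return int(np.clip(reading, 0.0, 9.99) // (10.0 / n_states))

Q = np.zeros((10, 2))              # actions: 0 = ignore, 1 = flag
eps, alpha = 0.1, 0.2
for _ in range(5000):              # online, bandit-style Q updates
    r, attack = sample_reading()
    s = bucket(r)
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
    reward = 1.0 if (a == 1) == attack else -1.0
    Q[s, a] += alpha * (reward - Q[s, a])

# Greedy policy evaluation on fresh samples.
hits = 0
for _ in range(1000):
    r, attack = sample_reading()
    hits += (int(np.argmax(Q[bucket(r)])) == 1) == attack
accuracy = hits / 1000
```

The paper's contribution lies in making this loop work under partial observability and in tuning its parameters with the IPS optimizer; the toy only conveys the state-action-reward skeleton.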

    Optimization of a Multilevel Inverter Design Used for Photovoltaic Systems under Variable Switching Controllers

    Among multilevel inverters (MLIs), two-level inverters are the most common. However, this inverter type cannot maintain low total harmonic distortion (THD) due to its limited number of levels. THD falls as the number of levels rises; output levels can be increased in diverse ways, and achieving such reductions with fewer components is the goal. This work proposes a seven-level (7L) MLI design with a small number of components and low harmonic distortion. The proposed MLI is combined with switched-capacitor (SC) cells to increase the number of output levels and, at the same time, boost the input voltage. The connections between the capacitor and the source are based on a series-to-parallel topology, where the charging and discharging of the SC cells are driven by changes in their connection. The output of the SC cell is combined with an H-bridge inverter controlled by a proposed PWM controller. The SC 7L inverter was simulated using LTspice software. A comparison of the proposed topology with other current MLIs produced better validation results. The proposed design shows a reduction in THD with fewer components, and the cost and size of the proposed inverter are minimal due to the smaller number of components. An ohmic load and an inductive-ohmic load were used as loads for the system.
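The benefit of seven levels over two can be sketched numerically with nearest-level modulation and an FFT-based THD estimate. The level set {0, ±Vdc/3, ±2Vdc/3, ±Vdc} and the modulation scheme are illustrative assumptions, not the paper's SC topology or its PWM controller:

```python
import numpy as np

def seven_level_staircase(t, vdc=1.0, freq=50.0):
    """Nearest-level modulation: quantize a sine reference onto the seven
    levels {0, +/-vdc/3, +/-2vdc/3, +/-vdc}."""
    levels = np.linspace(-vdc, vdc, 7)
    ref = vdc * np.sin(2 * np.pi * freq * t)
    idx = np.argmin(np.abs(ref[:, None] - levels[None, :]), axis=1)
    return levels[idx]

def thd(signal):
    """FFT-based THD: harmonic energy over the fundamental (assumes an
    integer number of periods in `signal`)."""
    spec = np.abs(np.fft.rfft(signal))
    fund = int(np.argmax(spec[1:])) + 1
    harmonics = np.sqrt(np.sum(spec[1:] ** 2) - spec[fund] ** 2)
    return float(harmonics / spec[fund])

t = np.linspace(0.0, 0.02, 2000, endpoint=False)     # one 50 Hz period
thd_7l = thd(seven_level_staircase(t))
thd_2l = thd(np.sign(np.sin(2 * np.pi * 50.0 * t)))  # two-level reference
```

The staircase's THD comes out several times lower than the square wave's, which is the quantitative motivation for adding levels.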