483 research outputs found

    Smart Distributed Generation System Event Classification using Recurrent Neural Network-based Long Short-term Memory

    Get PDF
    High penetration of distributed generation (DG) sources into a decentralized power system causes several disturbances, complicating system monitoring and operational control. Moreover, because modern DG systems are passive, they cannot intelligently detect and report these power-quality disturbances. This paper proposes a technique capable of making real-time decisions on the occurrence of different DG events, such as islanding, capacitor switching, unsymmetrical faults, load switching, and loss of a parallel feeder, and of distinguishing these events from the normal mode of operation. The classification technique diagnoses the distinctive time-domain pattern of a measured electrical parameter, such as the voltage at the DG point of common coupling (PCC), during such events. The events are then classified into their root causes using long short-term memory (LSTM), a deep learning algorithm for sequence-to-label classification. A total of 1100 events covering islanding, faults, and other DG events were generated from a model of a smart distributed generation system in a MATLAB/Simulink environment. Classifier performance was evaluated with 5-fold cross-validation, and a genetic algorithm (GA) was used to select the optimal classification hyper-parameters and the best combination of features. Simulation results show that events were classified with high precision and specificity within ten cycles of their occurrence, achieving a 99.17% validation accuracy. The performance of the proposed technique does not degrade with noise in the test data, multiple DG sources in the model, or the inclusion of a motor-starting event in the training samples.
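    The sequence-to-label step described above can be sketched as an LSTM forward pass over a window of PCC voltage samples, ending in a softmax over the event classes. This is a minimal NumPy illustration, not the paper's trained network: the weights below are random placeholders, the 32-samples-per-cycle rate and the single-feature input are assumptions, and the synthetic sine waveform merely stands in for signals from the Simulink model.

    ```python
    import numpy as np

    def lstm_event_probs(x, Wx, Wh, b, Wo, bo):
        """Single-layer LSTM forward pass over a sequence x of shape (T, n_in);
        classify from the final hidden state. Gate blocks are stacked in the
        order [input, forget, cell, output]."""
        n_h = Wh.shape[1]
        h = np.zeros(n_h)
        c = np.zeros(n_h)
        sig = lambda z: 1.0 / (1.0 + np.exp(-z))
        for x_t in x:
            z = Wx @ x_t + Wh @ h + b                    # all four gates at once
            i, f = sig(z[:n_h]), sig(z[n_h:2 * n_h])
            g, o = np.tanh(z[2 * n_h:3 * n_h]), sig(z[3 * n_h:])
            c = f * c + i * g                            # cell-state update
            h = o * np.tanh(c)                           # hidden-state update
        logits = Wo @ h + bo
        e = np.exp(logits - logits.max())                # stable softmax
        return e / e.sum()

    # Event classes from the paper (order here is an assumption).
    EVENTS = ["normal", "islanding", "capacitor switching",
              "unsymmetrical fault", "load switching", "loss of parallel feeder"]

    rng = np.random.default_rng(1)
    n_in, n_h = 1, 16                                    # placeholder sizes
    Wx = rng.normal(0, 0.5, (4 * n_h, n_in))
    Wh = rng.normal(0, 0.5, (4 * n_h, n_h))
    b = np.zeros(4 * n_h)
    Wo = rng.normal(0, 0.5, (len(EVENTS), n_h))
    bo = np.zeros(len(EVENTS))

    # Ten 60 Hz cycles of a PCC voltage waveform at an assumed 32 samples/cycle.
    t = np.arange(10 * 32) / (32 * 60.0)
    v = np.sin(2 * np.pi * 60 * t).reshape(-1, 1)
    p = lstm_event_probs(v, Wx, Wh, b, Wo, bo)           # one probability per event
    ```

    In practice the gate weights would be learned from the 1100 labeled events, and the GA in the paper would tune hyper-parameters such as the hidden size `n_h`.
    
    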

    34th Midwest Symposium on Circuits and Systems-Final Program

    Get PDF
    Organized by the Naval Postgraduate School, Monterey, California. Cosponsored by the IEEE Circuits and Systems Society. Symposium Organizing Committee: General Chairman: Sherif Michael; Technical Program: Roberto Cristi; Publications: Michael Soderstrand; Special Sessions: Charles W. Therrien; Publicity: Jeffrey Burl; Finance: Ralph Hippenstiel; Local Arrangements: Barbara Cristi.

    Adaptive weighted least squares algorithm for Volterra signal modeling

    No full text
    Published version

    Analog Photonics Computing for Information Processing, Inference and Optimisation

    Full text link
    This review presents an overview of the current state of the art in photonics computing, which leverages photons, photons coupled with matter, and optics-related technologies for effective and efficient computation. It covers the history and development of photonics computing and modern analogue computing platforms and architectures, focusing on optimization tasks and neural network implementations. The authors examine special-purpose optimizers, mathematical descriptions of photonics optimizers, and their various interconnections. Disparate applications are discussed, including direct encoding, logistics, finance, phase retrieval, machine learning, neural networks, probabilistic graphical models, and image processing, among many others. The main directions of technological advancement and associated challenges in photonics computing are explored, along with an assessment of its efficiency. Finally, the paper discusses prospects for the field of optical quantum computing, providing insights into the potential applications of this technology.
    Comment: Invited submission to Advanced Quantum Technologies; accepted version 5/06/202

    Pulse-stream binary stochastic hardware for neural computation: the Helmholtz Machine

    Get PDF

    Model Parameter Calibration in Power Systems

    Get PDF
    In power systems, accurate device modeling is crucial for grid reliability, availability, and resiliency; many critical tasks, from planning to real-time operation decisions, rely on accurate models. This research presents an approach to model parameter calibration in power system models using deep learning. Existing calibration methods are based on mathematical approaches that are ill-posed and may therefore admit multiple solutions. We address this problem with a deep learning architecture trained to estimate model parameters from simulated Phasor Measurement Unit (PMU) data; data recorded after system disturbances have proved to carry valuable information for verifying power system devices. A quantitative evaluation of the system is provided: the model estimates parameters with high accuracy, achieving a mean squared error (MSE) of 0.017 on the testing dataset. We also show that the proposed system scales under the same topology. We consider these promising results a basis for further exploration and development of additional tools for parameter calibration.
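    The learning-based calibration idea above can be illustrated end to end: simulate disturbance responses from a known forward model, then train a network to invert them back to the parameter. Everything here is an assumption for illustration: the damped-oscillation "PMU response" is a toy stand-in for real disturbance recordings, and the one-hidden-layer NumPy network is far simpler than whatever architecture the paper uses (which the abstract does not specify).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical forward model: a damped oscillation whose damping
    # coefficient theta is the device parameter we want to recover.
    def simulate_pmu(theta, t):
        return np.exp(-theta * t) * np.cos(2 * np.pi * t)

    t = np.linspace(0.0, 2.0, 40)                       # 40 samples per record
    thetas = rng.uniform(0.5, 3.0, size=600)            # ground-truth parameters
    X = np.stack([simulate_pmu(th, t) for th in thetas])
    y = thetas.reshape(-1, 1)
    Xtr, ytr, Xte, yte = X[:500], y[:500], X[500:], y[500:]

    # One-hidden-layer regression network trained with full-batch
    # gradient descent on the MSE loss.
    n_h, lr = 32, 0.05
    W1 = rng.normal(0, 0.3, (t.size, n_h)); b1 = np.zeros(n_h)
    W2 = rng.normal(0, 0.3, (n_h, 1));      b2 = np.zeros(1)
    for _ in range(3000):
        H = np.tanh(Xtr @ W1 + b1)                      # hidden activations
        err = (H @ W2 + b2) - ytr                       # prediction error
        gW2 = H.T @ err / len(Xtr); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1.0 - H ** 2)              # backprop through tanh
        gW1 = Xtr.T @ dH / len(Xtr); gb1 = dH.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    # Held-out MSE, analogous in spirit to the 0.017 reported in the paper.
    test_mse = float(np.mean((np.tanh(Xte @ W1 + b1) @ W2 + b2 - yte) ** 2))
    ```

    Because the network is trained on simulated disturbance records rather than by inverting the model analytically, it sidesteps the ill-posedness of classical calibration at the cost of needing a representative simulation dataset.
    
    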