
    Risk-Based Machine Learning Approaches for Probabilistic Transient Stability

    Power systems are becoming increasingly complex and consequently operate close to their stability limits. Moreover, with the increasing penetration of wind generation and the requirement to maintain a secure power system, the importance of transient stability cannot be overstated. Given its significance to power system security, an approach that enhances transient stability while accounting for uncertainties is needed. Current deterministic industry practices for transient stability assessment ignore the probabilistic nature of variables such as fault type, fault location, and fault clearing time. These practices typically yield conservative criteria and can result in expensive expansion plans or conservative operating limits. With increasing system uncertainties and widespread electricity market deregulation, there is a strong need to incorporate probabilistic transient stability (PTS) analysis. Moreover, the time-domain simulation approach to transient stability evaluation, which involves solving differential-algebraic equations, can be very computationally intensive, especially for large-scale systems and for online dynamic security assessment (DSA). The impact of wind penetration on transient stability is also critical to investigate, as wind generation does not possess the inherent inertia of synchronous generators. This research therefore proposes risk-based machine learning (ML) approaches for PTS enhancement through circuit breaker replacement, including the impact of wind generation. An artificial neural network (ANN) was used to predict the benefit-cost ratio (BCR) and reduce the computational effort. Both the ANN and a support vector machine (SVM) were then applied and compared for PTS classification in online DSA. The ANN and SVM were trained using suitable system features as inputs and the PTS status indicator as the output. DIgSILENT PowerFactory was used for the transient stability simulations (to obtain training data for the ML algorithms), and MATLAB was used to apply the ML algorithms. Results for the IEEE 14-bus test system demonstrated that the proposed ML methods offer fast PTS prediction with fairly high accuracy, signifying a strong potential for applying ML in probabilistic DSA.
    Advisor: Sohrab Asgarpoo
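    The abstract does not give the exact feature set, network architecture, or SVM configuration, so the following is only a minimal, hypothetical sketch of the described ANN-versus-SVM comparison for PTS classification. The features, labels, and hyperparameters are synthetic stand-ins for the simulation-derived training data described above.

    ```python
    # Hypothetical sketch: comparing an ANN (multi-layer perceptron) and an SVM
    # for probabilistic transient stability (PTS) classification. Features,
    # labels, and hyperparameters are synthetic, not those of the thesis.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Synthetic "system features" (e.g., loading levels, fault clearing time,
    # wind penetration) and a binary PTS status indicator (1 = stable).
    n_samples, n_features = 2000, 6
    X = rng.normal(size=(n_samples, n_features))
    y = (X[:, 0] - 0.8 * X[:, 1] + 0.3 * rng.normal(size=n_samples) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    # Standardize features; both classifiers benefit from scaled inputs.
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
    ann.fit(X_train, y_train)

    svm = SVC(kernel="rbf", C=1.0, gamma="scale")
    svm.fit(X_train, y_train)

    print("ANN accuracy:", accuracy_score(y_test, ann.predict(X_test)))
    print("SVM accuracy:", accuracy_score(y_test, svm.predict(X_test)))
    ```

    In the thesis, the training data would instead come from DIgSILENT PowerFactory time-domain simulations, and a similar regression model could be trained to predict the BCR rather than a class label.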

    Machine Learning for Beamforming in Audio, Ultrasound, and Radar

    Multi-sensor signal processing plays a crucial role in many everyday technologies, from correctly understanding speech on smart home devices to ensuring aircraft fly safely. A specific type of multi-sensor signal processing called beamforming forms a central part of this thesis. Beamforming combines the signals from several spatially distributed sensors to filter directionally, boosting the signal from a chosen direction while suppressing signals from others. Beamforming is key to the domains of audio, ultrasound, and radar. Machine learning is the other central part of this thesis. Machine learning, and especially its sub-field of deep learning, has enabled rapid progress on several problems previously thought intractable, and today powers many of the cutting-edge systems used for image classification, speech recognition, language translation, and more. In this dissertation, we examine beamforming pipelines in audio, ultrasound, and radar through a machine learning lens and improve different parts of those pipelines using ideas from machine learning. We start in the audio domain and derive a machine-learning-inspired beamformer to ensure that the audio captured by a camera matches its visual content, a problem we term audiovisual zooming. Staying in the audio domain, we then demonstrate how deep learning can improve the perceptual quality of speech by mitigating clipping, codec distortions, and gaps in speech. Transitioning to the ultrasound domain, we improve the performance of short-lag spatial coherence ultrasound imaging by applying robust principal component analysis to exploit the differences in tissue texture at each short-lag value. Next, we use deep learning as an alternative to beamforming in ultrasound and improve the information-extraction pipeline by simultaneously generating both a high-quality segmentation map and a B-mode image directly from raw received ultrasound data. Finally, we move to the radar domain and study how deep learning can improve signal quality in ultra-wideband synthetic aperture radar by suppressing radio frequency interference and mitigating random and contiguous block spectral gaps. Because the networks are trained and applied on raw single-aperture data prior to beamforming, the approach works with myriad sensor geometries and different beamforming equations, a crucial requirement in synthetic aperture radar.
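    The abstract describes beamforming only at a high level; as a concrete illustration, the following is a minimal sketch of classical time-domain delay-and-sum beamforming on a uniform linear microphone array. The array geometry, sample rate, and signals are illustrative assumptions, not parameters from the thesis, and none of the learned components described above are shown here.

    ```python
    # Hypothetical sketch: time-domain delay-and-sum beamforming on a uniform
    # linear array, illustrating "boost one direction, suppress others".
    # Geometry, signal, and noise levels are illustrative, not from the thesis.
    import numpy as np

    fs = 16_000.0          # sample rate (Hz)
    c = 343.0              # speed of sound in air (m/s)
    n_mics = 8
    spacing = 0.04         # microphone spacing (m)
    steer_deg = 30.0       # look direction (degrees from broadside)

    # Simulate a 1 kHz plane wave arriving from the look direction, plus noise.
    t = np.arange(0, 0.05, 1.0 / fs)
    mic_positions = np.arange(n_mics) * spacing
    delays = mic_positions * np.sin(np.radians(steer_deg)) / c   # per-mic delay (s)
    signals = np.stack([np.sin(2 * np.pi * 1000.0 * (t - d)) for d in delays])
    signals += 0.2 * np.random.default_rng(0).normal(size=signals.shape)

    # Delay-and-sum: advance each channel by its steering delay (via linear
    # interpolation) so the look direction adds coherently, then average.
    aligned = np.stack([np.interp(t, t - d, ch) for ch, d in zip(signals, delays)])
    output = aligned.mean(axis=0)

    # Incoherent noise is suppressed by averaging across microphones.
    clean = np.sin(2 * np.pi * 1000.0 * t)   # noise-free tone as seen at mic 0
    print("mic-0 residual noise power     :", np.mean((signals[0] - clean) ** 2))
    print("beamformed residual noise power:", np.mean((output - clean) ** 2))
    ```

    Aligning the channels so the look direction adds coherently while incoherent noise averages out is the basic mechanism that the audio, ultrasound, and radar pipelines in the thesis build on or replace with learned models.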