NEURAL NETWORK DISTRIBUTIONAL INITIAL CONDITION ROBUSTNESS IN POWER SYSTEMS

Abstract

How can we measure and classify neural network robustness across differently distributed data to avoid misuse of machine learning tools? This thesis adopts several metrics to measure the initial-condition robustness of feedforward neural networks, allowing the creators of such networks to measure and refine their robustness and performance. This could allow highly robust neural networks to be used reliably on untrained data distributions and prevent less robust networks from being used as black boxes in unsuitable environments. We test this measurement of robustness on a series of differently sized neural networks trained to detect and classify microgrid power-system faults, giving examples of both robust and nonrobust networks, along with suggestions on how to maximize robustness. The analysis reveals that collecting data from segments along trajectories enhances the robustness of neural networks. In such data sets, the distribution of data points is dominated by the dynamics of the system, not the initial state distribution.

Ensign, United States Navy

Approved for public release. Distribution is unlimited.