
    Adaptabilities of three mainstream short-term wind power forecasting methods

    The variability and intermittency of wind are the main challenges for reliable wind power forecasting (WPF). Meteorological and topographic complexities make it even harder to fit any forecasting algorithm to a particular case. This paper compares three short-term WPF models across three wind farms in China with different terrains and climates. The sensitivity of forecasting performance to the training samples is investigated in terms of sample size, sample quality, and sample time scale. The models' adaptabilities and modeling efficiency are then discussed under different seasonal and topographic conditions. Results show that (1) radial basis function (RBF) and support vector machine (SVM) models generally achieve higher prediction accuracy than genetic algorithm back propagation (GA-BP), but different models show advantages in different seasons and terrains; (2) using one month as the training time interval can increase the accuracy of short-term WPF; (3) GA-BP and RBF are less sensitive to changes in sample size than SVM; (4) GA-BP forecasting accuracy is equally sensitive to training samples of all sizes, whereas RBF and SVM show different sensitivities to different sample sizes. This study provides a quantitative reference for choosing an appropriate WPF model and further optimizing it for specific engineering cases, based on a better understanding of algorithm theory and its adaptability. In this way, WPF users can select a suitable algorithm for different terrains and climates to achieve reliable predictions for market clearing, efficient pricing, dispatching, and other applications.
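    The sample-size experiments summarized above can be illustrated with a small sketch. The code below is not the paper's implementation: it uses synthetic wind power data and scikit-learn stand-ins (SVR with an RBF kernel for the SVM model, an MLPRegressor for a BP-style network, with the genetic-algorithm optimization and the RBF network omitted) to show how forecast error can be probed as the training-sample size varies.

```python
# Illustrative sketch only: synthetic data and simplified stand-ins for the
# paper's models (SVR ~ SVM, MLPRegressor ~ BP network; GA step omitted).
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def make_series(n):
    """Synthetic wind power series: diurnal cycle plus noise, clipped at zero."""
    t = np.arange(n)
    return np.clip(np.sin(2 * np.pi * t / 144) + 0.3 * rng.standard_normal(n), 0, None)

def windowed(series, lag=6):
    """Turn the series into (lagged inputs, next-step target) pairs."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

X, y = windowed(make_series(3000))
X_test, y_test = X[-500:], y[-500:]   # hold out the tail of the series for testing

models = {
    "SVM (SVR, RBF kernel)": SVR(kernel="rbf", C=10.0),
    "BP-style network (MLP)": MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
}

# Vary the training-sample size to probe each model's sensitivity, loosely
# mirroring the sample-size experiments described in the abstract.
for n_train in (200, 500, 2000):
    for name, model in models.items():
        model.fit(X[:n_train], y[:n_train])
        mae = mean_absolute_error(y_test, model.predict(X_test))
        print(f"n_train={n_train:4d}  {name:24s}  MAE={mae:.3f}")
```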

    Machine Learning for the New York City Power Grid

    Power companies can benefit from the use of knowledge discovery methods and statistical machine learning for preventive maintenance. We introduce a general process for transforming historical electrical grid data into models that aim to predict the risk of failures for components and systems. These models can be used directly by power companies to assist with prioritization of maintenance and repair work. Specialized versions of this process are used to produce (1) feeder failure rankings; (2) cable, joint, terminator, and transformer rankings; (3) feeder Mean Time Between Failure (MTBF) estimates; and (4) manhole events vulnerability rankings. In its most general form, the process can handle diverse, noisy sources that are historical (static), semi-real-time, or real-time; it incorporates state-of-the-art machine learning algorithms for prioritization (supervised ranking or MTBF) and includes an evaluation of results via cross-validation and blind tests. Above and beyond the ranked lists and MTBF estimates are business management interfaces that allow the prediction capability to be integrated directly into corporate planning and decision support. Such interfaces rely on several important properties of our general modeling approach: that machine learning features are meaningful to domain experts, that the processing of data is transparent, and that prediction results are accurate enough to support sound decision making. We discuss the challenges of working with historical electrical grid data that were not designed for predictive purposes. The "rawness" of these data contrasts with the accuracy of the statistical models that can be obtained from the process; these models are sufficiently accurate to assist in maintaining New York City's electrical grid.
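    As a rough illustration of the supervised-ranking step described above, the sketch below (not the authors' system) trains a classifier on synthetic per-component features, scores a held-out set as a stand-in for the blind test, and ranks components by predicted failure risk. The feature names, the synthetic data, and the choice of gradient boosting are all assumptions made for illustration.

```python
# Illustrative sketch only: synthetic component features and labels, with
# gradient boosting as a generic stand-in for the ranking model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000

# Hypothetical per-component features: age (years), peak load fraction,
# historical outage count.
X = np.column_stack([
    rng.uniform(0, 40, n),
    rng.uniform(0.2, 1.0, n),
    rng.poisson(0.5, n),
])
# Synthetic failure labels loosely driven by those features.
risk = 0.03 * X[:, 0] + 1.5 * X[:, 1] + 0.8 * X[:, 2]
y = (risk + rng.standard_normal(n) > 3.0).astype(int)

# Held-out split standing in for the blind-test evaluation.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

print("blind-test AUC:", round(roc_auc_score(y_te, scores), 3))

# Rank components from highest to lowest predicted failure risk to
# prioritize inspection and repair work.
ranking = np.argsort(-scores)
print("top 5 components to inspect:", ranking[:5])
```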