
    Multiple decomposition-aided long short-term memory network for enhanced short-term wind power forecasting.

    With the increasing penetration of grid-scale wind energy systems, accurate wind power forecasting is critical to optimizing their integration into the power system, ensuring operational reliability, and enabling efficient system asset utilization. Addressing this challenge, this study proposes a novel forecasting model that combines the long short-term memory (LSTM) neural network with two signal decomposition techniques. The empirical mode decomposition (EMD) technique extracts stable, stationary, and regular patterns from the original wind power signal, while the variational mode decomposition (VMD) technique tackles the most challenging high-frequency component. An LSTM neural network is used as the forecasting model to take advantage of its ability to learn from longer sequences of data and its robustness to noise and outliers. The developed model is evaluated against LSTM models employing various decomposition methods, using real wind power data from three distinct offshore wind farms. The two-stage decomposition is shown to significantly enhance forecasting accuracy, with the proposed model achieving R2 values up to 9.5% higher than those obtained with standard LSTM models.
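
    The two-stage pipeline described above can be sketched in a few lines: decompose the raw series with EMD, pass its highest-frequency component through VMD, and train one LSTM per resulting component, summing the component forecasts at prediction time. The sketch below is illustrative only and assumes the PyEMD (EMD-signal), vmdpy and TensorFlow/Keras packages; the window length, VMD settings and file name are placeholders, not the authors' choices.

        # Hedged sketch of the EMD -> VMD -> per-component LSTM pipeline.
        import numpy as np
        from PyEMD import EMD                      # empirical mode decomposition
        from vmdpy import VMD                      # variational mode decomposition
        from tensorflow.keras import Sequential
        from tensorflow.keras.layers import LSTM, Dense

        LAG = 24  # samples of history fed to each LSTM (illustrative)

        def make_windows(series, lag=LAG):
            """Turn a 1-D series into (samples, lag, 1) inputs and next-step targets."""
            X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
            return X[..., None], series[lag:]

        def fit_lstm(series, lag=LAG, epochs=20):
            """Fit a small one-step-ahead LSTM on a single decomposed component."""
            X, y = make_windows(series, lag)
            model = Sequential([LSTM(32, input_shape=(lag, 1)), Dense(1)])
            model.compile(optimizer="adam", loss="mse")
            model.fit(X, y, epochs=epochs, verbose=0)
            return model

        wind = np.loadtxt("wind_power.csv")        # assumed file: one value per line

        # Stage 1: EMD splits the raw signal into intrinsic mode functions (IMFs).
        imfs = EMD()(wind)

        # Stage 2: VMD further decomposes the hardest, highest-frequency IMF.
        high_freq, rest = imfs[0], imfs[1:]
        high_freq = high_freq[:2 * (len(high_freq) // 2)]   # even length, to be safe
        modes, _, _ = VMD(high_freq, alpha=2000, tau=0.0, K=4, DC=0, init=1, tol=1e-7)

        # One LSTM per component; the forecast is the sum of component forecasts.
        components = list(modes) + list(rest)
        models = [fit_lstm(c) for c in components]
        forecast = sum(m.predict(c[-LAG:].reshape(1, LAG, 1), verbose=0)[0, 0]
                       for m, c in zip(models, components))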

    Hybrid Advanced Optimization Methods with Evolutionary Computation Techniques in Energy Forecasting

    More accurate and precise energy demand forecasts are required when energy decisions are made in a competitive environment. Particularly in the Big Data era, forecasting models are based on complex combinations of functions, and energy data are complicated, exhibiting seasonality, cyclicity, fluctuation, dynamic nonlinearity, and so on. When models cannot capture these data characteristics and patterns, the result is an over-reliance on informal judgment and higher expenses. The hybridization of optimization methods with superior evolutionary algorithms can deliver important improvements through good parameter determination in the optimization process, which is of great assistance to energy decision-makers. This book aimed to attract researchers with an interest in the research areas described above. Specifically, it sought contributions on hybrid optimization methods (e.g., quadratic programming techniques, chaotic mapping, fuzzy inference theory, quantum computing, etc.) combined with advanced algorithms (e.g., genetic algorithms, ant colony optimization, the particle swarm optimization algorithm, etc.) whose capabilities exceed those of traditional optimization approaches and overcome some of their embedded drawbacks, and on the application of these advanced hybrid approaches to significantly improve forecasting accuracy.
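
    As a concrete, hedged illustration of the hybridization the book targets, the sketch below couples a bare-bones particle swarm optimizer with scikit-learn's SVR, searching log-scaled (C, gamma) values by cross-validated score. The swarm constants, search bounds and toy data are assumptions chosen only to show the pattern, not methods taken from any chapter.

        # Particle swarm search over SVR hyperparameters (illustrative sketch).
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))              # toy regressors (e.g. lagged demand)
        y = X @ np.array([0.5, -1.0, 0.2, 0.0]) + 0.1 * rng.normal(size=200)

        def loss(params):
            """Negative cross-validated R^2 of an SVR with candidate (C, gamma)."""
            C, gamma = np.exp(params)              # search in log space
            return -cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3).mean()

        # Plain particle swarm: each particle's position is (log C, log gamma).
        n, dim, iters = 20, 2, 30
        pos = rng.uniform(-3, 3, size=(n, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
        gbest = pbest[pbest_val.argmin()]

        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            vals = np.array([loss(p) for p in pos])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[pbest_val.argmin()]

        print("selected C, gamma:", np.exp(gbest))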

    Big Data Analysis application in the renewable energy market: wind power

    Among renewable energy sources, wind energy is one of the fastest-growing technologies worldwide. However, the uncertainty inherent in wind generation must be minimised in order to better schedule and manage the traditional generation assets that compensate for shortfalls of electricity in the power grids. The emergence of data-driven and machine learning techniques has made it possible to provide high-resolution spatial and temporal predictions of wind speed and power. In this work, three different ANN models are developed, addressing three major problems in data series prediction with this technique: data quality assurance and imputation of invalid data, hyperparameter assignment, and feature selection. The developed models are based on clustering, optimisation and signal processing techniques to provide short- and medium-term wind speed and power predictions (from minutes to hours).
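
    The three concerns listed above (imputation of invalid data, hyperparameter assignment and feature selection) map naturally onto a single scikit-learn pipeline. The sketch below is a generic illustration under assumed feature names, grid values and file layout; it does not reproduce the models developed in this work.

        # Imputation + feature selection + ANN, tuned with time-ordered splits.
        import pandas as pd
        from sklearn.pipeline import Pipeline
        from sklearn.impute import SimpleImputer
        from sklearn.feature_selection import SelectKBest, f_regression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

        df = pd.read_csv("scada.csv")              # assumed SCADA export
        X = df[["wind_speed_lag1", "wind_speed_lag2", "wind_dir", "temperature"]]
        y = df["wind_speed"]

        pipe = Pipeline([
            ("impute", SimpleImputer(strategy="median")),      # fill invalid/missing data
            ("select", SelectKBest(score_func=f_regression)),  # keep informative inputs
            ("ann", MLPRegressor(max_iter=2000, random_state=0)),
        ])

        grid = {
            "select__k": [2, 3, 4],
            "ann__hidden_layer_sizes": [(16,), (32,), (32, 16)],
            "ann__alpha": [1e-4, 1e-3],
        }

        # Time-ordered splits avoid leaking future samples into the training folds.
        search = GridSearchCV(pipe, grid, cv=TimeSeriesSplit(n_splits=3),
                              scoring="neg_mean_absolute_error")
        search.fit(X, y)
        print(search.best_params_, -search.best_score_)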

    Noise eliminated ensemble empirical mode decomposition scalogram analysis for rotating machinery fault diagnosis

    Rotating machinery is a major class of industrial component that suffers from various faults and damage due to the constant workload to which it is subjected. Therefore, a fast and reliable fault diagnosis method is essential for machine condition monitoring. Artificial intelligence can be applied to fault feature extraction and classification. It is crucial to use an effective feature extraction method to capture most of the fault information and a robust classifier to classify those features. In this study, an improved method, noise-eliminated ensemble empirical mode decomposition (NEEEMD), was proposed to reduce the white noise in the intrinsic mode functions and retain the optimum ensembles. A convolutional neural network (CNN) classifier was applied for classification because of its feature-learning ability. A generalised CNN architecture was proposed to reduce the model training time. The classifier input consisted of 64×64 pixel RGB scalogram samples. However, a CNN requires a large amount of training data to achieve high accuracy and robustness, so a deep convolutional generative adversarial network (DCGAN) was applied for data augmentation during the training phase. To evaluate the effectiveness of the proposed feature extraction method, scalograms from related feature extraction methods such as ensemble empirical mode decomposition (EEMD), complementary EEMD (CEEMD) and the continuous wavelet transform (CWT) were also classified. The effectiveness of the scalograms was further validated by comparing the classifier performance with greyscale samples from the raw vibration signals. The ability of the CNN was compared with two traditional machine learning algorithms, k-nearest neighbour (kNN) and the support vector machine (SVM), using statistical features from EEMD, CEEMD and NEEEMD. The proposed method was validated using bearing and blade datasets. The results show that the machine learning algorithms achieved comparatively lower accuracy than the proposed CNN model. All the outputs from the bearing and blade fault classifiers demonstrated that the scalogram samples from the proposed NEEEMD method obtained the highest accuracy, sensitivity and robustness using the CNN. DCGAN was applied to the proposed NEEEMD scalograms to further enhance the CNN classifier's performance and identify the optimal amount of training data. After training the classifier using the augmented samples, the classifier obtained even higher validation and test accuracy with greater robustness. The test accuracies improved from 98%, 96.31% and 92.25% to 99.6%, 98.29% and 93.59%, respectively, for the different classifier models using NEEEMD. The proposed method can be used as a more generalised and robust method for rotating machinery fault diagnosis.
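
    NEEEMD itself is the contribution of this work, so the sketch below only illustrates the surrounding scalogram-plus-CNN pipeline using the CWT baseline mentioned in the abstract: a vibration segment is converted to a 64×64 scalogram and fed to a small Keras CNN. The wavelet, network layout, single-channel input and number of fault classes are assumptions for illustration.

        # CWT scalograms classified by a small CNN (illustrative baseline sketch).
        import numpy as np
        import pywt
        from tensorflow.keras import Sequential
        from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

        def cwt_scalogram(signal, size=64, wavelet="morl"):
            """Continuous wavelet transform magnitude, resized to size x size."""
            coeffs, _ = pywt.cwt(signal, scales=np.arange(1, size + 1), wavelet=wavelet)
            mag = np.abs(coeffs)                                   # (size, len(signal))
            cols = np.linspace(0, mag.shape[1] - 1, size).astype(int)
            img = mag[:, cols]                                     # crude time resampling
            return (img - img.min()) / (img.max() - img.min() + 1e-12)

        n_classes = 4                                              # assumed fault classes
        model = Sequential([
            Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
            MaxPooling2D(),
            Conv2D(32, 3, activation="relu"),
            MaxPooling2D(),
            Flatten(),
            Dense(64, activation="relu"),
            Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        # Usage: stack scalograms of vibration segments into an array of shape
        # (n_samples, 64, 64, 1), with integer fault labels, then call model.fit.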

    Large Scale Kernel Methods for Fun and Profit

    Kernel methods are among the most flexible classes of machine learning models with strong theoretical guarantees. Wide classes of functions can be approximated arbitrarily well with kernels, while fast convergence and learning rates have been formally shown to hold. Exact kernel methods are known to scale poorly with increasing dataset size, and we believe that one of the factors limiting their usage in modern machine learning is the lack of scalable and easy-to-use algorithms and software. The main goal of this thesis is to study kernel methods from the point of view of efficient learning, with particular emphasis on large-scale data, but also on low-latency training and user efficiency. We improve the state of the art in scaling kernel solvers to datasets with billions of points using the Falkon algorithm, which combines random projections with fast optimization. Running it on GPUs, we show how to fully utilize the available computing power for training kernel machines. To boost the ease of use of approximate kernel solvers, we propose an algorithm for automated hyperparameter tuning. By minimizing a penalized loss function, a model can be learned together with its hyperparameters, reducing the time needed for user-driven experimentation. In the setting of multi-class learning, we show that, under stringent but realistic assumptions on the separation between classes, a wide set of algorithms needs far fewer data points than in the more general setting (without assumptions on class separation) to reach the same accuracy. The first part of the thesis develops a framework for efficient and scalable kernel machines. This raises the question of whether our approaches can be used successfully in real-world applications, especially compared to alternatives based on deep learning, which are often deemed hard to beat. The second part investigates this question in two main applications, chosen because of the paramount importance of having an efficient algorithm. First, we consider the problem of instance segmentation of images taken from the iCub robot. Here Falkon is used as part of a larger pipeline, but the efficiency afforded by our solver is essential to ensure smooth human-robot interactions. Second, we consider time-series forecasting of wind speed, analysing the relevance of different physical variables to the predictions themselves. We investigate different schemes to adapt i.i.d. learning to the time-series setting. Overall, this work aims to demonstrate, through novel algorithms and examples, that kernel methods are up to computationally demanding tasks, and that there are concrete applications in which their use is warranted and more efficient than that of other, more complex, and less theoretically grounded models.
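
    The core idea behind solvers such as Falkon is the Nystrom approximation: restrict the solution to M randomly chosen centers so that a small M-by-M regularized system replaces the full n-by-n kernel problem. The sketch below is a plain numpy direct solve of that reduced system under assumed names and constants; Falkon itself adds preconditioning and conjugate-gradient iterations on GPU, which this sketch does not attempt.

        # Nystrom kernel ridge regression with M inducing centers (illustrative).
        import numpy as np

        def gaussian_kernel(A, B, sigma=1.0):
            """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def nystrom_krr_fit(X, y, M=100, lam=1e-3, sigma=1.0, seed=0):
            """Pick M random centers and solve the reduced regularized system."""
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=M, replace=False)]
            Knm = gaussian_kernel(X, centers, sigma)           # n x M
            Kmm = gaussian_kernel(centers, centers, sigma)     # M x M
            # (Knm^T Knm + n * lam * Kmm) alpha = Knm^T y  -- only M x M to solve.
            A = Knm.T @ Knm + len(X) * lam * Kmm
            alpha = np.linalg.solve(A + 1e-8 * np.eye(M), Knm.T @ y)
            return centers, alpha

        def nystrom_krr_predict(Xnew, centers, alpha, sigma=1.0):
            return gaussian_kernel(Xnew, centers, sigma) @ alpha

        # Toy usage: fit a noisy sine with 100 centers instead of a 2000 x 2000 kernel.
        rng = np.random.default_rng(1)
        X = rng.uniform(-3, 3, size=(2000, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000)
        centers, alpha = nystrom_krr_fit(X, y)
        print(nystrom_krr_predict(np.array([[0.0], [1.5]]), centers, alpha))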