
    A data-driven approach using deep learning time series prediction for forecasting power system variables

    This study investigates the performance of the Group Method of Data Handling (GMDH) type neural network algorithm in short-term time series prediction of renewable energy and grid-balancing variables, such as the Net Regulation Volume (NRV) and System Imbalance (SI). The proposed method is compared with a Multi-Layer Perceptron (MLP) neural network, which is known as a universal approximator. Empirical validation shows that the GMDH forecasts are more accurate in comparison with the most recent forecasts provided by ELIA (the Belgian transmission system operator). This study aims to demonstrate the applicability of the polynomial GMDH-type neural network algorithm in time series prediction under a wide range of complexity and uncertainty related to the environment and the electricity market.
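    The building block of a polynomial GMDH-type network such as the one described above is a quadratic "partial description" of each pair of inputs, fitted by least squares. A minimal sketch of that step (the function names and data here are illustrative, not from the study):

```python
import numpy as np

# A two-input quadratic GMDH partial description (Ivakhnenko polynomial):
#   y ≈ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1**2 + a5*x2**2
# The six coefficients are fitted by ordinary least squares.

def fit_partial_description(x1, x2, y):
    """Fit the 6 coefficients of a two-input quadratic GMDH neuron."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_partial_description(coef, x1, x2):
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return X @ coef

# Usage: recover a known quadratic relation from noisy samples.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 200)
x2 = rng.uniform(-1, 1, 200)
y = 1.0 + 2.0 * x1 - 0.5 * x1 * x2 + 3.0 * x2**2 + rng.normal(0, 0.01, 200)
coef = fit_partial_description(x1, x2, y)
```

    A full GMDH network builds layers of such neurons, keeps the best-performing pairs on a validation split, and feeds their outputs to the next layer until accuracy stops improving.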

    Short-term power demand forecasting using the differential polynomial neural network

    Power demand forecasting is important for economically efficient operation and effective control of power systems, and enables planning of the load on generating units. The purpose of short-term electricity demand forecasting is to forecast the system load in advance, represented by the sum of all consumers' loads at the same time. Precise load forecasting is required to avoid high generation costs and excessive spinning reserve capacity. Under-prediction of demand leads to insufficient reserve capacity and can threaten system stability; over-prediction leads to an unnecessarily large reserve and thus high preparation costs. The differential polynomial neural network is a new neural network type, which forms and resolves an unknown general partial differential equation approximating a searched function described by data observations. It generates a convergent sum series of relative polynomial derivative terms which can substitute for the ordinary differential equation describing a 1-parametric function time series. A new method of short-term power demand forecasting, based on similarity relations of several subsequent day progress cycles at the same time points, is presented and tested on two datasets. Comparisons were made with an artificial neural network using the same prediction method.

    A Hybrid Autoregressive Integrated Moving Average-phGMDH Model to Forecast Crude Oil Price

    Crude oil price fluctuations affect almost every individual and activity on the planet. Forecasting the crude oil price is therefore an important concern, especially in economic policy and financial circles, as it enables stakeholders to estimate the crude oil price at a point in time. The Autoregressive Integrated Moving Average (ARIMA) model has been an effective tool used widely to model time series; its limitation is that it cannot model nonlinear systems sufficiently. This paper assesses the ability to build a robust forecasting model for the world crude oil price, Brent, on the international market using a hybrid of two methods: ARIMA and the Polynomial Harmonic Group Method of Data Handling (phGMDH). The ARIMA methodology is used to model the time series component with constant variance, while the phGMDH is used to model the harmonic residuals of the ARIMA model. Keywords: Autocorrelation, Harmonics, Residuals. JEL Classifications: C18, C45, C51, C63, C87, O13. DOI: https://doi.org/10.32479/ijeep.798
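    The hybrid idea in this abstract, a linear model for the series plus a nonlinear model for its residuals, can be sketched in a few lines. This is a simplified stand-in, not the paper's phGMDH: a least-squares AR model plays the role of ARIMA, and a small quadratic residual model plays the role of the polynomial harmonic GMDH; the data and model orders are illustrative assumptions.

```python
import numpy as np

# 1) A linear autoregressive model captures the linear time-series component.
# 2) A polynomial model of lagged residuals captures what the linear part missed.

def fit_ar(y, p):
    """Least-squares AR(p): y[t] ≈ c + sum_i a_i * y[t-i]."""
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - i:len(y) - i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def ar_predict(y, coef, p):
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - i:len(y) - i] for i in range(1, p + 1)])
    return X @ coef

# Synthetic "price" series with trend, cycle and noise (illustrative only).
rng = np.random.default_rng(1)
t = np.arange(400)
y = 50 + 0.02 * t + 2 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 0.2, 400)

p = 3
ar_coef = fit_ar(y, p)
linear_fit = ar_predict(y, ar_coef, p)
resid = y[p:] - linear_fit

# Quadratic model of the residuals on one lagged residual (the GMDH-style step).
r1 = resid[:-1]
R = np.column_stack([np.ones_like(r1), r1, r1**2])
r_coef, *_ = np.linalg.lstsq(R, resid[1:], rcond=None)
hybrid_fit = linear_fit[1:] + R @ r_coef

rmse_linear = float(np.sqrt(np.mean((y[p + 1:] - linear_fit[1:])**2)))
rmse_hybrid = float(np.sqrt(np.mean((y[p + 1:] - hybrid_fit)**2)))
```

    In-sample, the hybrid error can never exceed the linear error, since the residual model could always fit zero coefficients; the paper's claim is that the nonlinear stage also helps out of sample.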

    Combining group method of data handling models using artificial bee colony algorithm for time series forecasting

    Time series forecasting, which uses models to predict future values based on historical data, is an important area of forecasting and has gained the attention of researchers from various related fields of study. In line with its popularity, various models have been introduced for producing accurate time series forecasts. However, producing an accurate forecast is not easy, especially when dealing with nonlinear data, due to the abstract nature of the data. In this study, a model for accurate time series forecasting was developed based on the Artificial Bee Colony (ABC) algorithm and Group Method of Data Handling (GMDH) models with variant transfer functions, namely polynomial, sigmoid, radial basis function and tangent. Initially, the GMDH models were used to forecast the time series data; the individual forecasts were then combined using ABC, which produced a weight for each forecast before aggregating them. To evaluate the performance of the developed GMDH-ABC model, input data on tourism arrivals (Singapore and Indonesia) and airline passenger data were processed using the model to produce reliable forecasts of the time series data. To validate the evaluation, the performance of the model was compared against benchmark models such as the individual GMDH models, an Artificial Neural Network (ANN) model and a combined GMDH using simple averaging (GMDH-SA). Experimental results showed that the GMDH-ABC model had the highest accuracy compared to the other models: it reduced the Root Mean Square Error (RMSE) of the conventional GMDH model by 15.78% for the Singapore data, 28.2% for the Indonesia data and 30.89% for the airline data. In conclusion, these results demonstrated the reliability of the GMDH-ABC model in time series forecasting, and its superiority over the other existing models.
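    The combination step described above, weighting several component forecasts and searching for the weights that minimize RMSE, can be sketched as follows. A plain random search stands in for the Artificial Bee Colony algorithm used in the study, and the component forecasts are synthetic; everything here is an illustrative assumption, not the paper's setup.

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def combine(forecasts, w):
    """Weighted combination of column-wise forecasts; weights sum to 1."""
    return forecasts @ (w / w.sum())

rng = np.random.default_rng(2)
y = np.sin(np.linspace(0, 6, 120))             # target series (synthetic)
# Three imperfect "component model" forecasts with different error levels.
forecasts = np.column_stack(
    [y + rng.normal(0, s, 120) for s in (0.05, 0.2, 0.4)])

# Random search over the weight simplex (stand-in for ABC iterations).
best_w, best_err = None, np.inf
for _ in range(2000):
    w = rng.random(3)
    err = rmse(y, combine(forecasts, w))
    if err < best_err:
        best_w, best_err = w / w.sum(), err

equal_err = rmse(y, combine(forecasts, np.ones(3)))   # simple-averaging baseline
```

    The searched weights should beat the simple-averaging (GMDH-SA-style) baseline by leaning on the most accurate component model, which is exactly the advantage the abstract attributes to the ABC-weighted combination.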

    MODELLING VISCOSITY BELOW BUBBLE POINT PRESSURE USING GROUP METHOD OF DATA HANDLING (GMDH): A COMPARATIVE STUDY

    Below the bubble point pressure, the amount of gas dissolved in the oil increases as the pressure is increased. This causes the in-situ oil viscosity to decrease significantly. Knowledge of viscosity below the bubble point is essential to many areas of the petroleum industry, including reservoir and fluid production and recovery, and upgrading and transporting produced fluids. However, predicting this parameter is difficult below the bubble point pressure, as the liquid undergoes a significant change in composition. Crude oils exhibit regional trends in chemical composition that categorize them as paraffinic, naphthenic, or aromatic. Because of these differences in composition, correlations developed from regional samples that are predominantly of one chemical base may not provide satisfactory results when applied to crude oils from other regions. Although some correlations show modest tolerance when applied in other regions, obtaining accurate results with an acceptable level of error remains questionable. The application of GMDH is not restricted to reservoir engineering; it is also important in many areas including accounting and auditing, finance, marketing, organizational behaviour, economics, military systems and medicine. GMDH networks have several advantages compared with conventional neural networks, including the ability to automatically organize multilayered networks using the heuristic self-organization method. In GMDH-type neural networks, many types of neurons (polynomial, sigmoid function, and radial basis function) can be used to organize the network architecture, and optimum neuron architectures are selected to fit the complexity of the nonlinear system.
The recent advancement in Soft Computing (SC) known as the Group Method of Data Handling (GMDH) type of neural network can provide a more intelligent platform for predicting viscosity below the bubble point pressure with an outstanding correlation coefficient. This paper seeks to develop a new viscosity correlation below the bubble point pressure using data points taken from international oil fields. The correlation will be mapped against other existing correlations from the literature using trend analysis to verify its performance, and a theoretical justification of the developed correlation will be presented. The correlation is expected to be valid for all types of crude oils within the range of data used in the study. Once the correlation has been formulated, a series of statistical and graphical analyses relative to existing correlations will be carried out to provide numerical insight into its accuracy. The comparison will validate the reliability and relevance of the proposed model for predicting viscosity below the bubble point pressure.

    Novel Evolutionary-based Methods for the Robust Training of SVR and GMDH Regressors

    In recent years, a range of methods and algorithms for machine learning and system optimization problems has consolidated into a whole research stream known as Soft Computing. The term Soft Computing refers to a collection of computational techniques that attempt to study, model and analyse very complex phenomena for which conventional methods do not provide complete solutions, or do not provide them in a reasonable time. Soft Computing encompasses a large number of techniques, such as Neural Networks, Support Vector Machines (SVM), Bayesian Networks, and Evolutionary Computation (Genetic Algorithms, Evolutionary Algorithms, etc.). The research in this Thesis focuses on two of these techniques: first, Support Vector Regression (SVR) machines, and second, the Group Method of Data Handling (GMDH). SVMs are a technique devised by Vapnik, based on the structural risk minimization principle and the theory of kernel methods, which builds a decision rule from a dataset with which to predict new values of the process from new inputs. The efficiency of SVM systems has led to very significant development in recent years, and they have been used in a large number of applications, both for classification and for regression problems (SVR). One of the main problems is the search for the so-called hyper-parameters. These parameters cannot be computed exactly, so a large number of combinations must be tested in order to obtain parameters that yield a good estimation function.
As a consequence, training time is usually high, and the parameters found do not always yield a good solution, either because the search algorithm performs poorly or because the generated model is over-trained. This Thesis develops a new evolutionary algorithm for training with a multi-parametric kernel. The new algorithm uses a different parameter for each dimension of the input space. In this case, owing to the increased number of parameters, a classical grid search cannot be used because of the computational cost it would entail. The Thesis therefore proposes an evolutionary algorithm to obtain the optimal values of the SVR parameters, together with new bounds for the parameters of this multi-parametric kernel. In addition, new validation methods have been developed to improve the performance of regression techniques in data-driven problems. The idea is to obtain better models in the training phase of the algorithm, so that performance on the test set improves, mainly with respect to training time and overall system performance, compared with other classical validation methods such as K-Fold cross-validation. The other research focus of this Thesis is the GMDH technique, devised in the 1970s by Ivakhnenko. It is a particularly useful method for problems requiring low training times. It is a self-organized algorithm, in which the model is generated adaptively from the data, growing in complexity over time and adjusting to the problem at hand until it reaches an optimal degree of complexity, that is, neither too simple nor too complex.
In this way the algorithm builds the model from the available data rather than from a preconceived idea of the researcher, as happens in most Soft Computing techniques. GMDH networks also have some drawbacks, such as errors due to over-training and multicollinearity, which on some occasions make the error high compared with other techniques. This Thesis proposes a new algorithm for constructing these networks based on a hyper-heuristic approach. This approach is a new concept related to evolutionary computation, which encodes several heuristics that can be applied sequentially to solve an optimization problem. In our particular case, several basic heuristics are encoded in an evolutionary algorithm to create a hyper-heuristic solution that builds robust GMDH networks for regression problems. All the proposals and methods developed in this Thesis have been evaluated experimentally on benchmark problems, as well as on real regression applications.

    Formation and Development of Self-Organizing Intelligent Technologies of Inductive Modeling

    The purpose of this paper is to analyse the background of the GMDH invention by Ivakhnenko and the evolution of model self-organization ideas, methods and tools during the half-century historical period of successful development of the inductive modelling methodology. The main prerequisites for the creation of the Group Method of Data Handling (GMDH) by Academician A. G. Ivakhnenko are analysed, and the evolution of his scientific ideas and views is examined, along with the main achievements in the development of GMDH during the period 1968–1997. The contribution of researchers from different countries to the modification and application of GMDH is characterized. The results of further development of inductive modelling methods and tools in the Department of Information Technologies of Inductive Modeling are presented, and the most promising directions of research in this field are indicated.