
    Prediction in Photovoltaic Power by Neural Networks

    Get PDF
    The ability to forecast the power produced by renewable energy plants in the short and medium term is a key requirement for high-level penetration of distributed generation into the grid infrastructure. Forecasting energy production is mandatory for dispatching and distribution, at the transmission system operator level as well as at the electrical distributor and power system operator levels. In this paper, we present three techniques based on neural and fuzzy neural networks, namely the radial basis function network, the adaptive neuro-fuzzy inference system and the higher-order neuro-fuzzy inference system, which are well suited to predicting data sequences stemming from real-world applications. Preliminary results on the prediction of the power generated by a large-scale photovoltaic plant in Italy confirm the reliability and accuracy of the proposed approaches.
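    For illustration, the following is a minimal sketch of a radial basis function network regressor of the kind named above, not the paper's implementation; the choice of k-means centers, the width heuristic, and the lagged irradiance/temperature features in the usage note are assumptions.

# Minimal RBF-network regressor sketch for short-term PV power forecasting.
# Assumptions (not from the paper): centers are placed by k-means, the width is a
# mean inter-center distance heuristic, and inputs are lagged plant/weather features.
import numpy as np
from sklearn.cluster import KMeans

class RBFNet:
    def __init__(self, n_centers=20, ridge=1e-3):
        self.n_centers = n_centers
        self.ridge = ridge

    def fit(self, X, y):
        # Place RBF centers with k-means over the training inputs.
        km = KMeans(n_clusters=self.n_centers, n_init=10, random_state=0).fit(X)
        self.centers_ = km.cluster_centers_
        # Heuristic width: mean pairwise distance between centers.
        d = np.linalg.norm(self.centers_[:, None] - self.centers_[None, :], axis=-1)
        self.sigma_ = d[d > 0].mean()
        Phi = self._design(X)
        # Ridge-regularized least squares for the linear output layer.
        A = Phi.T @ Phi + self.ridge * np.eye(Phi.shape[1])
        self.w_ = np.linalg.solve(A, Phi.T @ y)
        return self

    def _design(self, X):
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
        Phi = np.exp(-d2 / (2 * self.sigma_ ** 2))
        return np.hstack([Phi, np.ones((X.shape[0], 1))])  # bias column

    def predict(self, X):
        return self._design(X) @ self.w_

# Hypothetical usage: X holds lagged irradiance/temperature/power features.
# model = RBFNet(n_centers=30).fit(X_train, y_train)
# power_forecast = model.predict(X_next_hours)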

    Essays on variational Bayes in Econometrics

    Get PDF
    The first essay (Chapter 1) presents a variational Bayes (VB) algorithm for the reduced-form vector autoregression (VAR). The algorithm is derived from the evidence lower bound, which is shown to be tight, ensuring efficient convergence, and the optimization is carried out by coordinate descent. To validate the proposed method, its accuracy and computational cost are compared with existing VB approaches that approximate the VAR one equation at a time (Cholesky-transformed VAR) and with a more computationally intensive Markov chain Monte Carlo (MCMC) method using Gibbs sampling. In applications to both US macroeconomic data and artificial data, our VB for VAR outperforms VB for the Cholesky-transformed VAR in terms of the accuracy of the VAR covariance and, compared to the MCMC method, achieves comparable accuracy while significantly reducing computation time. The second essay (Chapter 2) extends the VB approach to the challenging domain of mixed-frequency vector autoregression (MF-VAR) models, which handle multiple data frequencies in a single estimation, including missing lower-frequency observations in a higher-frequency system. To overcome these challenges, we introduce a variational Bayes-expectation maximization algorithm (VB-EM). We derive an evidence lower bound on the log marginal likelihood that accounts for the missing observations and optimize it with respect to the variational parameters, achieving a tighter evidence lower bound than existing VB methods in the literature and ensuring convergence. We further compare the approach to the more computationally demanding MCMC method using Gibbs sampling. In extensive empirical evaluations and out-of-sample forecasts of eleven US macroeconomic series, our VB-EM algorithm performs on par with MCMC in terms of point forecasts, and when assessing predictive density we find no significant empirical evidence to distinguish between the two methods. Notably, the VB-EM algorithm has significantly lower computational cost, making it an appealing choice for researchers and practitioners alike. The third essay (Chapter 3) begins by noting that the spike in the volatility of macroeconomic variables during the Covid-19 pandemic led to poor forecasting performance of the workhorse Bayesian VAR with stochastic volatility, which has drawn considerable attention from economists towards alternative models, including non-parametric models such as the Gaussian process VAR. Estimating the VAR one equation at a time, via Cholesky-transformed VARs, enables the use of more advanced regression models within the VAR. In this chapter I explore several Gaussian process VARs: the GP-VAR, the GP-DNN-VAR (which incorporates a deep neural network as the mean function in the GP prior), and the heteroscedastic GP-VAR (HGP-VAR), in which the likelihood variance is time-varying and parameterized by another latent GP function. Variational inference is used as the approximation method for the HGP-VAR.
The forecasting results suggest that during non-pandemic periods the HGP-VAR and GP-VAR perform similarly to the BVAR-SV, whereas during the Covid-19 pandemic the advantage of the time-varying likelihood variance in the HGP-VAR becomes more pronounced for predicting macroeconomic variables in a highly turbulent period.
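To make the variational idea concrete, here is a toy coordinate-ascent variational Bayes (CAVI) sketch for a single Bayesian regression equation, a stand-in for one VAR equation; it is not the thesis' VB or VB-EM algorithm, and the conjugate priors and hyperparameters are illustrative.

# Toy CAVI for y = X w + e with w ~ N(0, alpha^{-1} I) (alpha fixed) and
# noise precision tau ~ Gamma(a0, b0); mean-field factors q(w) = N(m, S),
# q(tau) = Gamma(a, b). Illustrative only, not the thesis' algorithm.
import numpy as np

def cavi_linear_regression(X, y, alpha=1.0, a0=1e-2, b0=1e-2, iters=200):
    n, k = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    a = a0 + 0.5 * n                      # shape update is fixed under mean field
    b = b0
    for _ in range(iters):
        e_tau = a / b                     # E_q[tau]
        S = np.linalg.inv(alpha * np.eye(k) + e_tau * XtX)   # q(w) covariance
        m = e_tau * S @ Xty                                   # q(w) mean
        # E_q[(y - Xw)'(y - Xw)] under q(w) = N(m, S):
        resid = y @ y - 2 * m @ Xty + np.trace(XtX @ (S + np.outer(m, m)))
        b = b0 + 0.5 * resid              # q(tau) rate update
    return m, S, a, b

# Hypothetical usage with simulated data:
# rng = np.random.default_rng(0)
# X = rng.standard_normal((200, 3)); w_true = np.array([0.5, -1.0, 2.0])
# y = X @ w_true + 0.3 * rng.standard_normal(200)
# m, S, a, b = cavi_linear_regression(X, y)   # m approximates w_true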

    Air Quality Prediction in Smart Cities Using Machine Learning Technologies Based on Sensor Data: A Review

    Get PDF
    The influence of machine learning technologies is rapidly increasing and penetrating almost every field, and air pollution prediction is no exception. This paper reviews studies on air pollution prediction using machine learning algorithms based on sensor data in the context of smart cities. The most relevant papers were selected from the most popular databases after applying the corresponding filters. After thoroughly reviewing those papers, their main features were extracted and used to link and compare them to each other. As a result, we can conclude that: (1) instead of simple machine learning techniques, authors now apply advanced and sophisticated techniques; (2) China was the leading country in terms of case studies; (3) particulate matter with a diameter of 2.5 micrometers (PM2.5) was the main prediction target; (4) in 41% of the publications the authors carried out the prediction for the next day; (5) 66% of the studies used data at an hourly rate; (6) 49% of the papers used open data, a share that has tended to increase since 2016; and (7) for efficient air quality prediction it is important to consider external factors such as weather conditions, spatial characteristics, and temporal features.
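    As a hedged sketch of point (7), the snippet below builds lagged, temporal and weather features for a next-day PM2.5 forecast with a gradient-boosted model; the file name and column names are hypothetical and are not taken from any reviewed study.

# Illustrative next-day PM2.5 forecasting sketch using external factors.
# Assumed (hypothetical) input: an hourly CSV with pollutant and weather columns.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("city_air_quality_hourly.csv", parse_dates=["timestamp"])
daily = df.set_index("timestamp").resample("D").mean()

# Temporal features and lagged PM2.5 alongside the weather covariates.
daily["dayofweek"] = daily.index.dayofweek
daily["month"] = daily.index.month
for lag in (1, 2, 3, 7):
    daily[f"pm25_lag{lag}"] = daily["pm25"].shift(lag)
daily["target_pm25_next_day"] = daily["pm25"].shift(-1)
daily = daily.dropna()

features = [c for c in daily.columns if c != "target_pm25_next_day"]
X_train, X_test, y_train, y_test = train_test_split(
    daily[features], daily["target_pm25_next_day"], shuffle=False, test_size=0.2)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("R^2 on held-out days:", model.score(X_test, y_test))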

    A Survey on Data Mining Techniques Applied to Energy Time Series Forecasting

    Get PDF
    Data mining has become an essential tool over the last decade for analyzing large sets of data. The variety of techniques it includes and the successful results obtained in many application fields make this family of approaches powerful and widely used. In particular, this work explores the application of these techniques to time series forecasting. Although classical statistical methods provide reasonably good results, data mining approaches outperform them. Hence, this work faces two main challenges: (i) to provide a compact mathematical formulation of the most commonly used techniques; and (ii) to review the latest work on time series forecasting and, as a case study, works related to electricity price and demand markets. Funding: Ministerio de Economía y Competitividad TIN2014-55894-C2-R; Junta de Andalucía P12-TIC-1728; Universidad Pablo de Olavide APPB81309.
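    The common setup underlying the surveyed data-mining forecasters is to recast a series as a supervised learning problem with lagged values as features; the sketch below shows that recasting on a synthetic demand-like series and is purely illustrative.

# Sliding-window recasting of a series into (lags -> next value) pairs,
# then a standard data-mining regressor; the series here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_supervised(series, n_lags):
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

rng = np.random.default_rng(1)
demand = np.sin(np.arange(1000) * 2 * np.pi / 24) + 0.1 * rng.standard_normal(1000)

X, y = make_supervised(demand, n_lags=24)          # one day of hourly lags
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:-100], y[:-100])
print("held-out MAE:", np.abs(model.predict(X[-100:]) - y[-100:]).mean())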

    Short-Term Industrial Load Forecasting Based on Ensemble Hidden Markov Model

    Get PDF
    Short-term load forecasting (STLF) for industrial customers has become an essential task for reducing the cost of energy transactions and promoting the stable operation of the smart grid throughout the development of the modern power system. Traditional STLF methods commonly focus on establishing the non-linear relationship between loads and features but ignore the temporal relationship between them. In this paper, an STLF method based on an ensemble hidden Markov model (e-HMM) is proposed to track and learn the dynamic characteristics of industrial customers' consumption patterns in correlated multivariate time series, thereby improving prediction accuracy. Specifically, a novel similarity measurement strategy in log-likelihood space is designed to calculate the log-likelihood value of the multivariate time series in sliding time windows, which effectively helps the hidden Markov model (HMM) capture the dynamic temporal characteristics from multiple historical sequences with similar patterns, greatly improving prediction accuracy. To improve the generalization ability and stability of a single HMM, we further adopt the bagging ensemble learning framework to reduce the prediction errors of a single model. The experimental study is carried out on a real dataset from a company in Hunan Province, China, over different forecasting periods. The results of multiple experiments and comparisons with several state-of-the-art models show that the proposed approach achieves higher prediction accuracy.
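    A rough sketch of the idea follows: rank historical windows by how close their HMM log-likelihood is to that of the most recent window, average the loads that followed the most similar windows, and bag several HMMs trained on bootstrap samples. The window length, state count and forecast rule are illustrative assumptions, not the paper's exact design; it relies on the hmmlearn package.

# e-HMM-style sketch: log-likelihood similarity in sliding windows + bagging.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def ehmm_forecast(series, window=48, horizon=24, n_models=5, n_states=4, top_k=3):
    """series: array of shape (T, n_features); column 0 is the load to forecast."""
    rng = np.random.default_rng(0)
    recent = series[-window:]
    starts = np.arange(0, len(series) - window - horizon)   # candidate windows
    forecasts = []
    for _ in range(n_models):
        # Bagging: fit each HMM on a bootstrap sample of historical windows.
        boot = rng.choice(starts, size=min(50, len(starts)), replace=True)
        train = np.vstack([series[s:s + window] for s in boot])
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                          n_iter=50, random_state=0).fit(train, lengths=[window] * len(boot))
        ref = hmm.score(recent)
        # Similarity: distance between log-likelihoods under the same HMM.
        sims = np.array([abs(hmm.score(series[s:s + window]) - ref) for s in starts])
        best = starts[np.argsort(sims)[:top_k]]
        follow = np.mean([series[s + window:s + window + horizon, 0] for s in best], axis=0)
        forecasts.append(follow)
    return np.mean(forecasts, axis=0)   # bagged forecast of the next `horizon` loads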

    Modeling, forecasting and trading the EUR exchange rates with hybrid rolling genetic algorithms: support vector regression forecast combinations

    Get PDF
    The motivation of this paper is to introduce a hybrid rolling genetic algorithm-support vector regression (RG-SVR) model for optimal parameter selection and feature subset combination. The algorithm is applied to the task of forecasting and trading the EUR/USD, EUR/GBP and EUR/JPY exchange rates. The proposed methodology genetically searches over a feature space (a pool of individual forecasts) and then combines the optimal feature subsets (SVR forecast combinations) for each exchange rate. This is achieved by applying a fitness function specialized for financial purposes and adopting a sliding-window approach. The individual forecasts are derived from several linear and non-linear models. RG-SVR is benchmarked against genetically and non-genetically optimized SVR and SVM models that dominate the relevant literature, along with the robust ARBF-PSO neural network. The statistical and trading performance of all models is investigated over the period 1999–2012. RG-SVR presents the best performance in terms of statistical accuracy and trading efficiency for all the exchange rates under study. This superiority confirms the success of the implemented fitness function and training procedure, and validates the benefits of the proposed algorithm.
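    The sketch below shows the general genetic search over forecast combinations: binary chromosomes select a subset of candidate forecasts (columns of F) and an SVR fitted on that subset is scored on a validation split. The paper's fitness is trading-oriented and rolling; plain validation RMSE is used here as an illustrative stand-in, and all function and variable names are hypothetical.

# GA feature-subset selection over a pool of individual forecasts, combined by SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

def fitness(mask, F_tr, y_tr, F_val, y_val):
    if not mask.any():
        return np.inf
    model = SVR(kernel="rbf", C=1.0).fit(F_tr[:, mask], y_tr)
    return np.sqrt(mean_squared_error(y_val, model.predict(F_val[:, mask])))

def ga_select(F_tr, y_tr, F_val, y_val, pop_size=30, generations=40, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = F_tr.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat)).astype(bool)
    for _ in range(generations):
        scores = np.array([fitness(ind, F_tr, y_tr, F_val, y_val) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]   # lower RMSE is fitter
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_feat) < p_mut              # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, F_tr, y_tr, F_val, y_val) for ind in pop])
    return pop[np.argmin(scores)]                            # best combination mask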

    An interpretable multi-stage forecasting framework for energy consumption and CO2 emissions for the transportation sector

    Get PDF
    The transportation sector is deemed one of the primary sources of energy consumption and greenhouse gases throughout the world. To realise and design sustainable transport, it is imperative to comprehend the relationships and evaluate the interactions among the variables that may influence transport energy consumption and CO2 emissions. Unlike recently published papers, this study strives to achieve a balance between machine learning (ML) model accuracy and model interpretability, using the Shapley additive explanations (SHAP) method, for forecasting energy consumption and CO2 emissions in the UK's transportation sector. To this end, this paper proposes an interpretable multi-stage forecasting framework that simultaneously maximises ML model accuracy and determines the relationship between the predictions and the influential variables by revealing the contribution of each variable to the predictions. For the UK's transportation sector, the experimental results indicate that road carbon intensity is the most influential variable for both energy consumption and CO2 emissions predictions, whereas, unlike in other studies, population and GDP per capita are found to be uninfluential. The proposed multi-stage forecasting framework may assist policymakers in making more informed energy decisions and establishing more accurate investments.
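    The accuracy-plus-interpretability pattern can be sketched as follows: fit a tree-ensemble forecaster and attribute each prediction to its inputs with SHAP. The file name and feature columns are hypothetical, and the study's actual multi-stage pipeline is not reproduced.

# Hedged sketch: gradient-boosted forecaster + SHAP attributions per variable.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("uk_transport_indicators.csv")       # hypothetical file
features = ["road_carbon_intensity", "vehicle_km", "fuel_price",
            "population", "gdp_per_capita"]            # hypothetical columns
X, y = df[features], df["transport_energy_consumption"]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global importance: mean absolute SHAP value per variable.
importance = pd.Series(abs(shap_values).mean(axis=0), index=features)
print(importance.sort_values(ascending=False))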