118 research outputs found

    Hybrid artificial intelligence algorithms for short-term load and price forecasting in competitive electric markets

    The liberalization and deregulation of electric markets forced the various participants to accommodate several challenges, including a considerable accumulation of new generation capacity from renewable sources (fundamentally wind energy), the unpredictability associated with these new forms of generation, and new consumption patterns, all contributing to further electricity price volatility (e.g. in the Iberian market). Given the competitive framework in which market participants operate, the existence of efficient computational forecasting techniques is a distinctive factor. Based on these forecasts, a suitable bidding strategy and an effective generation-system operation plan can be devised which, together with improved exploitation of the installed transmission capacity, maximizes profits and contributes to better utilization of energy resources. This dissertation presents a new hybrid method for load and electricity price forecasting over a one-day-ahead time horizon. The optimization scheme in this method combines the efforts of different techniques, notably artificial neural networks, several optimization algorithms, and the wavelet transform. The method was validated on different real case studies, and the subsequent accuracy comparison with results published in reference journals confirmed the suitability of the proposed hybrid method.
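    The wavelet step in such hybrid forecasters typically splits the load or price series into a smooth approximation and a detail component before each part is fed to a neural network. A minimal one-level Haar decomposition illustrates the idea — this is a sketch, not the dissertation's actual scheme, and the function names and sample load series are hypothetical:

    ```python
    import math

    def haar_decompose(signal):
        """One-level Haar wavelet decomposition: split a series into a
        smooth approximation and a detail component."""
        approx, detail = [], []
        for i in range(0, len(signal) - 1, 2):
            a, b = signal[i], signal[i + 1]
            approx.append((a + b) / math.sqrt(2))
            detail.append((a - b) / math.sqrt(2))
        return approx, detail

    def haar_reconstruct(approx, detail):
        """Invert the one-level Haar decomposition exactly."""
        signal = []
        for a, d in zip(approx, detail):
            signal.append((a + d) / math.sqrt(2))
            signal.append((a - d) / math.sqrt(2))
        return signal

    # Hypothetical hourly load samples
    load = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0]
    approx, detail = haar_decompose(load)
    restored = haar_reconstruct(approx, detail)
    ```

    In a forecasting pipeline, each component would be predicted separately (e.g. by a neural network) and the forecasts recombined through the inverse transform.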

    Generalized Minimum Error with Fiducial Points Criterion for Robust Learning

    The conventional Minimum Error Entropy (MEE) criterion has limitations, showing reduced sensitivity to the error mean and uncertainty about the location of the error probability density function. To overcome this, the MEE with fiducial points criterion (MEEF) was proposed. However, the efficacy of the MEEF is inconsistent due to its reliance on a fixed Gaussian kernel. In this paper, a generalized minimum error with fiducial points criterion (GMEEF) is presented, adopting the Generalized Gaussian Density (GGD) function as the kernel. The GGD extends the Gaussian distribution by introducing a shape parameter that provides more control over tail behavior and peakedness. In addition, because of the high computational complexity of the GMEEF criterion, a quantization scheme is introduced to notably lower the computational load of GMEEF-type algorithms. Finally, the proposed criteria are applied in the domains of adaptive filtering, kernel recursive algorithms, and multilayer perceptrons. Several numerical simulations, covering system identification, acoustic echo cancellation, time-series prediction, and supervised classification, indicate that the novel algorithms perform excellently.
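    The key ingredients described above — a generalized Gaussian kernel and a fiducial point at zero error — can be sketched as follows. This is an illustrative objective under assumed defaults, not the paper's exact GMEEF formulation; `gmeef_like_loss` and the mixing weight `lam` are hypothetical names:

    ```python
    import math

    def ggd_kernel(e, alpha=1.0, beta=2.0):
        """Unnormalized generalized Gaussian kernel exp(-|e/alpha|**beta).
        beta=2 recovers the usual Gaussian shape; beta<2 gives heavier tails."""
        return math.exp(-abs(e / alpha) ** beta)

    def gmeef_like_loss(errors, alpha=1.0, beta=2.0, lam=0.5):
        """Sketch of a fiducial-point objective (to be maximized): mix the
        kernel similarity between error pairs (an entropy-like term) with the
        similarity of each error to the fiducial point zero."""
        n = len(errors)
        pair_term = sum(ggd_kernel(ei - ej, alpha, beta)
                        for ei in errors for ej in errors) / (n * n)
        fiducial_term = sum(ggd_kernel(ei, alpha, beta) for ei in errors) / n
        return lam * pair_term + (1 - lam) * fiducial_term
    ```

    The fiducial term is what restores sensitivity to the error mean: a set of errors clustered tightly but far from zero scores worse than errors clustered around zero.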

    Adaptive neural network cascade control system with entropy-based design

    A neural network (NN) based cascade control system is developed, in which the primary PID controller is constructed by an NN. A new entropy-based measure, named the centred error entropy (CEE) index, a weighted combination of the error cross correntropy (ECC) criterion and the error entropy criterion (EEC), is proposed to tune the NN-PID controller. The purpose of introducing the CEE in controller design is to ensure that the uncertainty in the tracking error is minimised while the peak value of the error probability density function (PDF) is driven towards zero. The NN-controller design based on this new performance function is developed and the convergence conditions are derived. During the control process, the CEE index is estimated by a Gaussian kernel function. Adaptive rules are developed to update the kernel size in order to achieve a more accurate estimate of the CEE index. This NN cascade control approach is applied to superheated steam temperature control of a simulated power plant system, and the effectiveness and strength of the proposed strategy are discussed by comparison with NN-PID controllers tuned with the EEC and ECC criteria
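    A kernel-based estimate of such a combined index can be sketched from the abstract's description alone: an information-potential (entropy) term over error pairs plus a correntropy term that rewards errors near zero. The function names, the fixed kernel width, and the mixing `weight` here are hypothetical, not the paper's exact tuning:

    ```python
    import math

    def gauss_kernel(x, sigma=1.0):
        """Normalized Gaussian kernel used to estimate both terms."""
        return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

    def cee_index(errors, sigma=1.0, weight=0.5):
        """Sketch of a centred-error-entropy style index: a weighted mix of
        an information potential over error pairs (entropy term) and a
        correntropy term that pulls the error PDF peak towards zero."""
        n = len(errors)
        info_potential = sum(gauss_kernel(ei - ej, sigma)
                             for ei in errors for ej in errors) / (n * n)
        correntropy = sum(gauss_kernel(ei, sigma) for ei in errors) / n
        return weight * info_potential + (1 - weight) * correntropy
    ```

    The adaptive kernel-size rules mentioned in the abstract would replace the fixed `sigma` with one updated online from the recent error samples.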

    An overview of the main machine learning models - from theory to algorithms

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics. In the context of solving highly complex problems, Artificial Intelligence has shown exponential growth over the past years, allowing Machine Learning to augment and sometimes outperform human learning. From driverless cars to automatic recommendations on Netflix, we are surrounded by AI, even if we do not notice it. Furthermore, companies have recently adopted new frameworks in their routines, mainly composed of algorithms able to solve complex problems in a short period of time. The growth of AI technologies has been stunning, and it is only possible because a sub-field of AI called Machine Learning is growing even faster. On a small scale, Machine Learning may be seen as a simple system able to find patterns in data and learn from them. However, it is precisely that learning process that, on a large scale, will allow machines to mimic human behavior and perform tasks that would otherwise require human intelligence. To give an idea, according to Forbes the global Machine Learning market was valued at $1.7B in 2017 and is expected to reach almost $21B in 2024. Naturally, Machine Learning has become an attractive and profitable scientific area that demands continuous learning, since there is always something new being discovered. During the last decades, a huge number of algorithms have been proposed by the research community, which can cause confusion about how and when to use each of them. That is exactly what this thesis addresses: over the next chapters we review the main Machine Learning models and their respective advantages and disadvantages

    The Shallow and the Deep: A biased introduction to neural networks and old school machine learning

    The Shallow and the Deep is a collection of lecture notes that offers an accessible introduction to neural networks and machine learning in general. However, it was clear from the beginning that these notes would not be able to cover this rapidly changing and growing field in its entirety. The focus lies on classical machine learning techniques, with a bias towards classification and regression. Other learning paradigms and many recent developments in, for instance, Deep Learning are not addressed or only briefly touched upon. Biehl argues that having a solid knowledge of the foundations of the field is essential, especially for anyone who wants to explore the world of machine learning with an ambition that goes beyond the application of some software package to some data set. Therefore, The Shallow and the Deep places emphasis on fundamental concepts and theoretical background. This also involves delving into the history and pre-history of neural networks, where the foundations for most of the recent developments were laid. These notes aim to demystify machine learning and neural networks without losing the appreciation for their impressive power and versatility