2,625 research outputs found

    Sawtooth Genetic Algorithm and its Application in Hammerstein Model identification and RBFN based stock Market Forecasting

    This project work is divided into three parts. In the first part, we deal with the sawtooth genetic algorithm. In the second part, we use this algorithm for optimization of the Hammerstein model. In the third part, we implement a stock market forecasting model based on a radial basis function network tuned by the sawtooth genetic algorithm.
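
    The core mechanism here is the sawtooth population schedule: the population size shrinks linearly over each period and is then restored with freshly randomised individuals. The sketch below illustrates that schedule on a toy minimisation problem; the schedule parameters, variation operator and objective are illustrative assumptions, not the ones used in the project.

import numpy as np

rng = np.random.default_rng(0)

def pop_size(gen, mean=30, amp=20, period=25):
    """Sawtooth population-size schedule: mean+amp down to mean-amp over each period."""
    phase = gen % period
    return int(round(mean + amp - 2 * amp * phase / (period - 1)))

def fitness(x):                      # toy objective: minimise the sphere function
    return np.sum(x ** 2)

dim, bounds = 5, (-5.0, 5.0)
pop = rng.uniform(*bounds, size=(pop_size(0), dim))

for gen in range(100):
    target = pop_size(gen)
    pop = pop[np.argsort([fitness(p) for p in pop])]   # sort best-first
    if len(pop) >= target:
        pop = pop[:target]                             # shrinking branch of the sawtooth
    else:                                              # period restart: refill with random individuals
        refill = rng.uniform(*bounds, size=(target - len(pop), dim))
        pop = np.vstack([pop, refill])
    pop[1:] += rng.normal(0.0, 0.1, size=pop[1:].shape)  # keep the best, mutate the rest

print("best solution:", pop[0], "fitness:", fitness(pop[0]))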

    Evolving Generalized Euclidean Distances for Training RBNN

    In Radial Basis Neural Networks (RBNN), the activation of each neuron depends on the Euclidean distance between a pattern and the neuron center. Such a symmetrical activation assumes that all attributes are equally relevant, which might not be true. Non-symmetrical distances such as the Mahalanobis distance can be used. However, that distance is computed directly from the data covariance matrix, and therefore the accuracy of the learning algorithm is not taken into account. In this paper, we propose to use a Genetic Algorithm to search for a generalized Euclidean distance matrix that minimizes the error produced by an RBNN.
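
    The proposal replaces the plain Euclidean distance in the RBF activation with a generalised distance d_M(x, c) = sqrt((x - c)^T M (x - c)) and evolves M to minimise the network error. The sketch below uses a toy dataset, parameterises M = L L^T to keep it positive semi-definite, and stands in a simple (1+1)-style perturbation search for the paper's Genetic Algorithm; all sizes and data are illustrative.

import numpy as np

rng = np.random.default_rng(1)

def rbf_design(X, centers, L, width=1.0):
    """Hidden-layer activations using the generalised distance induced by M = L L^T."""
    diff = X[:, None, :] - centers[None, :, :]            # (n, m, d)
    proj = diff @ L                                       # distance in the transformed space
    d2 = np.sum(proj ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbnn_error(X, y, centers, L):
    H = rbf_design(X, centers, L)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)             # least-squares output weights
    return np.mean((H @ w - y) ** 2)

# toy regression data where one attribute is pure noise, so "equal relevance" fails
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)     # attribute 1 is irrelevant
centers = X[rng.choice(len(X), 10, replace=False)]

L = np.eye(2)                                             # start from the plain Euclidean distance
best = rbnn_error(X, y, centers, L)
for _ in range(300):                                      # crude evolutionary search over L
    cand = L + 0.05 * rng.normal(size=L.shape)
    err = rbnn_error(X, y, centers, cand)
    if err < best:
        L, best = cand, err

print("learned distance matrix M = L L^T:\n", L @ L.T)
print("final training MSE:", best)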

    Financial Forecasting Using Evolutionary Computational Techniques

    Financial forecasting, especially stock market prediction, has lately become one of the hottest fields of research due to its commercial applications and the high stakes and attractive benefits it has to offer. In this project we have analyzed various evolutionary computation algorithms for forecasting of financial data. The financial data has been taken from a large database and is based on the stock prices in leading stock exchanges. We have based our models on data taken from the Bombay Stock Exchange (BSE), S&P 500 (Standard and Poor's) and the Dow Jones Industrial Average (DJIA). We have designed three models and compared them using historical data from the three stock exchanges. The models used were based on: 1. Radial Basis Function parameters updated by Particle Swarm Optimization; 2. Radial Basis Function parameters updated by the Least Mean Square algorithm; 3. FLANN parameters updated by Particle Swarm Optimization. The raw input for the experiment is the historical daily open, close, high, low and volume of the concerned index; however, the actual input to the model consists of parameters derived from these data. The results of the experiment are depicted with the aid of suitable curves, where a comparative analysis of the various models is done on the basis of various parameters, including error convergence and the Mean Absolute Percentage Error (MAPE). Keywords: Radial Basis Functions, FLANN, PSO, LMS
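
    As a concrete illustration of the first model (RBF parameters tuned by PSO) and of the MAPE criterion, the sketch below flattens the RBF centres, widths and output weights into one particle vector and applies a standard PSO update on a synthetic price series. The data, network size and PSO constants are assumptions for illustration, not the configuration used in the project.

import numpy as np

rng = np.random.default_rng(2)

def rbf_predict(params, X, n_centers):
    """Decode a flat particle vector into RBF centres, widths and weights, then predict."""
    d = X.shape[1]
    c = params[: n_centers * d].reshape(n_centers, d)
    s = np.abs(params[n_centers * d : n_centers * (d + 1)]) + 1e-3
    w = params[n_centers * (d + 1):]
    d2 = ((X[:, None, :] - c[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2)) @ w

def mape(y_true, y_pred):
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# toy "index" series: predict the next value from the previous 3 closes
series = 100 + np.cumsum(rng.normal(0, 0.5, 400))
X = np.stack([series[i : i + 3] for i in range(len(series) - 3)])
y = series[3:]
mu, sd = series.mean(), series.std()
Xs = (X - mu) / sd                                 # standardised inputs

n_centers = 5
dim = n_centers * X.shape[1] + n_centers + n_centers

def error(p):
    return mape(y, rbf_predict(p, Xs, n_centers) * sd + mu)

swarm = rng.normal(0, 1, (20, dim))
vel = np.zeros_like(swarm)
pbest, pbest_err = swarm.copy(), np.array([error(p) for p in swarm])
gbest = pbest[np.argmin(pbest_err)]

for _ in range(100):                               # standard PSO velocity/position update
    r1, r2 = rng.random(swarm.shape), rng.random(swarm.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm = swarm + vel
    err = np.array([error(p) for p in swarm])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = swarm[improved], err[improved]
    gbest = pbest[np.argmin(pbest_err)]

print("best MAPE on the toy series: %.2f%%" % pbest_err.min())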

    Forecasting currency exchange rate time series with fireworks-algorithm-based higher order neural network with special attention to training data enrichment

    Exchange rates are highly volatile by nature and thus difficult to forecast. Artificial neural networks (ANN) have proved to be better than statistical methods. Because ANN-based forecasts are data driven, inadequate training data may lead the model to a suboptimal solution, resulting in poor accuracy. To enhance forecasting accuracy, we suggest a method of enriching the training dataset by exploring and incorporating virtual data points (VDPs) using an evolutionary method, a fireworks-algorithm-trained functional link artificial neural network (FWA-FLN). The model maintains the correlation between the current and past data, especially at the oscillation points of the time series. The exploration of a VDP and the forecast of the succeeding term are carried out consecutively by the FWA-FLN. Real exchange rate time series are used to train and validate the proposed model. The efficiency of the proposed technique is compared with other models trained similarly, and it produces far better prediction accuracy.
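
    The sketch below illustrates the FLN part and a heavily simplified fireworks-style search: a trigonometric functional expansion of the lag vector feeds a single linear layer, and candidate weight vectors "explode" into Gaussian sparks whose amplitude shrinks as their error improves. The synthetic series, the expansion order and the simplified spark rule are assumptions; the VDP enrichment step is not shown.

import numpy as np

rng = np.random.default_rng(3)

def expand(x):
    """Trigonometric functional expansion of a lag vector (the FLANN 'input layer')."""
    return np.concatenate([np.ones(1), x, np.sin(np.pi * x), np.cos(np.pi * x)])

# toy "exchange-rate" series scaled to about [0, 1]; predict the next value from 4 lags
series = 0.5 + 0.2 * np.sin(np.linspace(0, 20, 300)) + 0.02 * rng.normal(size=300)
X = np.stack([series[i : i + 4] for i in range(len(series) - 4)])
y = series[4:]
Phi = np.stack([expand(x) for x in X])            # precompute the expanded features

def mse(w):
    return np.mean((Phi @ w - y) ** 2)

n_fw, dim = 5, Phi.shape[1]
fireworks = rng.normal(0, 0.5, (n_fw, dim))

for _ in range(150):
    errs = np.array([mse(w) for w in fireworks])
    sparks = []
    for w, e in zip(fireworks, errs):
        # better fireworks (lower error) explode with a smaller amplitude
        amp = 0.01 + 0.5 * (e - errs.min()) / (errs.max() - errs.min() + 1e-12)
        sparks.append(w + amp * rng.normal(size=(8, dim)))
    pool = np.vstack([fireworks] + sparks)
    pool_errs = np.array([mse(w) for w in pool])
    fireworks = pool[np.argsort(pool_errs)[:n_fw]]   # keep the best candidates

best = fireworks[0]
print("training MSE:", mse(best))
print("one-step forecast:", Phi[-1] @ best, "actual:", y[-1])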

    The Challenge of Machine Learning in Space Weather Nowcasting and Forecasting

    The numerous recent breakthroughs in machine learning (ML) make it imperative to carefully ponder how the scientific community can benefit from a technology that, although not necessarily new, is today living its golden age. This Grand Challenge review paper is focused on the present and future role of machine learning in space weather. The purpose is twofold. On the one hand, we discuss previous works that use ML for space weather forecasting, focusing in particular on the few areas that have seen the most activity: the forecasting of geomagnetic indices, of relativistic electrons at geosynchronous orbit, of solar flare occurrence, of coronal mass ejection propagation time, and of solar wind speed. On the other hand, this paper serves as a gentle introduction to the field of machine learning tailored to the space weather community and as a pointer to a number of open challenges that we believe the community should undertake in the next decade. The recurring themes throughout the review are the need to shift our forecasting paradigm to a probabilistic approach focused on the reliable assessment of uncertainties, and the combination of physics-based and machine learning approaches, known as gray-box.

    Data-Driven Forecasting of High-Dimensional Chaotic Systems with Long Short-Term Memory Networks

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) on time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
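
    A minimal, self-contained version of the idea is sketched below: an LSTM is trained for one-step-ahead prediction on a small Lorenz 96 trajectory and then iterated to produce a free-running forecast. System size, sequence length and training budget are deliberately tiny and are not the paper's settings; the reduced-order projection and the MSM hybrid are omitted.

import numpy as np
import torch
import torch.nn as nn

def lorenz96_step(x, dt=0.01, F=8.0):
    """One RK4 step of Lorenz 96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    def f(x):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    k1 = f(x); k2 = f(x + 0.5 * dt * k1); k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
x = 8.0 + 0.01 * rng.normal(size=8)               # 8-variable Lorenz 96 system
traj = []
for i in range(3000):
    x = lorenz96_step(x)
    if i >= 500:                                  # discard the initial transient
        traj.append(x.copy())
traj = np.array(traj, dtype=np.float32)

seq_len = 25
inputs = torch.from_numpy(np.stack([traj[i : i + seq_len] for i in range(len(traj) - seq_len)]))
targets = torch.from_numpy(traj[seq_len:])

class LSTMForecaster(nn.Module):
    def __init__(self, dim=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])              # next state from the last hidden state

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                           # tiny training budget, illustration only
    idx = torch.randperm(len(inputs))[:128]
    loss = nn.functional.mse_loss(model(inputs[idx]), targets[idx])
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 50 == 0:
        print(f"step {step}: one-step MSE {loss.item():.4f}")

# free-running forecast: feed predictions back in as inputs
window = inputs[-1:].clone()
with torch.no_grad():
    for _ in range(10):
        nxt = model(window)
        window = torch.cat([window[:, 1:], nxt[:, None, :]], dim=1)
print("state forecast 10 steps ahead:", nxt.squeeze().numpy())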

    Hybrid data intelligent models and applications for water level prediction

    Artificial intelligence (AI) models have been successfully applied in modeling engineering problems, including civil, water resources, electrical, and structural engineering. The originality of the presented chapter is to investigate a non-tuned machine learning algorithm, called self-adaptive evolutionary extreme learning machine (SaE-ELM), to formulate an expert prediction model. The targeted application of the SaE-ELM is the prediction of river water level. Developing such water level prediction and monitoring models is a crucial optimization task in water resources management and flood prediction. The aims of this chapter are (1) to conduct a comprehensive survey of AI models in water level modeling, (2) to apply a relatively new ML algorithm (i.e., SaE-ELM) to water level modeling, (3) to examine two different time scales (daily and monthly), and (4) to compare the inspected model with the extreme learning machine (ELM) model for validation. In conclusion, the current chapter contributes an expert and highly optimized predictive model that yields high prediction accuracy.
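
    The sketch below shows a plain extreme learning machine on a synthetic water-level-style series: hidden weights are drawn at random and only the output weights are solved by least squares. In SaE-ELM the random hidden parameters would additionally be evolved by a self-adaptive differential-evolution loop, which is omitted here; data and sizes are illustrative.

import numpy as np

rng = np.random.default_rng(4)

def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))       # random input weights
    b = rng.normal(size=n_hidden)                     # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # analytic output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy daily "water level" driven by a seasonal cycle plus rainfall-like noise
t = np.arange(1000)
level = 2.0 + 0.5 * np.sin(2 * np.pi * t / 365) + 0.1 * rng.normal(size=t.size)
X = np.stack([level[i : i + 7] for i in range(len(level) - 7)])   # last 7 days as inputs
y = level[7:]

split = 800
model = elm_fit(X[:split], y[:split])
pred = elm_predict(model, X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"test RMSE: {rmse:.4f}")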

    Lévy mutation in artificial bee colony algorithm for gasoline price prediction

    In this paper, a mutation strategy based on the Lévy probability distribution is introduced into the Artificial Bee Colony (ABC) algorithm. The purpose is to better exploit promising solutions found by the bees. Such an approach is used to improve the performance of the original ABC in optimizing the hyperparameters of the Least Squares Support Vector Machine (LSSVM). In the conducted experiment, the proposed lvABC shows encouraging results in optimizing the parameters of interest. The proposed lvABC-LSSVM has outperformed an existing prediction model, the Backpropagation Neural Network (BPNN), in predicting gasoline price.
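
    The sketch below isolates the Lévy mutation operator, assuming Mantegna's algorithm for drawing Lévy-distributed step lengths, and applies it in a greedy onlooker-style exploitation loop. A toy quadratic stands in for the LSSVM cross-validation error, so the snippet is self-contained; the full ABC colony and the LSSVM itself are not reproduced.

import numpy as np
from math import gamma as gamma_fn

rng = np.random.default_rng(5)

def levy_step(size, beta=1.5):
    """Lévy-distributed step lengths via Mantegna's algorithm (index beta in (1, 2])."""
    num = gamma_fn(1 + beta) * np.sin(np.pi * beta / 2)
    den = gamma_fn((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0, sigma_u, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def fitness(x):
    # stand-in for an LSSVM cross-validation error over (gamma, sigma)
    return np.sum((x - np.array([3.0, 0.5])) ** 2)

def levy_mutate(solution, scale=0.1):
    """Exploit a promising food source by perturbing it with a Lévy flight."""
    return solution + scale * levy_step(solution.shape)

# onlooker-bee style exploitation of the best food source found so far
best = rng.uniform(0, 5, size=2)
for _ in range(500):
    trial = levy_mutate(best)
    if fitness(trial) < fitness(best):    # greedy selection, as in standard ABC
        best = trial

print("best hyperparameters found:", best, "fitness:", fitness(best))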

    Dynamic non-linear system modelling using wavelet-based soft computing techniques

    The enormous number of complex systems results in the necessity of high-level and cost-efficient modelling structures for operators and system designers. Model-based approaches offer a very challenging way to integrate a priori knowledge into the procedure. Soft computing based models in particular can successfully be applied to highly nonlinear problems. A further reason for dealing with so-called soft computational model-based techniques is that in real-world cases, often only partial, uncertain and/or inaccurate data are available. Wavelet-based soft computing techniques are considered one of the latest trends in system identification and modelling. This thesis provides a comprehensive synopsis of the main wavelet-based approaches to modelling non-linear dynamical systems in real-world problems, in conjunction with possible twists and novelties aiming for more accurate and less complex modelling structures.

    Initially, an on-line structure and parameter design has been considered in an adaptive Neuro-Fuzzy (NF) scheme. The problem of redundant membership functions, and consequently fuzzy rules, is circumvented by applying an adaptive structure. The growth of a special type of fungus (Monascus ruber van Tieghem) is examined against several other approaches for further justification of the proposed methodology.

    By extending the line of research, two Morlet Wavelet Neural Network (WNN) structures have been introduced. Increasing the accuracy and decreasing the computational cost are the primary targets of the proposed novelties. Modifying the synaptic weights by replacing them with Linear Combination Weights (LCW), and imposing a Hybrid Learning Algorithm (HLA) comprising Gradient Descent (GD) and Recursive Least Squares (RLS), are the tools utilised for the above challenges. The two models differ in structure while sharing the same HLA scheme. The second approach contains an additional multiplication layer, and its hidden layer contains several sub-WNNs for each input dimension. The practical superiority of these extensions is demonstrated by simulation and experimental results on a real non-linear dynamic system, Listeria monocytogenes survival curves in Ultra-High Temperature (UHT) whole milk, and consolidated with a comprehensive comparison with other suggested schemes.

    At the next stage, an extended clustering-based fuzzy version of the proposed WNN schemes is presented as the ultimate structure in this thesis. The proposed Fuzzy Wavelet Neural Network (FWNN) benefits from the clustering feature of Gaussian Mixture Models (GMMs), updated by a modified Expectation-Maximization (EM) algorithm. One of the main aims of this thesis is to illustrate how the GMM-EM scheme can be used not only for extracting useful knowledge from the data by building accurate regressions, but also for the identification of complex systems. The FWNN structure is built on fuzzy rules that include wavelet functions in their consequent parts. In order to improve the function approximation accuracy and generalisation capability of the FWNN system, an efficient hybrid learning approach is used to adjust the dilation, translation, weight, and membership parameters. An Extended Kalman Filter (EKF) is employed for adjusting the wavelet parameters, together with Weighted Least Squares (WLS), which is dedicated to fine-tuning the Linear Combination Weights. The results of a real-world application of Short-Term Load Forecasting (STLF) further reinforce the plausibility of the above technique.
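
    To make the basic WNN structure concrete, the sketch below fits a toy 1-D function with Morlet wavelet units psi((x - t_j) / d_j). Translations and dilations are fixed on a grid and only the linear combination weights are solved by least squares; the thesis instead adapts all parameters with the hybrid GD/RLS and EKF/WLS schemes described above. All values here are illustrative.

import numpy as np

def morlet(u):
    """Morlet mother wavelet (real part): cos(5u) * exp(-u^2 / 2)."""
    return np.cos(5.0 * u) * np.exp(-0.5 * u ** 2)

rng = np.random.default_rng(6)
x = np.linspace(-1, 1, 400)
y = np.sin(4 * np.pi * x) * np.exp(-x ** 2) + 0.02 * rng.normal(size=x.size)   # toy target

translations = np.linspace(-1, 1, 12)       # t_j: where each wavelet unit sits
dilations = np.full(12, 0.15)               # d_j: how wide each wavelet unit is

# hidden-layer design matrix: one column per wavelet unit, plus a bias column
H = np.column_stack([morlet((x - t) / d) for t, d in zip(translations, dilations)]
                    + [np.ones_like(x)])
w, *_ = np.linalg.lstsq(H, y, rcond=None)   # linear combination weights

y_hat = H @ w
print("training MSE:", np.mean((y_hat - y) ** 2))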