
    Comparative analysis of firefly algorithm for solving optimization problems

    The firefly algorithm was developed by Xin-She Yang [1], inspired by the flashing light signals that fireflies use to attract potential mates. All fireflies are treated as unisex and attract one another according to the intensity of their flashes: the higher the flash intensity, the stronger the attraction, and vice versa. For solving an optimization problem, the brightness of the flash is associated with the fitness function to be optimized. The light intensity I(r) of a firefly at distance r is given by equation (1).
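The intensity law the abstract refers to is, in Yang's standard formulation, I(r) = I0·e^(−γr²), with attractiveness β(r) = β0·e^(−γr²). A minimal sketch of one iteration of the resulting movement rule follows; the `firefly_step` helper and parameter defaults are illustrative, not taken from the paper:

```python
import numpy as np

def firefly_step(pos, cost, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """One iteration of a basic firefly algorithm (minimization).

    Each firefly i moves toward every brighter firefly j with
    attractiveness beta0 * exp(-gamma * r_ij**2), plus a small
    random perturbation scaled by alpha.
    """
    rng = np.random.default_rng() if rng is None else rng
    new_pos = pos.copy()
    light = -cost(pos)               # brighter = better (lower cost)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if light[j] > light[i]:  # j is brighter, so i moves toward j
                r2 = np.sum((pos[j] - pos[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                new_pos[i] += beta * (pos[j] - pos[i]) \
                              + alpha * (rng.random(pos.shape[1]) - 0.5)
    return new_pos
```

In this simplified variant the random-walk term is only applied when a firefly actually moves, so the current best firefly stays put for that iteration.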

    Forecasting the Behavior of Gas Furnace Multivariate Time Series Using Ridge Polynomial Based Neural Network Models

    In this paper, a new application of ridge polynomial-based neural network models to multivariate time series forecasting is presented. The existing ridge polynomial-based neural network models can be divided into two groups. Group A consists of models that use only autoregressive inputs, whereas Group B consists of models that use both autoregressive and moving-average (i.e., error-feedback) inputs. The well-known Box-Jenkins gas furnace multivariate time series was used to compare the forecasting performance of the two groups. Simulation results show that the models in Group B achieve significantly better forecasting performance than the models in Group A. Therefore, the Box-Jenkins gas furnace data can be modeled better using neural networks when error feedback is used.
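The distinction between the two groups can be made concrete as an input-construction step: Group A feeds the network lagged observations only, while Group B appends the previous one-step-ahead forecast error. A minimal sketch, assuming a univariate view of the series for brevity (the `make_inputs` helper and lag count are illustrative, not from the paper):

```python
import numpy as np

def make_inputs(series, preds, lags=4, error_feedback=False):
    """Build one input vector at time t for the two model groups.

    Group A (error_feedback=False): autoregressive lags only.
    Group B (error_feedback=True): lags plus the previous
    one-step-ahead forecast error (the moving-average term).
    """
    t = len(preds)                       # next time step to predict
    x = series[t - lags:t]               # autoregressive inputs
    if error_feedback:
        err = series[t - 1] - preds[-1]  # last observed forecast error
        x = np.append(x, err)
    return x
```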

    Global gbest guided-artificial bee colony algorithm for numerical function optimization

    Numerous computational algorithms are used to obtain high performance in solving mathematical, engineering, and statistical problems. Recently, an attractive bio-inspired method, the Artificial Bee Colony (ABC), has shown outstanding performance against typical computational algorithms on a range of complex problems. Modification, hybridization, and improvement strategies have made ABC even more attractive to science and engineering researchers. Two well-known honeybee-based upgraded algorithms, Gbest-Guided Artificial Bee Colony (GGABC) and Global Artificial Bee Colony Search (GABCS), use the foraging behavior of the global best and guided best honeybees to solve complex optimization tasks. Here, a hybrid of the GGABC and GABCS methods, called the 3G-ABC algorithm, is proposed for strong exploration and exploitation. The proposed and typical methods were implemented on the basis of maximum fitness values instead of maximum cycle numbers, which provides extra strength to both the proposed and the existing methods. The experiments used a set of fifteen numerical benchmark functions. The results obtained by the proposed approach were compared with several existing approaches, such as ABC, GABC, and GGABC, and found to be highly competitive. Finally, the obtained results were verified with statistical tests.
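The "gbest-guided" ingredient referred to above is, in the standard GGABC formulation, an extra attraction term toward the global best added to ABC's neighbour-search equation: v_ij = x_ij + φ·(x_ij − x_kj) + ψ·(gbest_j − x_ij). A minimal sketch of that candidate-generation step (the helper name and parameter defaults are illustrative assumptions):

```python
import numpy as np

def ggabc_candidate(x_i, x_k, gbest, C=1.5, rng=None):
    """Gbest-guided ABC candidate solution.

    v_ij = x_ij + phi*(x_ij - x_kj) + psi*(gbest_j - x_ij)
    phi ~ U(-1, 1) explores around a random neighbour x_k;
    psi ~ U(0, C) pulls the candidate toward the global best.
    Only one randomly chosen dimension j is modified, as in ABC.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = x_i.copy()
    j = rng.integers(len(x_i))
    phi = rng.uniform(-1.0, 1.0)
    psi = rng.uniform(0.0, C)
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + psi * (gbest[j] - x_i[j])
    return v
```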

    A comprehensive survey on pi-sigma neural network for time series prediction

    Time series prediction has received much attention because of its impact on a vast range of real-life applications. This paper presents a survey of time series applications of the Higher Order Neural Network (HONN) model. The basic motivation for using HONN is its ability to expand the input space, which makes it more efficient at solving complex problems and gives it strong learning ability for time series forecasting. The Pi-Sigma Neural Network (PSNN) indirectly incorporates the capabilities of higher-order networks by using product cells as output units, while requiring fewer weights. The goal of this survey is to make the reader aware of PSNN for time series prediction and to highlight some benefits of, and challenges in, using PSNN. Possible fields of PSNN application are presented in comparison with existing methods, and future directions are explored, drawing on the properties of error feedback and recurrent networks.
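A pi-sigma unit realizes a higher-order polynomial of the inputs by multiplying a few linear sums, which is why it needs far fewer weights than a network with explicit product terms. A minimal sketch of the forward pass (names and the sigmoid output choice are illustrative assumptions):

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Forward pass of a k-th order Pi-Sigma neural network.

    W has shape (k, n): k summing units over an n-dimensional input.
    The output unit multiplies the k linear sums (the 'product cell')
    and squashes the result, so a k-th order polynomial of the inputs
    is realized with only k*(n+1) trainable weights.
    """
    sums = W @ x + b            # k linear combinations (sigma layer)
    product = np.prod(sums)     # fixed product unit (pi layer)
    return 1.0 / (1.0 + np.exp(-product))
```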

    Hybrid of firefly algorithm and pattern search for solving optimization problems

    The firefly algorithm (FA) is a recently introduced meta-heuristic, nature-inspired, stochastic algorithm for solving various types of optimization problems. FA takes its inspiration from the natural phenomenon of light emission by fireflies and is a robust, easily implementable algorithm. The standard FA consists of three stages, namely initialization, the firefly position-changing stage, and the termination stage. A major drawback of the standard FA at its termination stage is that it may fail to reach the optimal value, because after a fixed number of iterations no significant improvement can be observed in the solution quality. In this paper, this issue is addressed by introducing pattern search (PS) at the termination stage of the standard FA, once there is no further improvement in the solution quality. The proposed approach consists of three stages. In the first stage, the parameters of the standard FA are initialized. In the position-changing stage, a randomization factor is used to update the solution in each iteration. In the final stage, the optimized values obtained from the FA over its maximum number of iterations are given as inputs to the pattern search algorithm, an optimization algorithm that further refines them. The proposed technique, named FA-PS, thus uses PS to enhance the solution quality of the standard FA. The developed approach has been applied to various maximization and minimization functions, and its performance has been compared with the standard FA and a genetic algorithm in terms of finding the optimal values of the functions considered. A significant improvement has been observed in the solution quality of FA.
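In the final stage described above, pattern search plays the role of a derivative-free local refiner. A minimal compass-search sketch of such a stage (the step size, halving factor, and function name are illustrative assumptions, not necessarily the paper's exact PS variant):

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Coordinate (compass) pattern search for minimization.

    Polls f at +/- step along each axis; moves to the first
    improving point, otherwise halves the step.  In a hybrid
    like FA-PS, this kind of local search refines the best
    solution returned by the firefly algorithm.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for d in range(len(x)):
            for s in (+step, -step):
                trial = x.copy()
                trial[d] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5   # no poll point improved: refine the mesh
    return x, fx
```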

    A quick gbest guided artificial bee colony algorithm for stock market prices prediction

    The objective of this work is to present a Quick Gbest-Guided artificial bee colony (ABC) learning algorithm for training a feedforward neural network (QGGABC-FFNN) model to predict trends in stock markets. Stock-market trend prediction is a significant global financial issue: scientists, financial administrators, companies, and national leaderships strive to develop strong financial positions. Several technical, industrial, fundamental, scientific, and statistical tools have been proposed and used, with varying results; still, predicting the exact or near-exact trend of stock-market values remains an open problem. In this respect, the present manuscript proposes an ABC-based algorithm that minimizes the error between predicted and actual values using a hybrid technique based on neural networks and artificial intelligence. The presented approach, based mainly on historical data covering a large span of time, has been verified and tested on predicting the trend of Saudi Stock Market (SSM) values. The simulation findings show that the proposed QGGABC-FFNN outperformed other typical computational algorithms in predicting SSM values; given its high degree of accuracy, the proposed bio-inspired learning algorithm could serve as an investment advisor for investors and traders in the SSM.

    The Effect of Adaptive Gain and Adaptive Momentum in Improving Training Time of Gradient Descent Back Propagation Algorithm on Classification Problems

    The back-propagation algorithm has been successfully applied to a wide range of practical problems. Since this algorithm uses a gradient descent method, it has some limitations, namely slow convergence and a tendency to get trapped in local minima. The convergence behaviour of the back-propagation algorithm depends on the choice of initial weights and biases, network topology, learning rate, momentum, activation function, and the value of the gain in the activation function. Previous researchers demonstrated that, in the feed-forward pass, the slope of the activation function is directly influenced by a parameter referred to as the 'gain'. This research proposes an algorithm that improves on the current Gradient Descent Method with Adaptive Gain by also changing the momentum coefficient adaptively for each node. The influence of adaptive momentum together with adaptive gain on the learning ability of a neural network is analysed using multilayer feed-forward neural networks, and a physical interpretation of the relationship between the momentum value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm, compared with the conventional Gradient Descent Method and the current Gradient Descent Method with Adaptive Gain, was verified by means of simulations on three benchmark problems. The simulation results demonstrate that the proposed algorithm converged faster, with an improvement ratio of nearly 1.8 on the Wisconsin breast cancer data set, 6.6 on the Mushroom problem, and 36% on the Soybean data set. The results clearly show that the proposed algorithm significantly improves the learning speed of the current gradient-descent back-propagation algorithm.
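The 'gain' is the slope parameter c in the logistic activation f(net) = 1/(1 + e^(−c·net)); its derivative c·f·(1−f) makes explicit why the activation's slope scales with it. A minimal sketch of the activation and a momentum-based weight update (function names and fixed defaults are illustrative; the per-node adaptation of gain and momentum described in the abstract is not reproduced here):

```python
import numpy as np

def sigmoid(net, gain=1.0):
    """Logistic activation with an explicit gain c:
    f(net) = 1 / (1 + exp(-c * net))."""
    return 1.0 / (1.0 + np.exp(-gain * net))

def sigmoid_slope(net, gain=1.0):
    """Derivative w.r.t. net: c * f * (1 - f).
    The slope is directly proportional to the gain."""
    f = sigmoid(net, gain)
    return gain * f * (1.0 - f)

def update_weight(w, grad, prev_delta, lr=0.1, momentum=0.9):
    """Gradient-descent step with a momentum term.  The adaptive
    variants adjust `momentum` (and the activation gain) per node
    each epoch rather than keeping them fixed."""
    delta = -lr * grad + momentum * prev_delta
    return w + delta, delta
```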