
    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged out of FNN optimization practice, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it provides interesting challenges for future research to cope with the present information-processing era.
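    As a point of reference for the gradient-based viewpoint the review discusses, the sketch below trains a tiny one-hidden-layer FNN with plain backpropagation; the network size, learning rate, and XOR task are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Toy data: XOR, chosen here only to illustrate the training loop.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(np.round(out.ravel(), 3))  # should approach [0, 1, 1, 0]
```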

    A view of Estimation of Distribution Algorithms through the lens of Expectation-Maximization

    We show that a large class of Estimation of Distribution Algorithms, including, but not limited to, Covariance Matrix Adaptation, can be written as a Monte Carlo Expectation-Maximization algorithm, and as exact EM in the limit of infinite samples. Because EM sits on a rigorous statistical foundation and has been thoroughly analyzed, this connection provides a new coherent framework with which to reason about EDAs.
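    To make the sample/select/refit structure concrete, the sketch below runs a simple Gaussian estimation-of-distribution loop on a toy objective; the sphere function, population size, and elite fraction are illustrative assumptions, and the comments only gesture at the E-step/M-step analogy drawn in the paper.

```python
import numpy as np

def sphere(x):
    # Toy objective to minimize; stands in for any black-box fitness.
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(1)
dim, pop_size, n_elite = 5, 100, 20
mean, cov = np.full(dim, 3.0), np.eye(dim) * 4.0

for gen in range(50):
    # Sampling step: draw candidates from the current search distribution.
    pop = rng.multivariate_normal(mean, cov, size=pop_size)
    fitness = sphere(pop)

    # Selection: keep the best candidates (a hard weighting of the samples).
    elite = pop[np.argsort(fitness)[:n_elite]]

    # Refit step: update the Gaussian from the selected samples.
    mean = elite.mean(axis=0)
    cov = np.cov(elite, rowvar=False) + 1e-6 * np.eye(dim)

print(mean, sphere(mean))  # the mean should approach the optimum at the origin
```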

    Multivariate time series analysis for short-term forecasting of ground level ozone (O3) in Malaysia

    The decline of air quality mostly affects the elderly, children, and people with asthma, and also leads to restrictions on outdoor activities. There is therefore a need for statistical modelling to forecast future values of surface-layer ozone (O3) concentration. The objectives of this study are to obtain the best multivariate time series (MTS) model and to develop an online air quality forecasting system for O3 concentration in Malaysia. The implementation of MTS models improves on recent statistical models of air quality for short-term prediction. Ten air quality monitoring stations situated at four different types of location were selected for this study. The first type is industrial, represented by Pasir Gudang, Perai, and Nilai; the second type is urban, represented by Kuala Terengganu, Kota Bharu, and Alor Setar. The third type is suburban, comprising Banting, Kangar, and Tanjung Malim, together with the only background station, at Jerantut. Hourly records from 2010 to 2017 were used to assess the characteristics and behaviour of O3 concentration. Meanwhile, monthly records of O3, particulate matter (PM10), nitrogen dioxide (NO2), sulphur dioxide (SO2), carbon monoxide (CO), temperature (T), wind speed (WS), and relative humidity (RH) were used to identify the best MTS models. Three MTS methods, namely vector autoregressive (VAR), vector moving average (VMA), and vector autoregressive moving average (VARMA), were applied in this study. Based on the performance errors, the most appropriate MTS model for Pasir Gudang, Kota Bharu, and Kangar is VAR(1); for Kuala Terengganu and Alor Setar, VAR(2); for Perai and Nilai, VAR(3); for Tanjung Malim, VAR(4); and for Banting, VAR(5). Only Jerantut obtained VMA(2) as the best model. The lowest root mean square error (RMSE) and normalized absolute error are 0.0053 and <0.0001, for the MTS models in Perai and Kuala Terengganu, respectively. Meanwhile, the lowest mean absolute error (MAE) is 0.0013, in Banting and Jerantut. The online air quality forecasting system for O3 was successfully developed based on the best MTS models to represent each monitoring station.
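    A VAR(1) model of the kind compared in the study can be fitted by ordinary least squares on lagged observations; the sketch below does this for a synthetic three-variable series standing in for the monthly (O3, PM10, NO2, ...) records, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 3-variable series standing in for e.g. (O3, PM10, NO2) records.
T, k = 200, 3
A_true = np.array([[0.5, 0.1, 0.0],
                   [0.0, 0.4, 0.2],
                   [0.1, 0.0, 0.3]])
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + rng.normal(scale=0.1, size=k)

# VAR(1): regress Y_t on Y_{t-1} (intercept omitted for brevity).
X_lag, Y_now = Y[:-1], Y[1:]
B, *_ = np.linalg.lstsq(X_lag, Y_now, rcond=None)
A_hat = B.T  # so that Y_t is approximately A_hat @ Y_{t-1}

# One-step-ahead forecast from the last observation.
forecast = A_hat @ Y[-1]
print(np.round(A_hat, 2))
print(np.round(forecast, 3))
```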

    Hybridization of multi-objective deterministic particle swarm with derivative-free local searches

    The paper presents a multi-objective derivative-free and deterministic global/local hybrid algorithm for the efficient and effective solution of simulation-based design optimization (SBDO) problems. The objective is to show how the hybridization of two multi-objective derivative-free global and local algorithms achieves better performance than the separate use of the two algorithms in solving specific SBDO problems for hull-form design. The proposed method belongs to the class of memetic algorithms, where the global exploration capability of multi-objective deterministic particle swarm optimization is enriched by exploiting the local search accuracy of a derivative-free multi-objective line-search method. To the authors' best knowledge, studies are still limited on memetic, multi-objective, deterministic, derivative-free, and evolutionary algorithms for the effective and efficient solution of SBDO for hull-form design. The proposed formulation manages global and local searches based on the hypervolume metric. The hybridization scheme uses two parameters to control the local search activation and the number of function calls used by the local algorithm. The most promising values of these parameters were identified using forty analytical tests representative of the SBDO problem of interest. The resulting hybrid algorithm was finally applied to two SBDO problems for hull-form design. For both the analytical tests and the SBDO problems, the hybrid method achieves better performance than its global and local counterparts.
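    For a two-objective minimization front, a hypervolume metric of the kind used to manage the global and local searches can be computed by sorting the non-dominated points and summing rectangular slabs against a reference point; the sketch below shows this generic computation, with the example front and reference point being illustrative assumptions.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. a reference point.

    `front` holds non-dominated points; `ref` must be dominated by all of them.
    """
    pts = front[np.argsort(front[:, 0])]        # sort by the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)    # slab between consecutive points
        prev_f2 = f2
    return hv

# Example: a small non-dominated set and a reference point (both assumed here).
front = np.array([[1.0, 4.0], [2.0, 2.5], [3.0, 1.0]])
ref = np.array([5.0, 5.0])
print(hypervolume_2d(front, ref))  # 11.5 for this example
```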

    Training a Feed-forward Neural Network with Artificial Bee Colony Based Backpropagation Method

    The back-propagation algorithm is one of the most widely used and popular techniques for training feed-forward neural networks. Nature-inspired meta-heuristic algorithms also provide derivative-free solutions for optimizing complex problems. The artificial bee colony algorithm is a nature-inspired meta-heuristic that mimics the foraging (food-source searching) behaviour of bees in a colony, and it has been applied in several domains to obtain improved optimization outcomes. The method proposed in this paper combines an improved artificial bee colony algorithm with back-propagation neural network training to achieve a fast and improved convergence rate in the hybrid learning method. The results are compared with the genetic-algorithm-based back-propagation method, another hybridized procedure of its kind. The analysis is performed on standard data sets and demonstrates the efficiency of the proposed method in terms of convergence speed and rate.
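    To illustrate the population-based, derivative-free side of such hybrids, the sketch below searches the weights of a tiny XOR network with a simplified artificial-bee-colony loop (employed-bee and scout phases only, with no onlooker phase and no back-propagation component, unlike the paper's hybrid); the colony size, abandonment limit, and network shape are illustrative assumptions.

```python
import numpy as np

# Tiny feed-forward net (2-3-1) evaluated on XOR; its error is the ABC fitness.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
DIM = 2 * 3 + 3 + 3 * 1 + 1          # weights and biases flattened into one vector

def mse(w):
    W1, b1 = w[:6].reshape(2, 3), w[6:9]
    W2, b2 = w[9:12].reshape(3, 1), w[12]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2).ravel() - b2))
    return np.mean((out - y) ** 2)

rng = np.random.default_rng(3)
n_food, limit = 20, 30
foods = rng.uniform(-1, 1, size=(n_food, DIM))
errors = np.array([mse(f) for f in foods])
trials = np.zeros(n_food, dtype=int)

for cycle in range(2000):
    for i in range(n_food):
        # Employed-bee move: perturb one coordinate toward a random partner.
        k = rng.integers(n_food - 1)
        k = k if k < i else k + 1
        j = rng.integers(DIM)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        e = mse(cand)
        if e < errors[i]:                      # greedy replacement
            foods[i], errors[i], trials[i] = cand, e, 0
        else:
            trials[i] += 1
        # Scout: abandon a food source that has stopped improving.
        if trials[i] > limit:
            foods[i] = rng.uniform(-1, 1, DIM)
            errors[i], trials[i] = mse(foods[i]), 0

print(round(errors.min(), 4))  # best training error should decrease toward 0
```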

    Supervised learning with hybrid global optimisation methods


    A hybrid swarm-based algorithm for single-objective optimization problems involving high-cost analyses

    In many technical fields, single-objective optimization procedures in continuous domains involve expensive numerical simulations. In this context, an improvement of the Artificial Bee Colony (ABC) algorithm, called the Artificial super-Bee enhanced Colony (AsBeC), is presented. AsBeC is designed to provide fast convergence speed, high solution accuracy, and robust performance over a wide range of problems. It implements enhancements of the ABC structure and hybridizations with interpolation strategies. The latter are inspired by the quadratic trust-region approach for local investigation and by an efficient global optimizer for separable problems. Each modification and their combined effects are studied with appropriate metrics on a numerical benchmark, which is also used for comparing AsBeC with some effective ABC variants and other derivative-free algorithms. In addition, the presented algorithm is validated on two recent benchmarks adopted for competitions at international conferences. Results show remarkable competitiveness and robustness for AsBeC.
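    One way to picture the quadratic, trust-region-inspired interpolation idea is a one-dimensional parabolic fit through three samples along a coordinate, jumping to the fitted minimizer; the sketch below illustrates that generic building block rather than AsBeC itself, and the objective and step size are assumed for the example.

```python
import numpy as np

def quadratic_step(f, x, i, h=0.1):
    """Propose a new value for coordinate i of x by 1-D quadratic interpolation.

    Samples f at x_i - h, x_i, x_i + h, fits a parabola, and moves coordinate i
    to the parabola's minimizer when the fit is convex.
    """
    xl, xc, xr = x.copy(), x.copy(), x.copy()
    xl[i] -= h
    xr[i] += h
    fl, fc, fr = f(xl), f(xc), f(xr)
    denom = fl - 2 * fc + fr
    if denom <= 0:                       # non-convex fit: keep the current point
        return x, fc
    # Vertex of the interpolating parabola through the three samples.
    step = 0.5 * h * (fl - fr) / denom
    x_new = x.copy()
    x_new[i] += step
    return x_new, f(x_new)

# Toy usage on a shifted quadratic (assumed objective, not from the paper).
f = lambda v: np.sum((v - np.array([1.0, -2.0])) ** 2)
x = np.array([0.0, 0.0])
for i in range(2):
    x, fx = quadratic_step(f, x, i, h=0.5)
print(x, fx)  # lands on [1, -2] with value 0 for this separable quadratic
```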