
    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, the optimization of feedforward neural networks (FNNs) has been a key interest among researchers and practitioners in multiple disciplines. FNN optimization is viewed from various perspectives: optimization of the weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers have adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article summarizes a broad spectrum of FNN optimization methodologies, both conventional and metaheuristic. It also connects the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution of NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it poses interesting challenges for future research in the present information-processing era.
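    To make the core contrast concrete, here is a minimal sketch of metaheuristic weight optimization: a tiny FNN whose weights are tuned by a simple (1+λ) evolution strategy instead of gradient descent. All function names, network sizes, and hyperparameters are illustrative assumptions, not taken from the review.

```python
# Minimal sketch: optimizing the weights of a small feedforward network
# with a (1+lambda) evolution strategy instead of backpropagation.
import numpy as np

rng = np.random.default_rng(0)

def unpack(theta, n_in=1, n_hidden=8):
    """Split a flat parameter vector into layer weights and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]; i += n_hidden
    W2 = theta[i:i + n_hidden].reshape(n_hidden, 1); i += n_hidden
    b2 = theta[i:i + 1]
    return W1, b1, W2, b2

def forward(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)          # hidden layer with tanh activation
    return h @ W2 + b2                # linear output

def mse(theta, X, y):
    return float(np.mean((forward(theta, X) - y) ** 2))

# Toy regression target: y = sin(3x) on [-1, 1].
X = rng.uniform(-1, 1, size=(64, 1))
y = np.sin(3 * X)

dim = 1 * 8 + 8 + 8 + 1               # total number of weights and biases
best = rng.normal(0, 0.5, dim)
best_f = mse(best, X, y)

# (1+lambda) evolution strategy: keep the parent unless a mutant is better.
for gen in range(2000):
    mutants = best + rng.normal(0, 0.1, size=(10, dim))
    fs = [mse(m, X, y) for m in mutants]
    j = int(np.argmin(fs))
    if fs[j] < best_f:
        best, best_f = mutants[j], fs[j]

print(f"final training MSE: {best_f:.4f}")
```

    Because the search uses only fitness evaluations, the same loop works for non-differentiable activations or error measures, which is exactly where the review argues metaheuristics retain an edge over gradient descent.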

    Optimization of ANN Structure Using Adaptive PSO & GA and Performance Analysis Based on Boolean Identities

    In this paper, a novel heuristic structure-optimization technique using adaptive PSO and GA is proposed to improve the performance of artificial neural networks (ANNs) on Boolean identities. The selection of the optimal number of hidden layers and nodes has a significant impact on the performance of a neural network, yet it is usually decided in an ad hoc manner. Optimizing the architecture and weights of a neural network is a complex task. In this regard, evolutionary techniques based on adaptive particle swarm optimization (APSO) and an adaptive genetic algorithm (AGA) are used to select the optimal number of hidden layers and nodes of the neural controller, yielding better performance and lower training errors on Boolean identities. The number of hidden nodes is adapted over the generations until it reaches the optimum. Boolean operators such as AND, OR, and XOR are used for the performance analysis of this technique.
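    A minimal sketch of the underlying idea follows: a plain genetic algorithm searches over the number of hidden nodes, and each candidate architecture is scored by how well a network of that size fits the XOR truth table. The fitness routine below uses a crude random weight search as a stand-in for full training; all names and parameters are illustrative, not the paper's implementation.

```python
# GA over hidden-node counts, evaluated on the XOR Boolean identity.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

def fitness(n_hidden, trials=300):
    """Lowest XOR error found by sampling random weights for this size."""
    best = np.inf
    for _ in range(trials):
        W1 = rng.normal(0, 2, (2, n_hidden)); b1 = rng.normal(0, 2, n_hidden)
        W2 = rng.normal(0, 2, (n_hidden, 1)); b2 = rng.normal(0, 2, 1)
        h = np.tanh(X @ W1 + b1)
        out = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output
        best = min(best, float(np.mean((out - y) ** 2)))
    return best

# Simple GA over hidden-node counts in [1, 8]:
# truncation selection plus +/-1 mutation of the surviving parents.
pop = rng.integers(1, 9, size=6)
for gen in range(5):
    scores = np.array([fitness(int(n)) for n in pop])
    parents = pop[np.argsort(scores)[:3]]
    children = [max(1, min(8, int(p) + rng.integers(-1, 2))) for p in parents]
    pop = np.concatenate([parents, children])     # elitism + mutated offspring

final_scores = [fitness(int(n)) for n in pop]
print("best hidden-node count:", int(pop[int(np.argmin(final_scores))]))
```

    The paper's adaptive PSO/GA variants tune their own search parameters during the run; the fixed-parameter loop above only illustrates the architecture-search framing.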

    Learning Opposites Using Neural Networks

    Many research works have successfully extended algorithms such as evolutionary algorithms, reinforcement-learning agents, and neural networks using "opposition-based learning" (OBL). Two types of "opposites" have been defined in the literature, namely type-I and type-II. The former are linear in nature and applicable in the variable space, hence easy to calculate; type-II opposites, on the other hand, capture "oppositeness" in the output space. In fact, type-I opposites are a special case of type-II opposites in which inputs and outputs have a linear relationship. However, in many real-world problems, inputs and outputs exhibit a nonlinear relationship, so type-II opposites are expected to better capture the sense of "opposition" in terms of the input-output relation. In the absence of any knowledge about the problem at hand, there is no intuitive way to calculate type-II opposites. In this paper, we introduce an approach to learn type-II opposites from given inputs and their outputs using artificial neural networks (ANNs). We first perform opposition mining on the sample data, and then use the mined data to learn the relationship between an input x and its opposite x̆. We validate our algorithm on various benchmark functions against a recently introduced evolving fuzzy inference approach; the results show the better performance of the neural approach to learning opposites. This creates new possibilities for integrating oppositional schemes within existing algorithms, promising a potential increase in convergence speed and/or accuracy.
    Comment: To appear in the proceedings of the 23rd International Conference on Pattern Recognition (ICPR 2016), Cancun, Mexico, December 2016.
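    A minimal sketch of the two stages described above, under simplifying assumptions: a one-dimensional input, a known benchmark f, grid-based opposition mining, and a tiny tanh network trained by plain gradient descent. All names are illustrative, not the paper's implementation.

```python
# Stage 1: opposition mining; Stage 2: learn the map x -> x_breve.
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: x ** 3                       # assumed nonlinear benchmark on [-1, 1]
grid = np.linspace(-1, 1, 2001)
fg = f(grid)
f_min, f_max = fg.min(), fg.max()

# Stage 1: for each sample x, the type-II opposite x_breve is the point
# whose output is "opposite" in the output space:
# f(x_breve) ~= f_min + f_max - f(x).
xs = rng.uniform(-1, 1, 256)
targets = f_min + f_max - f(xs)
x_breve = grid[np.abs(fg[None, :] - targets[:, None]).argmin(axis=1)]

# Stage 2: fit a small 1-16-1 tanh network to the mined (x, x_breve) pairs
# by batch gradient descent on mean squared error.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
X = xs[:, None]; Y = x_breve[:, None]
lr = 0.05
for epoch in range(3000):
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - Y
    # Backpropagate the MSE gradient through both layers.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# For f(x) = x^3 on [-1, 1], the exact type-II opposite of x is -x,
# so the learned opposite of 0.5 should be close to -0.5.
print("learned opposite of 0.5:",
      float(np.tanh(np.array([[0.5]]) @ W1 + b1) @ W2 + b2))
```

    Once trained, the network replaces the expensive mining step: new type-II opposites are obtained by a single forward pass rather than by searching the output space.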

    Glowworm swarm optimisation for training multi-layer perceptrons


    Impact of noise on a dynamical system: prediction and uncertainties from a swarm-optimized neural network

    In this study, an artificial neural network (ANN) trained by particle swarm optimization (PSO) was developed for time-series prediction. The hybrid ANN+PSO algorithm was applied to the Mackey-Glass chaotic time series for short-term prediction of x(t+6). The prediction performance was evaluated and compared with other studies available in the literature. We also present properties of the dynamical system via a study of the chaotic behaviour of the predicted time series. The hybrid ANN+PSO algorithm was then complemented with a Gaussian stochastic procedure (the "stochastic" hybrid ANN+PSO) to obtain a new estimator of the predictions, which also allowed us to compute prediction uncertainties for noisy Mackey-Glass chaotic time series. We thus studied the impact of noise for several cases with a white-noise level σ_N from 0.01 to 0.1.
    Comment: 11 pages, 8 figures.
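    A minimal sketch of the deterministic part of this setup: a small feedforward network predicting the Mackey-Glass series six steps ahead, with its weights found by global-best PSO. The series generation, the input lags, the network size, and the PSO hyperparameters are illustrative assumptions, not the study's exact configuration, and the Gaussian noise procedure is omitted.

```python
# PSO-trained feedforward network for Mackey-Glass x(t+6) prediction.
import numpy as np

rng = np.random.default_rng(3)

# Mackey-Glass series (tau = 17) via a simple Euler discretization.
N, tau = 1500, 17
x = np.full(N, 1.2)
for t in range(tau, N - 1):
    x[t + 1] = x[t] + 0.2 * x[t - tau] / (1 + x[t - tau] ** 10) - 0.1 * x[t]
x = x[500:]                                   # drop the transient

# Inputs: [x(t-18), x(t-12), x(t-6), x(t)]; target: x(t+6).
idx = np.arange(18, len(x) - 6)
X = np.stack([x[idx - 18], x[idx - 12], x[idx - 6], x[idx]], axis=1)
y = x[idx + 6]

def loss(theta):
    """MSE of a 4-10-1 tanh network with flat parameter vector theta."""
    W1 = theta[:40].reshape(4, 10); b1 = theta[40:50]
    W2 = theta[50:60].reshape(10, 1); b2 = theta[60]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred[:, 0] - y) ** 2))

# Standard global-best PSO over the 61 network parameters,
# with the usual constriction coefficients (0.729, 1.494).
n_particles, dim = 30, 61
pos = rng.normal(0, 0.5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([loss(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()

for it in range(300):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.729 * vel + 1.494 * r1 * (pbest - pos) + 1.494 * r2 * (g - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = pbest[pbest_f.argmin()].copy()

print(f"PSO training MSE: {pbest_f.min():.5f}")
```

    The stochastic variant described in the abstract would wrap such a run in repeated trials with Gaussian perturbations, using the spread of the resulting predictions as an uncertainty estimate.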

    Comparison of Particle Swarm Optimization and Backpropagation as Training Algorithms for Neural Networks

    Particle swarm optimization (PSO), motivated by the social behavior of organisms, is an advance over existing evolutionary algorithms for the optimization of continuous nonlinear functions. Backpropagation (BP) is the algorithm generally used for neural network training, and choosing a proper training algorithm is very important. In this paper, a comparative study is made of the computational requirements of PSO and BP as training algorithms for neural networks. Results are presented for a feedforward neural network learning a nonlinear function, and they show that the feedforward network's weights converge faster with PSO than with the BP algorithm.
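    A minimal sketch of such a comparison: the same 1-8-1 tanh network is trained on a nonlinear target once with batch gradient descent (a stand-in for backpropagation) and once with global-best PSO, and the final training errors are compared. All settings are illustrative assumptions, not the paper's experimental setup.

```python
# BP-style gradient descent vs. PSO on the same 25-parameter network.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (64, 1))
y = np.sin(np.pi * X)                          # nonlinear target function

def forward(theta):
    W1 = theta[:8].reshape(1, 8); b1 = theta[8:16]
    W2 = theta[16:24].reshape(8, 1); b2 = theta[24]
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2, W2

def mse(theta):
    return float(np.mean((forward(theta)[1] - y) ** 2))

# --- Backpropagation (batch gradient descent) ---
theta = rng.normal(0, 0.5, 25)
lr = 0.05
for epoch in range(3000):
    h, pred, W2 = forward(theta)
    err = pred - y
    dh = (err @ W2.T) * (1 - h ** 2)           # gradient through tanh layer
    grad = np.concatenate([
        (X.T @ dh).ravel() / len(X), dh.mean(0),
        (h.T @ err).ravel() / len(X), err.mean(0),
    ])
    theta -= lr * grad
print(f"BP  final MSE: {mse(theta):.5f}")

# --- Global-best PSO over the same 25 weights ---
pos = rng.normal(0, 0.5, (20, 25)); vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_f = np.array([mse(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()
for it in range(300):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.729 * vel + 1.494 * r1 * (pbest - pos) + 1.494 * r2 * (g - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    m = f < pbest_f
    pbest[m], pbest_f[m] = pos[m], f[m]
    g = pbest[pbest_f.argmin()].copy()
print(f"PSO final MSE: {pbest_f.min():.5f}")
```

    For a fair computational comparison in the spirit of the paper, one would count function and gradient evaluations rather than loop iterations, since each PSO iteration evaluates the whole swarm.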