    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, optimization of feedforward neural networks (FNNs) has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, and so on. Researchers adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future work to cope with the present information-processing era.
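
    As a concrete illustration of the gradient-free route this review surveys, the following minimal sketch (not from the article; all hyperparameters are illustrative) uses a simple (mu, lambda) evolutionary strategy to search the weight space of a one-hidden-layer FNN on the XOR problem, with no backpropagation involved.

```python
# Minimal sketch: metaheuristic (evolutionary) optimization of FNN weights.
# Not from the reviewed article; a generic illustration on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HIDDEN = 4
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # W1, b1, W2, b2

def unpack(w):
    """Split a flat weight vector into layer parameters."""
    i = 0
    W1 = w[i:i + 2 * N_HIDDEN].reshape(2, N_HIDDEN); i += 2 * N_HIDDEN
    b1 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    b2 = w[i]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output

def fitness(w):
    return np.mean((forward(w, X) - y) ** 2)       # MSE to minimize

# (mu, lambda) evolutionary strategy: mutate parents, keep the fittest.
pop = rng.normal(0, 1, size=(30, N_WEIGHTS))
for gen in range(300):
    parents = pop[rng.integers(0, 30, 90)]
    children = parents + rng.normal(0, 0.3, (90, N_WEIGHTS))
    scores = np.array([fitness(w) for w in children])
    pop = children[np.argsort(scores)[:30]]        # survival of the fittest

best = pop[0]
print("MSE:", fitness(best), "outputs:", forward(best, X).round(2))
```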

    A Novel Progressive Multi-label Classifier for Classincremental Data

    In this paper, a progressive learning algorithm for multi-label classification is designed that learns new labels while retaining the knowledge of previous labels. New output neurons corresponding to new labels are added, and the neural network connections and parameters are automatically restructured as if the label had been introduced from the beginning. This work is the first of its kind in multi-label classification for class-incremental learning. It is useful for real-world applications such as robotics, where streaming data are available and the number of labels is often unknown. Based on the Extreme Learning Machine framework, a novel universal classifier with plug-and-play capabilities for progressive multi-label classification is developed. Experimental results on various benchmark synthetic and real datasets validate the efficiency and effectiveness of the proposed algorithm. Comment: 5 pages, 3 figures, 4 tables
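
    A minimal sketch (a generic construction, not the paper's restructuring algorithm) of why the Extreme Learning Machine framework suits progressive multi-label learning: the random hidden layer is fixed, so adding a new label only appends one column to the output-weight matrix, solved in closed form while the old columns stay untouched.

```python
# Minimal ELM sketch: a new output neuron for a new label is one extra
# least-squares solve; the hidden layer is never retrained.
import numpy as np

rng = np.random.default_rng(1)
n, d, L = 200, 5, 50                 # samples, input dim, hidden neurons
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, L))          # random input weights (fixed forever)
b = rng.normal(size=L)
H = np.tanh(X @ W + b)               # fixed hidden-layer activations

# Initial task: two labels, multi-label targets in {0, 1}.
T = (rng.random((n, 2)) > 0.5).astype(float)
beta = np.linalg.pinv(H) @ T         # ELM output weights, one solve

# A new label arrives: extend the classifier by one output neuron.
t_new = (X[:, 0] > 0).astype(float).reshape(-1, 1)   # hypothetical new label
beta = np.hstack([beta, np.linalg.pinv(H) @ t_new])  # old columns untouched

pred = (H @ beta) > 0.5
print("accuracy on new label:", (pred[:, 2] == t_new.ravel()).mean())
```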

    Neural Networks: Training and Application to Nonlinear System Identification and Control

    This dissertation investigates training neural networks for system identification and classification. The research contains two main contributions, as follows:

    1. Reducing the number of hidden layer nodes using a feedforward component. This research reduces the number of hidden layer nodes and the training time of neural networks, making them better suited to online identification and control applications, by adding a parallel feedforward component (a minimal sketch of this parallel structure follows this abstract). Implementing the feedforward component with a wavelet neural network and an echo state network provides good models for nonlinear systems. The wavelet neural network with a feedforward component, combined with a model predictive controller, can reliably identify and control a seismically isolated structure during an earthquake; the network model provides the predictions for model predictive control. Simulations of a 5-story seismically isolated structure with conventional lead-rubber bearings showed significant reductions in all response amplitudes for both near-field (pulse) and far-field ground motions, including reduced deformations along with a corresponding reduction in acceleration response. The controller effectively regulated the apparent stiffness at the isolation level. The approach is also applied to the online identification and control of an unmanned vehicle. Lyapunov theory is used to prove the stability of the wavelet neural network and the model predictive controller.

    2. Training neural networks using trajectory-based optimization approaches. Training a neural network is a nonlinear, non-convex optimization problem of determining the network's weights. Traditional training algorithms can be inefficient and can get trapped in local minima. Two global optimization approaches are adapted to train neural networks and avoid the local-minima problem, and Lyapunov theory is used to prove the stability of the proposed methodology and its convergence in the presence of measurement errors. The first approach transforms the constraint satisfaction problem into unconstrained optimization: the constraints define a quotient gradient system (QGS) whose stable equilibrium points are local minima of the unconstrained optimization; the QGS is integrated to determine local minima, and the local minimum with the best generalization performance is chosen as the optimal solution. The second approach uses the QGS together with a projected gradient system (PGS), a nonlinear dynamical system defined from the optimization problem that searches the components of the feasible region for solutions. Lyapunov theory is used to prove the stability of the PGS and QGS, including in the presence of measurement noise.
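
    A minimal sketch of the first contribution's parallel structure, under assumptions not stated in the abstract: the feedforward component is taken to be a simple linear map, and a random-feature least-squares fit stands in for actual wavelet or echo state network training. The model output is the sum of a linear path and a small network trained only on the residual, so few hidden nodes are needed.

```python
# Sketch: parallel feedforward component (assumed linear here) plus a small
# network that only learns the residual nonlinearity.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(500, 1))
y = 3.0 * X[:, 0] + np.sin(2 * X[:, 0])        # linear trend + mild nonlinearity

# 1. Fit the parallel linear feedforward component by least squares.
A = np.hstack([X, np.ones((500, 1))])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = y - A @ theta

# 2. Train a tiny network (random-feature ridge regression as a stand-in)
#    on the residual only; 5 hidden nodes suffice for the leftover sine term.
W = rng.normal(size=(1, 5)); b = rng.normal(size=5)
H = np.tanh(X @ W + b)
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(5), H.T @ residual)

y_hat = A @ theta + H @ beta                    # the two parallel paths summed
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```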

    FIR Digital Filter and Neural Network Design using Harmony Search Algorithm

    Harmony Search (HS) is an emerging metaheuristic algorithm inspired by the improvisation process of jazz musicians. In the HS algorithm, each musician (= decision variable) plays (= generates) a note (= a value) in search of the best harmony (= global optimum). The algorithm has been employed for numerous tasks over the past decade. In this thesis, the HS algorithm is applied to design digital filters of orders 24 and 48 and to optimize the parameters of neural networks. Both multiobjective and single-objective optimization techniques were applied to design FIR digital filters. Two-dimensional digital filters can be used for image processing, and neural networks can be used for medical image diagnosis. Digital filter design using the Harmony Search algorithm achieves results close to those of the Parks-McClellan algorithm, which shows that the algorithm is capable of solving complex engineering problems. Harmony Search is also able to optimize the parameter values of feedforward networks and fuzzy inference neural networks. The performance of a designed neural network was tested by introducing various noise levels at the testing inputs and comparing the resulting outputs to the noise-free outputs; even with noise introduced at the testing inputs, the output changed little. Design results were obtained within a reasonable amount of time using the Harmony Search algorithm.
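
    The following minimal sketch (not the thesis code) shows the core HS loop with commonly published harmony-memory parameters (HMCR, PAR, bandwidth); the objective fits a low-order FIR filter's magnitude response to an ideal lowpass target, a deliberately smaller stand-in for the thesis's order-24 and order-48 designs.

```python
# Harmony Search sketch applied to FIR coefficient design.
# Hypothetical low order; the thesis designs filters of orders 24 and 48.
import numpy as np

rng = np.random.default_rng(3)
ORDER = 10                                   # 11 coefficients (illustrative)
w = np.linspace(0, np.pi, 128)
target = (w <= 0.4 * np.pi).astype(float)    # ideal lowpass magnitude

def objective(h):
    # Magnitude response of the FIR filter over the frequency grid.
    k = np.arange(ORDER + 1)
    H = np.abs(np.exp(-1j * np.outer(w, k)) @ h)
    return np.mean((H - target) ** 2)

HMS, HMCR, PAR, BW = 20, 0.9, 0.3, 0.05      # memory size and HS parameters
memory = rng.uniform(-1, 1, size=(HMS, ORDER + 1))
costs = np.array([objective(h) for h in memory])

for _ in range(5000):
    new = np.empty(ORDER + 1)
    for i in range(ORDER + 1):
        if rng.random() < HMCR:              # pick a note from memory...
            new[i] = memory[rng.integers(HMS), i]
            if rng.random() < PAR:           # ...and maybe pitch-adjust it
                new[i] += rng.uniform(-BW, BW)
        else:                                # or improvise a fresh note
            new[i] = rng.uniform(-1, 1)
    c = objective(new)
    worst = np.argmax(costs)
    if c < costs[worst]:                     # replace the worst harmony
        memory[worst], costs[worst] = new, c

print("best cost:", costs.min())
```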

    A Nonparametric Approach to Pricing Options via Learning Networks

    For practitioners of equity markets, option pricing is a major challenge during high-volatility periods, and the Black-Scholes formula is not the proper tool for very deep out-of-the-money options. Black-Scholes pricing errors are larger for deeper out-of-the-money options than for near-the-money options, and its mispricing worsens with increased volatility. Expert opinion holds that the Black-Scholes model is not the proper pricing tool in high-volatility situations, especially for very deep out-of-the-money options. Experts also argue that prior to the 1987 crash, implied volatilities were symmetric around zero moneyness, with in-the-money and out-of-the-money options having higher implied volatilities than at-the-money options. After the crash, however, call option implied volatilities decreased monotonically as the call went deeper out of the money, while put option implied volatilities decreased monotonically as the put went deeper into the money. Since these findings cannot be explained by the Black-Scholes model and its variations, researchers have searched for improved option pricing models. Feedforward networks provide more accurate pricing estimates for deeper out-of-the-money options and handle pricing during high volatility with considerably lower errors for out-of-the-money call and put options. This can be invaluable for practitioners, as option pricing is a major challenge during high-volatility periods. In this article, a nonparametric method for estimating S&P 100 index option prices using artificial neural networks is presented. To show the value of the neural network pricing formulas, both Black-Scholes prices and network prices are compared against market prices. To illustrate the practical relevance of the network pricing approach, it is applied to the pricing of S&P 100 index options from April 4, 2014 to April 9, 2014. On the five days of data, the Black-Scholes formula prices have a mean error of $10.17 for puts and $1.98 for calls, while the neural network's error is less than $5 for puts and $1 for calls.
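
    A minimal sketch of the nonparametric idea on synthetic data (not the article's S&P 100 sample): generate option prices from Black-Scholes, then fit a small feedforward approximator, here a random-feature least-squares model standing in for a trained network, on normalized inputs (moneyness S/K and time to expiry, a common normalization in this literature).

```python
# Sketch: Black-Scholes as the data generator, a network-style approximator
# as the nonparametric pricer. All parameters are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Training data on normalized inputs: moneyness S/K and time to expiry.
m = rng.uniform(0.8, 1.2, 2000)                 # moneyness S/K
T = rng.uniform(0.05, 1.0, 2000)                # years to expiry
price = bs_call(m, 1.0, T, r=0.02, sigma=0.25)  # strike normalized to 1

# Random-feature least-squares fit as a stand-in for a trained FNN.
X = np.column_stack([m, T])
W = rng.normal(size=(2, 60)); b = rng.normal(size=60)
H = np.tanh(X @ W + b)
beta = np.linalg.solve(H.T @ H + 1e-6 * np.eye(60), H.T @ price)

test = np.array([[0.85, 0.25]])                 # deep out-of-the-money, 3 months
nn = np.tanh(test @ W + b) @ beta
print("BS:", bs_call(0.85, 1.0, 0.25, 0.02, 0.25), "NN:", nn.item())
```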