
    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers have adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it identifies interesting challenges for future research to cope with the present information-processing era.
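    As a rough illustration of the weight-optimization viewpoint this review surveys, the sketch below (not from the article; the (1+1) evolution strategy, network size, mutation scale, and toy data are all illustrative assumptions) evolves the flat weight vector of a one-hidden-layer FNN without computing any gradient:

        import numpy as np

        rng = np.random.default_rng(0)

        def fnn(w, X, n_hidden=8):
            # Unpack a flat weight vector into a one-hidden-layer FNN (tanh / linear).
            d = X.shape[1]
            W1 = w[:d * n_hidden].reshape(d, n_hidden)
            b1 = w[d * n_hidden:d * n_hidden + n_hidden]
            W2 = w[d * n_hidden + n_hidden:-1].reshape(n_hidden, 1)
            b2 = w[-1]
            return np.tanh(X @ W1 + b1) @ W2 + b2

        def fitness(w, X, y):
            # Mean squared error; the metaheuristic never needs its gradient.
            return np.mean((fnn(w, X).ravel() - y) ** 2)

        # Toy regression data (purely illustrative).
        X = rng.normal(size=(200, 2))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

        n_w = 2 * 8 + 8 + 8 + 1          # weights + biases for d=2, n_hidden=8
        w = rng.normal(scale=0.5, size=n_w)
        best = fitness(w, X, y)

        # (1+1) evolution strategy: mutate the weights, keep the child only if it improves.
        for _ in range(5000):
            child = w + rng.normal(scale=0.1, size=n_w)
            f = fitness(child, X, y)
            if f < best:
                w, best = child, f

    The same loop accepts any fitness function, which is why metaheuristics remain attractive when the objective is non-differentiable or the gradient is unreliable.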

    A Unifying Approach to Quaternion Adaptive Filtering: Addressing the Gradient and Convergence

    A novel framework for a unifying treatment of quaternion-valued adaptive filtering algorithms is introduced. This is achieved through a rigorous account of quaternion differentiability, the proposed I-gradient, and the use of augmented quaternion statistics to account for real-world data with noncircular probability distributions. We first provide an elegant solution for the calculation of the gradient of real functions of quaternion variables (the typical cost function), an issue that has so far prevented the systematic development of quaternion adaptive filters. This makes it possible to unify the class of existing and proposed quaternion least mean square (QLMS) algorithms and to illuminate their structural similarity. Next, in order to cater for both circular and noncircular data, the class of widely linear QLMS (WL-QLMS) algorithms is introduced, and the subsequent convergence analysis unifies the treatment of strictly linear and widely linear filters for both proper and improper sources. It is also shown that the proposed class of HR gradients resolves the ambiguity owing to the noncommutativity of quaternion products, while the involution gradient (I-gradient) provides generic extensions of the corresponding real- and complex-valued adaptive algorithms at a reduced computational cost. Simulations in both the strictly linear and widely linear settings support the approach.
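    To make the QLMS structure concrete, here is a minimal sketch of one strictly linear QLMS tap update (not the paper's code; the exact placement of conjugates and scaling in the update is precisely where the variants unified by the paper differ, so this particular form is an illustrative assumption):

        import numpy as np

        def qmul(p, q):
            # Hamilton product of quaternions stored as [real, i, j, k].
            a1, b1, c1, d1 = p
            a2, b2, c2, d2 = q
            return np.array([
                a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2,
            ])

        def qconj(q):
            # Quaternion conjugate: negate the three imaginary parts.
            return q * np.array([1.0, -1.0, -1.0, -1.0])

        def qlms_step(w, x, d, mu=0.01):
            # One strictly linear QLMS update for an L-tap filter.
            # w, x: (L, 4) arrays of quaternion taps / inputs; d: (4,) desired output.
            y = sum(qmul(wk, xk) for wk, xk in zip(w, x))   # filter output w^T x(n)
            e = d - y                                       # quaternion error
            # Gradient-style update e(n) x*(n); because quaternion products do not
            # commute, the ordering and conjugation here distinguish the variants.
            w_new = w + mu * np.stack([qmul(e, qconj(xk)) for xk in x])
            return w_new, e

    A widely linear (WL-QLMS) filter extends this by also filtering the three quaternion involutions of the input, which is what allows it to capture noncircular (improper) signal statistics.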