Enhancing extreme learning machines using cross-entropy moth-flame optimization algorithm

Abstract

Extreme Learning Machines (ELMs) learn fast and eliminate the tuning of input weights and biases. However, ELM does not guarantee an optimal setting of the weights and biases because the input parameters are initialized randomly; as a result, it suffers from output instability, large network size, and degraded generalization performance. To overcome these problems, an efficient co-evolutionary hybrid model, named Cross-Entropy Moth-Flame Optimization (CEMFO-ELM), is proposed to train a neural network by selecting optimal input weights and biases. The hybrid model balances exploration and exploitation of the search space and then selects optimal input weights and biases for the ELM. The co-evolutionary algorithm reduces the chance of being trapped in a local extremum of the search space. Accuracy, stability, and percentage improvement ratio (PIR%) were the metrics used to evaluate the performance of the proposed model, simulated on machine-learning classification datasets from the University of California, Irvine repository. The co-evolutionary scheme was compared with its constituent ELM-based enhanced meta-heuristic schemes (CE-ELM and MFO-ELM). The co-evolutionary meta-heuristic algorithm enhances the selection of optimal parameters for ELM. It improved the accuracy of ELM in all simulations, and it improved the stability of ELM in all simulations, by up to 53% in the breast-cancer simulation. It also converged faster than the comparative ELM hybrid models in all simulations.
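For context, the random-initialization step that the abstract identifies as the source of ELM's instability can be seen in a minimal sketch of standard ELM training. This is not the proposed CEMFO-ELM model; it is the baseline scheme that the hybrid optimizer improves on, shown here assuming a sigmoid activation and a Moore-Penrose pseudoinverse solution for the output weights (the usual ELM formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden):
    """Basic ELM training: input weights and biases are drawn at random
    (the step CEMFO-ELM replaces with optimized values), then the output
    weights are solved analytically via the Moore-Penrose pseudoinverse."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights, never tuned
    b = rng.standard_normal(n_hidden)                # random hidden-layer biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the trained ELM."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage on XOR-like data: with enough hidden neurons the analytic
# solution fits the training targets, but a different random seed for
# W and b would give a different network -- the instability the paper targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W, b, beta = elm_train(X, T, n_hidden=20)
Y = elm_predict(X, W, b, beta)
```

A meta-heuristic wrapper such as the proposed model would score many candidate (W, b) pairs by validation accuracy and keep the best, instead of accepting a single random draw.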