2 research outputs found

    A Gegenbauer Neural Network with Regularized Weights Direct Determination for Classification

    Single-hidden-layer feedforward neural networks (SLFNs) are widely used in pattern classification, but a major bottleneck is the slow speed and poor performance of traditional iterative gradient-based learning algorithms. Although the well-known extreme learning machine (ELM) has successfully addressed slow convergence, it still suffers from computational robustness problems caused by its randomly assigned input weights and biases. To overcome these problems, this paper constructs and investigates a novel neural network based on Gegenbauer orthogonal polynomials, termed GNN. This model overcomes the computational robustness problems of ELM while retaining comparable structural simplicity and approximation capability. Building on this, we propose a regularized weights direct determination (R-WDD), based on equality-constrained optimization, to determine the optimal output weights. R-WDD minimizes both the empirical and structural risks of the network, thereby lowering the risk of overfitting and improving generalization. This leads to the final GNN with R-WDD, a unified learning mechanism for binary and multi-class classification problems. Finally, as verified in various comparison experiments, GNN with R-WDD achieves generalization performance, computational scalability and efficiency, and classification robustness that are comparable to (or even better than) those of the least-squares support vector machine (LS-SVM) and ELM with a Gaussian kernel.
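
    To make the "direct determination" idea concrete, here is a minimal sketch in Python: a hidden layer of Gegenbauer polynomial features whose output weights are solved for in closed form via ridge-style regularized least squares, the general ELM-family mechanism the abstract describes. The polynomial degree, the Gegenbauer parameter alpha, and the regularization constant C are illustrative assumptions, not the paper's exact R-WDD formulation.

    ```python
    # Minimal sketch: Gegenbauer feature layer + closed-form regularized
    # output-weight solve. Illustrative only; hyperparameters are assumptions.
    import numpy as np
    from scipy.special import eval_gegenbauer

    def gegenbauer_features(X, degree=8, alpha=0.5):
        """Map 1-D inputs in [-1, 1] to Gegenbauer polynomial features."""
        return np.column_stack(
            [eval_gegenbauer(n, alpha, X) for n in range(degree + 1)]
        )

    def direct_determination(H, T, C=1e3):
        """Regularized least squares: W = (H^T H + I/C)^{-1} H^T T."""
        n_feat = H.shape[1]
        return np.linalg.solve(H.T @ H + np.eye(n_feat) / C, H.T @ T)

    # Toy binary classification on a 1-D input with labels in {-1, +1}.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, 200)
    T = np.where(X > 0.2, 1.0, -1.0)
    H = gegenbauer_features(X)
    W = direct_determination(H, T)
    pred = np.sign(H @ W)
    print("training accuracy:", np.mean(pred == T))
    ```

    Because the basis is fixed and orthogonal, there are no randomly assigned input weights to destabilize the solve; the only trained parameters are the output weights, obtained in a single linear-algebra step.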

    Deep Learning with the Random Neural Network and its Applications

    The random neural network (RNN) is a mathematical model of an "integrate and fire" spiking network that closely resembles the stochastic behaviour of neurons in mammalian brains. Since its proposal in 1989, there have been numerous investigations into the RNN's applications and learning algorithms. Deep learning (DL) has achieved great success in machine learning. Recently, the properties of the RNN have been investigated for DL, in order to combine their strengths. Recent results demonstrate that the gap between the RNN and DL can be bridged, and that DL tools based on the RNN are faster and can potentially run with less energy expenditure than existing methods. Comment: 23 pages, 19 figures.
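
    For readers unfamiliar with the model, the classic RNN (Gelenbe, 1989) computes each neuron's steady-state firing probability q_i from the balance of excitatory and inhibitory spike rates, q_i = lambda+_i / (r_i + lambda-_i). The sketch below solves these equations by fixed-point iteration; the weights, rates, and network size are illustrative assumptions, not values from the paper.

    ```python
    # Minimal sketch of the RNN steady-state (signal-flow) equations,
    # solved by fixed-point iteration. Parameters are assumptions.
    import numpy as np

    def rnn_steady_state(W_plus, W_minus, Lam, lam, r, n_iter=200):
        """Iterate q_i = (Lam_i + sum_j q_j W+_ji) / (r_i + lam_i + sum_j q_j W-_ji)."""
        q = np.zeros(len(r))
        for _ in range(n_iter):
            excit = Lam + q @ W_plus      # total excitatory arrival rates
            inhib = lam + q @ W_minus     # total inhibitory arrival rates
            q = np.minimum(excit / (r + inhib), 1.0)  # firing probs, capped at 1
        return q

    rng = np.random.default_rng(1)
    n = 5
    W_plus = rng.uniform(0, 0.3, (n, n))    # excitatory weights w+_{ji}
    W_minus = rng.uniform(0, 0.3, (n, n))   # inhibitory weights w-_{ji}
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)  # firing rates consistent with weights
    Lam = rng.uniform(0.1, 0.5, n)          # external excitatory spike rates
    lam = np.zeros(n)                       # no external inhibitory input
    print("steady-state activities q:", rnn_steady_state(W_plus, W_minus, Lam, lam, r))
    ```

    The product-form steady state is what makes the RNN analytically tractable compared with other spiking models, and it is this structure that the DL work surveyed in the paper builds on.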
