
    Evolution of the parameters of the activation functions in the RNN.

    The training data set “MG” is used. (A) The gain parameter. (B) The bias parameter.
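    The captions track a gain and a bias inside each activation function, but the symbols themselves were lost in extraction. A minimal sketch of this kind of parameterization, assuming a tanh-shaped activation and the names gain and bias (both assumptions, not taken from the paper):

        import numpy as np

        def activation(x, gain, bias):
            # Assumed form y = tanh(gain * x + bias): the gain scales the
            # net input and the bias shifts it. Panels (A)/(B) plot how
            # these two parameters evolve over the training epochs.
            return np.tanh(gain * x + bias)

        y = activation(np.linspace(-3.0, 3.0, 7), gain=1.5, bias=-0.2)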

    Structure of the feedforward neural networks.


    Relation between the training result and the number of neurons of the RNN.

    Training results after 1000 epochs of training on the data set “MG” are presented. The circle markers denote results obtained by the MEE algorithm, and the cross markers denote results obtained by the synergistic algorithm. (A) Results for the quadratic information potential. (B) Results for the mean square error.
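    For reference, the quadratic information potential of an error sample is usually estimated in Parzen-window form as V = (1/N^2) * sum_ij kappa_sigma(e_i - e_j); MEE training maximizes V, which is equivalent to minimizing Renyi's quadratic entropy H2 = -log V. A minimal sketch, assuming a Gaussian kernel and a kernel width sigma that the captions do not specify (width conventions also vary):

        import numpy as np

        def quadratic_information_potential(errors, sigma=1.0):
            # Parzen estimate V = (1/N^2) * sum_ij G_sigma(e_i - e_j) with a
            # Gaussian kernel; MEE maximizes V (minimizes H2 = -log V).
            e = np.asarray(errors, dtype=float)
            diff = e[:, None] - e[None, :]              # all pairwise error differences
            g = np.exp(-diff ** 2 / (2.0 * sigma ** 2))
            g /= np.sqrt(2.0 * np.pi) * sigma           # Gaussian normalization
            return g.mean()

        def mean_square_error(errors):                  # the panel (B) metric
            return float(np.mean(np.square(errors)))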

    Decomposition of the FNN.

    (A) The input layer and the hidden layer of the FNN. (B) The output layer of the FNN.
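    A minimal sketch of the two-stage decomposition the figure describes: stage (A) maps the input through the hidden layer, and stage (B) maps the hidden activities through the output neuron. The five hidden neurons and the gain/bias activation come from the other captions; the weight names are hypothetical:

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hidden = 1, 5                    # five hidden neurons, per the captions

        W1 = rng.normal(size=(n_hidden, n_in))   # hypothetical input-to-hidden weights
        a_h, b_h = np.ones(n_hidden), np.zeros(n_hidden)  # per-neuron gain/bias
        w2 = rng.normal(size=n_hidden)           # hypothetical hidden-to-output weights
        a_o, b_o = 1.0, 0.0                      # output-neuron gain/bias

        def fnn(x):
            h = np.tanh(a_h * (W1 @ x) + b_h)    # (A) input layer + hidden layer
            return np.tanh(a_o * (w2 @ h) + b_o) # (B) output layer

        y = fnn(np.array([0.3]))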

    Evolution of the parameters of the activation functions in the FNN.

    The training data set “MG” is used. (A) Mean of the gain parameter of the five hidden neurons. (B) Mean of the bias parameter of the five hidden neurons. (C) The gain parameter of the output neuron. (D) The bias parameter of the output neuron.

    Relation between the training result and the number of hidden neurons of the FNN.

    Training results after 1000 epochs of training on the data set “MG” are presented. The circle markers denote results obtained by the MEE algorithm, and the cross markers denote results obtained by the synergistic algorithm. (A) Results for the quadratic information potential. (B) Results for the mean square error.

    Learning curves of the FNN with different IP learning rates.

    The training data set “MG” is used. Four initial IP learning rates, including a no-IP case, are used for comparison. Learning curves of the quadratic information potential: (A) 300 epochs. (B) 1000 epochs. Learning curves of the mean square error: (C) 300 epochs. (D) 1000 epochs.
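    The captions do not say which intrinsic-plasticity (IP) rule the IP learning rate scales. One common choice is Triesch's gradient rule, which adapts the gain a and bias b of a logistic unit y = 1/(1 + exp(-(a*x + b))) so that the output distribution approaches an exponential with mean mu; a sketch under that assumption (whether the paper uses this exact rule is not confirmed by the captions):

        import numpy as np

        def ip_step(a, b, x, eta_ip, mu=0.2):
            # One Triesch-style IP update for a logistic unit
            # y = 1/(1+exp(-(a*x+b))), driving the output distribution
            # toward an exponential with mean mu. eta_ip is the IP
            # learning rate the figure varies; eta_ip = 0 reproduces
            # the "no IP" curve.
            y = 1.0 / (1.0 + np.exp(-(a * x + b)))
            db = eta_ip * (1.0 - (2.0 + 1.0 / mu) * y + (y ** 2) / mu)
            da = eta_ip / a + db * x
            return a + da, b + db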

    Learning curves of the mean square error by the RNN.

    The dashed lines denote the learning curves of the MEE algorithm, and the solid lines denote those of the synergistic algorithm. (A) 300-epoch learning curves for the training data set “MG”. (B) 1000-epoch learning curves for “MG”. (C) 300-epoch learning curves for the training data set “SS”. (D) 1000-epoch learning curves for “SS”.
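    “MG” is most plausibly the Mackey-Glass chaotic time series, a standard benchmark for this kind of network (an assumption; the captions never expand the abbreviation, and what “SS” stands for is not recoverable). A sketch of the usual Euler-discretized generator with the common benchmark parameters tau=17, beta=0.2, gamma=0.1, n=10:

        import numpy as np

        def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
            # Euler discretization of the Mackey-Glass delay equation
            # dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t).
            # Parameter values are the common defaults, not taken from the paper.
            hist = int(tau / dt)
            x = np.full(n_steps + hist, x0)
            for t in range(hist, n_steps + hist - 1):
                x_tau = x[t - hist]
                x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
            return x[hist:]

        series = mackey_glass(1000)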

    Learning curves of the quadratic information potential by the RNN.

    The dashed lines denote the learning curves of the MEE algorithm, and the solid lines denote those of the synergistic algorithm. (A) 300-epoch learning curves for the training data set “MG”. (B) 1000-epoch learning curves for “MG”. (C) 300-epoch learning curves for the training data set “SS”. (D) 1000-epoch learning curves for “SS”.

    Performance comparison for the RNN using “SS”.
