    On the training of feedforward neural networks.

    by Hau-san Wong. Thesis (M.Phil.), Chinese University of Hong Kong, 1993. Includes bibliographical references (leaves [178-183]).

    Contents:

    Chapter 1: INTRODUCTION
        1.1 Learning versus Explicit Programming
        1.2 Artificial Neural Networks
        1.3 Learning in ANN
        1.4 Problems of Learning in BP Networks
        1.5 Dynamic Node Architecture for BP Networks
        1.6 Incremental Learning
        1.7 Research Objective and Thesis Organization

    Chapter 2: THE FEEDFORWARD MULTILAYER NEURAL NETWORK
        2.1 The Perceptron
        2.2 The Generalization of the Perceptron
        2.3 The Multilayer Feedforward Network

    Chapter 3: SOLUTIONS TO THE BP LEARNING PROBLEM
        3.1 Introduction
        3.2 Attempts in the Establishment of a Viable Hidden Representation Model
        3.3 Dynamic Node Creation Algorithms
        3.4 Concluding Remarks

    Chapter 4: THE GROWTH ALGORITHM FOR NEURAL NETWORKS
        4.1 Introduction
        4.2 The Radial Basis Function
        4.3 The Additional Input Node and the Modified Nonlinearity
        4.4 The Initialization of the New Hidden Node
        4.5 Initialization of the First Node
        4.6 Practical Considerations for the Growth Algorithm
        4.7 The Convergence Proof for the Growth Algorithm
        4.8 The Flow of the Growth Algorithm
        4.9 Experimental Results and Performance Analysis
        4.10 Concluding Remarks

    Chapter 5: KNOWLEDGE REPRESENTATION IN NEURAL NETWORKS
        5.1 An Alternative Perspective to Knowledge Representation in Neural Networks: The Temporal Vector (T-Vector) Approach
        5.2 Prior Research Works in the T-Vector Approach
        5.3 Formulation of the T-Vector Approach
        5.4 Relation of the Hidden T-Vectors to the Output T-Vectors
        5.5 Relation of the Hidden T-Vectors to the Input T-Vectors
        5.6 An Inspiration for a New Training Algorithm from the Current Model

    Chapter 6: THE DETERMINISTIC TRAINING ALGORITHM FOR NEURAL NETWORKS
        6.1 Introduction
        6.2 The Linear Independency Requirement for the Hidden T-Vectors
        6.3 Inspiration of the Current Work from the Barmann T-Vector Model
        6.4 General Framework of Dynamic Node Creation Algorithm
        6.5 The Deterministic Initialization Scheme for the New Hidden Nodes
            6.5.1 Introduction
            6.5.2 Determination of the Target T-Vector
                6.5.2.1 Introduction
                6.5.2.2 Modelling of the Target Vector β_Q h_Q
                6.5.2.3 Near-Linearity Condition for the Sigmoid Function
            6.5.3 Preparation for the BP Fine-Tuning Process
            6.5.4 Determination of the Target Hidden T-Vector
            6.5.5 Determination of the Hidden Weights
            6.5.6 Determination of the Output Weights
        6.6 Linear Independency Assurance for the New Hidden T-Vector
        6.7 Extension to the Multi-Output Case
        6.8 Convergence Proof for the Deterministic Algorithm
        6.9 The Flow of the Deterministic Dynamic Node Creation Algorithm
        6.10 Experimental Results and Performance Analysis
        6.11 Concluding Remarks

    Chapter 7: THE GENERALIZATION MEASURE MONITORING SCHEME
        7.1 The Problem of Generalization for Neural Networks
        7.2 Prior Attempts in Solving the Generalization Problem
        7.3 The Generalization Measure
        7.4 The Adoption of the Generalization Measure to the Deterministic Algorithm
        7.5 Monitoring of the Generalization Measure
        7.6 Correspondence between the Generalization Measure and the Generalization Capability of the Network
        7.7 Experimental Results and Performance Analysis
        7.8 Concluding Remarks

    Chapter 8: THE ESTIMATION OF THE INITIAL HIDDEN LAYER SIZE
        8.1 The Need for an Initial Hidden Layer Size Estimation
        8.2 The Initial Hidden Layer Estimation Scheme
        8.3 The Extension of the Estimation Procedure to the Multi-Output Network
        8.4 Experimental Results and Performance Analysis
        8.5 Concluding Remarks

    Chapter 9: CONCLUSION
        9.1 Contributions
        9.2 Suggestions for Further Research

    REFERENCES
    APPENDIX