
    Optimization of learning methods for face recognition using multilayer perceptrons

    This paper discusses accelerated learning methods in the application of neural networks to the human-face recognition problem. New acceleration methods, the Dynamic Learning Rate (DLR) Methods 1 and 2 and the Dynamic Momentum Factor (DMF), are introduced to optimize learning. These acceleration methods are evaluated against conventional backpropagation (BP) and two other gradient-based optimization methods, namely conjugate gradient (CG) and steepest descent (SD). Numerical results clearly show that the convergence capability of these acceleration methods is superior to that of the BP method. Although comparable to the CG and SD methods, the DLR and DMF methods are less complex and demand less computation time. The generalisation, rejection, and noise capabilities of the resultant networks are also investigated, and it is shown that these capabilities can be enhanced.
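
    The abstract does not give the DLR or DMF update rules, so the sketch below only illustrates the general idea of adapting the learning rate and momentum factor during gradient-based training. The "bold driver"-style adjustment (grow the step while the loss falls, shrink it when the loss rises) and the names train_step and adapt_hyperparams are assumptions for illustration, not the paper's method.

```python
import numpy as np

def train_step(w, grad, velocity, lr, momentum):
    """One weight update with momentum: v <- m*v - lr*grad, w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

def adapt_hyperparams(lr, momentum, loss, prev_loss,
                      lr_up=1.05, lr_down=0.5, mom_max=0.95):
    """Heuristic adaptation (illustrative only): raise the learning rate and
    momentum while the loss keeps falling, cut them when it rises."""
    if loss < prev_loss:
        lr *= lr_up
        momentum = min(momentum * 1.02, mom_max)
    else:
        lr *= lr_down
        momentum *= 0.5
    return lr, momentum

# Toy usage: fit a single linear unit to random data with the adaptive updates.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
w, v = np.zeros(5), np.zeros(5)
lr, momentum, prev_loss = 0.01, 0.5, np.inf
for _ in range(50):
    err = X @ w - y
    loss = float(np.mean(err ** 2))
    grad = 2 * X.T @ err / len(y)
    lr, momentum = adapt_hyperparams(lr, momentum, loss, prev_loss)
    w, v = train_step(w, grad, v, lr, momentum)
    prev_loss = loss
```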