
    Neural Classifier Construction Using Regularization, Pruning and Test Error Estimation

    In this paper we propose a method for the construction of feed-forward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme, we derive a modified form of the entropic error measure and an algebraic estimate of the test error. In conjunction with Optimal Brain Damage pruning, the test error estimate is used to select the network architecture. The scheme is evaluated on four classification problems.

    Keywords: Neural classifiers, Architecture optimization, Regularization, Generalization estimation.

    1 INTRODUCTION

    Pattern recognition is an important aspect of most scientific fields and indeed the objective of most neural network applications. Some of the classic applications of neural networks, such as Sejnowski and Rosenberg's "NetTalk", concern classification of patterns into a finite number of categories. In modern approaches to pattern recognition the objective is to produce class probabilities for a ..
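    The abstract's combination of a weight-magnitude saliency with a second-order error estimate can be illustrated with Optimal Brain Damage itself. The sketch below is a hypothetical, minimal illustration (not the paper's implementation): it assumes a diagonal Hessian approximation of the regularized training error and uses LeCun et al.'s saliency s_i = H_ii * w_i^2 / 2 to rank weights for deletion; the function and variable names are invented for this example.

    ```python
    import numpy as np

    def obd_saliencies(weights, hessian_diag):
        """Second-order estimate of the loss increase caused by deleting
        each weight, assuming a diagonal Hessian: s_i = 0.5 * H_ii * w_i**2."""
        return 0.5 * hessian_diag * weights ** 2

    def prune_lowest(weights, hessian_diag, n_prune):
        """Zero out the n_prune weights with the smallest OBD saliency
        and return the pruned weight vector plus the pruned indices."""
        s = obd_saliencies(weights, hessian_diag)
        idx = np.argsort(s)[:n_prune]          # least "important" weights first
        pruned = weights.copy()
        pruned[idx] = 0.0
        return pruned, idx

    # Toy example: four weights with illustrative Hessian diagonal entries.
    w = np.array([0.9, -0.05, 0.4, 0.01])
    h = np.array([2.0, 1.0, 0.5, 3.0])
    pruned, removed = prune_lowest(w, h, n_prune=2)
    # The two small-magnitude weights (indices 1 and 3) have the lowest
    # saliencies and are removed; the larger weights survive.
    ```

    In the paper's scheme, such a pruning step would be alternated with retraining under the regularized error measure, and the algebraic test error estimate would then decide which pruned architecture to keep.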