SUPERVISED NEURAL NETWORK TRAINING USING THE MINIMUM ERROR ENTROPY CRITERION WITH VARIABLE-SIZE AND FINITE-SUPPORT KERNEL ESTIMATES

Abstract

The insufficiency of mere second-order statistics in many application areas has been recognized, and more advanced concepts, including higher-order statistics and especially information-theoretic criteria such as error entropy minimization, are now being studied and applied in many contexts by researchers in machine learning and signal processing. The main drawback of minimizing the output error entropy for adaptive system training is the computational load incurred when fixed-size kernel estimates are employed. Entropy estimators based on sample spacings, on the other hand, have lower computational cost; however, they are not differentiable, which makes them unsuitable for adaptive learning. In this paper, we propose a nonparametric entropy estimator that blends the desirable properties of both techniques in a variable-size, finite-support kernel estimation methodology. This yields an estimator that is suitable for adaptation, yet has computational complexity similar to that of sample-spacing techniques. The estimator is illustrated in supervised adaptive system training using the minimum error entropy criterion.

I. INTRODUCTION

Since the early work of Wiener on adaptive filtering, the mean square error (MSE) has been a widely accepted criterion for adaptive system training. Although the Gaussianity assumption has provided successful solutions for many practical problems, it is evident that this approach needs to be refined when dealing with nonlinear systems. Moreover, the insufficiency of mere second-order statistics in many application areas has been recognized, and more advanced concepts, including higher-order statistics and especially those stemming from information theory, are now being studied and applied in many contexts in machine learning and signal processing. Entropy was introduced by Shannon as a measure of the average information in a given probability distribution. Since analytical data distributions are not available in many practical situations, in the plug-in approach to nonparametric entropy estimation the probability density is first estimated from the samples and then substituted into the entropy expression. In this paper we propose a continuously differentiable entropy estimation technique based on a variable-size, finite-support kernel entropy estimator that combines the desirable properties of fixed-size kernel and sample-spacing estimators.
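For concreteness, the quantities referred to above can be written with standard definitions (the specific entropy order and kernel used by the proposed estimator are not assumed here). Shannon's differential entropy of the error e with density f_e is

H(e) = -\int f_e(\xi) \log f_e(\xi) \, d\xi .

In the plug-in approach, f_e is replaced by a kernel (Parzen) estimate built from the error samples {e_1, ..., e_N}, and the expectation is approximated by the sample mean:

\hat{H}(e) = -\frac{1}{N} \sum_{i=1}^{N} \log \hat{f}_e(e_i), \qquad \hat{f}_e(e) = \frac{1}{N} \sum_{j=1}^{N} \kappa_\sigma(e - e_j),

where \kappa_\sigma is a kernel of fixed size \sigma. Evaluating \hat{H}(e) and its gradient with respect to the adaptive system weights involves all pairwise differences e_i - e_j, which is the O(N^2) computational load mentioned above; sample-spacing estimators avoid this cost but are built from order statistics and hence are not differentiable with respect to the weights.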
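As an illustrative sketch only, the following Python fragment trains a linear adaptive element under an MEE-style criterion using a finite-support Epanechnikov kernel whose size is set per sample from nearest-neighbor spacings. The kernel choice, the bandwidth rule, and all function names are assumptions made for illustration; this is not the estimator proposed in the paper.

import numpy as np

def epanechnikov(u):
    """Finite-support kernel: nonzero only for |u| < 1."""
    return np.where(np.abs(u) < 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def epanechnikov_deriv(u):
    """Derivative of the kernel with respect to its argument."""
    return np.where(np.abs(u) < 1.0, -1.5 * u, 0.0)

def spacing_bandwidths(e, k=3):
    """Per-sample kernel sizes from k-th neighbor spacings of the sorted
    errors (an assumed bandwidth rule, used only for illustration)."""
    s = np.sort(e)
    idx = np.searchsorted(s, e)
    lo = np.clip(idx - k, 0, len(s) - 1)
    hi = np.clip(idx + k, 0, len(s) - 1)
    return np.maximum(s[hi] - s[lo], 1e-6)

def mee_epoch(w, X, d, lr=0.05, k=3):
    """One steepest-descent step on a plug-in Shannon error-entropy estimate
    for a linear element y = X @ w. Entropy is shift-invariant, so an output
    bias would have to be adjusted separately (e.g. from the error mean)."""
    e = d - X @ w
    h = spacing_bandwidths(e, k)                  # variable, finite-support sizes
    U = (e[:, None] - e[None, :]) / h[None, :]    # pairwise kernel arguments
    F = (epanechnikov(U) / h[None, :]).mean(axis=1)   # density at each e_i
    H = -np.mean(np.log(F + 1e-12))                   # plug-in entropy value
    # Gradient, treating the bandwidths as constants during this step.
    A = epanechnikov_deriv(U) / h[None, :] ** 2 / len(e)   # (N, N) weights
    dF = A @ X - A.sum(axis=1, keepdims=True) * X          # dF_i/dw, using de/dw = -x
    grad = -np.mean(dF / (F[:, None] + 1e-12), axis=0)
    return w - lr * grad, H

# Tiny usage example with synthetic, heavy-tailed data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
d = X @ w_true + 0.1 * rng.standard_t(df=3, size=200)
w = np.zeros(3)
for _ in range(50):
    w, H = mee_epoch(w, X, d)

Because the kernel has finite support, most pairwise terms in the density estimate vanish, which is one way the cost per step can approach that of sample-spacing estimators while keeping the criterion differentiable.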
