4 research outputs found

    Simple estimate of the width in Gaussian kernel with adaptive scaling technique

    This paper presents a simple method for estimating the width of the Gaussian kernel based on an adaptive scaling technique. The Gaussian kernel is widely employed in radial basis function (RBF) networks, support vector machines (SVM), least squares support vector machines (LS-SVM), Kriging models, and so on. It is well known that the width of the Gaussian kernel plays an important role in these machine learning techniques, yet determining the optimal width is a time-consuming task. It is therefore preferable to determine the width in a simple manner. In this paper, we first examine a simple estimate of the width proposed by Nakayama et al. Through this examination, four sufficient conditions for the simple estimate of the width are described. A new simple estimate of the width is then proposed. To obtain the proposed estimate, all dimensions are equally scaled, and a simple technique called the adaptive scaling technique is developed for this purpose. The proposed simple method for estimating the width is expected to be applicable to a wide range of machine learning techniques that employ the Gaussian kernel. Its validity is examined through examples. © 2011 Elsevier B.V. All rights reserved.
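    The abstract does not reproduce the actual estimate or the adaptive scaling rule, so the sketch below is only illustrative: it rescales every dimension to a common range and then derives a single width from the spread and density of the data, in the spirit of the Nakayama-type estimate the paper builds on. The specific formula d_max / (n*d)**(1/d), the function names, and the unit-range scaling are assumptions, not the paper's method.

```python
import numpy as np

def simple_gaussian_width(X):
    """Illustrative simple estimate of the Gaussian kernel width.

    X : (n_samples, n_dims) array of training inputs.
    The heuristic below (maximum pairwise distance divided by
    (n * d) ** (1 / d)) is an assumed Nakayama-type formula; the
    paper's exact estimate and adaptive scaling rule are not
    reproduced here.
    """
    n, d = X.shape
    # Scale every dimension to [0, 1] so that all inputs contribute equally.
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0.0] = 1.0          # guard against constant dimensions
    Xs = (X - X.min(axis=0)) / span
    # Maximum pairwise distance in the scaled space.
    diffs = Xs[:, None, :] - Xs[None, :, :]
    d_max = np.sqrt((diffs ** 2).sum(axis=-1)).max()
    # Heuristic width: data spread relative to sample density per dimension.
    return d_max / (n * d) ** (1.0 / d)

def gaussian_kernel(X, Z, width):
    """K(x, z) = exp(-||x - z||^2 / (2 * width^2))."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * width ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    w = simple_gaussian_width(X)
    K = gaussian_kernel(X, X, w)
    print("estimated width:", w, "kernel matrix shape:", K.shape)
```

    A width obtained this way can be plugged directly into any of the kernel methods the abstract lists (RBF networks, SVM, LS-SVM, Kriging) as the Gaussian kernel parameter, avoiding a costly search over candidate widths.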

    Kernel Width Optimization for Faulty RBF Neural Networks with Multi-node Open Fault

    Much research has been devoted to selecting the kernel parameters, including the centers, kernel width, and weights, of fault-free radial basis function (RBF) neural networks. However, most of it is concerned with identifying the centers and weights, and far less attention has been paid to selecting the kernel width. Moreover, to our knowledge, almost no work has proposed an effective and practical method for selecting the optimal kernel width for faulty RBF neural networks. Since node faults inevitably occur in real applications and give rise to a great many faulty networks, the traditional approach, i.e., the test set method, takes a long time to compute the mean prediction error (MPE). This letter therefore derives a formula to estimate the MPE of each candidate width value and uses it to select the width with the lowest MPE for faulty RBF neural networks with multi-node open fault. Simulation results show that the optimal kernel width chosen by the proposed MPE formula is very close to the one obtained by the conventional method. Moreover, the proposed MPE formula outperforms other selection methods designed for fault-free neural networks.
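    The letter's closed-form MPE formula is not given in the abstract, so the sketch below instead illustrates the brute-force baseline it is compared against: for each candidate width, an RBF network is trained, multi-node open faults are simulated by zeroing random subsets of hidden nodes on a test set, and the width with the lowest Monte-Carlo MPE is kept. The function names, fault rate, and candidate grid are all hypothetical.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Hidden-layer outputs of an RBF network with Gaussian nodes."""
    sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * width ** 2))

def train_rbf(X, y, centers, width, reg=1e-6):
    """Least-squares output weights with a small ridge term."""
    H = rbf_design(X, centers, width)
    A = H.T @ H + reg * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ y)

def mpe_open_fault(Xte, yte, centers, width, w, fault_rate, n_trials, rng):
    """Monte-Carlo MPE under multi-node open fault: each trial zeroes the
    contribution of a random subset of hidden nodes (their outputs are lost)."""
    H = rbf_design(Xte, centers, width)
    errs = []
    for _ in range(n_trials):
        mask = rng.random(w.shape[0]) >= fault_rate   # surviving nodes
        errs.append(np.mean((H @ (w * mask) - yte) ** 2))
    return np.mean(errs)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)
    Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]
    centers = Xtr[rng.choice(150, 20, replace=False)]
    candidates = [0.05, 0.1, 0.2, 0.4, 0.8]
    scores = []
    for width in candidates:
        w = train_rbf(Xtr, ytr, centers, width)
        scores.append(mpe_open_fault(Xte, yte, centers, width, w,
                                     fault_rate=0.2, n_trials=200, rng=rng))
    best = candidates[int(np.argmin(scores))]
    print("candidate widths:", candidates)
    print("estimated MPEs:  ", [round(s, 4) for s in scores])
    print("selected width:  ", best)
```

    The inner loop over simulated fault patterns is exactly what makes the test set method slow when many faulty networks must be evaluated; the letter's derived MPE formula is intended to replace that loop with a direct estimate per candidate width.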