
    Training data selection method for generalization by multilayer neural networks

    Kanazawa University, Graduate School of Natural Science and Technology, Intelligent Information and Mathematics
    A training data selection method is proposed for multilayer neural networks (MLNNs). The method selects a small number of training data that guarantee both generalization and fast training of MLNNs applied to pattern classification. Generalization is achieved by using data located close to the boundaries between pattern classes. However, if only these data are used in training, convergence is slow; this phenomenon is analyzed in the paper. Therefore, in the proposed method, the MLNN is first trained with a number of randomly selected data (Step 1). The data whose output error is relatively large are then selected and paired with the nearest data belonging to a different class; the newly selected data are in turn paired with their nearest neighbors. In this way, pairs of data located close to the class boundary are found, and the MLNN is further trained with these pairs (Step 2). Since Steps 1 and 2 can be combined in several ways, the proposed method can be applied to both off-line and on-line training. The method reduces the number of training data and, at the same time, speeds up training. Its usefulness is confirmed through computer simulation.
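    A minimal sketch of the pairing step described above, assuming plain NumPy arrays; the function name, the seed-selection parameter, and the two-round pairing depth are illustrative, not taken from the paper:

```python
import numpy as np

def select_boundary_pairs(X, y, errors, n_seed=20):
    """Select training data near the class boundary by pairing.

    X: (N, d) inputs, y: (N,) class labels, errors: (N,) output
    errors of the Step-1 network, n_seed: number of high-error
    seed points. Names and parameters are illustrative.
    """
    # Start from the data with the largest Step-1 output error.
    seeds = np.argsort(errors)[-n_seed:]
    selected = set(int(i) for i in seeds)
    frontier = list(selected)
    # Pair each frontier point with its nearest neighbor in a
    # different class, then pair the newly found points once more.
    for _ in range(2):
        new_points = []
        for i in frontier:
            other = np.where(y != y[i])[0]
            dists = np.linalg.norm(X[other] - X[i], axis=1)
            j = int(other[np.argmin(dists)])
            if j not in selected:
                selected.add(j)
                new_points.append(j)
        frontier = new_points
    return np.array(sorted(selected))
```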

    Multi-frequency signal classification by multilayer neural networks and linear filter methods

    Kanazawa University, Institute of Science and Engineering, Faculty of Electrical and Computer Engineering
    This paper compares the signal classification performance of multilayer neural networks (MLNNs) and linear filters (LFs). MLNNs are useful for classifying arbitrary waveforms, whereas LFs are suited to signals specified by their frequency components. The two methods are compared in terms of frequency-selective performance on signals containing several frequency components. The effect of the number of signal samples is also investigated: with few samples, frequency information is partly lost, which makes the classification problem difficult. From a practical viewpoint, computational complexity is limited to the same level in both methods. IIR and FIR filters are compared; FIR filters in a direct form can save computations independently of the filter order, while IIR filters cannot provide good classification due to their phase distortion and require a large amount of computation due to their recursive structure. When the number of input samples is strictly limited, the signal vectors are widely distributed in the multi-dimensional signal space, and the LF method performs poorly because it is designed only to extract frequency components. The MLNN method, in contrast, can form class regions in the signal vector space with a high degree of freedom. When the number of samples is not so limited, both methods achieve the same high classification rates; in this case, since the signal vectors are distributed in a specific region, the MLNN method suffers from a convergence problem, namely local minima, and the initial weights should be determined carefully near the optimum solution. Another point is robustness to noisy signals: LFs can suppress wide-band noise using very high-Q filters, but the MLNN method is also robust, and in fact slightly superior to the LF method when the computational load is limited.
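    A minimal sketch of the LF-style approach, assuming each class is detected by the output energy of a direct-form bandpass FIR filter tuned to that class's frequencies; the sampling rate, filter order, and band edges are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 8000  # sampling rate in Hz (illustrative)

def fir_energy_classifier(x, class_bands, numtaps=64):
    """Classify a short signal by the output energy of one
    bandpass FIR filter per class (direct-form filtering).

    class_bands: list of (low, high) passbands in Hz, one per class.
    """
    energies = []
    for low, high in class_bands:
        taps = firwin(numtaps, [low, high], pass_zero=False, fs=fs)
        y = lfilter(taps, 1.0, x)        # direct-form FIR convolution
        energies.append(np.sum(y ** 2))  # passband energy
    return int(np.argmax(energies))

# Example: a signal whose components all fall in the first band.
t = np.arange(256) / fs
x = sum(np.sin(2 * np.pi * f * t) for f in (500, 900, 1300))
print(fir_energy_classifier(x, [(400, 1400), (1500, 2500)]))  # -> 0
```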

    Classification of multi-frequency signals with random noise using multilayer neural networks

    The frequency analysis capability of multilayer neural networks trained by the back-propagation (BP) algorithm is investigated, using multi-frequency signal classification for this purpose. The number of frequency sets, that is, signal groups, is 2 to 5, and the number of frequencies included in a signal group is 3 to 5; the frequencies are located alternately among the signal groups. Computer simulation confirms that the neural network has very high resolution: classification rates are about 99.5% for training signals and 99.0% for untrained signals. The results are compared with conventional methods, including Euclidean distance with an accuracy of about 65%, the Fourier transform with an accuracy of about 10 to 30%, and very high-Q filters, which require a huge number of computations; the neural network requires only as many inner products as it has hidden units. Frequency sensitivity and robustness to random noise are also studied. The networks show high frequency sensitivity, that is, high frequency resolution. Random noise is added to the multi-frequency signals to investigate how the network cancels noise that is uncorrelated among the signals: by increasing the number of samples, or of training signals, the effects of random noise can be cancelled.
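    A minimal sketch of how such signal groups might be generated, assuming unit-amplitude sinusoids with random phases and frequencies assigned alternately to the groups; all parameters (frequency range, sample count, examples per class) are illustrative, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signal_groups(n_groups=3, freqs_per_group=4,
                       n_samples=64, noise_std=0.1):
    """Generate multi-frequency signals for classification.

    Frequencies are interleaved among the groups (group g gets
    every n_groups-th frequency), so separating the classes
    demands high frequency resolution.
    """
    base = np.linspace(0.05, 0.4, n_groups * freqs_per_group)  # cycles/sample
    t = np.arange(n_samples)
    signals, labels = [], []
    for g in range(n_groups):
        group_freqs = base[g::n_groups]        # alternate assignment
        for _ in range(50):                    # 50 examples per class
            phases = rng.uniform(0, 2 * np.pi, len(group_freqs))
            x = sum(np.sin(2 * np.pi * f * t + p)
                    for f, p in zip(group_freqs, phases))
            x += rng.normal(0, noise_std, n_samples)  # additive noise
            signals.append(x)
            labels.append(g)
    return np.array(signals), np.array(labels)

X, y = make_signal_groups()
print(X.shape, y.shape)   # (150, 64) (150,)
```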

    A training data selection in on-line training for multilayer neural networks

    Kanazawa University, Graduate School of Natural Science and Technology, Information Systems
    In this paper, a training data selection method for multilayer neural networks (MLNNs) in on-line training is proposed. The purpose of reducing the training data is to lower the computational complexity of training and to save the memory needed to store the data, without losing generalization performance. The method uses a pairing procedure that selects nearest-neighbor data by finding, for each datum, the nearest datum in a different class; the network is then trained on the selected data. Since the selected data lie along the class boundary, the trained network can maintain generalization performance. The efficiency of this method for on-line training is evaluated by computer simulation.
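    A minimal sketch of an on-line variant, assuming incoming data are kept only when they form a close cross-class pair with something already stored in a bounded buffer; both the admission rule and the FIFO eviction policy are illustrative, not the paper's exact procedure:

```python
import numpy as np

class BoundaryBuffer:
    """Bounded store of training data lying near the class boundary.

    A newly arriving datum is kept only when its nearest stored
    neighbor of a different class is closer than `radius`; when the
    buffer is full, the oldest datum is dropped (FIFO). Illustrative
    policy, not the paper's exact rule.
    """

    def __init__(self, capacity=100, radius=1.0):
        self.capacity, self.radius = capacity, radius
        self.X, self.y = [], []

    def offer(self, x, label):
        x = np.asarray(x, dtype=float)
        others = [i for i, c in enumerate(self.y) if c != label]
        if others:
            d = min(np.linalg.norm(self.X[i] - x) for i in others)
            if d >= self.radius:           # far from any boundary: skip
                return False
        if len(self.X) >= self.capacity:
            self.X.pop(0); self.y.pop(0)   # FIFO eviction
        self.X.append(x); self.y.append(label)
        return True

# Usage: the third datum is far from the other class and is skipped.
buf = BoundaryBuffer(capacity=4, radius=2.0)
for x, c in [([0, 0], 0), ([1, 0], 1), ([5, 5], 0), ([0.5, 0], 0)]:
    print(buf.offer(x, c))
```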

    Comparison of activation functions in multilayer neural network for pattern classification

    Kanazawa University, Institute of Science and Engineering, Faculty of Electrical and Computer Engineering
    This paper discusses the properties of activation functions in multilayer neural networks applied to pattern classification, and proposes a rule of thumb for selecting an activation function or a combination of them. The sigmoid, Gaussian, and sinusoidal functions are selected because of their independent and fundamental space-division properties. The sigmoid function is not effective for a single hidden unit, whereas the other functions perform well. When several hidden units are employed, the sigmoid function is useful, but its convergence is still slower than that of the others. The Gaussian function is sensitive to additive noise, while the others are rather insensitive. Overall, based on convergence rate, minimum error, and noise sensitivity, the sinusoidal function is the most useful both with and without additive noise. The property of each function is discussed in terms of the internal representation, that is, the distributions of the hidden-unit inputs and outputs. Although this selection depends on the input signals to be classified, the periodic function can be applied effectively to a wide range of application fields.
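    A minimal sketch of such a comparison, assuming a one-hidden-layer network with a sigmoid output unit trained by plain batch back-propagation on a toy two-class problem; all hyperparameters and the toy data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(u):
    return 1 / (1 + np.exp(-u))

ACTS = {  # hidden-unit activation and its derivative
    "sigmoid":    (sigmoid, lambda u: sigmoid(u) * (1 - sigmoid(u))),
    "gaussian":   (lambda u: np.exp(-u**2), lambda u: -2 * u * np.exp(-u**2)),
    "sinusoidal": (np.sin, np.cos),
}

def train_mlp(X, y, act, n_hidden=8, lr=0.5, epochs=500):
    """One-hidden-layer MLP trained by batch back-propagation
    on a cross-entropy loss (illustrative setup)."""
    f, df = ACTS[act]
    W1 = rng.normal(0, 1, (X.shape[1], n_hidden))
    w2 = rng.normal(0, 1, n_hidden)
    for _ in range(epochs):
        U = X @ W1
        H = f(U)                             # hidden outputs
        err = sigmoid(H @ w2) - y            # gradient at the output unit
        w2 -= lr * H.T @ err / len(y)
        W1 -= lr * X.T @ (np.outer(err, w2) * df(U)) / len(y)
    p = sigmoid(f(X @ W1) @ w2)              # final predictions
    return np.mean((p > 0.5) == y)           # training accuracy

# Toy two-class problem: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.r_[np.zeros(50), np.ones(50)]
for name in ACTS:
    print(name, train_mlp(X, y, name))
```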