Extraction of similarity based fuzzy rules from artificial neural networks
A method is presented to extract a fuzzy rule-based system from an artificial neural network trained for classification. The resulting fuzzy system is equivalent to the corresponding neural network. The antecedents of the fuzzy rules use the similarity between the input datum and the weight vectors, which yields highly understandable rules. Thus, the fuzzy system together with a simple analysis of the weight vectors is enough to discern the hidden knowledge learnt by the neural network. Several classification problems are presented to illustrate this method of knowledge discovery using artificial neural networks.
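The antecedent idea above can be sketched in a few lines. This is an illustrative sketch only: the paper does not specify its similarity measure here, so cosine similarity and the example weight vectors below are assumptions, not the authors' method. Each hidden neuron's weight vector plays the role of a rule prototype, and a rule's firing strength is the similarity between the input datum and that prototype.

```python
import numpy as np

def similarity(x, w):
    """Cosine similarity between an input datum and a weight vector
    (an assumed stand-in for the paper's similarity measure)."""
    return float(np.dot(x, w) / (np.linalg.norm(x) * np.linalg.norm(w)))

def fire_rules(x, weight_vectors):
    """Firing strength of each similarity-based rule antecedent."""
    return [similarity(x, w) for w in weight_vectors]

# Hypothetical weight vectors of two hidden neurons
W = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(fire_rules(np.array([1.0, 0.0]), W))  # first rule fires fully
```

Reading off which weight vectors an input is most similar to is what makes the extracted rules inspectable by hand.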
Data Mining with Enhanced Neural Networks-CMMSE
This paper presents a new method to extract knowledge from existing data sets, that is, to extract symbolic rules from the weights of an Artificial Neural Network. The method has been applied to a neural network with a special architecture, the Enhanced Neural Network (ENN). This architecture improves on the results obtained with the multilayer perceptron (MLP). The relationship among the knowledge stored in the weights, the performance of the network, and the newly implemented algorithm that acquires rules from the weights is explained. The method itself provides a model to follow for knowledge acquisition with the ENN.
An Adaptive Fuzzy Min-Max Neural Network Classifier Based on Principle Component Analysis and Adaptive Genetic Algorithm
A novel adaptive fuzzy min-max neural network classifier called AFMN is proposed in this paper. Combined with principal component analysis and an adaptive genetic algorithm, this integrated system can serve as a supervised, real-time classification technique. To address the loopholes in the expansion and contraction process of FMNN and GFMN and the overly complex network architecture of FMCN, AFMN keeps the simple architecture of FMNN for fast learning and testing while rewriting the membership function and the expansion and contraction rules for hyperbox generation, resolving the confusion in hyperbox overlap regions. Meanwhile, principal component analysis is adopted to reduce the dimensionality of the data set and increase learning efficiency. After training, the confidence coefficient of each hyperbox is calculated from the distribution of samples. During classification, using an adaptive genetic algorithm for parameter optimization also makes AFMN faster than a traversal method. When training samples are insufficient, updating the data core weights is indispensable for enhancing the robustness of the classifier, and the modified membership function can adjust itself to variations in the input. The paper demonstrates the performance of AFMN on substantial examples, in terms of classification accuracy and operating speed, by comparing it with FMNN, GFMN, and FMCN.
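For context on the membership function AFMN rewrites, here is a minimal sketch of the classic fuzzy min-max hyperbox membership in Simpson's FMNN formulation; AFMN's own modified function is not reproduced here, and the sensitivity parameter `gamma` and the corner values below are illustrative choices.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Membership of point x in the hyperbox with min corner v and max
    corner w (classic FMNN form); gamma controls how fast membership
    decays for points outside the box."""
    x, v, w = map(np.asarray, (x, v, w))
    # Penalty for falling below the min corner, per dimension
    below = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - x)))
    # Penalty for exceeding the max corner, per dimension
    above = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w)))
    return float(np.mean((below + above) / 2))

# A point inside the hyperbox has full membership
print(hyperbox_membership([0.3, 0.5], v=[0.2, 0.4], w=[0.6, 0.8]))  # 1.0
```

Points inside a box score 1, and the score decays with distance outside it; it is the handling of overlapping boxes during expansion and contraction that the abstract says AFMN revises.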
Are Artificial Neural Networks White Boxes?
We introduce a novel Mamdani-type fuzzy model, referred to as the all-permutations fuzzy rule base, and show that it is mathematically equivalent to a standard feedforward neural network. We describe several applications of this equivalence, including knowledge extraction from, and knowledge insertion into, neural networks.