
    Individual And Ensemble Pattern Classification Models Using Enhanced Fuzzy Min-Max Neural Networks

    Pattern classification is one of the major components in the design and development of a computerized pattern recognition system. Focused on computational intelligence models, this thesis describes in-depth investigations of two possible directions for designing robust, flexible, high-performance pattern classification models: first, by enhancing the learning algorithm of a neural-fuzzy network; and second, by devising an ensemble model that combines the predictions of multiple neural-fuzzy networks using an agent-based framework. Owing to a number of salient features, including the ability to learn incrementally and to establish nonlinear decision boundaries with hyperboxes, the Fuzzy Min-Max (FMM) network is selected as the backbone for designing useful and usable pattern classification models in this research. Two enhanced FMM variants, i.e. EFMM and EFMM2, are proposed to address a number of limitations in the original FMM learning algorithm. In EFMM, three heuristic rules are introduced to improve the hyperbox expansion, overlap test, and contraction processes. The network complexity and noise tolerance issues are addressed in EFMM2. In addition, an agent-based framework is employed as a robust ensemble model to house multiple EFMM-based networks. A trust measurement method known as Certified Belief in Strength (CBS) is developed and incorporated into the ensemble model to exploit the predictive performances of the different EFMM-based networks.
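The hyperbox machinery underlying FMM-style networks can be sketched compactly. The following is a minimal illustration, not the thesis's EFMM algorithm: membership uses a min-over-dimensions formulation (one of several in the FMM literature), and the expansion test is the classic mean-edge-length criterion with a user-set bound `theta`; `gamma` is the usual sensitivity parameter.

```python
import numpy as np

def membership(x, v, w, gamma=4.0):
    """Fuzzy membership of point x in the hyperbox [v, w].

    Returns 1.0 inside the box and decays linearly (rate gamma)
    with distance outside it along each dimension; the overall
    grade is the minimum over dimensions.
    """
    left = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * (v - x)))
    right = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * (x - w)))
    return float(np.minimum(left, right).min())

def can_expand(x, v, w, theta=0.3):
    """Classic FMM expansion test: after absorbing x, the mean edge
    length of the candidate hyperbox must not exceed theta."""
    new_v = np.minimum(v, x)
    new_w = np.maximum(w, x)
    return float((new_w - new_v).mean()) <= theta
```

A point inside the box scores 1.0; a nearby point scores between 0 and 1, which is what lets hyperboxes form soft, nonlinear decision boundaries.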

    An Effective Multi-Resolution Hierarchical Granular Representation based Classifier using General Fuzzy Min-Max Neural Network

    Motivated by the practical demand for simplifying data so that it is consistent with human thinking and problem solving, as well as for tolerating uncertainty, information granules are becoming important entities in data processing at different levels of data abstraction. This paper proposes a method to construct classifiers from multi-resolution hierarchical granular representations (MRHGRC) using hyperbox fuzzy sets. The proposed approach forms a series of granular inferences hierarchically through many levels of abstraction. An attractive characteristic of our classifier is that it can maintain high accuracy, in comparison to other fuzzy min-max models, at a low degree of granularity by reusing the knowledge learned at lower levels of abstraction. In addition, our approach can reduce the data size significantly and can handle the uncertainty and incompleteness associated with data in real-world applications. The construction process of the classifier consists of two phases. The first phase formulates the model at the greatest level of granularity, while the latter reduces the complexity of the constructed model and deduces it from data at higher abstraction levels. Experimental analyses conducted comprehensively on both synthetic and real datasets indicate the efficiency of our method in terms of training time and predictive performance in comparison to other types of fuzzy min-max neural networks and common machine learning algorithms.
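The idea of moving from a fine to a coarser level of abstraction can be illustrated with a simple merge pass over hyperboxes. This is an assumed, simplified stand-in for the MRHGRC construction, not the paper's exact procedure: same-class boxes are greedily merged whenever their union stays within a granularity bound `theta` (maximum edge length).

```python
import numpy as np

def merge_level(boxes, theta):
    """One abstraction step: greedily merge same-class hyperboxes whose
    union stays within granularity theta (max edge length).
    Each box is a tuple (v, w, label) with min point v and max point w."""
    out = []
    for v, w, c in boxes:
        for i, (v2, w2, c2) in enumerate(out):
            if c == c2:
                nv, nw = np.minimum(v, v2), np.maximum(w, w2)
                if (nw - nv).max() <= theta:
                    out[i] = (nv, nw, c)   # absorb into the coarser box
                    break
        else:
            out.append((v.copy(), w.copy(), c))
    return out
```

Applying such a pass repeatedly with growing `theta` yields progressively smaller models, mirroring the paper's trade-off between data reduction and accuracy across abstraction levels.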

    A Survey of Adaptive Resonance Theory Neural Network Models for Engineering Applications

    This survey samples from the ever-growing family of adaptive resonance theory (ART) neural network models used to perform the three primary machine learning modalities, namely, unsupervised, supervised, and reinforcement learning. It comprises a representative list from classic to modern ART models, thereby painting a general picture of the architectures developed by researchers over the past 30 years. The learning dynamics of these ART models are briefly described, and their distinctive characteristics, such as code representation, long-term memory, and the corresponding geometric interpretation, are discussed. Useful engineering properties of ART (speed, configurability, explainability, parallelization, and hardware implementation) are examined along with current challenges. Finally, a compilation of online software libraries is provided. It is expected that this overview will be helpful to new and seasoned ART researchers.
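The learning dynamics common to much of the ART family can be shown with a minimal fuzzy ART step: category choice, vigilance (match) test, and resonance or the creation of a new category. This is a textbook-style sketch of fuzzy ART only, not of the many variants the survey covers; `rho`, `alpha`, and `beta` are the usual vigilance, choice, and learning-rate parameters.

```python
import numpy as np

def fuzzy_art_step(x, weights, rho=0.75, alpha=0.001, beta=1.0):
    """One presentation of input x (in [0, 1]^d) to a fuzzy ART layer.

    Returns (winner_index, updated_weights). A new category is created
    when no existing one passes the vigilance (rho) test.
    """
    xc = np.concatenate([x, 1.0 - x])          # complement coding
    if not weights:
        return 0, [xc.copy()]
    # category choice function T_j = |x ^ w_j| / (alpha + |w_j|)
    T = [np.minimum(xc, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(T)[::-1]:              # search best-first
        w = weights[j]
        if np.minimum(xc, w).sum() / xc.sum() >= rho:   # vigilance test
            weights[j] = beta * np.minimum(xc, w) + (1 - beta) * w
            return int(j), weights
    weights.append(xc.copy())                  # no resonance: new node
    return len(weights) - 1, weights
```

Presenting a familiar input reactivates its category, while a sufficiently novel input spawns a new one; this is the stability-plasticity behaviour the survey highlights.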

    A hybrid constructive algorithm incorporating teaching-learning based optimization for neural network training

    In neural networks, the simultaneous determination of the optimum structure and weights is a challenge. This paper proposes a combination of the teaching-learning based optimization (TLBO) algorithm and a constructive algorithm (CA) to cope with this challenge. In the literature, TLBO is used to choose proper weights, while CA is adopted to construct different structures from which the proper one is selected. In this study, the basic TLBO algorithm, along with an improved version of it, is utilized for network weight selection. Meanwhile, as the constructive algorithm, a novel modification of multiple operations using statistical tests (MOST) is applied and tested to choose the proper structure. The proposed combinatorial algorithms are applied to ten classification problems and two time-series prediction problems as benchmarks. The results are evaluated based on training and testing error, network complexity, and mean-square error. The experimental results illustrate that the proposed hybrid of the modified MOST constructive algorithm and the improved TLBO (MCO-ITLBO) algorithm outperforms the others, as confirmed by Wilcoxon statistical tests. The proposed method demonstrates lower average error with less complexity in the network structure.
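The basic TLBO iteration referenced here has two phases, which can be sketched for a generic minimisation problem. This follows the standard TLBO formulation, not the paper's improved ITLBO variant; the fitness function and population shape are illustrative assumptions.

```python
import numpy as np

def tlbo_step(pop, fitness, rng):
    """One basic TLBO iteration (minimisation).

    Teacher phase: pull each learner toward the current best solution,
    away from the population mean. Learner phase: each learner moves
    relative to a random peer. Moves are kept only if they improve."""
    n, d = pop.shape
    scores = np.array([fitness(x) for x in pop])
    teacher = pop[scores.argmin()]
    mean = pop.mean(axis=0)
    for i in range(n):                           # teacher phase
        tf = rng.integers(1, 3)                  # teaching factor in {1, 2}
        cand = pop[i] + rng.random(d) * (teacher - tf * mean)
        if fitness(cand) < scores[i]:
            pop[i], scores[i] = cand, fitness(cand)
    for i in range(n):                           # learner phase
        j = int(rng.integers(n))
        if j == i:
            continue
        step = pop[i] - pop[j] if scores[i] < scores[j] else pop[j] - pop[i]
        cand = pop[i] + rng.random(d) * step
        if fitness(cand) < scores[i]:
            pop[i], scores[i] = cand, fitness(cand)
    return pop, scores
```

Because candidate moves are accepted only on improvement, the best score is non-increasing across iterations; TLBO's appeal, as the paper notes, is that it needs no algorithm-specific tuning parameters beyond population size.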

    On neurobiological, neuro-fuzzy, machine learning, and statistical pattern recognition techniques


    Design and analysis of rule induction systems

    The RULES family of algorithms is reviewed in this work, and the drawback of the variation in their generalisation performance is investigated. This results in a new data ordering method (DOM) for the RULES family of inductive learning algorithms. DOM is based on the selection of the most representative examples; the method has been tested as a pre-processing stage for many data sets and has shown promising results. Another difficulty faced is the growing size of training data sets, which results in long algorithm execution times and less compact generated rules. In this study a new data sorting method (DSM) is developed for ordering the whole data set and reducing the training time; it is based on selecting the relevant attributes and the best possible examples to represent a data set. Finally, the order in which the raw data are introduced to the RULES family of algorithms considerably affects the accuracy of the generated rules. This work presents a new data grouping method (DGM), based on clustering, to solve this problem. This method, in the form of an algorithm, is integrated into a data mining tool and applied to a real project; as a result, reduced variation in the classification accuracy and a smaller number of generated rules have been achieved.
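The core idea of ordering data by representativeness can be illustrated with a simple pass that, within each class, ranks examples by their distance to the class centroid. This is only an assumed illustration of the kind of pre-processing DOM performs; the thesis's actual DOM, DSM, and DGM procedures are not specified here.

```python
import numpy as np

def order_by_representativeness(X, y):
    """Return an index ordering of the training set in which, within
    each class, the most representative examples (closest to the class
    centroid) come first. Illustrative stand-in for a DOM-style pass."""
    order = []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        dist = np.linalg.norm(X[idx] - centroid, axis=1)
        order.extend(idx[np.argsort(dist)])    # stable: ties keep input order
    return np.array(order)
```

Feeding a rule inducer the reordered set `X[order], y[order]` is the pre-processing usage such a method would have.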

    A Multiobjective Optimization Approach for Market Timing

    The introduction of electronic exchanges was a crucial point in history, as it heralded the arrival of algorithmic trading. Designers of such systems face a number of issues, one of which is deciding when to buy or sell a given security on a financial market. Although Genetic Algorithms (GA) have been the most widely used to tackle this issue, Particle Swarm Optimization (PSO) has seen much lower adoption within the domain. In two previous works, the authors adapted PSO algorithms to tackle market timing and to address the shortcomings of previous approaches with both GA and PSO. The majority of work done to date treats market timing as a single-objective optimization problem, which limits its suitability to live trading, as designers of such strategies will realistically pursue multiple objectives, such as maximizing profit, minimizing exposure to risk, and using the shortest strategies to improve execution speed. In this paper, we adapt both a GA and PSO to tackle market timing as a multiobjective optimization problem and provide an in-depth discussion of our results and avenues of future research.
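The shift from single-objective to multiobjective optimization hinges on Pareto dominance, which can be stated in a few lines. This is the standard definition, not the paper's specific GA/PSO machinery; the example objective vector `(-profit, risk, strategy_length)` is an assumed encoding in which all objectives are minimised.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse on every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of objective vectors, e.g.
    (-profit, risk, strategy_length) for candidate market-timing rules."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

Rather than a single best strategy, the optimizer then returns a front of trade-offs, from which a trader can pick according to risk appetite.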

    Pattern Classification by an Incremental Learning Fuzzy Neural Network

    To detect and identify defects in machine condition health monitoring, classical neural classifiers, such as Multilayer Perceptron (MLP) neural networks, have been proposed to supervise the monitored system. A drawback of classical neural classifiers, with their off-line and iterative learning algorithms, is a long training time. In addition, they often become stuck at local minima, unable to achieve the optimum solution. Furthermore, in an operating mode, it is possible that new faults develop while a monitored system is running. These new classes of defects need to be instantly detected and distinguished from those that the classifier has been trained on. Classical neural classifiers need to be retrained with both old and new patterns in order to learn new patterns without forgetting the learned ones; conventional classifiers cannot detect and learn new fault types on-line in real time. Using incremental learning algorithms in the monitoring system, it is possible to detect those new defects of machine conditions while the system is operating and while maintaining old knowledge. Inspired by the promising properties of an incremental learning algorithm named the Fuzzy ARTMAP Neural Network, a new algorithm suitable for pattern classification based on fuzzy neural networks, called the Incremental Learning Fuzzy Neuron Network (ILFN), is developed. The ILFN uses Gaussian neurons to represent the distributions of the input space, whereas the fuzzy ARTMAP neural network uses hyperboxes. The ILFN employs a hybrid supervised and unsupervised learning scheme to generate its prototypes. The network is a self-organized classifier with the capability of adaptively learning new information without forgetting old knowledge. The classifier can detect new classes of patterns and update its parameters while in an operating mode. Moreover, it is an on-line (real-time) and fast learning algorithm that requires no a priori information.
    In addition, it has the capability to make soft (fuzzy) and hard (crisp) decisions, and it is able to classify both linearly separable and nonlinearly separable problems. To prove the concept, simulations have been performed with the vibration data known as the Westland Data Set. This data set was obtained from the Internet at http://wisdom.arl.psu.edu/Westland/ and was collected from U.S. Navy CH-46E helicopters maintained by the Applied Research Laboratory (ARL) at Penn State University. Using a simple Fast Fourier Transform (FFT) technique for the feature extraction part, the network, capable of one-pass, on-line, and incremental learning, performed quite well. Trained with various torque levels, the network achieved 100% correct prediction for testing data at the same torque level. Furthermore, the classification performance of the network has been tested using other benchmark data, such as Fisher's Iris data, the two-spiral problem, and a vowel data set. Comparison studies with other well-known classifiers were performed. The ILFN was found to be competitive with, or even superior to, many classifiers.
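The combination of Gaussian prototype neurons and one-pass incremental learning can be sketched in a toy classifier. This is an assumed illustration in the spirit of ILFN, not the thesis's algorithm: a fixed spread `sigma` and novelty `threshold` are hypothetical parameters, and real ILFN prototypes would also adapt their means and spreads.

```python
import numpy as np

class GaussianIncrementalClassifier:
    """Toy one-pass classifier with Gaussian prototype neurons.

    Each prototype is a (mean, label) pair with a shared spread sigma.
    An input that activates no same-class prototype strongly enough
    spawns a new prototype, so new fault classes can be absorbed
    on-line without retraining or forgetting old knowledge."""

    def __init__(self, sigma=0.5, threshold=0.5):
        self.sigma, self.threshold = sigma, threshold
        self.means, self.labels = [], []

    def _activations(self, x):
        return [np.exp(-np.sum((x - m) ** 2) / (2 * self.sigma ** 2))
                for m in self.means]

    def learn(self, x, y):
        acts = self._activations(x)
        best = max((a for a, l in zip(acts, self.labels) if l == y),
                   default=0.0)
        if best < self.threshold:              # novel region: new neuron
            self.means.append(np.asarray(x, dtype=float))
            self.labels.append(y)

    def predict(self, x):
        acts = self._activations(x)
        return self.labels[int(np.argmax(acts))]   # soft scores, crisp label
```

The soft activations support fuzzy decisions while `argmax` yields the crisp one, mirroring the soft/hard decision capability described above.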