7,728 research outputs found
Cooperative co-evolution of GA-based classifiers based on input increments
Genetic algorithms (GAs) have been widely used as soft computing techniques in various applications, and cooperative co-evolution algorithms have been proposed in the literature to improve the performance of basic GAs. In this paper, a new cooperative co-evolution algorithm, named ECCGA, is proposed for the domain of pattern classification. Concurrent local and global evolution and conclusive global evolution are proposed to further improve classification performance. Different variants of ECCGA are evaluated on benchmark classification data sets, and the results show that ECCGA achieves better performance than the cooperative co-evolution genetic algorithm and a normal GA. Analysis and discussion of ECCGA and possible improvements are also presented.
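The abstract does not reproduce ECCGA's internals, but the cooperative co-evolution idea it builds on can be illustrated with a minimal sketch. Here a toy OneMax objective stands in for classifier fitness, and all names and parameters are hypothetical: two subpopulations each evolve half of the solution, and an individual is scored only in combination with the best representative of the other subpopulation.

```python
import random

random.seed(0)

def fitness(bits):
    # Toy objective (OneMax: count of ones), a stand-in for classifier accuracy.
    return sum(bits)

def evolve_subpop(pop, partner_best, own_first, generations=30, mut=0.05):
    # Evolve one subpopulation; each individual is evaluated by combining it
    # with the best representative of the cooperating subpopulation.
    def combined(ind):
        return fitness(ind + partner_best) if own_first else fitness(partner_best + ind)
    for _ in range(generations):
        pop.sort(key=combined, reverse=True)
        survivors = pop[: len(pop) // 2]          # elitist truncation selection
        children = [[b ^ (random.random() < mut) for b in p] for p in survivors]
        pop = survivors + children
    pop.sort(key=combined, reverse=True)
    return pop

n = 8  # bits per subcomponent
pop_a = [[random.randint(0, 1) for _ in range(n)] for _ in range(10)]
pop_b = [[random.randint(0, 1) for _ in range(n)] for _ in range(10)]
best_a, best_b = pop_a[0], pop_b[0]

for _ in range(5):  # alternating rounds of cooperative evolution
    pop_a = evolve_subpop(pop_a, best_b, own_first=True)
    best_a = pop_a[0]
    pop_b = evolve_subpop(pop_b, best_a, own_first=False)
    best_b = pop_b[0]

print(fitness(best_a + best_b))  # approaches the maximum of 16
```

The key design point is that no subpopulation ever evaluates its individuals in isolation: fitness is always a property of the assembled solution, which is what distinguishes cooperative co-evolution from running several independent GAs.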
On the design of an ECOC-compliant genetic algorithm
Genetic Algorithms (GAs) have previously been applied to Error-Correcting Output Codes (ECOC) in state-of-the-art works in order to find a suitable coding matrix. Nevertheless, none of the presented techniques directly takes into account the properties of the ECOC matrix, so the considered search space is unnecessarily large. In this paper, a novel genetic strategy to optimize the ECOC coding step is presented. This strategy redefines the usual crossover and mutation operators to take into account the theoretical properties of the ECOC framework, which reduces the search space and allows the algorithm to converge faster. In addition, a novel operator that is able to enlarge the code in a smart way is introduced. The methodology is tested on several UCI datasets and four challenging computer vision problems. The analysis of the results in terms of performance, code length, and number of Support Vectors shows that the optimization process finds very efficient codes with respect to the trade-off between classification performance and the number of classifiers. Finally, per-dichotomizer classification results show that the novel proposal obtains similar or even better results while defining a more compact set of dichotomies and SVs compared to state-of-the-art approaches.
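The "properties of the ECOC matrix" that such operators must respect can be made concrete. A sketch, not the paper's actual operators: an ECOC coding matrix over {-1, +1} needs distinct class codewords (rows) and no constant columns (each dichotomizer must actually split the classes), and test points are decoded by nearest codeword. The rejection-based mutation below is a hypothetical illustration of constraining the GA to valid matrices.

```python
import numpy as np

def is_valid_ecoc(M):
    # A well-formed ECOC coding matrix has distinct rows (class codewords)
    # and no constant columns (every dichotomizer splits the classes).
    rows_distinct = len({tuple(r) for r in M}) == M.shape[0]
    cols_split = all(len(set(c)) > 1 for c in M.T)
    return rows_distinct and cols_split

def decode(M, predictions):
    # Nearest-codeword decoding: pick the class whose row of M is closest
    # in Hamming distance to the dichotomizer outputs.
    dists = (M != np.asarray(predictions)).sum(axis=1)
    return int(np.argmin(dists))

def mutate_entry(M, row, col):
    # Hypothetical ECOC-aware mutation: flip one entry, but keep the parent
    # if the offspring violates the matrix properties. Restricting offspring
    # to valid matrices is one way to shrink the GA's search space.
    child = M.copy()
    child[row, col] *= -1
    return child if is_valid_ecoc(child) else M

M = np.array([[ 1,  1, -1],
              [ 1, -1,  1],
              [-1,  1,  1]])   # a valid 3-class, 3-dichotomizer code
print(is_valid_ecoc(M))        # True
print(decode(M, [1, -1, 1]))   # 1: exact match with the second codeword
```

Because invalid offspring are simply rejected, the GA only ever explores matrices that correspond to usable sets of dichotomizers, which is the intuition behind operators tailored to the ECOC framework.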
VEGAS: a variable length-based genetic algorithm for ensemble selection in deep ensemble learning
In this study, we introduce an ensemble selection method for deep ensemble systems called VEGAS. A deep ensemble model includes multiple layers of ensembles of classifiers (EoC). At each layer, we train the EoC and generate training data for the next layer by concatenating the predictions for the training observations with the original training data. The predictions of the classifiers in the last layer are combined by a combining method to obtain the final prediction. We further improve the prediction accuracy of a deep ensemble model by searching for its optimal configuration, i.e., the optimal set of classifiers in each layer. The optimal configuration is obtained using a Variable-Length Genetic Algorithm (VLGA) to maximize the prediction accuracy of the deep ensemble model on the validation set. We developed three operators for the VLGA: roulette-wheel selection for breeding, chunk-based crossover based on the number of classifiers to generate new offspring, and multiple-random-point mutation on each offspring. Experiments on 20 datasets show that VEGAS outperforms the selected benchmark algorithms, including two well-known ensemble methods (Random Forest and XGBoost) and three deep learning methods (Multi-Layer Perceptron, gcForest, and MULES).
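The three VLGA operators named in the abstract can be sketched in a few lines. This is a generic illustration under assumed representations, not the paper's implementation: a chromosome is a variable-length list of classifier names drawn from a hypothetical pool, crossover swaps tails at independently chosen cut points (so offspring lengths differ from both parents), and mutation resamples each gene with some probability.

```python
import random

random.seed(1)
POOL = ["knn", "svm", "tree", "bayes", "mlp"]  # hypothetical classifier pool

def roulette_select(population, fitnesses):
    # Roulette-wheel selection: pick an individual with probability
    # proportional to its fitness.
    r = random.uniform(0, sum(fitnesses))
    acc = 0.0
    for ind, f in zip(population, fitnesses):
        acc += f
        if acc >= r:
            return ind
    return population[-1]

def chunk_crossover(a, b):
    # Chunk-based crossover for variable-length parents: cut each parent at
    # its own point and swap the tails, so offspring lengths can differ
    # from both parents while the gene count across the pair is conserved.
    ca = random.randint(1, len(a))
    cb = random.randint(1, len(b))
    return a[:ca] + b[cb:], b[:cb] + a[ca:]

def mutate(ind, rate=0.3):
    # Multiple-random-point mutation: resample each gene with probability `rate`.
    return [random.choice(POOL) if random.random() < rate else g for g in ind]

p1 = ["knn", "svm"]
p2 = ["tree", "bayes", "mlp"]
c1, c2 = chunk_crossover(p1, p2)
print(len(c1) + len(c2))  # 5: genes are conserved across the offspring pair
```

Variable-length chromosomes matter here because the number of classifiers per layer is itself part of the configuration being searched, so fixed-length encodings would have to pre-commit to an ensemble size.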
Feature selection for modular GA-based classification
Genetic algorithms (GAs) have been used as conventional methods to adaptively evolve classifiers for classification problems. Feature selection plays an important role in finding the relevant features for classification. In this paper, feature selection is explored with modular GA-based classification. A new feature selection technique, the Relative Importance Factor (RIF), is proposed to find less relevant features in the input domain of each class module. By removing these features, the aim is to reduce both the classification error and the dimensionality of the classification problem. Benchmark classification data sets are used to evaluate the proposed approach. The experimental results show that RIF can find less relevant features and helps achieve lower classification error while reducing the feature space dimension.
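The abstract does not give the RIF formula, but the general idea of ranking features by relevance and dropping the least relevant can be illustrated with a generic proxy (not the paper's measure): score each feature by how much leave-one-out 1-NN accuracy drops when that feature is excluded. The data, classifier, and scoring here are all illustrative assumptions.

```python
import random

random.seed(2)

# Tiny synthetic data: feature 0 determines the class, feature 1 is noise.
X = [[i % 2, random.random()] for i in range(40)]
y = [row[0] for row in X]

def nn_accuracy(X, y, mask):
    # Leave-one-out 1-NN accuracy using only the features where mask is True.
    correct = 0
    for i in range(len(X)):
        best_j, best_d = None, float("inf")
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][k] - X[j][k]) ** 2 for k in range(len(mask)) if mask[k])
            if d < best_d:
                best_d, best_j = d, j
        correct += y[best_j] == y[i]
    return correct / len(X)

# Relevance proxy: accuracy drop when a feature is excluded. Features with a
# small drop are the "less relevant" candidates for removal.
baseline = nn_accuracy(X, y, [True, True])
drops = [baseline - nn_accuracy(X, y, [k != f for k in range(2)]) for f in range(2)]
print(drops)  # feature 0 shows a much larger drop than the noise feature
```

Removing features whose exclusion barely changes accuracy is what lets this kind of selection reduce dimensionality without hurting, and sometimes improving, classification error.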
- …