60 research outputs found
Variable-length particle swarm optimization for feature selection on high-dimensional classification
With a global search mechanism, particle swarm optimization (PSO) has shown promise in feature selection (FS). However, most current PSO-based FS methods use a fixed-length representation, which is inflexible and limits the performance of PSO for FS. When applied to high-dimensional data, these methods not only consume a significant amount of memory but also incur a high computational cost. Overcoming this limitation enables PSO to work on data with much higher dimensionality, which has become increasingly common with the advance of data collection technologies. In this paper, we propose the first variable-length PSO representation for FS, enabling particles to have different and shorter lengths, which defines a smaller search space and therefore improves the performance of PSO. By rearranging features in descending order of their relevance, we enable particles with shorter lengths to achieve better classification performance. Furthermore, using the proposed length-changing mechanism, PSO can escape local optima, further narrow the search space, and focus its search on a smaller and more fruitful area. These strategies enable PSO to reach better solutions in a shorter time. Results on ten high-dimensional datasets of varying difficulty show that the proposed variable-length PSO can achieve much smaller feature subsets with significantly higher classification performance in much shorter time than the fixed-length PSO methods. The proposed method also outperformed the compared non-PSO FS methods in most cases.
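The core ideas in the abstract above — particles of different lengths over relevance-ranked features, plus a length-changing mechanism that narrows the search — can be illustrated with a minimal sketch. This is not the paper's algorithm: the fitness function is a hypothetical surrogate for classification accuracy, the toy data and all parameter values are assumptions, and the length-changing step is heavily simplified.

```python
import random

random.seed(0)

# Toy setup (illustrative, not the paper's benchmarks): 20 features already
# ranked in descending relevance, of which the first 5 are truly useful.
N_FEATURES, RELEVANT = 20, set(range(5))

def fitness(mask):
    # Hypothetical surrogate for classification accuracy: reward relevant
    # features, lightly penalize irrelevant ones and overall subset size.
    sel = {i for i, b in enumerate(mask) if b}
    return len(sel & RELEVANT) - 0.2 * len(sel - RELEVANT) - 0.01 * len(sel)

def decode(pos):
    # A feature is selected when its continuous position exceeds 0.5.
    return [1 if p > 0.5 else 0 for p in pos]

def vlpso(n_particles=10, iters=30):
    # Variable-length representation: each particle covers only a prefix of
    # the relevance-ranked feature list, so shorter particles search less.
    pos = [[random.random() for _ in range(random.randint(5, N_FEATURES))]
           for _ in range(n_particles)]
    vel = [[0.0] * len(p) for p in pos]
    pbest = [p[:] for p in pos]
    pfit = [fitness(decode(p)) for p in pos]
    g = max(range(n_particles), key=pfit.__getitem__)
    for t in range(iters):
        for i in range(n_particles):
            for d in range(len(pos[i])):
                # Dimensions beyond the global best's length get no pull.
                gd = pbest[g][d] if d < len(pbest[g]) else pos[i][d]
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gd - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            f = fitness(decode(pos[i]))
            if f > pfit[i]:
                pbest[i], pfit[i] = pos[i][:], f
        g = max(range(n_particles), key=pfit.__getitem__)
        # Simplified length-changing mechanism: every 10 iterations, truncate
        # longer particles to the global best's length to narrow the search.
        if t % 10 == 9:
            L = len(pbest[g])
            for i in range(n_particles):
                if len(pos[i]) > L:
                    pos[i], vel[i], pbest[i] = pos[i][:L], vel[i][:L], pbest[i][:L]
                    pfit[i] = fitness(decode(pbest[i]))
    return decode(pbest[g]), pfit[g]

mask, best = vlpso()
```

Because features are sorted by relevance first, truncating a particle discards the least relevant dimensions, which is why shorter particles can still reach good subsets.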
A non-specialized ensemble classifier using multi-objective optimization
Ensemble classification algorithms are often designed for data with certain properties, such as imbalanced class labels, a large number of attributes, or continuous data. While high-performing, these algorithms sacrifice performance when applied to data outside the targeted domain. We propose a non-specialized ensemble classification algorithm that uses multi-objective optimization instead of relying on heuristics and fragile user-defined parameters. Only two user-defined parameters are included, and both are found to have large windows of values that produce statistically indistinguishable results, indicating the low level of expertise required from the user to achieve good results. Additionally, when given a large initial set of trained base classifiers, we demonstrate that a multi-objective genetic algorithm aiming to optimize prediction accuracy and diversity will prefer particular types of classifiers over others. The total number of chosen classifiers is also surprisingly small: only 10.14 classifiers on average, out of an initial pool of 900. This occurs without any explicit preference for small ensembles of classifiers. Even with these small ensembles, significantly lower empirical classification error is achieved compared to the current state of the art. © 2020 Elsevier B.V.
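The selection process described above — a multi-objective genetic algorithm choosing a subset of a classifier pool by trading off accuracy against diversity — can be sketched minimally. This is an assumption-laden illustration, not the paper's method: base classifiers are represented only by hypothetical precomputed predictions, the Pareto selection is simplified (no crowding distance or elitism schedule), and all parameters are made up.

```python
import random

random.seed(1)

# Toy validation set and a pool of "trained" base classifiers, represented
# only by their predictions on the validation set (hypothetical data).
N_VAL, POOL = 30, 12
labels = [random.randint(0, 1) for _ in range(N_VAL)]
preds = [[y if random.random() < 0.7 else 1 - y for y in labels]
         for _ in range(POOL)]

def objectives(mask):
    # Two objectives to minimize: ensemble error and negated diversity.
    chosen = [preds[i] for i, b in enumerate(mask) if b]
    if not chosen:
        return (1.0, 0.0)  # empty ensembles are worst-case
    vote = [1 if 2 * sum(p[j] for p in chosen) > len(chosen) else 0
            for j in range(N_VAL)]
    err = sum(v != y for v, y in zip(vote, labels)) / N_VAL
    # Diversity here is mean pairwise disagreement between members.
    pairs = [(a, b) for i, a in enumerate(chosen) for b in chosen[i + 1:]]
    div = (sum(sum(x != y for x, y in zip(a, b)) for a, b in pairs)
           / (N_VAL * len(pairs))) if pairs else 0.0
    return (err, -div)

def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def moga(pop_size=20, gens=30):
    # Each individual is a bitmask over the classifier pool.
    pop = [[random.randint(0, 1) for _ in range(POOL)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = [(m, objectives(m)) for m in pop]
        # Keep the non-dominated individuals (simplified Pareto selection),
        # then refill the population via one-point crossover and mutation.
        front = [m for m, f in scored
                 if not any(dominates(g, f) for _, g in scored)]
        pop = [m[:] for m in front]
        while len(pop) < pop_size:
            a, b = (random.sample(front, 2) if len(front) > 1
                    else (front[0], front[0]))
            cut = random.randint(1, POOL - 1)
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:
                child[random.randrange(POOL)] ^= 1
            pop.append(child)
    scored = [(m, objectives(m)) for m in pop]
    return min(scored, key=lambda mf: mf[1][0])  # lowest-error ensemble found

mask, (err, neg_div) = moga()
```

Note how small ensembles emerge here only implicitly: nothing penalizes size directly, but dominated (large, redundant) masks are discarded each generation, mirroring the paper's observation that small ensembles are selected without an explicit size preference.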
Genetic Programming for Manifold Learning: Preserving Local Topology
No description supplied.
Evolving Scheduling Heuristics via Genetic Programming with Feature Selection in Dynamic Flexible Job Shop Scheduling
No description supplied.
