Hybrid ACO and SVM algorithm for pattern classification
Ant Colony Optimization (ACO) is a metaheuristic that can be applied to a wide variety of combinatorial optimization problems. A newer direction for ACO is the optimization of continuous and mixed (discrete and continuous) variables. The Support Vector Machine (SVM) is a pattern classification approach rooted in statistical learning theory. However, SVM suffers from two main problems: feature subset selection and parameter tuning. Most approaches to tuning SVM parameters discretize the parameters' continuous values, which degrades classification performance. This study presents four algorithms that tune the SVM parameters and select the feature subset, improving SVM classification accuracy with a smaller feature subset. This is achieved by performing SVM parameter tuning and feature subset selection simultaneously. Hybrid algorithms combining ACO and SVM techniques are proposed: the first two, ACOR-SVM and IACOR-SVM, tune the SVM parameters, while the other two, ACOMV-R-SVM and IACOMV-R-SVM, tune the SVM parameters and select the feature subset simultaneously. Ten benchmark datasets from the University of California, Irvine (UCI) repository were used in the experiments to validate the performance of the proposed algorithms. The experimental results are better than those of other approaches in terms of classification accuracy and feature subset size. The average classification accuracies of the ACOR-SVM, IACOR-SVM, ACOMV-R-SVM and IACOMV-R-SVM algorithms are 94.73%, 95.86%, 97.37% and 98.1%, respectively. The average feature subset size is eight for the ACOR-SVM and IACOR-SVM algorithms and four for the ACOMV-R-SVM and IACOMV-R-SVM algorithms. This study thus contributes a new direction for ACO: handling continuous and mixed-variable optimization problems.
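The continuous-domain ACO variant (ACO_R) that these hybrids build on keeps an archive of good solutions and samples new candidates from Gaussians centred on archive members. The following is a minimal, hypothetical sketch of that sampling loop, not the paper's implementation: the objective below is a simple stand-in for SVM cross-validation error over two hyperparameters, and all names, bounds, and parameter values (`archive_size`, `q`, `xi`, etc.) are illustrative assumptions.

```python
import math
import random

random.seed(0)

def aco_r_minimize(f, bounds, archive_size=10, ants=5, q=0.5, xi=0.85, iters=60):
    """Minimal ACO_R sketch: maintain an archive of good solutions and sample
    new candidates from Gaussians centred on rank-weighted archive members."""
    dim = len(bounds)
    # Initialise the archive with random solutions, sorted best-first.
    archive = sorted(
        ([random.uniform(lo, hi) for lo, hi in bounds] for _ in range(archive_size)),
        key=f,
    )
    # Rank-based weights: better-ranked solutions guide ants more often.
    weights = [
        math.exp(-(rank ** 2) / (2 * (q * archive_size) ** 2))
        for rank in range(archive_size)
    ]
    for _ in range(iters):
        new_solutions = []
        for _ in range(ants):
            guide = random.choices(archive, weights=weights)[0]
            candidate = []
            for d, (lo, hi) in enumerate(bounds):
                # Std dev = scaled mean distance of archive members in this dimension.
                sigma = xi * sum(abs(s[d] - guide[d]) for s in archive) / (archive_size - 1)
                candidate.append(min(hi, max(lo, random.gauss(guide[d], sigma))))
            new_solutions.append(candidate)
        # Keep only the best archive_size solutions.
        archive = sorted(archive + new_solutions, key=f)[:archive_size]
    return archive[0]

# Stand-in objective: imagine f measures SVM cross-validation error as a
# function of, say, (log C, log gamma); here a simple bowl with minimum (1, -2).
best = aco_r_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                      bounds=[(-5, 5), (-5, 5)])
print(best)
```

The mixed-variable variant (ACO_MV-R) would additionally sample discrete components, e.g. feature on/off bits, alongside these continuous draws.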
Finding Optimal Diverse Feature Sets with Alternative Feature Selection
Feature selection is popular for obtaining small, interpretable, yet highly accurate prediction models. Conventional feature-selection methods typically yield only one feature set, which might not suffice in some scenarios. For example, users might be interested in finding alternative feature sets with similar prediction quality, offering different explanations of the data. In this article, we introduce alternative feature selection and formalize it as an optimization problem. In particular, we define alternatives via constraints and enable users to control the number and dissimilarity of alternatives. Next, we analyze the complexity of this optimization problem and show NP-hardness. Further, we discuss how to integrate conventional feature-selection methods as objectives. Finally, we evaluate alternative feature selection with 30 classification datasets. We observe that alternative feature sets may indeed have high prediction quality, and we analyze several factors influencing this outcome.
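The constraint-based idea can be illustrated with a small, hypothetical sketch, which is a deliberate simplification rather than the article's exact formulation: treat each previously found feature set as a constraint, and greedily build the next set so that its overlap with every earlier set stays within a user-chosen bound. The scoring dictionary, feature names, and thresholds below are all illustrative assumptions.

```python
def select_features(quality, k):
    """Pick the k highest-quality features (stand-in for any feature selector)."""
    return set(sorted(quality, key=quality.get, reverse=True)[:k])

def alternative_feature_sets(quality, k, num_alternatives, max_overlap):
    """Greedy sketch of alternative feature selection: each new set may share
    at most `max_overlap` features with every previously found set."""
    found = [select_features(quality, k)]
    for _ in range(num_alternatives):
        chosen = set()
        for feat in sorted(quality, key=quality.get, reverse=True):
            candidate = chosen | {feat}
            # Keep the feature only if all overlap constraints still hold.
            if all(len(candidate & prev) <= max_overlap for prev in found):
                chosen = candidate
            if len(chosen) == k:
                break
        found.append(chosen)
    return found

# Hypothetical univariate quality scores for six features f1..f6.
scores = {"f1": 0.9, "f2": 0.8, "f3": 0.7, "f4": 0.6, "f5": 0.5, "f6": 0.4}
sets = alternative_feature_sets(scores, k=2, num_alternatives=2, max_overlap=0)
print(sets)  # three disjoint 2-feature sets, in decreasing quality
```

With `max_overlap=0` the alternatives are fully disjoint; raising it lets alternatives share some features, which is how the number and dissimilarity of alternatives can be user-controlled.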
Improved sampling of the Pareto-front in multiobjective genetic optimizations by steady-state evolution: a Pareto Converging Genetic Algorithm
Previous work on multiobjective genetic algorithms has focused on preventing genetic drift, while the issue of convergence has received little attention. In this paper, we present a simple steady-state strategy, the Pareto Converging Genetic Algorithm (PCGA), which naturally samples the solution space and ensures population advancement towards the Pareto-front. PCGA eliminates the need for sharing/niching and thus minimizes heuristically chosen parameters and procedures. A systematic approach based on histograms of rank is introduced for assessing convergence to the Pareto-front, which is, by definition, unknown in most real search problems.
We argue that a population always inherits a certain amount of genetic material, and that beyond some point further significant gain is unlikely; we suggest this point as a stopping criterion for terminating the computation. To further encourage diversity and competition, a non-migrating island model may optionally be used; this approach is particularly suited to many difficult (real-world) problems, which tend to get stuck at (unknown) local minima. Results on three benchmark problems are presented and compared with those of earlier approaches. PCGA is found to produce a diverse sampling of the Pareto-front without niching and with significantly less computational effort.
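The histogram-of-ranks idea can be sketched in a few lines; this is a hypothetical simplification of the general rank-histogram concept, not the paper's exact procedure. Each solution is ranked by how many population members dominate it (rank 1 = non-dominated), and a rank histogram that stabilizes across generations suggests the population has stopped advancing towards the Pareto-front.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better
    in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_ranks(population):
    """Rank each solution as 1 + the number of solutions dominating it."""
    return [1 + sum(dominates(other, sol) for other in population)
            for sol in population]

def rank_histogram(ranks):
    """Count how many solutions fall at each rank."""
    hist = {}
    for r in ranks:
        hist[r] = hist.get(r, 0) + 1
    return hist

# Toy bi-objective population (both objectives minimised).
pop = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
ranks = pareto_ranks(pop)
print(ranks, rank_histogram(ranks))
```

Comparing such histograms between successive generations, rather than inspecting raw objective values, gives a convergence signal that does not require knowing the true Pareto-front.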
Current Studies and Applications of Krill Herd and Gravitational Search Algorithms in Healthcare
Nature-Inspired Computing (NIC) is a relatively young field that seeks new methods of computing by studying how natural phenomena work, in order to solve complicated problems in many contexts. As a consequence, ground-breaking research has been conducted in a variety of domains, including artificial immune systems, neural networks, swarm intelligence, and evolutionary computing. NIC techniques are used in biology, physics, engineering, economics, and management. Meta-heuristic algorithms are successful, efficient, and resilient on real-world classification, optimization, forecasting, and clustering tasks, as well as on engineering and science problems. Two active NIC paradigms are the Gravitational Search Algorithm (GSA) and the Krill Herd (KH) algorithm. This publication gives a worldwide and historical review of the use of KH and GSA in medicine and healthcare. Comprehensive surveys have been conducted on other nature-inspired algorithms, but no survey of KH and GSA in the healthcare field had previously been undertaken. The present article therefore thoroughly reviews the various versions of the KH and GSA algorithms and their applications in healthcare, to assist researchers in applying them in diverse domains or hybridizing them with other popular algorithms. It also provides an in-depth examination of KH and GSA in terms of application, modification, and hybridization. The goal of the study is to offer a perspective on GSA and KH, particularly for academics interested in investigating the capabilities and performance of these algorithms in the healthcare and medical domains.
Comment: 35 pages
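For readers unfamiliar with GSA, a minimal, hypothetical sketch of its core update is shown below; it is not any surveyed healthcare variant. Masses are derived from fitness, agents accelerate towards heavier (better) agents, and the gravitational constant decays over time; all constants and names here are illustrative assumptions, with a simple sphere function standing in for, say, a model's validation error.

```python
import math
import random

random.seed(1)

def gsa_minimize(f, bounds, agents=15, iters=80, g0=100.0, alpha=20.0):
    """Minimal Gravitational Search Algorithm sketch (minimisation):
    better agents get larger masses and pull the others towards them."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(agents)]
    vel = [[0.0] * dim for _ in range(agents)]
    best = min(pos, key=f)[:]
    for t in range(iters):
        fit = [f(p) for p in pos]
        worst, best_fit = max(fit), min(fit)
        # Normalised masses: the best agent gets the largest mass.
        raw = [(worst - fi) / (worst - best_fit + 1e-12) for fi in fit]
        mass = [m / (sum(raw) + 1e-12) for m in raw]
        g = g0 * math.exp(-alpha * t / iters)  # gravitational constant decays
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                dist = math.dist(pos[i], pos[j])
                for d in range(dim):
                    # Randomised gravitational pull of agent j on agent i.
                    acc[d] += random.random() * g * mass[j] * (pos[j][d] - pos[i][d]) / (dist + 1e-12)
            for d in range(dim):
                lo, hi = bounds[d]
                vel[i][d] = random.random() * vel[i][d] + acc[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
        cand = min(pos, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

# Stand-in objective (sphere, minimum at the origin).
best = gsa_minimize(lambda x: sum(v * v for v in x), bounds=[(-5, 5)] * 2)
print(best)
```

KH follows a similar population-update pattern but drives movement by krill-herding behaviours (induced motion, foraging, random diffusion) rather than gravitational attraction.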