
    An Improved Binary Grey-Wolf Optimizer with Simulated Annealing for Feature Selection

    This paper proposes improvements to the binary grey wolf optimizer (BGWO) for the feature selection (FS) problem, which arises from high data dimensionality and irrelevant, noisy, and redundant data, so that machine learning algorithms can attain better classification/clustering accuracy in less training time. We propose three variants of BGWO in addition to the standard variant, applying different transfer functions to tackle the FS problem. Because GWO generates continuous values while FS requires discrete ones, a number of V-shaped, S-shaped, and U-shaped transfer functions were investigated for incorporation with BGWO to convert its continuous values to binary. This investigation showed that the performance of BGWO is affected by the choice of transfer function. In the first variant, we mitigate the local-minima problem by integrating an exploration capability that, with a certain probability, updates a grey wolf's position randomly within the search space; this variant is abbreviated IBGWO. Next, a novel mutation strategy is proposed that selects a number of the worst grey wolves in the population and updates each of them either toward the best solution or randomly within the search space, with a certain probability deciding between the two. The number of worst grey wolves selected by this strategy increases linearly with the iteration count. This strategy is combined with IBGWO to produce the second variant, abbreviated LIBGWO. In the last variant, simulated annealing (SA) is integrated with LIBGWO to search around the best-so-far solution at the end of each iteration in order to identify better solutions. The performance of the proposed variants was validated on 32 datasets taken from the UCI repository and compared with six wrapper feature selection methods. The experiments show the superiority of the proposed variants in producing better classification accuracy than the other selected wrapper feature selection algorithms.
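The transfer-function binarization described above can be sketched as follows. The specific S- and V-shaped functions used here (the logistic sigmoid and |tanh|) are common examples from the transfer-function literature, not necessarily the exact ones adopted in the paper.

```python
import math
import random

def s_shaped(x):
    """S-shaped (logistic) transfer function: maps a continuous value to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    """V-shaped transfer function: flip probability grows with |x|."""
    return abs(math.tanh(x))

def binarize(position, transfer, rng):
    """Convert a continuous wolf position into a binary feature mask."""
    return [1 if rng.random() < transfer(x) else 0 for x in position]

rng = random.Random(42)
mask = binarize([2.5, -3.0, 0.0], s_shaped, rng)  # 1 = feature selected
```

With an S-shaped function, a large positive component makes selection likely; with a V-shaped function, the probability of flipping the current bit grows with the component's magnitude.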

    A hybrid Grey Wolf optimizer with multi-population differential evolution for global optimization problems

    Optimization is the process of solving an optimization problem using an optimization algorithm, so studying this field requires studying both optimization problems and algorithms. In this paper, a hybrid optimization algorithm based on differential evolution (DE) and the grey wolf optimizer (GWO) is proposed. The proposed algorithm, called “MDE-GWONM”, improves on the original versions in terms of the balance between exploration and exploitation. The results of running MDE-GWONM on nine benchmark test functions show performance superior to other state-of-the-art optimization algorithms.
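As a sketch of the DE side of such a hybrid, the classical DE/rand/1/bin mutation-and-crossover step looks like this; the parameter values F and CR are illustrative, not taken from the paper.

```python
import random

def de_rand_1_bin(population, i, F, CR, rng):
    """Build a trial vector for individual i via DE/rand/1 mutation
    and binomial crossover."""
    others = [j for j in range(len(population)) if j != i]
    r1, r2, r3 = rng.sample(others, 3)          # three distinct random peers
    a, b, c = population[r1], population[r2], population[r3]
    dim = len(population[i])
    j_rand = rng.randrange(dim)                 # guarantees >= 1 mutated gene
    trial = []
    for j in range(dim):
        if rng.random() < CR or j == j_rand:
            trial.append(a[j] + F * (b[j] - c[j]))  # mutant component
        else:
            trial.append(population[i][j])          # inherited component
    return trial

rng = random.Random(0)
pop = [[rng.uniform(-5.0, 5.0) for _ in range(4)] for _ in range(6)]
trial = de_rand_1_bin(pop, 0, F=0.5, CR=0.9, rng=rng)
```

In a GWO hybrid, a step like this is typically interleaved with (or applied to a sub-population alongside) the wolf position updates to sharpen exploration.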

    4E analysis of a two-stage refrigeration system through surrogate models based on response surface methods and hybrid grey wolf optimizer

    Refrigeration systems are complex, non-linear, multi-modal, and multi-dimensional. However, traditional methods are based on a trial-and-error process to optimize these systems, and a global optimum operating point cannot be guaranteed. Therefore, this work studies a two-stage vapor compression refrigeration system (VCRS) through a novel and robust hybrid multi-objective grey wolf optimizer (HMOGWO) algorithm. The system is modeled using response surface methods (RSM) to investigate the impacts of design variables on the set responses. First, the interaction between the system components and their cycle behavior is analyzed by building four surrogate models using RSM. The model fit statistics indicate that they are statistically significant and agree with the design data. Three conflicting bi-objective optimization scenarios are built for the overall system, following the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and the Linear Programming Technique for Multidimensional Analysis of Preference (LINMAP) decision-making methods. The optimal solutions indicate that, for the first to third scenarios, the exergetic efficiency (EE) and capital expenditure (CAPEX) are optimized by 33.4% and 7.5%, the EE and operational expenditure (OPEX) are improved by 27.4% and 19.0%, and the EE and global warming potential (GWP) are optimized by 27.2% and 19.1%, with the proposed HMOGWO outperforming MOGWO and NSGA-II. Finally, the K-means clustering technique is applied for Pareto characterization. Based on the research outcomes, the combined RSM and HMOGWO techniques have proven to be an excellent solution for simulating and optimizing the two-stage VCRS.
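The TOPSIS decision-making step mentioned above can be sketched as follows; the weights and the small EE/CAPEX decision matrix are illustrative placeholders, not values from the paper.

```python
import math

def topsis(matrix, weights, benefit):
    """Score alternatives with TOPSIS; a higher score means closer to the ideal.
    matrix: rows are alternatives, columns are criteria.
    benefit[j]: True to maximize criterion j, False to minimize it."""
    n_crit = len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    # Relative closeness to the ideal solution.
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in v]

# Two criteria: exergetic efficiency (maximize) and CAPEX (minimize); values illustrative.
scores = topsis([[0.30, 120.0], [0.34, 140.0], [0.28, 100.0]],
                weights=[0.5, 0.5], benefit=[True, False])
best = max(range(len(scores)), key=scores.__getitem__)
```

In a multi-objective study like this one, TOPSIS is applied to the Pareto front produced by the optimizer to pick a single compromise solution.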

    Evolving CNN-LSTM Models for Time Series Prediction Using Enhanced Grey Wolf Optimizer

    In this research, we propose an enhanced Grey Wolf Optimizer (GWO) for designing evolving Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) networks for time series analysis. To overcome the classical GWO algorithm's probability of stagnation at local optima and its slow convergence rate, the newly proposed variant incorporates four distinctive search mechanisms: a nonlinear exploration scheme for dynamic search-territory adjustment, a chaotic leadership dispatching strategy among the dominant wolves, a rectified spiral local exploitation action, and probability distribution-based leader enhancement. The evolving CNN-LSTM models are subsequently devised using the proposed GWO variant, where the network topology and learning hyperparameters are optimized for time series prediction and classification tasks. Evaluated on a number of benchmark problems, the proposed GWO-optimized CNN-LSTM models produce statistically significant results over those from several classical search methods and advanced GWO and Particle Swarm Optimization variants. Compared with the baseline methods, the CNN-LSTM networks devised by the proposed GWO variant offer better representational capacity, not only capturing vital feature interactions but also encapsulating the sophisticated dependencies in complex temporal contexts for time-series tasks.
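A minimal sketch of what a nonlinear exploration scheme for GWO can look like: the classical control parameter a decays linearly from 2 to 0 over the run, while a nonlinear schedule keeps a larger for longer, extending the exploration phase. The cosine decay used here is an illustrative choice, not necessarily the paper's formula.

```python
import math

def a_linear(t, T):
    """Classical GWO control parameter: decays linearly from 2 to 0."""
    return 2.0 * (1.0 - t / T)

def a_cosine(t, T):
    """Illustrative nonlinear schedule: cosine decay stays larger early
    (more exploration), then drops faster near the end (more exploitation)."""
    return 2.0 * math.cos(math.pi * t / (2.0 * T))

T = 100
# Compare the two schedules at a few checkpoints of the run.
schedule = [(t, a_linear(t, T), a_cosine(t, T)) for t in range(0, T + 1, 25)]
```

Both schedules start at 2 and end at 0; the cosine curve lies above the line at every interior iteration, which is exactly the "longer exploration" effect such schemes aim for.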

    Employee Attrition Prediction based on Grey Wolf Optimization and Deep Neural Networks

    Despite the constructive application of promising technologies such as neural networks, their potential for predicting human resource management outcomes remains underexplored. Therefore, the primary aim of this paper is to utilize neural networks and meta-heuristic techniques to predict employee attrition, thereby enhancing prediction model performance. The conventional Grey Wolf Optimization (GWO) algorithm has gained substantial attention because of its robust convergence, minimal parameters, and simple implementation. However, it encounters problems with slow convergence rates and susceptibility to local optima in practical optimization scenarios. To address these problems, this paper introduces an enhanced Grey Wolf Optimization algorithm incorporating Cauchy-Gaussian mutation, which increases diversity within the leader wolf population and enhances the algorithm's global search capabilities. Additionally, this work preserves exceptional grey wolf individuals through a greedy selection mechanism to ensure accelerated convergence. Moreover, an enhanced exploration strategy is suggested to expand the optimization possibilities of the algorithm and improve its convergence speed. The results show that the proposed model achieves an accuracy of 97.85%, precision of 98.45%, recall of 98.14%, and F1-score of 97.11%. Beyond merely predicting the probability of employee attrition, this paper also enhances the precision of such predictions by constructing an improved model employing a Deep Neural Network (DNN).
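A minimal sketch of Cauchy-Gaussian mutation combined with greedy selection, under the common assumption that the heavy-tailed Cauchy component (global jumps) is weighted early in the run and the Gaussian component (local refinement) late; the exact weighting schedule here is an illustrative guess, not the paper's formula.

```python
import math
import random

def cauchy_gaussian_mutation(leader, t, T, rng):
    """Perturb a leader wolf with mixed Cauchy/Gaussian noise; the weight
    shifts from Cauchy (exploration) to Gaussian (exploitation) over time."""
    mutant = []
    for x in leader:
        cauchy = math.tan(math.pi * (rng.random() - 0.5))  # standard Cauchy draw
        gauss = rng.gauss(0.0, 1.0)
        mutant.append(x + (1.0 - t / T) * cauchy + (t / T) * gauss)
    return mutant

def greedy_keep(old, new, fitness):
    """Greedy selection: keep the mutant only if it improves (minimization)."""
    return new if fitness(new) < fitness(old) else old

rng = random.Random(7)
sphere = lambda v: sum(x * x for x in v)   # toy objective to minimize
leader = [1.0, -2.0, 0.5]
kept = greedy_keep(leader, cauchy_gaussian_mutation(leader, 10, 100, rng), sphere)
```

The greedy step is what guarantees the preserved leader never gets worse, which is the accelerated-convergence mechanism the abstract describes.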

    Improved feature selection using a hybrid side-blotched lizard algorithm and genetic algorithm approach

    Feature selection entails choosing, from a wide collection of original features, the significant ones that are essential for predicting test data using a classifier. It is commonly used in applications such as bioinformatics, data mining, and text analysis, where the dataset contains tens or hundreds of thousands of features, making such a large feature set difficult to analyze. Removing irrelevant features improves predictor performance, making it more accurate and cost-effective. In this research, a novel hybrid technique is presented for feature selection that aims to enhance classification accuracy: a hybrid binary version of the side-blotched lizard algorithm (SBLA) with the genetic algorithm (GA), named SBLAGA, which combines the strengths of both algorithms. We use a sigmoid function to map the continuous variable values into binary ones, and evaluate the proposed algorithm on twenty-three standard benchmark datasets. Average classification accuracy, average number of selected features, and average fitness value were the evaluation criteria. According to the experimental results, SBLAGA demonstrated superior performance compared to SBLA and GA with regard to these criteria. We further compare SBLAGA with four wrapper feature selection methods that are widely used in the literature and find it to be more efficient.
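The fitness value that wrapper feature-selection studies like this typically minimize combines the classification error with the fraction of features kept; the weight alpha = 0.99 is a conventional choice in this literature, assumed here rather than taken from the paper.

```python
def fs_fitness(mask, error_rate, alpha=0.99):
    """Wrapper FS fitness to minimize: weighted classification error plus
    weighted fraction of selected features (alpha = 0.99 is conventional)."""
    if not any(mask):
        return float("inf")   # an empty feature subset is invalid
    kept_fraction = sum(mask) / len(mask)
    return alpha * error_rate + (1.0 - alpha) * kept_fraction

# Same error rate, fewer features -> better (lower) fitness.
f_small = fs_fitness([1, 0, 0, 0], error_rate=0.10)
f_large = fs_fitness([1, 1, 1, 1], error_rate=0.10)
```

With alpha close to 1, accuracy dominates, and the feature-count term only breaks ties between subsets that classify equally well.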

    Task Scheduling Approach in Cloud Computing Environment Using Hybrid Differential Evolution

    Task scheduling is one of the most significant challenges in the cloud computing environment and has attracted the attention of various researchers over the last decades, in order to achieve cost-effective execution and improve resource utilization. Task scheduling is categorized as a nondeterministic polynomial time (NP)-hard problem, which cannot be tackled with classical methods, due to their inability to find a near-optimal solution within a reasonable time. Therefore, metaheuristic algorithms have recently been employed to overcome this problem, but these algorithms still suffer from falling into local minima and from a low convergence speed. In this study, a new task scheduler, known as hybrid differential evolution (HDE), is presented as a solution to the challenge of task scheduling in the cloud computing environment. This scheduler is based on two proposed enhancements to traditional differential evolution. The first improvement modifies the scaling factor to include numerical values generated dynamically based on the current iteration, in order to improve both the exploration and exploitation operators; the second improvement refines the exploitation operator of the classical DE, in order to achieve better results in fewer iterations. Multiple tests utilizing randomly generated datasets and the CloudSim simulator were conducted to demonstrate the efficacy of HDE. In addition, HDE was compared to a variety of heuristic and metaheuristic algorithms, including the slime mold algorithm (SMA), equilibrium optimizer (EO), sine cosine algorithm (SCA), whale optimization algorithm (WOA), grey wolf optimizer (GWO), classical DE, first come first served (FCFS), round robin (RR), and shortest job first (SJF) schedulers. During the trials, makespan and total execution time values were acquired for various task sizes, ranging from 100 to 3000. Compared to the other metaheuristic and heuristic algorithms considered, the results indicated that HDE generated superior outcomes. Consequently, HDE was found to be the most efficient metaheuristic scheduling algorithm among the methods studied.
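The makespan objective used to compare such schedulers can be sketched as follows: each VM's load is the summed execution time of its assigned tasks, and the makespan is the load of the busiest VM. The task lengths and VM speeds below are illustrative numbers, not data from the study.

```python
def makespan(task_lengths, assignment, vm_speeds):
    """Makespan = finishing time of the busiest VM.
    task_lengths: e.g. millions of instructions (MI);
    assignment[i]: index of the VM that runs task i;
    vm_speeds: processing rate of each VM (e.g. MIPS)."""
    loads = [0.0] * len(vm_speeds)
    for length, vm in zip(task_lengths, assignment):
        loads[vm] += length / vm_speeds[vm]
    return max(loads)

# Three tasks on two VMs (illustrative): VM0 runs tasks 0 and 2, VM1 runs task 1.
ms = makespan([1000.0, 2000.0, 3000.0], [0, 1, 0], [500.0, 1000.0])  # -> 8.0
```

A metaheuristic scheduler like HDE searches over the `assignment` vector, treating this makespan (possibly combined with total execution time) as the fitness to minimize.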