
    Feature selection for sky image classification based on self adaptive ant colony system algorithm

    Statistical feature extraction is typically used to obtain the important features from sky images for cloud classification. These features, however, carry noise and redundant and irrelevant components that can reduce classification accuracy and increase computation time. This paper therefore proposes a new feature selection algorithm, based on an ant colony system (ACS), to distinguish significant features from the extracted ones. The informative features are extracted from the sky images using a Gaussian smoothness standard deviation and then represented in a directed graph. In the feature selection phase, the self-adaptive ACS (SAACS) algorithm improves on ACS by enhancing the exploration mechanism so that only significant features are selected. Support vector machine, kernel support vector machine, multilayer perceptron, random forest, k-nearest neighbour, and decision tree classifiers were used to evaluate the algorithms. Four datasets are used to test the proposed model: Kiel, Singapore whole-sky imaging categories, MGC Diagnostics Corporation, and greatest common divisor. The SAACS algorithm is compared with six bio-inspired benchmark feature selection algorithms and achieved a classification accuracy of 95.64%, superior to all of them. Additionally, the Friedman test and Mann-Whitney U test are employed to statistically evaluate the efficiency of the proposed algorithms.
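    The core of an ant colony system is its pseudo-random-proportional transition rule, which balances exploiting the best-scoring feature edge against probabilistic exploration. The sketch below illustrates that rule in general form; the parameter names and data layout are illustrative assumptions, not the paper's SAACS implementation.

```python
import random

def acs_select_next(current, candidates, pheromone, heuristic,
                    q0=0.9, alpha=1.0, beta=2.0, rng=random):
    """ACS pseudo-random-proportional rule: with probability q0 pick the
    best edge greedily (exploitation); otherwise sample an edge with
    probability proportional to pheromone^alpha * heuristic^beta."""
    scores = {j: (pheromone[(current, j)] ** alpha) *
                 (heuristic[(current, j)] ** beta) for j in candidates}
    if rng.random() < q0:                      # exploitation
        return max(scores, key=scores.get)
    total = sum(scores.values())               # biased exploration
    r, acc = rng.uniform(0, total), 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j
```

A self-adaptive variant would tune q0 (or the pheromone parameters) during the run instead of keeping them fixed.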

    Binary Multi-Verse Optimization (BMVO) Approaches for Feature Selection

    Multi-Verse Optimization (MVO) is one of the newest meta-heuristic optimization algorithms; it imitates the multi-verse theory in physics and models the interaction among universes. In problem domains such as feature selection, the solutions are constrained to the binary values 0 and 1. Accordingly, this paper proposes binary versions of the MVO algorithm with two aims: first, to remove redundant and irrelevant features from the dataset, and second, to achieve better classification accuracy. The proposed binary versions use transfer functions to map the continuous MVO algorithm to its binary counterparts. For the experiments, 21 diverse datasets were used to compare the Binary MVO (BMVO) with binary versions of existing metaheuristic algorithms. The proposed BMVO approaches outperformed the alternatives in terms of both the number of features selected and classification accuracy.
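    The transfer-function mapping described above can be sketched in a few lines: an S-shaped (sigmoid) function turns each continuous position component into a probability of selecting the corresponding feature. This is a generic illustration of the technique, not the paper's exact BMVO formulation; V-shaped transfer functions are used the same way.

```python
import math
import random

def binarize_position(x, rng=random):
    """Map a continuous universe position to a 0/1 feature mask via an
    S-shaped (sigmoid) transfer function: the larger x[i], the more
    likely feature i is selected."""
    return [1 if rng.random() < 1.0 / (1.0 + math.exp(-xi)) else 0
            for xi in x]
```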

    A novel approach for estimation of above-ground biomass of sugar beet based on wavelength selection and optimized support vector machine

    Timely diagnosis of sugar beet above-ground biomass (AGB) is critical for yield prediction and optimal precision crop management. This study established an optimal quantitative prediction model of sugar beet AGB using hyperspectral data. Three experimental campaigns, in 2014, 2015 and 2018, collected ground-based hyperspectral data at three growth stages, across different sites, cultivars and nitrogen (N) application rates. A competitive adaptive reweighted sampling (CARS) algorithm was applied to select the wavelengths most sensitive to AGB. A novel modified differential evolution grey wolf optimization algorithm (MDE-GWO) was then developed, introducing a differential evolution (DE) operator and a dynamic nonlinear convergence factor into grey wolf optimization (GWO), to optimize the parameters C and gamma of a support vector machine (SVM) model for AGB prediction. The prediction performance of SVM models under the three optimization methods (GWO, DE-GWO and MDE-GWO) was examined for both the CARS-selected wavelengths and the whole spectral data. Results showed that CARS reduced the number of wavelengths by 97.4% for the rapid leaf-cluster growth stage, 97.2% for the sugar growth stage and 97.4% for the sugar accumulation stage. Models built on the CARS-selected wavelengths were more accurate than models developed on the entire spectral data. The best prediction accuracy was achieved with MDE-GWO optimization of the SVM model parameters, independent of growth stage, year, site and cultivar.
The best coefficient of determination (R²), root mean square error (RMSE) and residual prediction deviation (RPD) ranged, respectively, from 0.74 to 0.80, 46.17 to 65.68 g/m² and 1.42 to 1.97 for the rapid leaf-cluster growth stage; 0.78 to 0.80, 30.16 to 37.03 g/m² and 1.69 to 2.03 for the sugar growth stage; and 0.69 to 0.74, 40.17 to 104.08 g/m² and 1.61 to 1.95 for the sugar accumulation stage. The proposed methodology can thus be implemented for predicting sugar beet AGB with proximal hyperspectral sensors under a wide range of environmental conditions.
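    The GWO machinery that tunes the SVM parameters can be sketched in simplified form. The version below uses a cosine-shaped nonlinear convergence factor in place of the paper's MDE-GWO enhancements; the objective f stands in for cross-validated SVM error over (C, gamma), and the population size, iteration count and decay schedule are illustrative assumptions.

```python
import math
import random

def gwo_minimize(f, bounds, n_wolves=12, iters=60, rng=None):
    """Plain grey wolf optimizer with a nonlinear convergence factor.
    Each wolf moves toward the average of positions dictated by the
    three best wolves (alpha, beta, delta); the step size shrinks as
    the factor a decays nonlinearly from 2 toward 0."""
    rng = rng or random.Random(0)
    dim = len(bounds)
    wolves = [[rng.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        leaders = [list(w) for w in wolves[:3]]          # alpha, beta, delta
        a = 2.0 * math.cos(math.pi * t / (2.0 * iters))  # nonlinear 2 -> ~0
        for w in wolves:
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A, C = 2.0 * a * r1 - a, 2.0 * r2
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                lo, hi = bounds[d]
                w[d] = min(hi, max(lo, x / 3.0))
    return min(wolves, key=f)
```

The MDE-GWO of the paper additionally applies DE mutation and crossover to the pack, which this sketch omits.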

    Feature extraction and selection algorithm based on self adaptive ant colony system for sky image classification

    Sky image classification is crucial in meteorology for forecasting weather and climatic conditions. The fine-grained cloud detection and recognition (FGCDR) algorithm is used to extract colour, inside-texture and neighbour-texture features from multiple views of superpixels in sky images. However, FGCDR produces a substantial number of redundant and insignificant features. The ant colony optimisation (ACO) algorithm has been used to select feature subsets, but it suffers from premature convergence, which leads to poor subsets. Therefore, an improved feature extraction and selection for sky image classification (FESSIC) algorithm is proposed. The algorithm consists of (i) a Gaussian smoothness standard deviation method that formulates informative features within sky images; (ii) a nearest-threshold-based technique that converts the feature map into a weighted directed graph representing the relationships between features; and (iii) an ant colony system with a self-adaptive parameter technique for the local pheromone update. The performance of FESSIC was evaluated against ten benchmark image classification algorithms and six classifiers on four ground-based sky image datasets. The Friedman test is used to rank FESSIC against six benchmark feature selection algorithms, and the Mann-Whitney U test is then performed to statistically evaluate the significance of the difference between FESSIC and the second-ranked algorithm. The experimental results for the proposed algorithm are superior to the benchmark image classification algorithms in terms of similarity value on the Kiel, SWIMCAT and MGCD datasets. FESSIC achieves the best average classification accuracy with the KSVM, MLP, RF and DT classifiers, the Friedman test ranks it first for all classifiers, and the Mann-Whitney U test indicates that it is significantly better than the second-ranked benchmark algorithm for all classifiers. In conclusion, FESSIC can be utilised for image classification in various applications such as disaster management, medical diagnosis, industrial inspection, sports management, and content-based image retrieval.
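    The local pheromone update mentioned in component (iii) is, in a standard ant colony system, an evaporation of each traversed edge toward the initial pheromone level, which discourages later ants from repeating the same path. A minimal sketch follows; the self-adaptive schedule for the update strength phi is an assumption for illustration, not the FESSIC formula.

```python
def local_pheromone_update(pheromone, edge, tau0, phi):
    """ACS local update: pull the traversed edge's pheromone toward the
    initial level tau0, so subsequent ants favour unexplored edges."""
    pheromone[edge] = (1.0 - phi) * pheromone[edge] + phi * tau0
    return pheromone

def adaptive_phi(t, t_max, phi_min=0.01, phi_max=0.5):
    """One plausible self-adaptive schedule: strong evaporation pressure
    early in the run, weaker near convergence (illustrative only)."""
    return phi_max - (phi_max - phi_min) * t / t_max
```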

    Omega grey wolf optimizer (ωGWO) for optimization of overcurrent relays coordination with distributed generation

    Inverse definite minimum time (IDMT) overcurrent relays (OCRs) are among the protective devices installed in electrical power distribution networks. They detect and isolate a faulty area from the system in order to maintain the reliability and availability of the electrical supply during contingency conditions. The overall protection coordination is very complicated and cannot be satisfied using conventional methods, especially for modern distribution systems. This thesis applies a meta-heuristic algorithm, the Grey Wolf Optimizer (GWO), to minimise overcurrent relay operating times while fulfilling the inequality constraints. GWO is inspired by the hunting behaviour of grey wolves, which maintain a firm social dominance hierarchy. Comparative studies were performed between GWO and other well-known methods such as Differential Evolution (DE), Particle Swarm Optimization (PSO) and Biogeography-Based Optimization (BBO) to demonstrate the efficiency of GWO. The study then improves the original GWO exploration formula, naming the result Omega-GWO (ωGWO), to enhance the hunting ability. The ωGWO is implemented on a real distribution network with distributed generation (DG) in order to investigate the drawbacks of DG insertion on the original overcurrent relay configuration settings. The GWO algorithm is tested on four test cases, the IEEE 3-bus (six OCRs), IEEE 8-bus (14 OCRs), 9-bus (24 OCRs) and IEEE 15-bus (42 OCRs) test systems, with the normal inverse (NI) characteristic curve for all cases and the very inverse (VI) curve for selected cases to test the flexibility of the algorithm. A real distribution network in Malaysia, originally without DG, is chosen to investigate and recommend the optimal DG placement with the least negative impact on the original overcurrent coordination settings.
The simulation results establish that GWO produces promising solutions, generating the lowest operating time among the reviewed algorithms. The superiority of GWO is shown by relay operating times reduced by about 0.09 seconds and 0.46 seconds compared with DE and PSO, respectively. In addition, the computational time of GWO is faster than DE and PSO by 23 seconds and 37 seconds, respectively. Moreover, the robustness of GWO is established by a low standard deviation of 1.7142 seconds compared with BBO. The ωGWO shows an improvement of about 55% and 19% over the improved and hybrid GA-NLP and PSO-LP methods, respectively, and a 0.7% reduction in relay operating time compared with the original GWO. The investigation of DG integration shows that the scheme is robust and appropriate for future system operational and topology changes.
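    The quantity being minimised here is the standard IEC inverse-time relay characteristic: operating time as a function of the time multiplier setting (TMS) and the plug setting multiplier M (fault current over pickup current). A sketch of the NI and VI curves named in the abstract follows; in the optimisation, the sum of these times over all relays is minimised subject to coordination-margin constraints.

```python
def idmt_operating_time(tms, fault_current, pickup_current, curve="NI"):
    """IEC inverse-time characteristic for an IDMT overcurrent relay:
    NI (normal inverse) uses 0.14 / (M^0.02 - 1), VI (very inverse)
    uses 13.5 / (M - 1), scaled by the time multiplier setting."""
    m = fault_current / pickup_current   # plug setting multiplier
    if m <= 1.0:
        raise ValueError("relay does not pick up: M must exceed 1")
    if curve == "NI":
        return tms * 0.14 / (m ** 0.02 - 1.0)
    if curve == "VI":
        return tms * 13.5 / (m - 1.0)
    raise ValueError("unsupported curve: " + curve)
```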

    Feature selection using enhanced particle swarm optimisation for classification models.

    In this research, we propose two Particle Swarm Optimisation (PSO) variants for feature selection tasks. The aim is to overcome two major shortcomings of the original PSO model: premature convergence and weak exploitation around near-optimal solutions. The first proposed variant incorporates four key operations: a modified PSO operation with rectified personal and global best signals, spiral-search-based local exploitation, Gaussian-distribution-based swarm leader enhancement, and mirroring and mutation operations for worst-solution improvement. The second proposed model enhances the first through four new strategies: an adaptive exemplar breeding mechanism incorporating multiple optimal signals, nonlinear-function-oriented search coefficients, exponential and scattering schemes for the swarm leader, and worst-solution enhancement. In comparison with a set of 15 classical and advanced search methods, the proposed models show statistically superior discriminative feature selection across 13 data sets.
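    Both variants build on the canonical PSO velocity and position update, which the rectified best signals and spiral exploitation then replace or augment. The sketch below shows that baseline step; the inertia and acceleration values are common defaults, not the paper's settings.

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.7, c1=1.5, c2=1.5, rng=random):
    """Canonical PSO update: each velocity blends its previous value
    (inertia w) with attraction toward the particle's personal best
    (c1) and the swarm's global best (c2); positions then move by the
    new velocity."""
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            v[d] = (w * v[d]
                    + c1 * rng.random() * (pbest[i][d] - x[d])
                    + c2 * rng.random() * (gbest[d] - x[d]))
            x[d] += v[d]
    return positions, velocities
```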

    A scattering and repulsive swarm intelligence algorithm for solving global optimization problems

    The firefly algorithm (FA), as a metaheuristic search method, is useful for solving diverse optimization problems. However, FA struggles with high-dimensional optimization problems, and its random movement has a high likelihood of becoming trapped in local optima. In this research, we propose three improved algorithms, i.e., the Repulsive Firefly Algorithm (RFA), the Scattering Repulsive Firefly Algorithm (SRFA), and Enhanced SRFA (ESRFA), to mitigate the premature convergence problem of the original FA model. RFA adopts a repulsive-force strategy to accelerate fireflies (i.e. solutions) away from unpromising search regions, in order to reach global optimality in fewer iterations. SRFA employs a scattering mechanism along with the repulsive-force strategy to divert weak neighbouring solutions to new search regions, in order to increase global exploration. Motivated by the survival tactics of hawk-moths, ESRFA incorporates a hovering-driven attractiveness operation, an exploration-driven evading mechanism, and a learning scheme based on the historical best experience in the neighbourhood to further enhance SRFA. Standard and CEC2014 benchmark optimization functions are used to evaluate the proposed FA-based models. The empirical results indicate that ESRFA, SRFA and RFA significantly outperform the original FA model, a number of state-of-the-art FA variants, and other swarm-based algorithms, including Simulated Annealing, Cuckoo Search, Particle Swarm, Bat Swarm, Dragonfly, and Ant-Lion Optimization, on diverse challenging benchmark functions.
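    The FA move that the repulsive and scattering variants modify is the standard attraction step: a firefly moves toward a brighter neighbour with attractiveness that decays with squared distance, plus a random walk. A minimal sketch, with default coefficients chosen for illustration:

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2, rng=random):
    """Standard FA update of firefly i toward brighter firefly j:
    attractiveness beta0 * exp(-gamma * r^2) pulls xi toward xj, and
    an alpha-scaled random term keeps the search stochastic."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            for a, b in zip(xi, xj)]
```

A repulsive variant would flip the sign of the attraction term for unpromising neighbours, pushing solutions out of crowded regions instead of into them.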

    Evolving CNN-LSTM Models for Time Series Prediction Using Enhanced Grey Wolf Optimizer

    In this research, we propose an enhanced Grey Wolf Optimizer (GWO) for designing evolving Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) networks for time series analysis. To overcome the likelihood of stagnation at local optima and the slow convergence rate of the classical GWO algorithm, the newly proposed variant incorporates four distinctive search mechanisms: a nonlinear exploration scheme for dynamic search-territory adjustment, a chaotic leadership dispatching strategy among the dominant wolves, a rectified spiral local exploitation action, and probability-distribution-based leader enhancement. The evolving CNN-LSTM models are subsequently devised using the proposed GWO variant, with the network topology and learning hyperparameters optimized for time series prediction and classification tasks. Evaluated on a number of benchmark problems, the proposed GWO-optimized CNN-LSTM models produce statistically significant improvements over several classical search methods and advanced GWO and Particle Swarm Optimization variants. Compared with the baseline methods, the CNN-LSTM networks devised by the proposed GWO variant offer better representational capacity, capturing not only the vital feature interactions but also the sophisticated dependencies in complex temporal contexts of time-series tasks.
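    Evolving a network with a metaheuristic requires decoding each continuous wolf position into concrete architecture and training choices. The mapping below is a hypothetical example of such an encoding; the hyperparameter set and search ranges are assumptions for illustration, not the paper's exact scheme.

```python
def decode_hyperparams(position):
    """Decode a GWO position vector (components in [0, 1]) into
    CNN-LSTM design choices: conv filters, kernel size, LSTM width,
    and a log-scaled learning rate."""
    filters     = int(round(8 + position[0] * (128 - 8)))    # conv filters
    kernel_size = int(round(2 + position[1] * (7 - 2)))      # conv kernel
    lstm_units  = int(round(16 + position[2] * (256 - 16)))  # LSTM width
    lr          = 10 ** (-4 + position[3] * 3)               # 1e-4 .. 1e-1
    return {"filters": filters, "kernel_size": kernel_size,
            "lstm_units": lstm_units, "learning_rate": lr}
```

The optimizer's fitness function would then train a CNN-LSTM with these settings and return its validation loss.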

    Computational Optimizations for Machine Learning

    The present book contains the 10 articles finally accepted for publication in the Special Issue “Computational Optimizations for Machine Learning” of the MDPI journal Mathematics, which cover a wide range of topics connected to the theory and applications of machine learning, neural networks and artificial intelligence. These topics include, among others, the various classes of machine learning, such as supervised, unsupervised and reinforcement learning, as well as deep neural networks, convolutional neural networks, GANs, decision trees, linear regression, SVM, k-means clustering, Q-learning, temporal difference, deep adversarial networks and more. It is hoped that the book will be interesting and useful both to those developing mathematical algorithms and applications in the domain of artificial intelligence and machine learning, and to those with the appropriate mathematical background who wish to become familiar with recent advances in the computational optimization mathematics of machine learning, which has nowadays permeated almost all sectors of human life and activity.

    Evolutionary Computation 2020

    Intelligent optimization is based on the mechanisms of computational intelligence: refining a suitable feature model, designing an effective optimization algorithm, and then obtaining an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency and robust optimization performance. They have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, whale optimization algorithm, differential evolution algorithm, and particle swarm optimization. Studies in this arena have also produced breakthroughs in solving complex problems, including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvements and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.