7 research outputs found

    Porównanie wybranych algorytmów wilczego stada stosowanych w rozwiązaniach problemów optymalizacji [Comparison of selected wolf pack algorithms applied to solving optimization problems]

    Get PDF
    Optimization algorithms have gained recognition as a fast and reliable way of solving optimization problems. Recently, wolves have increasingly served as the inspiration for new algorithms and have featured in projects that apply them. In this paper, six selected algorithms are described. They were implemented in R and compared on six benchmark functions. The results of thirty runs on each function are reported as the mean score, the standard deviation of the score, the mean run time, and the standard deviation of the run time. Additionally, convergence plots on two of the benchmark functions are presented. The results obtained often differed from those reported in the original publications, but the performance of some of the algorithms was better than or comparable to PSO[1], DE[2], and GA[3]. The best wolf algorithm was found to be the Grey Wolf Optimizer[4].
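    The abstract's best performer, the Grey Wolf Optimizer, is compact enough to sketch directly. The snippet below is a minimal illustration in Python, not the paper's R implementation; the sphere benchmark, population size, iteration budget and bounds are all assumptions for the example. GWO ranks the pack each iteration, takes the three fittest wolves (alpha, beta, delta) as guides, and moves every wolf toward the average of three positions suggested by them, with the exploration coefficient `a` shrinking linearly from 2 to 0:

    ```python
    import numpy as np

    def sphere(x):
        # Classic benchmark: minimum 0 at the origin.
        return np.sum(x ** 2)

    def gwo(obj, dim=5, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(lb, ub, (n_wolves, dim))
        for t in range(iters):
            fitness = np.apply_along_axis(obj, 1, X)
            order = np.argsort(fitness)
            # Three best wolves lead the pack (copied so updates don't alias).
            alpha, beta, delta = (X[order[k]].copy() for k in range(3))
            a = 2.0 - 2.0 * t / iters  # decreases linearly 2 -> 0
            for i in range(n_wolves):
                new = np.zeros(dim)
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(dim), rng.random(dim)
                    A = 2.0 * a * r1 - a      # exploration/exploitation coefficient
                    C = 2.0 * r2
                    D = np.abs(C * leader - X[i])
                    new += leader - A * D
                X[i] = np.clip(new / 3.0, lb, ub)  # average of the three proposals
        fitness = np.apply_along_axis(obj, 1, X)
        best = X[np.argmin(fitness)]
        return best, obj(best)

    best, val = gwo(sphere)
    ```

    On a smooth unimodal benchmark like the sphere function, this converges close to the optimum well within the 200-iteration budget, which matches the abstract's finding that GWO was the strongest of the wolf algorithms tested.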

    How meta-heuristic algorithms contribute to deep learning in the hype of big data analytics

    Get PDF
    Deep learning (DL) is one of the fastest-emerging branches of contemporary machine learning; it mimics the cognitive patterns of the animal visual cortex to learn new abstract features automatically through deep, hierarchical layers. DL is believed to be a suitable tool for extracting insights from the very large volumes of so-called big data. Nevertheless, one of the three "V"s of big data is velocity, which implies that learning has to be incremental as data accumulate rapidly; DL must therefore be both fast and accurate. By design, DL extends the feed-forward artificial neural network with many hidden layers of neurons, yielding the deep neural network (DNN). Training a DNN is inefficient because of the very long training time required. Obtaining the most accurate DNN within a reasonable run time is a challenge, given the potentially many parameters in the DNN model configuration and the high dimensionality of the feature space in the training dataset. Meta-heuristics have a history of successfully optimizing machine learning models. How well meta-heuristics can be used to optimize DL in the context of big data analytics is the question we ponder in this paper. As a position paper, we review the recent advances in applying meta-heuristics to DL, discuss their pros and cons, and point out feasible research directions for bridging the gaps between meta-heuristics and DL.
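    The core idea the abstract describes, treating DNN configuration as a black-box objective for a meta-heuristic, can be sketched in a few lines. The example below uses particle swarm optimization (PSO) to tune two hyperparameters in log10 space; `val_loss` is a hypothetical stand-in for a network's measured validation loss (a real setup would train and evaluate the DNN there), and the bounds and PSO coefficients are assumptions, not values from the paper:

    ```python
    import numpy as np

    # Hypothetical stand-in for a DNN validation-loss surface over
    # (log10 learning rate, log10 L2 penalty); in practice this function
    # would train the network and return its validation loss.
    def val_loss(p):
        return (p[0] + 2.0) ** 2 + (p[1] + 4.0) ** 2

    def pso(obj, lb, ub, n=15, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
        rng = np.random.default_rng(seed)
        lb, ub = np.asarray(lb, float), np.asarray(ub, float)
        X = rng.uniform(lb, ub, (n, lb.size))   # particle positions
        V = np.zeros_like(X)                    # particle velocities
        P = X.copy()                            # personal best positions
        pbest = np.apply_along_axis(obj, 1, X)  # personal best values
        g = P[np.argmin(pbest)].copy()          # global best position
        for _ in range(iters):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
            X = np.clip(X + V, lb, ub)
            f = np.apply_along_axis(obj, 1, X)
            better = f < pbest
            P[better], pbest[better] = X[better], f[better]
            g = P[np.argmin(pbest)].copy()
        return g, obj(g)

    best_hp, loss = pso(val_loss, lb=[-5.0, -6.0], ub=[-1.0, -2.0])
    # best_hp approaches (-2, -4): learning rate ~1e-2, L2 penalty ~1e-4
    ```

    The expensive part in real big-data settings is that every fitness evaluation is a full (or partial) training run, which is exactly why the paper's question of making meta-heuristic search sample-efficient matters.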

    Feature extraction and selection algorithm based on self adaptive ant colony system for sky image classification

    Get PDF
    Sky image classification is crucial in meteorology for forecasting weather and climatic conditions. The fine-grained cloud detection and recognition (FGCDR) algorithm is used to extract colour, inside-texture and neighbour-texture features from multiple views of superpixel sky images. However, FGCDR produces a substantial number of redundant and insignificant features. The ant colony optimisation (ACO) algorithm has been used to select feature subsets, but it suffers from premature convergence, which leads to poor feature subsets. Therefore, an improved feature extraction and selection for sky image classification (FESSIC) algorithm is proposed. This algorithm consists of (i) a Gaussian-smoothness standard-deviation method that formulates informative features within sky images; (ii) a nearest-threshold-based technique that converts the feature map into a weighted directed graph representing the relationships between features; and (iii) an ant colony system with a self-adaptive parameter technique for the local pheromone update. The performance of FESSIC was evaluated against ten benchmark image classification algorithms and six classifiers on four ground-based sky image datasets. The Friedman test result is presented for the performance ranks of six benchmark feature selection algorithms and the FESSIC algorithm. The Mann-Whitney U test is then performed to statistically evaluate the significance of the difference between the second-ranked algorithm and FESSIC. The experimental results for the proposed algorithm are superior to the benchmark image classification algorithms in terms of similarity value on the Kiel, SWIMCAT and MGCD datasets. FESSIC outperforms the other algorithms in average classification accuracy for the KSVM, MLP, RF and DT classifiers. The Friedman test shows that FESSIC ranks first for all classifiers. Furthermore, the Mann-Whitney U test indicates that FESSIC is significantly better than the second-ranked benchmark algorithm for all classifiers. In conclusion, FESSIC can be utilised for image classification in various applications such as disaster management, medical diagnosis, industrial inspection, sports management, and content-based image retrieval.
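    The ACO-for-feature-selection idea underlying component (iii) can be illustrated on a toy problem. The sketch below is not the paper's FESSIC algorithm: `score` is a hypothetical stand-in for classifier accuracy on a feature subset, `informative` is the assumed ground truth of the toy problem, and the colony sizes, evaporation rate and size penalty are arbitrary. Each ant samples a subset with probability proportional to per-feature pheromone; the best subset found so far reinforces its features, and evaporation keeps the search from freezing:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_features = 10
    informative = {0, 3, 7}  # assumed ground truth for the toy score

    def score(subset):
        # Stand-in for classifier accuracy: reward informative features,
        # penalise subset size (FESSIC would train a real classifier here).
        hits = len(informative & set(subset))
        return hits / len(informative) - 0.05 * len(subset)

    tau = np.ones(n_features)            # one pheromone value per feature
    best_subset, best_score = set(), -np.inf
    for _ in range(60):                  # colony iterations
        for _ in range(10):              # ants per iteration
            k = int(rng.integers(2, 6))  # subset size between 2 and 5
            subset = rng.choice(n_features, size=k, replace=False,
                                p=tau / tau.sum())
            s = score(subset)
            if s > best_score:
                best_subset, best_score = {int(x) for x in subset}, s
        tau *= 0.9                       # pheromone evaporation
        for feat in best_subset:         # reinforce the best subset's features
            tau[feat] += max(best_score, 0.1)
    ```

    The premature-convergence problem the abstract mentions is visible even here: aggressive reinforcement of one early subset can starve other features of pheromone, which is what a self-adaptive local pheromone update is meant to counteract.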

    Pertanika Journal of Science & Technology

    Get PDF

    Gaussian Guided Self-Adaptive Wolf Search Algorithm Based on Information Entropy Theory

    No full text
    Nowadays, swarm intelligence algorithms are becoming increasingly popular for solving many optimization problems. The Wolf Search Algorithm (WSA) is a contemporary semi-swarm intelligence algorithm designed to solve complex optimization problems, and it has demonstrated its capability especially on large-scale problems. However, it inherits a weakness common to swarm intelligence algorithms: its performance depends heavily on the chosen values of its control parameters. In 2016, we published the Self-Adaptive Wolf Search Algorithm (SAWSA), which offers a simple solution to this adaptation problem. As a very simple scheme, the original SAWSA adaptation is based on random guesses, which is unstable and naive. In this paper, building on the SAWSA, we investigate the WSA search behaviour more deeply. A new parameter-guided updater, a Gaussian-guided parameter control mechanism based on information entropy theory, is proposed as an enhancement of the SAWSA, and the heuristic updating function is improved. Simulation experiments for the new method, denoted the Gaussian-Guided Self-Adaptive Wolf Search Algorithm (GSAWSA), validate its increased performance in comparison to the standard WSA and other prevalent swarm algorithms.
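    To make the parameter-sensitivity point concrete, the sketch below shows a plain WSA loop. It is a minimal illustration, not the authors' implementation: the visual radius `r`, step size `s` and escape probability `pa` are exactly the control parameters that the self-adaptive variants (SAWSA/GSAWSA) tune at run time, and the values used here, like the sphere benchmark, are assumptions for the example:

    ```python
    import numpy as np

    def sphere(x):
        return np.sum(x ** 2)

    def wsa(obj, dim=5, n=20, iters=300, r=1.0, s=0.5, pa=0.25,
            lb=-5.0, ub=5.0, seed=3):
        rng = np.random.default_rng(seed)
        X = rng.uniform(lb, ub, (n, dim))
        f = np.apply_along_axis(obj, 1, X)
        best_x, best_f = X[np.argmin(f)].copy(), f.min()
        for _ in range(iters):
            for i in range(n):
                # Prey search: greedy random step within the visual radius.
                cand = np.clip(X[i] + rng.uniform(-r, r, dim), lb, ub)
                if obj(cand) < f[i]:
                    X[i], f[i] = cand, obj(cand)
                # Move towards the fittest better wolf within visual range.
                dist = np.linalg.norm(X - X[i], axis=1)
                peers = np.where((dist < r) & (f < f[i]))[0]
                if peers.size:
                    j = peers[np.argmin(f[peers])]
                    X[i] = np.clip(X[i] + s * (X[j] - X[i]), lb, ub)
                    f[i] = obj(X[i])
                if f[i] < best_f:
                    best_x, best_f = X[i].copy(), f[i]
                # Escape a possible local optimum with probability pa.
                if rng.random() < pa:
                    X[i] = np.clip(X[i] + rng.normal(0.0, r, dim), lb, ub)
                    f[i] = obj(X[i])
        return best_x, best_f

    best, val = wsa(sphere)
    ```

    A fixed `r` that works at the start of a run is too coarse near convergence, and a fixed `pa` wastes evaluations on escapes once the pack has settled; this is the dependence on control parameters that the Gaussian-guided, entropy-based adaptation of GSAWSA is designed to remove.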