7,212 research outputs found

    A New Optimization Algorithm Based on Search and Rescue Operations

    Full text link
    [EN] In this paper, a new optimization algorithm called the search and rescue optimization algorithm (SAR) is proposed for solving single-objective continuous optimization problems. SAR is inspired by the explorations carried out by humans during search and rescue operations. The performance of SAR was evaluated on fifty-five optimization functions, including a set of classic benchmark functions and a set of modern CEC 2013 benchmark functions from the literature. The obtained results were compared with those of twelve optimization algorithms, including well-known optimization algorithms, recent variants of GA, DE, CMA-ES, and PSO, and recent metaheuristic algorithms. The Wilcoxon signed-rank test was used for some of the comparisons, and the convergence behavior of SAR was investigated. The statistical results indicated that SAR is highly competitive with the compared algorithms. In addition, in order to evaluate the application of SAR to real-world optimization problems, it was applied to three engineering design problems, and the results revealed that SAR is able to find more accurate solutions with fewer function evaluations than the other existing algorithms. Thus, the proposed algorithm can be considered an efficient optimization method for real-world optimization problems. This study was partially supported by the Spanish Research Projects TIN2016-80856-R and TIN2015-65515-C4-1-R.
    Shabani, A.; Asgarian, B.; Gharebaghi, S. A.; Salido Gregorio, M. A.; Giret Boggino, A. S. (2019). A New Optimization Algorithm Based on Search and Rescue Operations. Mathematical Problems in Engineering, 2019:1-23. https://doi.org/10.1155/2019/2482543
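    As a hedged illustration of the statistical comparison described above, the short Python sketch below applies the Wilcoxon signed-rank test (scipy.stats.wilcoxon) to paired best-fitness values of two algorithms on the same benchmark functions; the numbers are invented placeholders, not results from the paper.

        # Illustrative only: paired best-fitness values of two optimizers on the
        # same benchmark functions (placeholder numbers, not data from the paper).
        from scipy.stats import wilcoxon

        sar_results   = [1.2e-2, 3.4e-5, 1.8, 0.45, 2.1e-3, 0.97, 5.2]
        other_results = [2.0e-2, 6.1e-5, 2.3, 0.44, 3.8e-3, 1.10, 6.0]

        # Two-sided test of the null hypothesis that the paired differences
        # are symmetric about zero.
        stat, p_value = wilcoxon(sar_results, other_results)
        print(f"W = {stat:.3f}, p = {p_value:.4f}")
        if p_value < 0.05:
            print("The difference is statistically significant at the 5% level.")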

    Metaheuristic Algorithms for Convolution Neural Network

    Get PDF
    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have been used to solve optimization problems across science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolutional neural networks (CNN), a well-known deep learning method, are still rarely investigated. Deep learning is a type of machine learning whose aim is to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on the MNIST and CIFAR classification datasets was evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods increase computation time, they also improve accuracy (by up to 7.14 percent). Comment: Article ID 1537325, 13 pages. Received 29 January 2016; Revised 15 April 2016; Accepted 10 May 2016. Academic Editor: Martin Hagan. Computational Intelligence and Neuroscience, Volume 2016 (2016), Hindawi Publishing.
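    As a minimal sketch of one of the three approaches above, the following generic simulated-annealing loop perturbs a flat weight vector to reduce a validation loss; the loss callable, perturbation scale, and cooling schedule are assumptions for illustration, not the specific strategy proposed in the paper.

        import math
        import random

        def simulated_annealing(weights, loss, steps=1000, t0=1.0, alpha=0.995, sigma=0.01):
            """Generic SA over a flat weight vector; 'loss' is any callable that
            returns the network's validation loss for a weight vector (assumed)."""
            current, current_loss = list(weights), loss(weights)
            best, best_loss = list(current), current_loss
            t = t0
            for _ in range(steps):
                candidate = [w + random.gauss(0.0, sigma) for w in current]
                cand_loss = loss(candidate)
                # Always accept improvements; accept worse moves with Boltzmann probability.
                if cand_loss < current_loss or random.random() < math.exp((current_loss - cand_loss) / t):
                    current, current_loss = candidate, cand_loss
                    if cand_loss < best_loss:
                        best, best_loss = list(candidate), cand_loss
                t *= alpha  # geometric cooling schedule (assumption)
            return best, best_loss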

    A Global Optimisation Toolbox for Massively Parallel Engineering Optimisation

    Full text link
    A software platform for global optimisation, called PaGMO, has been developed within the Advanced Concepts Team (ACT) at the European Space Agency and was recently released as an open-source project. PaGMO is built to tackle high-dimensional global optimisation problems, and it has been successfully used to find solutions to real-life engineering problems, among which are the preliminary design of interplanetary spacecraft trajectories - both chemical (including multiple flybys and deep-space maneuvers) and low-thrust (limited, at the moment, to single-phase trajectories) - the inverse design of nano-structured radiators, and the design of non-reactive controllers for planetary rovers. Featuring an arsenal of global and local optimisation algorithms (including genetic algorithms, differential evolution, simulated annealing, particle swarm optimisation, compass search, improved harmony search, and various interfaces to libraries for local optimisation such as SNOPT, IPOPT, GSL and NLopt), PaGMO is at its core a C++ library which employs an object-oriented architecture providing a clean and easily extensible optimisation framework. Adoption of multi-threaded programming ensures the efficient exploitation of modern multi-core architectures and allows for a straightforward implementation of the island-model paradigm, in which multiple populations of candidate solutions asynchronously exchange information in order to speed up and improve the optimisation process. In addition to the C++ interface, PaGMO's capabilities are exposed to the high-level language Python, so that it is possible to easily use PaGMO in an interactive session and take advantage of the numerous scientific Python libraries available. Comment: To be presented at ICATT 2010: International Conference on Astrodynamics Tools and Techniques.
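    The island-model paradigm mentioned above can be illustrated with a small, library-free Python sketch: several populations evolve independently and periodically migrate their best individual to a neighbour in a ring. This is a conceptual sketch under assumed settings, not PaGMO's actual API.

        import random

        def sphere(x):  # toy objective used for illustration (assumption)
            return sum(v * v for v in x)

        def evolve(pop, f, sigma=0.1):
            """One crude mutation-selection step; a stand-in for any real algorithm."""
            elite = sorted(pop, key=f)[: len(pop) // 2]
            children = [[v + random.gauss(0, sigma) for v in random.choice(elite)] for _ in elite]
            return elite + children

        dim, n_islands, epochs = 5, 4, 50
        pops = [[[random.uniform(-5, 5) for _ in range(dim)] for _ in range(20)]
                for _ in range(n_islands)]

        for _ in range(epochs):
            pops = [evolve(p, sphere) for p in pops]
            # Ring migration: each island receives the current best of its predecessor.
            bests = [min(p, key=sphere) for p in pops]
            for i, p in enumerate(pops):
                p[-1] = list(bests[(i - 1) % n_islands])

        print(min(sphere(min(p, key=sphere)) for p in pops))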

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Get PDF
    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practices, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.
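    As a hedged sketch of the weight-optimization viewpoint discussed above, the example below trains a tiny one-hidden-layer FNN on a toy regression task with a (1+lambda) evolution strategy instead of backpropagation; the network size, task, and mutation scale are assumptions chosen for illustration.

        import math
        import random

        HIDDEN = 4

        def forward(weights, x):
            """Tiny 1-input, 1-output FNN with one tanh hidden layer; weights is flat."""
            w1, b1 = weights[:HIDDEN], weights[HIDDEN:2 * HIDDEN]
            w2, b2 = weights[2 * HIDDEN:3 * HIDDEN], weights[3 * HIDDEN]
            h = [math.tanh(w1[i] * x + b1[i]) for i in range(HIDDEN)]
            return sum(w2[i] * h[i] for i in range(HIDDEN)) + b2

        data = [(x / 10.0, math.sin(x / 10.0)) for x in range(-30, 31)]  # toy task

        def mse(weights):
            return sum((forward(weights, x) - y) ** 2 for x, y in data) / len(data)

        # (1+lambda) evolution strategy on the weights: a simple metaheuristic
        # alternative to gradient descent (illustrative, not from the article).
        parent = [random.uniform(-1, 1) for _ in range(3 * HIDDEN + 1)]
        for _ in range(300):
            children = [[w + random.gauss(0, 0.1) for w in parent] for _ in range(10)]
            parent = min(children + [parent], key=mse)
        print("final MSE:", mse(parent))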

    Fitness Landscape-Based Characterisation of Nature-Inspired Algorithms

    Full text link
    A significant challenge in nature-inspired algorithmics is the identification of specific characteristics of problems that make them harder (or easier) to solve using specific methods. The hope is that, by identifying these characteristics, we may more easily predict which algorithms are best suited to problems sharing certain features. Here, we approach this problem using fitness landscape analysis. Techniques already exist for measuring the "difficulty" of specific landscapes, but these are often designed solely with evolutionary algorithms in mind and are generally specific to discrete optimisation. In this paper we develop an approach for comparing a wide range of continuous optimisation algorithms. Using a fitness landscape generation technique, we compare six different nature-inspired algorithms and identify which methods perform best on landscapes exhibiting specific features. Comment: 10 pages, 1 figure, submitted to the 11th International Conference on Adaptive and Natural Computing Algorithms.
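    One widely used landscape measure of the kind discussed above is the autocorrelation of fitness along a random walk (high values suggest a smooth landscape, low values a rugged one). The sketch below computes it for an assumed Rastrigin-style test function; it illustrates the general idea, not the paper's specific landscape-generation technique.

        import math
        import random

        def fitness(x):  # Rastrigin-style test landscape (assumption)
            return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

        def random_walk(steps=2000, dim=5, step_size=0.05):
            x = [random.uniform(-5, 5) for _ in range(dim)]
            values = []
            for _ in range(steps):
                x = [min(5, max(-5, xi + random.uniform(-step_size, step_size))) for xi in x]
                values.append(fitness(x))
            return values

        def autocorrelation(values, lag=1):
            """Lag-k autocorrelation of the fitness series along the walk."""
            mean = sum(values) / len(values)
            var = sum((v - mean) ** 2 for v in values)
            cov = sum((values[i] - mean) * (values[i + lag] - mean)
                      for i in range(len(values) - lag))
            return cov / var

        print(autocorrelation(random_walk()))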

    Bat Algorithm: Literature Review and Applications

    Full text link
    The bat algorithm (BA) is a bio-inspired algorithm developed by Yang in 2010, and it has been found to be very efficient. As a result, the literature on it has expanded significantly in the last three years. This paper provides a timely review of the bat algorithm and its new variants. A wide range of diverse applications and case studies are also reviewed and summarized briefly here. Further research topics are also discussed. Comment: 10 pages.
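    For readers unfamiliar with BA, the sketch below gives a simplified version of its standard update rules (frequency-tuned velocity and position updates, a loudness-scaled local walk, decreasing loudness and increasing pulse rate); the bounds, objective, and parameter values are assumptions for illustration.

        import math
        import random

        def bat_algorithm(f, dim=5, n_bats=20, iters=200,
                          fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9):
            """Simplified standard bat algorithm: frequency-tuned updates plus a
            loudness-scaled local random walk around the current best solution."""
            lo, hi = -5.0, 5.0
            x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
            v = [[0.0] * dim for _ in range(n_bats)]
            loud = [1.0] * n_bats                       # loudness A_i
            r0 = [random.random() for _ in range(n_bats)]
            rate = list(r0)                             # pulse emission rate r_i
            best = min(x, key=f)
            for t in range(1, iters + 1):
                for i in range(n_bats):
                    freq = fmin + (fmax - fmin) * random.random()
                    v[i] = [v[i][d] + (x[i][d] - best[d]) * freq for d in range(dim)]
                    cand = [min(hi, max(lo, x[i][d] + v[i][d])) for d in range(dim)]
                    if random.random() > rate[i]:       # local walk around the best bat
                        cand = [min(hi, max(lo, best[d] + random.uniform(-1, 1) * loud[i]))
                                for d in range(dim)]
                    if f(cand) < f(x[i]) and random.random() < loud[i]:
                        x[i] = cand
                        loud[i] *= alpha                              # loudness decreases
                        rate[i] = r0[i] * (1 - math.exp(-gamma * t))  # pulse rate rises
                    if f(x[i]) < f(best):
                        best = list(x[i])
            return best

        print(bat_algorithm(lambda z: sum(c * c for c in z)))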

    DIFFERENTIAL EVOLUTION FOR OPTIMIZATION OF PID GAIN IN ELECTRICAL DISCHARGE MACHINING CONTROL SYSTEM

    Get PDF
    ABSTRACT The PID controller of the servo control system maintains the gap between the electrode and the workpiece in Electrical Discharge Machining (EDM). The capability of this controller is significant, since the machining process is a stochastic phenomenon and the physical behaviour of the discharge is unpredictable. Therefore, a Proportional Integral Derivative (PID) controller tuned using the Differential Evolution (DE) algorithm is designed and applied to an EDM servo actuator system in order to find suitable gain parameters. Simulation results verify the capability and effectiveness of the DE algorithm in searching for the best configuration of PID gains to maintain the electrode position. Keywords: servo control system; electrical discharge machining; proportional integral derivative; controller tuning; differential evolution
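    The general idea can be sketched with scipy's differential_evolution searching PID gains that minimize the integral of squared error for a simulated unit-step response; the second-order plant, gain bounds, and cost below are assumptions standing in for the EDM servo model, not the paper's actual system.

        from scipy.optimize import differential_evolution

        def ise_cost(gains, dt=0.001, t_end=2.0):
            """Integral of squared error for a unit-step setpoint, with a PID
            controller driving a toy second-order plant (wn=10 rad/s, zeta=0.2)."""
            kp, ki, kd = gains
            y = y_dot = 0.0                 # plant state
            integral = prev_err = 0.0
            ise = 0.0
            for _ in range(int(t_end / dt)):
                err = 1.0 - y
                if abs(err) > 1e6:          # guard against unstable gain sets
                    return 1e9
                integral += err * dt
                deriv = (err - prev_err) / dt
                u = kp * err + ki * integral + kd * deriv
                y_ddot = 100.0 * u - 4.0 * y_dot - 100.0 * y  # y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u
                y_dot += y_ddot * dt
                y += y_dot * dt
                prev_err = err
                ise += err * err * dt
            return ise

        bounds = [(0.0, 20.0), (0.0, 50.0), (0.0, 2.0)]   # Kp, Ki, Kd ranges (assumed)
        result = differential_evolution(ise_cost, bounds, maxiter=50, seed=1, polish=False)
        print("best gains:", result.x, "ISE:", result.fun)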