8 research outputs found

    A review on probabilistic graphical models in evolutionary computation

    Thanks to their inherent properties, probabilistic graphical models are among the prime candidates for machine learning and decision-making tasks, especially in uncertain domains. Their capabilities of representation, inference, and learning, if used effectively, can greatly help to build intelligent systems that act appropriately in different problem domains. Evolutionary computation is one such discipline that has employed probabilistic graphical models to improve the search for optimal solutions to complex problems. This paper shows how probabilistic graphical models have been used in evolutionary algorithms to improve their performance on complex problems. Specifically, we survey probabilistic model-building evolutionary algorithms, known as estimation of distribution algorithms (EDAs), and compare different methods for probabilistic modeling in these algorithms.
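
    The model-building loop that these algorithms share can be sketched with a minimal univariate EDA (in the spirit of UMDA) on the OneMax problem; the problem choice, population sizes, and probability clamping below are illustrative assumptions rather than details taken from the survey.

```python
import random

def umda(fitness, n_bits, pop_size=100, n_select=50, generations=50, seed=0):
    """Minimal univariate EDA: instead of crossover and mutation, estimate
    a per-bit probability vector from the selected individuals and sample
    the next population from it."""
    rng = random.Random(seed)
    probs = [0.5] * n_bits          # initial model: every bit is 1 w.p. 0.5
    best = None
    for _ in range(generations):
        # Sample a population from the current probabilistic model.
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        # Re-estimate the model from the most promising individuals.
        selected = pop[:n_select]
        probs = [sum(ind[i] for ind in selected) / n_select
                 for i in range(n_bits)]
        # Clamp the marginals away from 0/1 to keep some diversity.
        probs = [min(max(p, 0.05), 0.95) for p in probs]
    return best

# OneMax: maximize the number of ones; `sum` is its fitness function.
solution = umda(sum, n_bits=20)
```

    The more expressive EDAs covered by the survey replace the independent per-bit marginals with trees, Markov networks, or Bayesian networks estimated from the same selected set.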

    Globally multimodal problem optimization via an estimation of distribution algorithm based on unsupervised learning of Bayesian networks

    Many optimization problems are globally multimodal, i.e., they present several global optima. Unfortunately, this is a major source of difficulty for most estimation of distribution algorithms, whose effectiveness and efficiency degrade in such cases due to genetic drift. With the aim of overcoming these drawbacks for discrete globally multimodal problem optimization, this paper introduces and evaluates a new estimation of distribution algorithm based on unsupervised learning of Bayesian networks. We report satisfactory results of our experiments with symmetrical binary optimization problems.
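
    The drift problem and the clustering remedy can be illustrated with a toy stand-in: the paper learns Bayesian networks by unsupervised learning, but the underlying "one model per basin" idea already shows up with two univariate models on the symmetric TwoMax problem. The majority-bit clustering rule, parameter values, and clamping below are simplifying assumptions for illustration only.

```python
import random

def twomax(ind):
    # Symmetric fitness with two global optima: all ones and all zeros.
    ones = sum(ind)
    return max(ones, len(ind) - ones)

def niched_umda(fitness, n_bits, pop_size=200, n_select=100,
                generations=40, seed=1):
    """Keep one univariate model per niche. Fitting a single model to the
    selected individuals of a symmetric problem lets genetic drift collapse
    the search onto one optimum; per-cluster models preserve both."""
    rng = random.Random(seed)

    def sample(probs):
        return [1 if rng.random() < p else 0 for p in probs]

    models = [[0.5] * n_bits, [0.5] * n_bits]
    for _ in range(generations):
        pop = [sample(m) for m in models for _ in range(pop_size // 2)]
        selected = sorted(pop, key=fitness, reverse=True)[:n_select]
        # Crude problem-specific clustering by majority bit value (a
        # stand-in for the unsupervised model learning used in the paper).
        clusters = ([], [])
        for ind in selected:
            clusters[0 if 2 * sum(ind) >= n_bits else 1].append(ind)
        for k in (0, 1):
            if clusters[k]:
                size = len(clusters[k])
                models[k] = [
                    min(max(sum(ind[i] for ind in clusters[k]) / size,
                            0.02), 0.98)
                    for i in range(n_bits)
                ]
    # Best of 50 fresh samples from each converged model.
    return [max((sample(m) for _ in range(50)), key=fitness) for m in models]

best_ones, best_zeros = niched_umda(twomax, n_bits=20)
```

    A single univariate model on TwoMax would instead drift toward whichever optimum happens to dominate the early selections.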

    An Analysis of a KNN Perturbation Operator: An Application to the Binarization of Continuous Metaheuristics

    Optimization methods and, in particular, metaheuristics must be constantly improved to reduce execution times, improve results, and thus address broader instances. Addressing combinatorial optimization problems is critical in the areas of operational research and engineering. In this work, a perturbation operator that uses the k-nearest neighbors (KNN) technique is proposed and studied with the aim of improving the diversification and intensification properties of metaheuristic algorithms in their binary versions. Random operators are designed to isolate the contribution of the perturbation operator. To verify the proposal, large instances of the well-known set covering problem are studied. Box plots, convergence charts, and the Wilcoxon statistical test are used to determine the operator's contribution. Furthermore, a comparison is made with metaheuristic techniques that use general binarization mechanisms, such as transfer functions or db-scan, as binarization methods. The results obtained indicate that the KNN perturbation operator significantly improves the results. The first author was supported by Grant CONICYT/FONDECYT/INICIACION/11180056.
    García, J.; Astorga, G.; Yepes, V. (2021). An Analysis of a KNN Perturbation Operator: An Application to the Binarization of Continuous Metaheuristics. Mathematics 9(3):1-20. https://doi.org/10.3390/math9030225
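
    The "transfer function" binarization mechanism that the proposed operator is compared against can be sketched in a few lines: each continuous coordinate of a metaheuristic's position is squashed into a bit probability. The S-shaped sigmoid below is the textbook choice; this is a generic illustration, not the paper's KNN operator, whose definition is given in the article.

```python
import math
import random

def s_shape_binarize(position, rng):
    """Classic S-shaped transfer function: each continuous coordinate x is
    squashed by a sigmoid, and the result is used as the probability that
    the corresponding bit is 1."""
    return [1 if rng.random() < 1.0 / (1.0 + math.exp(-x)) else 0
            for x in position]

rng = random.Random(0)
high = s_shape_binarize([6.0] * 8, rng)   # strongly positive -> mostly ones
low = s_shape_binarize([-6.0] * 8, rng)   # strongly negative -> mostly zeros
```

    V-shaped variants (e.g., based on |tanh x|) flip the current bit with the mapped probability instead of fixing it directly; both families are among the general mechanisms the article benchmarks against.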

    Regularized model learning in EDAs for continuous and multi-objective optimization

    Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), determining their behavior and performance in optimization. Regularization is a well-known statistical technique for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is one such technique, with the appealing variable-selection property of producing sparse model estimates. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA scales logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs more robust optimization and achieves significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, where we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective–variable and objective–objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable–objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions of these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that ranking methods based on this dominance relation can yield competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets of small to medium dimensionality, using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
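
    The role of regularization in the high-dimensional setting described above can be sketched with a Gaussian EDA whose covariance estimate is shrunk toward its diagonal. This shrinkage is only a simple stand-in for the ℓ1-regularized estimators studied in the thesis, and every parameter value and benchmark below is an assumption for illustration.

```python
import numpy as np

def regularized_gaussian_eda(fitness, dim, pop_size=30, n_select=10,
                             generations=30, shrink=0.5, seed=0):
    """Gaussian EDA with a shrinkage-regularized covariance estimate.
    When the population is small relative to dim, the sample covariance is
    (near-)singular; shrinking it toward its diagonal keeps the model
    well-conditioned (a simple stand-in for l1-regularized estimation)."""
    rng = np.random.default_rng(seed)
    mean, cov = np.zeros(dim), np.eye(dim)
    best, best_f = None, np.inf
    for _ in range(generations):
        pop = rng.multivariate_normal(mean, cov, size=pop_size)
        f = np.array([fitness(x) for x in pop])
        order = np.argsort(f)                 # minimization
        if f[order[0]] < best_f:
            best, best_f = pop[order[0]], f[order[0]]
        sel = pop[order[:n_select]]
        mean = sel.mean(axis=0)
        S = np.cov(sel, rowvar=False)
        # Shrink toward the diagonal and add jitter for positive definiteness.
        cov = ((1 - shrink) * S + shrink * np.diag(np.diag(S))
               + 1e-6 * np.eye(dim))
    return best, best_f

# Shifted sphere function (illustrative benchmark), optimum at x = 0.5.
shifted_sphere = lambda x: float(np.sum((x - 0.5) ** 2))
x_best, f_best = regularized_gaussian_eda(shifted_sphere, dim=10)
```

    Without the shrinkage and jitter terms, the 10-sample covariance in 10 dimensions would be rank-deficient and sampling from the model would break down, which is exactly the failure mode regularized estimation addresses.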

    Multi-objective Bayesian Artificial Immune System: Empirical Evaluation And Comparative Analyses

    Recently, we have proposed a Multi-Objective Bayesian Artificial Immune System (MOBAIS) to deal effectively with building blocks (high-quality partial solutions coded in the solution vector) in combinatorial multi-objective problems. By replacing the mutation and cloning operators with a probabilistic model, more specifically a Bayesian network representing the joint distribution of promising solutions, MOBAIS takes into account the relationships among the variables of the problem, avoiding the disruption of already-obtained high-quality partial solutions. Preliminary results have indicated that our proposal is able to properly build the Pareto front. Motivated by this scenario, this paper formalizes the proposal more fully and investigates its usefulness on more challenging problems. In addition, an important enhancement to the Bayesian network learning was incorporated into the algorithm in order to speed up its execution. To conclude, we compare MOBAIS with state-of-the-art algorithms, taking into account quantitative aspects of the Pareto fronts found by the algorithms. MOBAIS outperforms the contenders in terms of the quality of the obtained solutions and requires computational resources that are lower than or comparable to those of the contenders. © 2009 Springer Science+Business Media B.V.
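
    The Bayesian-network sampling step that replaces mutation and cloning in MOBAIS (Henrion's probabilistic logic sampling) amounts to ancestral sampling: visit the variables in topological order and draw each one conditioned on its already-sampled parents. The toy network structure and CPT values below are made up for illustration.

```python
import random

def ancestral_sample(order, parents, cpt, rng):
    """Probabilistic logic (ancestral) sampling: visit variables in
    topological order and draw each conditioned on its sampled parents."""
    sample = {}
    for v in order:
        key = tuple(sample[p] for p in parents[v])
        sample[v] = 1 if rng.random() < cpt[v][key] else 0
    return sample

# Toy network A -> B, A -> C; structure and CPT values are illustrative.
order = ["A", "B", "C"]
parents = {"A": (), "B": ("A",), "C": ("A",)}
cpt = {
    "A": {(): 0.7},                  # P(A = 1)
    "B": {(0,): 0.1, (1,): 0.9},     # P(B = 1 | A)
    "C": {(0,): 0.2, (1,): 0.8},     # P(C = 1 | A)
}
rng = random.Random(0)
samples = [ancestral_sample(order, parents, cpt, rng) for _ in range(5000)]
```

    In an EDA such as MOBAIS, the network structure and CPTs are re-learned each generation from the selected solutions before the next population is sampled this way.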