1,918 research outputs found

    Multiobjective optimization of electromagnetic structures based on self-organizing migration

    This thesis describes a novel stochastic multi-objective optimization algorithm called MOSOMA (Multi-Objective Self-Organizing Migrating Algorithm). It is shown that MOSOMA is able to solve various types of multi-objective optimization problems (with any number of objectives, unconstrained or constrained, with a continuous or discrete decision space). The efficiency of MOSOMA is compared with other commonly used multi-objective optimization techniques on a large suite of test problems. A new procedure for computing the spread metric for problems with more than two objectives, based on finding a minimum spanning tree, is proposed. Recommended values of the parameters controlling the run of MOSOMA are derived from a sensitivity analysis. The ability of MOSOMA to solve real-life problems from electromagnetics is shown in a few examples (Yagi-Uda antenna and dielectric filter design, adaptive beam forming in the time domain…).
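
    The MST-based spread computation can be illustrated with a short sketch. This is a generic reconstruction under assumed details, not the exact MOSOMA formula: it builds a minimum spanning tree over the obtained non-dominated set in objective space and measures how unevenly the edge lengths of that tree are distributed (the function name mst_spread and the normalisation are illustrative).

    import numpy as np
    from scipy.spatial.distance import cdist
    from scipy.sparse.csgraph import minimum_spanning_tree

    def mst_spread(front):
        """front: (N, M) array of objective vectors of a non-dominated set."""
        dist = cdist(front, front)                   # pairwise Euclidean distances
        mst = minimum_spanning_tree(dist).toarray()  # MST of the complete graph (assumed formulation)
        edges = mst[mst > 0]                         # its N-1 edge lengths
        mean_edge = edges.mean()
        # A value of zero would correspond to perfectly evenly spaced solutions.
        return np.abs(edges - mean_edge).sum() / (len(edges) * mean_edge)

    # Example: a reasonably well-spread 3-objective set gives a small value
    front = np.random.dirichlet(np.ones(3), size=50)
    print(mst_spread(front))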

    A particle swarm optimization based memetic algorithm for dynamic optimization problems

    Copyright @ Springer Science + Business Media B.V. 2010. Recently, there has been increasing interest from the evolutionary computation community in dynamic optimization problems, since many real-world optimization problems are dynamic. This paper investigates a particle swarm optimization (PSO) based memetic algorithm that hybridizes PSO with a local search technique for dynamic optimization problems. Within the framework of the proposed algorithm, a local version of PSO with a ring-shaped topology is used as the global search operator and a fuzzy cognition local search method is proposed as the local search technique. In addition, a self-organized random immigrants scheme is incorporated into the proposed algorithm to further enhance its capacity to explore new peaks in the search space. An experimental study on the moving peaks benchmark problem shows that the proposed PSO-based memetic algorithm is robust and adaptable in dynamic environments. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 70431003 and Grant No. 70671020, the National Innovation Research Community Science Foundation of China under Grant No. 60521003, the National Support Plan of China under Grant No. 2006BAH02A09, the Ministry of Education, Science, and Technology of Korea through the Second Phase of the Brain Korea 21 Project in 2009, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/01, and the Hong Kong Polytechnic University Research Grants under Grant G-YH60.
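
    As a rough illustration of the global search operator described above, the sketch below implements one iteration of a local (ring-topology) PSO in which each particle follows the best personal best among itself and its two ring neighbours. The fuzzy cognition local search and the self-organized random immigrants scheme are omitted, and the inertia weight and acceleration coefficients are standard textbook values, not necessarily those used in the paper.

    import numpy as np

    def lbest_pso_step(pos, vel, pbest, pbest_fit, fitness, w=0.729, c1=1.49, c2=1.49):
        """One iteration of ring-topology (lbest) PSO; pos, vel, pbest are (N, D) arrays."""
        n = len(pos)
        # Each particle's neighbourhood best comes from itself and its two ring
        # neighbours (indices i-1, i, i+1 modulo the swarm size).
        neigh = np.stack([np.roll(pbest_fit, 1), pbest_fit, np.roll(pbest_fit, -1)])
        idx = (np.arange(n) + neigh.argmin(axis=0) - 1) % n
        lbest = pbest[idx]
        r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (lbest - pos)
        pos = pos + vel
        fit = np.apply_along_axis(fitness, 1, pos)
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        return pos, vel, pbest, pbest_fit

    # Example: minimise the sphere function with 20 particles in 5 dimensions
    sphere = lambda x: (x ** 2).sum()
    pos = np.random.uniform(-5, 5, (20, 5)); vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.apply_along_axis(sphere, 1, pos)
    for _ in range(200):
        pos, vel, pbest, pbest_fit = lbest_pso_step(pos, vel, pbest, pbest_fit, sphere)
    print(pbest_fit.min())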

    Parameters identification of unknown delayed genetic regulatory networks by a switching particle swarm optimization algorithm

    This paper presents a novel particle swarm optimization (PSO) algorithm based on Markov chains and a competitive penalized method. The algorithm is developed to solve global optimization problems, with applications in identifying the unknown parameters of a class of genetic regulatory networks (GRNs). By using an evolutionary factor, a new switching PSO (SPSO) algorithm is first proposed and analyzed, in which the velocity updating equation jumps from one mode to another according to a Markov chain and the acceleration coefficients depend on the mode switching. Furthermore, a leader competitive penalized multi-learning approach (LCPMLA) is introduced to improve the global search ability and refine the convergent solutions. The LCPMLA automatically chooses the search strategy using a learning and penalizing mechanism. The presented SPSO algorithm is compared with several well-known PSO algorithms in the experiments. It is shown that the SPSO algorithm has faster local convergence speed, higher accuracy and better reliability, resulting in a better balance between global and local search and thus good overall performance. Finally, we utilize the presented SPSO algorithm to identify not only the unknown parameters but also the coupling topology and time delay of a class of GRNs. This research was partially supported by the National Natural Science Foundation of PR China (Grant No. 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No. 200802550007), the Key Creative Project of Shanghai Education Community (Grant No. 09ZZ66), the Key Foundation Project of Shanghai (Grant No. 09JC1400700), the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant No. GR/S27658/01, the International Science and Technology Cooperation Project of China under Grant No. 2009DFA32050, an International Joint Project sponsored by the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
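
    A minimal sketch of the mode-switching idea follows, assuming a two-mode Markov chain with an illustrative transition matrix and per-mode acceleration coefficients; the published SPSO derives its modes from the evolutionary factor and uses its own parameter values, which are not reproduced here.

    import numpy as np

    # Illustrative (assumed) two-mode transition matrix and per-mode coefficients.
    P = np.array([[0.9, 0.1],            # mode 0: exploration-biased
                  [0.2, 0.8]])           # mode 1: exploitation-biased
    COEFFS = {0: (2.0, 1.0),             # (c1, c2) favouring the personal best
              1: (1.0, 2.0)}             # (c1, c2) favouring the global best

    def switching_velocity(vel, pos, pbest, gbest, mode, w=0.7, rng=np.random):
        """Velocity update whose acceleration coefficients depend on the current
        mode; the mode itself evolves as a Markov chain with transition matrix P."""
        mode = rng.choice(2, p=P[mode])  # Markov-chain jump to the next mode
        c1, c2 = COEFFS[mode]
        r1, r2 = rng.rand(*pos.shape), rng.rand(*pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        return vel, mode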

    Multi-Objective Self-Organizing Migrating Algorithm: Sensitivity on Controlling Parameters

    In this paper, we investigate the sensitivity of the novel Multi-Objective Self-Organizing Migrating Algorithm (MOSOMA) to the setting of its control parameters. The efficiency and accuracy of the search usually depend on the settings of the stochastic algorithm used, because multi-objective optimization problems are highly non-linear. The sensitivity analysis is performed on a large number of benchmark problems with different properties (the number of optimized parameters, the shape of the Pareto front, etc.). The quality of the solutions found by MOSOMA is evaluated in terms of generational distance, spread and hyper-volume error. Recommendations for proper settings of the algorithm are derived; these should help a user configure the algorithm for any multi-objective task without prior knowledge of the problem being solved.
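
    For reference, the generational distance used as one of the quality indicators can be computed as in the sketch below. This is one common formulation (distances from each obtained point to its nearest reference point, combined with a p-norm and divided by the number of points); the exact normalisation used in the paper may differ.

    import numpy as np
    from scipy.spatial.distance import cdist

    def generational_distance(front, reference, p=2):
        """front, reference: (N, M) and (R, M) arrays of objective vectors."""
        d = cdist(front, reference).min(axis=1)   # distance to nearest reference point
        return (d ** p).sum() ** (1.0 / p) / len(front)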

    On controllability of neuronal networks with constraints on the average of control gains

    Control gains play an important role in the control of a natural or technical system, since they reflect how much resource is required to optimize a certain control objective. This paper is concerned with the controllability of neuronal networks with constraints on the average value of the control gains injected into driver nodes, which is in accordance with engineering and biological practice. In order to deal with the constraints on the control gains, the controllability problem is transformed into a constrained optimization problem (COP). Introducing the constraints on the control gains unavoidably makes it substantially harder to find feasible solutions and to refine them. As such, a modified dynamic hybrid framework (MDyHF) is developed to solve this COP, based on an adaptive differential evolution and the concept of Pareto dominance. By comparing with statistical methods and several recently reported constrained optimization evolutionary algorithms (COEAs), we show that the proposed MDyHF is competitive and promising for studying the controllability of neuronal networks. Based on the MDyHF, we proceed to show the controlling regions under different levels of constraints. It is revealed that the control gains should be allocated economically when strong constraints are imposed. In addition, it is found that as the constraints become more restrictive, the driver nodes are more likely to be selected from the nodes with a large degree. The results and methods presented in this paper will provide useful insights into developing new techniques to control a realistic complex network efficiently.
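
    The evolutionary ingredients named above can be sketched as follows: a classic DE/rand/1/bin trial-vector generator plus a Pareto-dominance comparison on the pair (objective value, constraint violation). This is a generic sketch of those ingredients, not the authors' MDyHF; all parameter values and function names are illustrative.

    import numpy as np

    def de_rand_1_bin(pop, F=0.5, CR=0.9, rng=np.random):
        """Classic DE/rand/1/bin: build one trial vector per individual."""
        n, d = pop.shape
        trials = np.empty_like(pop)
        for i in range(n):
            a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            cross = rng.rand(d) < CR
            cross[rng.randint(d)] = True           # guarantee at least one mutated gene
            trials[i] = np.where(cross, mutant, pop[i])
        return trials

    def dominates(fa, va, fb, vb):
        """Pareto dominance on (objective, constraint violation): a trial vector
        replaces its parent only if it is no worse in both and better in one."""
        return (fa <= fb and va <= vb) and (fa < fb or va < vb)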

    Meta-heuristic algorithms in car engine design: a literature survey

    Meta-heuristic algorithms are often inspired by natural phenomena, including the evolution of species in Darwinian natural selection theory, ant behaviors in biology, the flocking behavior of some birds, and annealing in metallurgy. Due to their great potential in solving difficult optimization problems, meta-heuristic algorithms have found their way into automobile engine design. Different optimization problems arise in different areas of car engine management, including calibration, control systems, fault diagnosis, and modeling. In this paper, we review the state-of-the-art applications of different meta-heuristic algorithms in engine management systems. The review covers a wide range of research, including the application of meta-heuristic algorithms in engine calibration, optimizing engine control systems, engine fault diagnosis, and optimizing different parts of engines and modeling. The meta-heuristic algorithms reviewed in this paper include evolutionary algorithms, evolution strategies, evolutionary programming, genetic programming, differential evolution, estimation of distribution algorithms, ant colony optimization, particle swarm optimization, memetic algorithms, and artificial immune systems.

    ACO for continuous function optimization: a performance analysis

    The performance of meta-heuristic algorithms often depends on their parameter settings, and appropriate tuning of the underlying parameters can drastically improve a meta-heuristic's performance. The Ant Colony Optimization (ACO) algorithm, a population-based meta-heuristic inspired by the foraging behavior of ants, is no different. Fundamentally, this ACO constructs new solutions variable by variable, using Gaussian sampling around solutions selected from an archive. A comprehensive performance analysis of the underlying parameters, namely the selection strategy, the distance metric and the pheromone evaporation rate, suggests that the Roulette Wheel Selection strategy enhances the performance of the ACO due to its ability to provide non-uniformity and adequate diversity in the selection of a solution. The squared Euclidean distance metric offers better performance than the other distance metrics, and the analysis shows that the ACO is sensitive to the evaporation rate. An experimental comparison between the classical ACO and other meta-heuristics suggests that the performance of a well-tuned ACO surpasses its counterparts.
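
    The solution-construction step analysed here can be sketched under the assumption of the standard continuous-ACO (ACOR) formulation: a guiding solution is chosen from the archive by roulette-wheel selection over rank-based weights, and each variable is then sampled from a Gaussian centred on that solution. The parameter xi below stands in for the evaporation-rate-like parameter the abstract refers to; names and values are illustrative.

    import numpy as np

    def acor_construct(archive, q=0.1, xi=0.85, rng=np.random):
        """Build one new solution; archive is a (k, d) array sorted best to worst."""
        k, d = archive.shape
        ranks = np.arange(1, k + 1)
        w = np.exp(-(ranks - 1) ** 2 / (2 * (q * k) ** 2)) / (q * k * np.sqrt(2 * np.pi))
        guide = rng.choice(k, p=w / w.sum())       # roulette-wheel selection
        # Per-variable standard deviation: xi times the mean absolute distance of
        # the chosen member to the rest of the archive.
        sigma = xi * np.abs(archive - archive[guide]).sum(axis=0) / (k - 1)
        return rng.normal(archive[guide], sigma)   # Gaussian sampling per variable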

    Evolutionary population dynamics and multi-objective optimisation problems

    Griffith Sciences, School of Information and Communication Technology