
    Cellular Genetic Algorithms: Understanding the Behavior of Using Neighborhoods

    In this paper, we analyze the effect of neighborhoods on parent selection in an evolutionary algorithm. To this end, we compare a cellular genetic algorithm (cGA), which intrinsically uses the notion of neighborhood in the mating process, with a modified genetic algorithm that incorporates the concept of neighborhood into parent selection. Additionally, we analyze the neighborhood size used for parent selection, trying to discover whether a quasi-optimal size exists. The analysis is carried out both in a traditional analytical sense and from a theoretical point of view based on evolvability measures. The experimental results suggest that the neighborhood effect is important to the performance of an evolutionary algorithm and could give the cGA higher chances of success on well-known optimization problems. Regarding neighborhood size, there is evidence that a neighborhood of six, plus or minus two, individuals leads the cGA to perform more efficiently than the other sizes considered.
    Fil: Salto, Carolina. Universidad Nacional de La Pampa. Facultad de Ingeniería; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Patagonia Confluencia; Argentina
    Fil: Alba, Enrique. Universidad de Málaga; España
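    To make the neighborhood-based mating idea concrete, here is a minimal Python sketch of parent selection restricted to a local neighborhood on a toroidal cGA grid. The grid dimensions, the OneMax objective, the fixed neighborhood offsets, and the binary tournament are illustrative assumptions, not the setup used by the authors.

    ```python
    import random

    GRID_W, GRID_H = 10, 10          # toroidal grid of 100 individuals (assumed size)
    GENOME_LEN = 20

    def fitness(ind):
        return sum(ind)              # OneMax as a placeholder objective

    def neighbors(x, y, size=6):
        """Collect roughly `size` grid positions around (x, y) on a torus."""
        offsets = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (-1, -1), (1, -1), (-1, 1)]
        cells = [((x + dx) % GRID_W, (y + dy) % GRID_H) for dx, dy in offsets]
        return cells[:size]

    def select_parent(grid, x, y, size=6):
        """Binary tournament restricted to the local neighborhood."""
        a, b = random.sample(neighbors(x, y, size), 2)
        return max(grid[a], grid[b], key=fitness)

    # Usage example: initialize the grid and pick two parents for one cell.
    grid = {(x, y): [random.randint(0, 1) for _ in range(GENOME_LEN)]
            for x in range(GRID_W) for y in range(GRID_H)}
    p1 = select_parent(grid, 3, 4)
    p2 = select_parent(grid, 3, 4)
    ```

    Varying the `size` argument is one way to probe the quasi-optimal neighborhood range (around six individuals) that the paper reports.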

    Studies in particle swarm optimization technique for global optimization.

    Ph.D. thesis, University of KwaZulu-Natal, Durban, 2013. Abstract available in the digital copy. Articles found within the main body of the thesis in the print version are found at the end of the thesis in the digital version.

    Distributed and Lightweight Meta-heuristic Optimization method for Complex Problems

    The world is becoming larger and more complex every day. Resources are limited, and using them efficiently is one of the most important requirements. Finding an efficient and optimal solution to complex problems calls for practical methods. During the last decades, several optimization approaches have been presented; they can be applied to different optimization problems and achieve different performance on them. Parameters such as the type of search space can have a significant effect on the results. Of the main categories of optimization methods (deterministic and stochastic), stochastic optimization methods work more efficiently on large, complex problems than deterministic methods. In highly complex problems, however, stochastic optimization methods also have issues such as long execution times, convergence to local optima, incompatibility with distributed systems, and dependence on the type of search space. This thesis therefore presents a distributed and lightweight metaheuristic optimization method (MICGA) for complex problems, focusing on four main tracks. 1) The primary goal is to improve execution time with MICGA. 2) The proposed method increases the stability and reliability of the results by using a multi-population strategy. 3) MICGA is compatible with distributed systems. 4) Finally, MICGA is applied to different types of optimization problems with different kinds of search spaces (continuous, discrete, and order-based optimization problems). MICGA has been compared with other efficient optimization approaches. The results show that the proposed work achieves substantial improvement on the main issues of stochastic methods mentioned above.
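    As an illustration of the multi-population strategy mentioned in track 2, the following Python sketch runs several sub-populations (islands) that evolve independently and periodically exchange their best individuals in a ring. The sphere objective, population sizes, migration interval, and mutation scheme are assumptions for illustration only and do not reflect MICGA's actual design.

    ```python
    import random

    DIM, POP, ISLANDS, GENS, MIGRATE_EVERY = 10, 30, 4, 200, 20

    def sphere(x):
        return sum(v * v for v in x)       # placeholder continuous objective (minimize)

    def new_individual():
        return [random.uniform(-5.0, 5.0) for _ in range(DIM)]

    def evolve_step(pop):
        """One generation: mutate a copy of each parent, keep the better half."""
        children = [[v + random.gauss(0.0, 0.1) for v in parent] for parent in pop]
        merged = sorted(pop + children, key=sphere)
        return merged[:len(pop)]

    islands = [[new_individual() for _ in range(POP)] for _ in range(ISLANDS)]
    for gen in range(GENS):
        islands = [evolve_step(pop) for pop in islands]
        if gen % MIGRATE_EVERY == 0:
            # Ring migration: each island receives the previous island's best
            # individual, replacing its own worst one.
            bests = [min(pop, key=sphere) for pop in islands]
            for i, pop in enumerate(islands):
                pop[-1] = bests[(i - 1) % ISLANDS]

    best = min((ind for pop in islands for ind in pop), key=sphere)
    print("best sphere value:", sphere(best))
    ```

    Because each island evolves independently between migrations, this kind of structure maps naturally onto distributed execution, which is the property tracks 1 and 3 exploit.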

    An Algorithm for Evolving Protocol Constraints

    Centre for Intelligent Systems and their Applications.
    We present an investigation into the design of an evolutionary mechanism for multi-agent protocol constraint optimisation. Starting with a review of common population-based mechanisms, we discuss the properties of the mechanisms used by these search methods. We derive a novel algorithm for the optimisation of vectors of real numbers and empirically validate the efficacy of the design by comparing against well-known results from the literature. We discuss the application of an optimiser to a novel problem and remark upon the relevance of the no-free-lunch theorem. We show that the relative performance of the optimiser is strong and publish details of a new best result for the Keane optimisation problem. We apply the final algorithm to the multi-agent protocol optimisation problem and show that the design process was successful.
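    To illustrate what an evolutionary optimiser over vectors of real numbers can look like, here is a hedged Python sketch of a simple (1+1) evolution strategy with 1/5th-success-rule step-size adaptation. This is a generic textbook scheme, not the algorithm derived in the abstract; the Rastrigin objective and all parameters are assumptions.

    ```python
    import math
    import random

    DIM = 10

    def rastrigin(x):
        # Standard multimodal benchmark, used here only as a stand-in objective.
        return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

    def one_plus_one_es(iters=5000, sigma=0.5):
        parent = [random.uniform(-5.12, 5.12) for _ in range(DIM)]
        best = rastrigin(parent)
        successes = 0
        for t in range(1, iters + 1):
            child = [v + random.gauss(0.0, sigma) for v in parent]
            f = rastrigin(child)
            if f < best:                     # minimisation
                parent, best = child, f
                successes += 1
            if t % 20 == 0:                  # 1/5th success rule every 20 steps
                sigma *= 1.2 if successes / 20 > 0.2 else 0.8
                successes = 0
        return parent, best

    solution, value = one_plus_one_es()
    print("best Rastrigin value:", value)
    ```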

    Transcriptional profiling of the bax-responsive genes in Saccharomyces cerevisiae


    Hybrid PSO6 for Hard Continuous Optimization

    In our previous works, we empirically showed that a number of 6±2 informants may endow particle swarm optimization (PSO) with an optimized learning procedure compared with other combinations of informants. In this way, the new version, PSO6, which evolves new particles from six informants (neighbors), performs more accurately than other existing versions of PSO and is able to generate good particles for a longer time. Despite this advantage, PSO6 may show a certain attraction to local basins, reflected in its moderate performance on non-separable complex problems (as typically observed in PSO versions). In this paper, we incorporate a local search procedure into PSO6 with the aim of correcting this disadvantage. We compare the performance of our proposal (PSO6-Mtsls) on a set of 40 benchmark functions against that of other PSO versions, as well as against the best recent proposals in the current state of the art (with and without local search). The results support our conjecture that the (quasi-)optimally informed PSO, hybridized with local search mechanisms, reaches a high rate of success on a large number of complex (non-separable) continuous optimization functions.
    Junta de Andalucía P07-TIC-03044; Ministerio de Ciencia e Innovación TIN2011-28194; Ministerio de Ciencia e Innovación BES-2009-01876
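    To show how a PSO whose particles learn from six informants, hybridized with a local search step, might be structured, here is a minimal Python sketch. The random informant topology, the velocity coefficients, the sphere objective, and the greedy axis-wise refinement are assumptions for illustration; this is not the PSO6-Mtsls implementation.

    ```python
    import random

    DIM, SWARM, ITERS, K = 10, 30, 300, 6
    W, C1, C2 = 0.72, 1.49, 1.49

    def sphere(x):
        return sum(v * v for v in x)

    def local_search(x, step=0.05):
        """Greedy axis-wise refinement around x (stand-in for a dedicated method)."""
        best, fb = list(x), sphere(x)
        for d in range(DIM):
            for delta in (step, -step):
                trial = list(best)
                trial[d] += delta
                if sphere(trial) < fb:
                    best, fb = trial, sphere(trial)
        return best

    pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
    vel = [[0.0] * DIM for _ in range(SWARM)]
    pbest = [list(p) for p in pos]
    informants = [random.sample(range(SWARM), K) for _ in range(SWARM)]  # K = 6 informants

    for _ in range(ITERS):
        for i in range(SWARM):
            # Best previous position among this particle's six informants.
            lbest = min((pbest[j] for j in informants[i]), key=sphere)
            for d in range(DIM):
                vel[i][d] = (W * vel[i][d]
                             + C1 * random.random() * (pbest[i][d] - pos[i][d])
                             + C2 * random.random() * (lbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = list(pos[i])

    gbest = local_search(min(pbest, key=sphere))   # hybrid step applied to the best point
    print("best sphere value:", sphere(gbest))
    ```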