28 research outputs found

    Multi-Objective Archiving

    Full text link
    Most multi-objective optimisation algorithms maintain an archive, explicitly or implicitly, during their search. Such an archive can be used solely to store high-quality solutions presented to the decision maker, but in many cases it also participates in the search process (e.g., as the population in evolutionary computation). Over the last two decades, archiving, the process of comparing new solutions with previous ones and deciding how to update the archive/population, has stood as an important issue in evolutionary multi-objective optimisation (EMO). This is evidenced by constant efforts from the community on developing various effective archiving methods, ranging from conventional Pareto-based methods to more recent indicator-based and decomposition-based ones. However, the focus of these efforts has been on empirical performance comparison in terms of specific quality indicators; there is a lack of systematic study of archiving methods from a general theoretical perspective. In this paper, we conduct a systematic overview of multi-objective archiving, in the hope of paving the way to understanding archiving algorithms from a holistic perspective of theory and practice, and, more importantly, of providing guidance on how to design theoretically desirable and practically useful archiving algorithms. In doing so, we also show that archiving algorithms based on weakly Pareto-compliant indicators (e.g., the epsilon indicator), as long as they are designed properly, can achieve the same desirable theoretical properties as archivers based on Pareto-compliant indicators (e.g., the hypervolume indicator). These properties include limit-optimality, the limit form of the best possible property that a bounded archiving algorithm can have with respect to the most general form of superiority between solution sets. Comment: 21 pages, 4 figures, journal.
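As a concrete illustration of the archiving process described above, a minimal Pareto-based archiver for minimization problems can be sketched as follows (the function names `dominates` and `update_archive` are illustrative, not taken from the paper):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization): a is no worse
    in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Compare a new solution with the archive contents and decide how to
    update: reject dominated candidates, otherwise insert the candidate
    and drop any archive members it dominates."""
    if any(dominates(s, candidate) for s in archive):
        return archive  # candidate is dominated; archive unchanged
    return [s for s in archive if not dominates(candidate, s)] + [candidate]

# Example: objective vectors, both objectives minimized
arch = []
for sol in [(3, 4), (2, 5), (1, 1), (2, 2)]:
    arch = update_archive(arch, sol)
# (1, 1) dominates all other points, so only it remains in the archive
```

Real archivers additionally bound the archive size, which is where the indicator-based update rules surveyed in the paper come in.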

    Component-wise Analysis of Automatically Designed Multiobjective Algorithms on Constrained Problems

    Full text link
    The performance of multiobjective algorithms varies across problems, making it hard to develop new algorithms or apply existing ones to new problems. To simplify the development and application of new multiobjective algorithms, there has been increasing interest in designing them automatically from component parts. These automatically designed metaheuristics can outperform their human-developed counterparts. However, it is still unclear which components contribute most to their performance improvement. This study introduces a new methodology to investigate the effects of the final configuration of an automatically designed algorithm. We apply this methodology to a well-performing Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D) designed by the irace package on nine constrained problems. We then contrast the impact of the algorithm components in terms of their Search Trajectory Networks (STNs), the diversity of the population, and the hypervolume. Our results indicate that the most influential components were the restart and update strategies, with larger increments in performance and more distinct metric values. Moreover, their relative influence depends on problem difficulty: not using the restart strategy was more influential in problems where MOEA/D performs better, while the update strategy was more influential in problems where MOEA/D performs worst.
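Hypervolume, one of the metrics such component-wise comparisons rely on, has a particularly simple form in two objectives. The following is a sketch assuming minimization and a user-chosen reference point; the function name is hypothetical:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a mutually non-dominated 2-D front (minimization):
    the area dominated by the front and bounded by reference point ref."""
    # Sorted by the first objective, the second objective of a
    # non-dominated front strictly decreases along the list.
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # one horizontal slab per point
        prev_f2 = f2
    return hv

# Example: two-point front with reference point (4, 4)
print(hypervolume_2d([(1, 3), (2, 1)], (4, 4)))  # 3*1 + 2*2 = 7.0
```

A larger hypervolume means the front dominates more of the objective space, which is why it is a common yardstick when contrasting algorithm configurations.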

    MULTI-OBJECTIVE POWER SYSTEM SCHEDULING USING EVOLUTIONARY ALGORITHMS

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Data structures for non-dominated sets: implementations and empirical assessment of two decades of advances

    Get PDF
    This is the author accepted manuscript. The final version is available from ACM via the DOI in this record. Genetic and Evolutionary Computation Conference (GECCO ’20), 8-12 July 2020, Cancún, Mexico. Many data structures have been developed over the last two decades for the storage and efficient update of unconstrained sets of mutually non-dominating solutions. Typically, the original works analyse these data structures in terms of worst/average-case complexity. Often, however, other aspects, such as the rebalancing costs of underlying data structures, cache sizes, etc., can also significantly affect behaviour. Empirical performance comparison has often (but not always) been limited to run-time comparison with a basic linear list. No comprehensive comparison between the different specialised data structures proposed in the last two decades has thus far been undertaken. We take significant strides towards addressing this here. Eight data structures from the literature are implemented within the same overarching open-source Java framework. We additionally highlight and rectify some errors in published work, and offer additional efficiency gains. Run-time performances are compared and contrasted using data sequences embodying a number of different characteristics. We show that different data structures are preferable in different scenarios, and that those with the lowest big-O complexity are not always the best performing. We also find that performance profiles can vary drastically with computational architecture, in a non-linear fashion. Engineering and Physical Sciences Research Council (EPSRC); Innovate UK.
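To illustrate the kind of specialised structure such comparisons cover, here is a sketch of a sorted-list store for the two-objective case, where a non-dominated set is simultaneously ordered in both objectives and a dominance query reduces to one binary search (the class and method names are hypothetical, not from the surveyed Java framework):

```python
import bisect

class NonDomList2D:
    """Two-objective non-dominated set (minimization) kept sorted by the
    first objective; the second objective is then strictly decreasing,
    so a dominance query needs only a single binary search."""
    def __init__(self):
        self.pts = []  # sorted ascending by f1, hence descending by f2

    def add(self, p):
        """Insert p if non-dominated; evict members p (weakly) dominates.
        Returns True if p entered the set."""
        i = bisect.bisect_left(self.pts, p)
        # Among points with f1 <= p's, the nearest neighbour pts[i-1] has
        # the smallest f2; it dominates p iff its f2 is also <= p's.
        if i > 0 and self.pts[i - 1][1] <= p[1]:
            return False
        # Evict the contiguous run to the right that p dominates
        # (those points have f1 >= p's f1 and f2 >= p's f2).
        j = i
        while j < len(self.pts) and self.pts[j][1] >= p[1]:
            j += 1
        self.pts[i:j] = [p]
        return True
```

With a balanced tree in place of the Python list, both the lookup and the eviction become logarithmic amortized, which is the flavour of improvement over a plain linear list that the compared structures pursue.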

    Bio-inspired optimization algorithms for multi-objective problems

    Get PDF
    Advisor: Aurora Trinidad Ramirez Pozo. Co-advisor: Roberto Santana Hermida. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defence: Curitiba, 06/03/2017. Includes references: f. 161-72. Concentration area: Computer Science.
Abstract: Multi-Objective Problems (MOPs) are characterized by having two or more objective functions to be optimized simultaneously. In these problems, the goal is to find a set of non-dominated solutions, usually called the Pareto optimal set, whose image in the objective space is called the Pareto front. MOPs with more than three objective functions are known as Many-Objective Problems (MaOPs), and several studies indicate that the search ability of Pareto-based algorithms deteriorates severely on such problems.
The development of bio-inspired optimizers to tackle MOPs and MaOPs is a field that has been gaining attention in the community; however, there are still many opportunities to innovate. Multi-objective Particle Swarm Optimization (MOPSO) is one of the bio-inspired algorithms well suited to being modified and improved, mostly due to its simplicity, flexibility and good results. To enhance the search ability of MOPSOs, we followed two research lines. The first focuses on leader selection and archiving methods. Previous works have pointed out that these components can influence algorithm performance, but the best choice of components can be problem-dependent. An alternative is to select them dynamically by employing hyper-heuristics. By combining hyper-heuristics and MOPSO, we developed a new framework called H-MOPSO. The second research line is also based on previous work of the group on multi-swarm approaches. It takes as its base framework the iterated multi-swarm (I-Multi) algorithm, whose search procedure can be divided into a diversity search and a multi-swarm search, the latter employing clustering to split a swarm into several sub-swarms. To improve the performance of I-Multi, we explored two possibilities: the first was to investigate further the effect of different characteristics of the clustering mechanism of I-Multi; the second was to investigate alternatives to improve the convergence of each sub-swarm, such as hybridizing it with an Estimation of Distribution Algorithm (EDA). This work on EDAs increased our interest in the approach, so we followed another research line, investigating alternatives to create multi-objective versions of one of the most powerful EDAs in the literature, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). To validate our work, several empirical studies were conducted to investigate the search ability of the proposed approaches. In all studies, the investigated algorithms reached results competitive with or better than well-established algorithms from the literature. Keywords: multi-objective, estimation of distribution algorithms, particle swarm optimization, multi-swarm, hyper-heuristics
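The dynamic component selection described in this abstract can be illustrated with a simple epsilon-greedy scheme (a sketch only; H-MOPSO's actual selection mechanism may differ, and all names below are made up):

```python
import random

def select_heuristic(scores, rng, eps=0.2):
    """Epsilon-greedy hyper-heuristic selection: usually pick the
    low-level heuristic (e.g. a leader or archiving method) with the
    best running reward, but occasionally explore a random one."""
    names = list(scores)
    if rng.random() < eps:
        return rng.choice(names)          # exploration
    return max(names, key=scores.get)     # exploitation

# Hypothetical running rewards for three leader-selection methods
rng = random.Random(0)
scores = {"crowding-distance": 0.6, "sigma-leader": 0.4, "random-leader": 0.1}
picks = [select_heuristic(scores, rng) for _ in range(100)]
# The best-rewarded method is chosen most of the time
```

In a full hyper-heuristic, the scores would be updated online from the search progress each component achieves, so the preferred component can change as the problem demands.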

    Scalable multi-objective optimization

    Get PDF
    This thesis is concerned with three open issues in multi-objective optimization: (i) the development of strategies for dealing with problems with many objective functions; (ii) the comprehension and solution of the model-building issues of current MOEDAs; and (iii) the formulation of stopping criteria for multi-objective optimizers. We argue about which elements of MOEDAs should be modified in order to achieve a substantial improvement in their performance and scalability. To supply solid ground for that discussion, some other elements are discussed as well. In particular, this thesis: sketches the supporting theoretical corpus and the fundamentals of MOEA and MOEDA algorithms; analyzes the scalability issue of MOEAs from both theoretical and experimental points of view; discusses possible directions for improving MOEAs' scalability, presenting current research trends; gives reasons why EDAs can be used as a foundation for achieving a sizable improvement with regard to the scalability issue; examines the model-building issue in depth, hypothesizing on how it affects MOEDA performance; proposes a novel model-building algorithm, the model-building growing neural gas (MB-GNG), which fulfills the requirements derived from the previous discussion; and introduces a novel MOEDA, the multi-objective neural EDA, built on MB-GNG. The formulation of a strategy for stopping multi-objective optimizers became an obvious necessity as this thesis developed: the lack of an adequate stopping criterion rendered any experimentation involving many objectives rather cumbersome, so this issue had to be dealt with before proceeding with further studies.
    In this regard, the thesis: provides an updated and exhaustive state of the art on this matter; examines the properties and characteristics that a stopping criterion should exhibit; puts forward a new stopping criterion, denominated MGBM after the authors' last names, that has a small computational footprint; and experimentally validates MGBM in a set of experiments. Theoretical discussions and algorithm proposals are experimentally contrasted with current state-of-the-art approaches where required.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Many human activities are related to the design of artifacts whose characteristics, organization and/or production costs, etc., must be adjusted as efficiently as possible. This has created the need for mathematical and computational tools capable of handling such problems, which has driven the development of several interrelated research areas, such as optimization, mathematical programming, operations research, etc. Optimization can be formulated in mathematical terms as the process of searching for one or more feasible solutions corresponding to the extreme values of one or more functions. Most real-world optimization problems involve optimizing more than one function at a time. This class of problems is known as multi-objective optimization problems (MOPs). One class of MOPs is particularly attractive because of its inherent complexity: the so-called many-objective problems, which have a relatively large number of objective functions. Numerous experiments have shown that "traditional" methods fail to achieve adequate performance on them, owing to the strongly exponential relationship between the dimension of the objective set and the amount of resources required to solve the problem correctly. These problems are unintuitive in nature and, in particular, their solutions are hard for a human decision maker to visualize; nevertheless, they are quite common in practice (Stewart et al., 2008). Multi-objective optimization has received substantial attention from the evolutionary algorithms community (Coello Coello et al., 2007). However, the need for alternatives capable of dealing with many-objective problems has become evident. Estimation of distribution algorithms (EDAs) (Lozano et al., 2006) are good candidates for this task. These algorithms have been presented as a revolution in the field of evolutionary computation: they replace the application of operators inspired by natural selection with the synthesis of a statistical model, which is then sampled to generate new elements and continue the search for solutions. However, multi-objective EDAs (MOEDAs) have not lived up to the expectations created a priori. The leitmotif of this thesis can be summarized as follows: the main cause of the poor performance of MOEDAs is the machine-learning algorithms applied to build their statistical models. Existing work has taken a "black-box" approach to the model-building problem, applying existing machine-learning methods without modification, without noticing that model building for EDAs has requirements of its own, which in several cases contradict the original application context of those algorithms.
In particular, there are properties shared by most machine-learning approaches that may prevent a substantial improvement in MOEDA results: incorrect treatment of outliers in the data set; a tendency to lose population diversity; and excessive computational effort devoted to finding an optimal model. These problems, although already present in single-objective EDAs, become evident when scaling to multi-objective and, in particular, many-objective problems. Moreover, as the number of objectives increases, this situation is frequently aggravated by the consequences of the "curse of dimensionality". The treatment of outliers is a good example of how the community has overlooked this difference. In the traditional machine-learning context, extreme values are considered noisy or irrelevant data and are therefore to be avoided. In model building, however, outliers represent newly discovered regions or candidate solutions of the decision set and should therefore be explored; isolated cases should be at least as well represented by the model as those forming clusters. The main results of the thesis are structured on the basis of this reasoning; they are briefly enumerated below, together with their main references. Understanding the model-building problem in MOEDAs (Martí et al., 2010a, 2008b, 2009c): it is argued that EDAs have incorrectly assumed that model building is a traditional machine-learning problem; this hypothesis is shown experimentally. Growing Neural Gas: a viable alternative for model building (Martí et al., 2008c): the Model-Building Growing Neural Gas network (MB-GNG), a modification of Growing Neural Gas networks, is proposed; MB-GNG has the properties required to handle model building correctly. MONEDA: improving the performance of MOEDAs (Martí et al., 2008a, 2009b, 2010c): the Multi-objective Optimization Neural EDA (MONEDA) was devised to address the MOEDA problems described above and thereby improve MOEDA scalability. MONEDA uses MB-GNG for model building. Thanks to its specific model-building algorithm, its preservation of population elites and its individual-replacement mechanism, MONEDA is scalable and able to solve continuous many-objective MOPs with better performance than similar algorithms, at a lower computational cost. This proposal was nominated for the best paper award at GECCO'2008. MONEDA on high-complexity problems (Martí et al., 2009d): extensive experimentation was carried out on high-complexity problems to understand how MONEDA's characteristics produce an improvement in performance, and whether its results improve on those of other approaches. These experiments showed that MONEDA produces substantially better results than similar algorithms at a lower computational cost. New learning paradigms: MARTEDA (Martí et al., 2010d): although MB-GNG and MONEDA showed that treating model building correctly is one way to obtain better results, they did not entirely avoid the essential point: the learning paradigm employed. By incorporating an alternative machine-learning paradigm, namely Adaptive Resonance Theory, this issue is addressed at its root; some encouraging preliminary results have been obtained. Stopping and convergence criteria (Martí et al., 2007, 2009a, 2010e): in carrying out the above experiments, we became aware of the lack of an adequate stopping criterion and of how unexplored this area is in multi-objective evolutionary algorithm research. We address this issue by proposing a set of stopping criteria that have proven effective on synthetic and real-world problems.
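The core EDA idea described in this thesis, replacing variation operators with the synthesis and sampling of a statistical model, can be sketched for the single-objective case with an independent-Gaussian model (a UMDA-style illustration; the function name and parameter values are assumptions, not the thesis's algorithms):

```python
import random
import statistics

def gaussian_eda(f, dim, pop=60, sel=20, iters=40, seed=0):
    """Minimal single-objective EDA: each generation, fit an independent
    Gaussian per variable to the best solutions, then sample the model
    to produce the next population -- no crossover or mutation."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        elite = sorted(xs, key=f)[:sel]  # truncation selection
        mu = [statistics.mean(e[d] for e in elite) for d in range(dim)]
        sd = [statistics.stdev(e[d] for e in elite) + 1e-9 for d in range(dim)]
        xs = [[rng.gauss(mu[d], sd[d]) for d in range(dim)]  # sample model
              for _ in range(pop)]
    return min(xs, key=f)

# Minimize the sphere function as a toy benchmark
sphere = lambda x: sum(v * v for v in x)
best = gaussian_eda(sphere, dim=3)
```

The thesis's point is precisely that the model-building step (here a naive Gaussian fit) is where MOEDAs go wrong, e.g. by discarding outliers that actually mark newly discovered regions.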

    Pressure Sensor Placement for Leak Diagnosis under Demand Uncertainty in Water Distribution Systems

    Get PDF
    Leakages in concealed pipes in urban water distribution systems (WDS) can cause losses of up to 25% of the potable water supply in municipalities. These losses waste not only a tremendous amount of water but also the energy spent to treat and distribute it. Techniques for leak detection and localization in WDS have evolved considerably since the mid-1950s. Among these methods, model-based leak diagnosis methods (MFD) have been extensively studied in the literature, as they are more economical than others. MFD methods infer the existence and position of leaks by continuously monitoring pressure levels in the WDS and comparing them to the expected values obtained by simulating a calibrated hydraulic model of the WDS. In the event of an anomaly (e.g., a leak), the sampled pressure levels (measured by the sensors) should deviate significantly from the expected values, which are obtained by simulation under an assumed no-leak condition. Although the methodology is efficient in terms of the number of required sensors and operational person-hours, it risks failing to distinguish between the effects of leaks and of water demand variations, because leaks and demand fluctuations have a similar effect on pressure levels along the network. This study aims to improve the robustness of the MFD method by explicitly considering the uncertainty in the nodal demands across the WDS. The influence of demand uncertainty on nodal pressure is analyzed by generating model-based system responses that are time-variable and conditional on known data (e.g., total demand across the WDS). Monte Carlo methods are used to generate conditional realizations of spatially variable sets of nodal demands such that simulated states match the available observed system states at the time any pressure observation is sampled.
After characterizing the distributions of expected nodal pressures under the no-leak condition, a statistical detection test is defined that asserts the existence of a leak based on evidence from comparing the observations with their corresponding distributions. The performance of the proposed detection analysis is then evaluated on multiple synthetic leak and no-leak scenarios. To fine-tune the configuration of the detection test's design parameters, its performance is evaluated by computing the false positive and false negative rates across the leak and no-leak scenarios. These two metrics are then used to formulate sensor placement as a multi-objective optimization problem. Results in two synthetic WDS case studies show that, under the most influential source of uncertainty in WDS modelling (nodal demands), the proposed detection test functions well, and multi-objective optimization can lead to robust sensor placement and other valuable insights.
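The detection logic described above can be illustrated with a simple sketch: Monte-Carlo no-leak pressures characterize each sensor's expected distribution, and an observation is flagged when it deviates too far from that distribution (illustrative only; the paper's actual test and threshold design may differ, and all names below are hypothetical):

```python
import random
import statistics

def detection_test(no_leak_samples, observed, k=3.0):
    """Flag a leak when any sensor's observed pressure deviates from the
    Monte-Carlo no-leak mean by more than k standard deviations.
    no_leak_samples: per-sensor lists of simulated no-leak pressures."""
    for samples, obs in zip(no_leak_samples, observed):
        mu = statistics.mean(samples)
        sd = statistics.stdev(samples)
        if abs(obs - mu) > k * sd:
            return True   # anomaly: evidence of a leak
    return False

# Synthetic illustration: two sensors with no-leak pressures
# distributed roughly as N(50, 1) and N(40, 1)
rng = random.Random(1)
baseline = [[rng.gauss(50, 1) for _ in range(500)],
            [rng.gauss(40, 1) for _ in range(500)]]
print(detection_test(baseline, [50.2, 39.7]))  # normal readings -> False
print(detection_test(baseline, [50.1, 34.0]))  # pressure drop   -> True
```

Sweeping the threshold k over leak and no-leak scenarios yields the false positive and false negative rates that the study uses as the two objectives of the sensor placement optimization.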