146 research outputs found

    Efficient Real-Time Hypervolume Estimation with Monotonically Reducing Error

    This is the author accepted manuscript; the final version is available from ACM via the DOI in this record. The codebase for this paper is available at https://github.com/fieldsend/hypervolume. The hypervolume (or S-metric) is a widely used quality measure employed in the assessment of multi- and many-objective evolutionary algorithms. It is also directly integrated as a component in the selection mechanism of some popular optimisers. Exact hypervolume calculation becomes prohibitively expensive in real-time applications as the number of objectives increases and/or the approximation set grows. As such, Monte Carlo (MC) sampling is often used to estimate its value rather than calculate it exactly. This estimation is inevitably subject to error. As is standard with Monte Carlo approaches, the standard error decreases with the inverse square root of the number of MC samples. We propose a number of real-time hypervolume estimation methods for unconstrained archives, principally for use in real-time convergence analysis. Furthermore, we show how the number of domination comparisons can be considerably reduced by exploiting incremental properties of the approximated Pareto front. In these methods the estimation error monotonically decreases over time for (i) a capped budget of samples per algorithm generation and (ii) a fixed budget of dedicated computation time per optimiser generation for new MC samples. Results are provided using an illustrative worst-case scenario with rapid archive growth, demonstrating the orders-of-magnitude speed-up possible. Funding: Engineering and Physical Sciences Research Council (EPSRC); Innovate U
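    The core mechanism can be illustrated with a minimal sketch (not the authors' implementation, which is available in the repository linked above; it assumes a fixed axis-aligned sampling box between a lower bound and the reference point): Monte Carlo samples are kept across generations together with their domination status, so when the archive grows only samples not yet covered require new domination comparisons, and the accumulated sample count, and hence the estimation error, only improves over time.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a weakly dominates b (minimisation)."""
    return bool(np.all(a <= b) and np.any(a < b))

class MCHypervolumeEstimator:
    """Monte Carlo hypervolume estimator that keeps every sample drawn so far,
    so the standard error can only shrink as the optimiser runs."""

    def __init__(self, lower, reference, seed=0):
        self.lower = np.asarray(lower, dtype=float)          # lower corner of the sampling box
        self.reference = np.asarray(reference, dtype=float)  # reference point
        self.rng = np.random.default_rng(seed)
        self.samples = np.empty((0, len(self.lower)))
        self.covered = np.empty(0, dtype=bool)               # dominated by the archive?

    def add_samples(self, n, archive):
        """Draw n fresh uniform samples and test them against the current archive."""
        new = self.rng.uniform(self.lower, self.reference, size=(n, len(self.lower)))
        cov = np.array([any(dominates(a, s) for a in archive) for s in new], dtype=bool)
        self.samples = np.vstack([self.samples, new])
        self.covered = np.concatenate([self.covered, cov])

    def archive_grew(self, new_points):
        """Incremental update: only samples not yet covered can change status,
        which is what saves domination comparisons as the archive grows."""
        for i in np.flatnonzero(~self.covered):
            if any(dominates(a, self.samples[i]) for a in new_points):
                self.covered[i] = True

    def estimate(self):
        """Fraction of covered samples times the volume of the sampling box."""
        if self.covered.size == 0:
            return 0.0
        return float(np.prod(self.reference - self.lower) * self.covered.mean())
```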

    Multi-objective optimization involving function approximators via Gaussian processes and hybrid algorithms employing direct hypervolume optimization

    Advisor: Fernando José Von Zuben. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: The main purpose of this thesis is to bridge the gap between single-objective and multi-objective optimization and to show that connecting techniques from both ends can lead to improved results. To reach this goal, we provide contributions in three directions. First, we show the connection between optimality of the mean loss and of the hypervolume when evaluating a single solution, proving optimality bounds when the solution from one is applied to the other. Furthermore, an evaluation of the gradient of the hypervolume shows that it can be interpreted as a particular case of the weighted mean loss, where the weights increase as their associated losses increase. We hypothesize that this can help to train a machine learning model, since samples with high error will also have high weight. An experiment with a neural network validates the hypothesis, showing improved performance. Second, we evaluate previous attempts at using gradient-based hypervolume optimization to solve multi-objective problems and explain why they have failed. Based on this analysis, we propose a hybrid algorithm that combines gradient-based and evolutionary optimization. Experiments on the ZDT benchmark functions show improved performance and faster convergence compared with reference evolutionary algorithms. Finally, we prove necessary and sufficient conditions for a function to describe a valid Pareto frontier. Based on this result, we adapt a Gaussian process to penalize violations of the conditions and show that it provides better estimates than other approximation algorithms. In particular, it creates a curve that does not violate the constraints as much as algorithms that do not consider the conditions, making it a more reliable performance indicator. We also show that a common optimization metric, when approximating functions with Gaussian processes, is a good indicator of the regions an algorithm should explore to find the Pareto frontier. Doctorate in Computer Engineering; Doctor of Electrical Engineering. Grant 2015/09199-0; CAPES; FAPES
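    The claimed link between the hypervolume gradient and a weighted loss can be made concrete for the single-solution case with a short derivation (a sketch under assumed minimisation with every loss below the reference point; the notation is illustrative rather than the thesis's own):

```latex
% Single solution with losses \ell_1,\dots,\ell_m and reference point r (minimisation, \ell_j < r_j):
\begin{align}
  HV(\ell) &= \prod_{i=1}^{m} \left( r_i - \ell_i \right), \\
  \frac{\partial HV}{\partial \ell_j}
           &= -\prod_{i \neq j} \left( r_i - \ell_i \right)
            = -\,\frac{HV(\ell)}{r_j - \ell_j}.
\end{align}
```

    Gradient ascent on $HV$ therefore descends each loss $\ell_j$ with weight $w_j \propto 1/(r_j - \ell_j)$, which grows as that loss approaches the reference value; larger losses receive larger weights, consistent with the claim that high-error samples get high weight.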

    Anticipation in multiple-criteria decision making under uncertainty

    Advisor: Fernando José Von Zuben. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: The presence of uncertainty in future outcomes can lead to indecision in choice processes, especially when eliciting the relative importance of multiple decision criteria and of long-term vs. near-term performance. Some decisions, however, must be taken under incomplete information, which may result in hasty actions with unforeseen consequences. When a solution must be selected under multiple conflicting views for operating in time-varying and noisy environments, implementing flexible provisional alternatives can be critical to circumvent the lack of complete information by keeping future options open. Anticipatory engineering can then be regarded as the strategy of designing flexible solutions that enable decision makers to respond robustly to unpredictable scenarios. This strategy can thus mitigate the risks of strong unintended commitments to uncertain alternatives, while increasing adaptability to future changes. In this thesis, the roles of anticipation and of flexibility in automating sequential multiple-criteria decision-making processes under uncertainty are investigated. The dilemma of assigning relative importance to decision criteria and to immediate rewards under incomplete information is then handled by autonomously anticipating flexible decisions predicted to maximally preserve the diversity of future choices. An online anticipatory learning methodology is then proposed for improving the range and quality of future trade-off solution sets. This goal is achieved by predicting sets of maximal expected hypervolume, for which the anticipation capabilities of multi-objective metaheuristics are augmented with Bayesian tracking in both the objective and search spaces. The methodology was applied to obtain investment decisions, which are shown to significantly improve the future hypervolume of trade-off financial portfolio sets evaluated on out-of-sample stock data when compared to a myopic strategy. Moreover, implementing flexible portfolio rebalancing decisions was confirmed to be a significantly better strategy than randomly choosing an investment decision from the evolved stochastic efficient frontier, in all tested artificial and real-world markets. Finally, the results suggest that anticipating flexible choices led to portfolio compositions that are significantly correlated with the observed improvements in out-of-sample future expected hypervolume. Doctorate in Computer Engineering; Doctor of Electrical Engineering.
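    A minimal sketch of the selection principle (illustrative only; two objectives, minimisation, and a hypothetical `predictive_sampler` standing in for the Bayesian tracking model): each candidate decision is scored by the expected hypervolume of the trade-off set it is predicted to lead to, averaged over sampled future scenarios.

```python
import numpy as np

def hypervolume_2d(front, reference):
    """Exact hypervolume of a 2-objective point set (minimisation) w.r.t. a reference point."""
    pts = sorted((p for p in front if np.all(np.asarray(p) < reference)), key=lambda p: p[0])
    hv, prev_f2 = 0.0, reference[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                          # non-dominated strip contributes area
            hv += (reference[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def anticipatory_choice(candidates, predictive_sampler, reference, n_scenarios=200, seed=0):
    """Pick the candidate decision whose predicted future trade-off set has the largest
    expected hypervolume, averaging over sampled future scenarios.
    predictive_sampler(candidate, rng) is a hypothetical stand-in for the Bayesian
    tracking model: it returns one sampled future set of objective vectors."""
    rng = np.random.default_rng(seed)
    best, best_ehv = None, -np.inf
    for c in candidates:
        ehv = np.mean([hypervolume_2d(predictive_sampler(c, rng), reference)
                       for _ in range(n_scenarios)])
        if ehv > best_ehv:
            best, best_ehv = c, ehv
    return best, best_ehv
```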

    Multiobjective Simulation Optimization Using Enhanced Evolutionary Algorithm Approaches

    In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the optimization not only of multiple objectives simultaneously, but also of conflicting objectives, where improving one objective may degrade the performance of one or more of the other objectives. Traditional approaches for solving multiobjective optimization problems typically try to scalarize the multiple objectives into a single objective. This transforms the original multiobjective problem formulation into a single-objective optimization problem with a single solution. However, the drawbacks of these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than only a single solution. The problem becomes much more complicated in stochastic environments, where the objectives take on uncertain (or noisy) values due to random influences within the system being optimized, which is the case in real-world environments. Moreover, in stochastic environments, a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, black-box objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial. This research proposes two new multiobjective evolutionary algorithms (MOEAs), called the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging quickly to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments, and SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), a widely considered benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications
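    The general idea of dominance under noise can be illustrated with a generic Welch-t-test-based check (illustrative only; this is not the specific dominance relation defined for FPGA or SPGA), assuming each solution has been simulated several times so that per-objective samples are available:

```python
import numpy as np
from scipy import stats

def significantly_dominates(samples_a, samples_b, alpha=0.05):
    """Generic stochastic-dominance check (minimisation): solution A dominates B when,
    at significance level alpha, A is not significantly worse on any objective and is
    significantly better on at least one. Inputs are (replications x objectives) arrays
    of noisy objective values for each solution."""
    samples_a = np.asarray(samples_a, dtype=float)
    samples_b = np.asarray(samples_b, dtype=float)
    strictly_better = False
    for j in range(samples_a.shape[1]):
        t, p = stats.ttest_ind(samples_a[:, j], samples_b[:, j], equal_var=False)
        if t > 0 and p < alpha:       # A significantly worse on objective j
            return False
        if t < 0 and p < alpha:       # A significantly better on objective j
            strictly_better = True
    return strictly_better
```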

    Fault-tolerant Stochastic Distributed Systems

    This doctoral thesis discusses the design of fault-tolerant distributed systems, placing emphasis on the case where the actions of the nodes or their interactions are stochastic. The main objective is to detect and identify faults in order to improve the resilience of distributed systems to crash-type faults, as well as to detect the presence of malicious nodes seeking to exploit the network. The proposed analysis considers malicious agents and computational solutions to detect faults. Crash-type faults, where the affected component ceases to perform its task, are tackled in this thesis by introducing stochastic decisions into deterministic distributed algorithms. Prime importance is placed on providing guarantees and rates of convergence for the steady-state solution. The scenarios of a social network (state-dependent example) and consensus (time-dependent example) are addressed, and convergence is proved. The proposed algorithms are capable of dealing with packet drops, delays, medium access competition, and, in particular, nodes failing and/or losing network connectivity. The concept of Set-Valued Observers (SVOs) is used as a tool to detect faults in a worst-case scenario, i.e., when a malicious agent can select the most unfavorable sequence of communications and inject a signal of arbitrary magnitude. For other types of faults, the concept of Stochastic Set-Valued Observers (SSVOs) is introduced, which produce a confidence set to which the state is known to belong with at least a pre-specified probability. It is shown how, for a consensus algorithm, the structure of the problem can be exploited to reduce the computational complexity of the solution. The main result allows discarding interactions in the model that do not contribute to the produced estimates. The main drawback of using classical SVOs for fault detection is their computational burden. By resorting to a left-coprime factorization for Linear Parameter-Varying (LPV) systems, it is shown how to reduce the computational complexity. By appropriately selecting the factorization, it is possible to consider detectable systems (i.e., unobservable systems where the unobservable component is stable). Such a result plays a key role in the domain of Cyber-Physical Systems (CPSs). These techniques are complemented with event- and self-triggered sampling strategies that enable fewer sensor updates. Moreover, the same triggering mechanisms can be used to decide when to run the SVO routine or to resort to over-approximations that temporarily compromise accuracy to gain performance while maintaining the convergence characteristics of the set-valued estimates. This results in a less stringent requirement for network resources, which is vital to guarantee the applicability of SVO-based fault detection in the domain of Networked Control Systems (NCSs)
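    As a toy illustration of the kind of stochastic decision that can be injected into a distributed algorithm (illustrative only, and much simpler than the algorithms analysed in the thesis): randomised pairwise-averaging gossip, where neighbour selection is random and exchanges may be dropped, still drives the surviving nodes to agreement despite a crashed node and lossy links.

```python
import numpy as np

def gossip_consensus(values, alive, n_rounds=5000, drop_prob=0.2, seed=0):
    """Randomised pairwise-averaging gossip among the alive nodes.
    Stochastic neighbour selection plus lossy exchanges: surviving nodes still
    converge to the average of their initial values."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).copy()
    ids = np.flatnonzero(alive)                      # crashed nodes never participate
    for _ in range(n_rounds):
        i, j = rng.choice(ids, size=2, replace=False)  # stochastic pair selection
        if rng.random() < drop_prob:                   # message lost: skip this exchange
            continue
        x[i] = x[j] = 0.5 * (x[i] + x[j])              # pairwise averaging step
    return x

# alive nodes converge to the average of their initial values; node 2 has crashed
vals = np.array([1.0, 4.0, 7.0, 10.0])
alive = np.array([True, True, False, True])
print(gossip_consensus(vals, alive)[alive])            # ≈ [5., 5., 5.]
```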

    Datacenter management for on-site intermittent and uncertain renewable energy sources

    In recent years, information and communication technologies (ICT) have become a major energy consumer, with the associated harmful ecological consequences. Indeed, the emergence of Cloud computing and massive Internet companies has increased the importance and number of datacenters around the world. In order to mitigate economic and ecological costs, powering datacenters with renewable energy sources (RES) has emerged as a sustainable solution. Some of the commonly used RES, such as solar and wind energy, directly depend on weather conditions and are therefore both intermittent and partly uncertain. Batteries or other energy storage devices (ESD) are often considered to mitigate these issues, but they introduce additional energy losses and are too costly to be used alone without further integration. The power consumption of a datacenter is closely tied to its computing resource usage, which in turn depends on its workload and on the algorithms that schedule it. To use RES as efficiently as possible while preserving the quality of service of a datacenter, coordinated management of computing resources, electrical sources and storage is required. A wide variety of datacenters exists, each with different hardware, workload and purpose. Similarly, each electrical infrastructure is modeled and managed differently, depending on the kind of RES used, the ESD technologies and the operating objectives (cost or environmental impact). Some existing works successfully address this problem by considering a specific pair of electrical and computing models. However, because of this combined diversity, the existing approaches cannot be extrapolated to other infrastructures. This thesis explores novel ways to deal with this coordination problem. A first contribution revisits the batch task scheduling problem by introducing an abstraction of the power sources. A scheduling algorithm is proposed that takes the preferences of the electrical sources into account while being designed to be independent of the type of sources and of the goal of the electrical infrastructure (cost, environmental impact, or a mix of both). A second contribution addresses the joint power planning coordination problem in a totally infrastructure-agnostic way. The datacenter's computing resources and workload management are treated as a black box implementing a scheduling algorithm under a variable power constraint. The same goes for the electrical sources and storage management system, which acts as a source commitment optimization algorithm answering a power demand. A cooperative multiobjective power planning optimization, based on a multi-objective evolutionary algorithm (MOEA), interacts with the two black boxes to find the best trade-offs between electrical and computing internal objectives. Finally, a third contribution focuses on RES production uncertainties in a more specific infrastructure. Based on a Markov Decision Process (MDP) formulation, the structure of the underlying decision problem is studied. For several variants of the problem, tractable methods are proposed to find optimal policies or bounded approximations of them with reasonable complexity
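    The second contribution's coordination loop can be sketched as a single objective-vector evaluation that a MOEA would call repeatedly (an illustrative sketch; `it_scheduler` and `source_manager` are hypothetical interfaces standing in for the two black boxes described above):

```python
from typing import Callable, Sequence, Tuple

def evaluate_plan(power_cap_profile: Sequence[float],
                  it_scheduler: Callable[[Sequence[float]], Tuple[float, Sequence[float]]],
                  source_manager: Callable[[Sequence[float]], Tuple[float, float]]):
    """Score one candidate power plan by querying the two black boxes and returning
    the objective vector a multi-objective evolutionary algorithm would minimise:
    (QoS penalty, energy cost, -renewable share). Hypothetical interfaces:
      it_scheduler(cap_profile) -> (qos_penalty, consumed_power_profile)
      source_manager(demand)    -> (energy_cost, renewable_share)"""
    qos_penalty, consumption = it_scheduler(power_cap_profile)
    energy_cost, renewable_share = source_manager(consumption)
    return qos_penalty, energy_cost, -renewable_share

# toy stand-ins, just to show the call pattern
toy_scheduler = lambda cap: (sum(max(0.0, 80.0 - c) for c in cap), [0.9 * c for c in cap])
toy_sources = lambda demand: (0.12 * sum(demand), 0.4)
print(evaluate_plan([100.0] * 24, toy_scheduler, toy_sources))   # ≈ (0.0, 259.2, -0.4)
```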

    Wind turbine blade geometry design based on multi-objective optimization using metaheuristics

    Abstract: The application of Evolutionary Algorithms (EAs) to wind turbine blade design is attractive because it can reduce the number of aerodynamic-to-structural design loops in the conventional design process, hence reducing design time and cost. Recent developments have shown satisfactory results with this approach, mostly combining Genetic Algorithms (GAs) with Blade Element Momentum (BEM) theory. The general objective of the present work is to define and evaluate a design methodology for the rotor blade geometry that maximizes the energy production of wind turbines while minimizing the mass of the blade itself, using stochastic multi-objective optimization methods for that purpose. Therefore, the multi-objective optimization problem and its constraints were formulated, and the vector representation of the optimization parameters was defined. An optimization benchmark problem was proposed, representing the wind conditions and current wind turbine concepts found in Brazil. This problem was used as a test-bed for the performance comparison of several metaheuristics, and also for the validation of the defined design methodology. A variable-speed, pitch-controlled 2.5 MW Direct-Drive Synchronous Generator (DDSG) turbine with a rotor diameter of 120 m was chosen as the concept. Five Multi-objective Evolutionary Algorithms (MOEAs) were selected for evaluation on this benchmark problem: the Non-dominated Sorting Genetic Algorithm version II (NSGA-II), the Quantum-inspired Multi-objective Evolutionary Algorithm (QMEA), two variants of the Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D), and the Multi-objective Optimization Differential Evolution algorithm (MODE). The results show that the two best-performing techniques for this type of problem are NSGA-II and MOEA/D, one producing a wider spread of evenly spaced solutions and the other converging better in the region of interest. QMEA was the worst MOEA in terms of convergence and MODE the worst in terms of solution distribution. However, the differences in overall performance were slight, as the algorithms alternated positions in the ranking for each metric. This was also evident from the fact that the known Pareto Front (PF) consisted of solutions from several techniques, each dominating a different region of the objective space. Detailed analysis of the best blade design showed that the output of the design methodology is feasible in practice: the flow conditions and operational features of the rotor were as desired, and the blade geometry is smooth and easy to manufacture. Moreover, this geometry is easily exported to Computer-Aided Design (CAD) or Computer-Aided Engineering (CAE) software. In this way, the design methodology defined by the present work was validated.
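    As a rough illustration of how such a design problem can be encoded for a MOEA (an assumed, simplified formulation rather than the thesis's exact parameterization: the decision vector holds chord and twist values at a few radial stations, and placeholder surrogates stand in for the BEM-based energy model and the structural mass model):

```python
import numpy as np

N_STATIONS = 8   # radial control stations along the blade (assumed)

def decode(x):
    """Split a flat decision vector into chord [m] and twist [deg] distributions."""
    return x[:N_STATIONS], x[N_STATIONS:2 * N_STATIONS]

def annual_energy_production(chord, twist):
    """Placeholder for a BEM-based AEP estimate [MWh/year]; a real implementation
    would integrate rotor power over the site wind-speed distribution."""
    return float(1e4 * np.tanh(np.mean(chord)) * np.cos(np.deg2rad(np.ptp(twist))))

def blade_mass(chord):
    """Placeholder structural model: mass grows with chord (blade volume proxy) [kg]."""
    return float(2e3 * np.sum(np.square(chord)))

def objectives(x):
    """Objective vector for a minimising MOEA such as NSGA-II: (-AEP, mass)."""
    chord, twist = decode(x)
    return np.array([-annual_energy_production(chord, twist), blade_mass(chord)])

# example candidate: 2 m chords, linear twist from 12 deg at the root to 0 deg at the tip
x0 = np.concatenate([np.full(N_STATIONS, 2.0), np.linspace(12.0, 0.0, N_STATIONS)])
print(objectives(x0))
```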