
    Solving a type of biobjective bilevel programming problem using NSGA-II

    This paper considers a type of biobjective bilevel programming problem, derived from a single-objective bilevel programming problem by lifting the lower-level objective function up to the upper level. The efficient solutions of such a model can be regarded as candidates for post-optimization bargaining between the decision-makers at the two levels, who retain the original bilevel decision-making structure. We use a popular multiobjective evolutionary algorithm, NSGA-II, to solve this type of problem. The algorithm is tested on small-dimensional benchmark problems from the literature. Computational results show that NSGA-II solves these problems efficiently and effectively, and hence provides a promising visualization tool to help the decision-makers find the best trade-off in bargaining.
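    To illustrate the lifted formulation, the sketch below encodes a toy single-objective bilevel problem as a biobjective one and hands it to NSGA-II. It is a minimal sketch, assuming the pymoo library; the toy objectives F (upper level) and f (lifted lower level) are hypothetical stand-ins, not the benchmark problems used in the paper.

```python
# Minimal sketch: lift the lower-level objective f up to the upper level,
# then search the (F, f) Pareto front with NSGA-II (assumes pymoo is installed).
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize


class LiftedBilevel(ElementwiseProblem):
    """Toy lifted problem: x[0] is the upper-level variable, x[1] the lower-level one."""

    def __init__(self):
        super().__init__(n_var=2, n_obj=2, xl=np.zeros(2), xu=np.ones(2))

    def _evaluate(self, x, out, *args, **kwargs):
        F = (x[0] - 0.7) ** 2 + x[1]   # hypothetical upper-level objective
        f = (x[1] - x[0]) ** 2         # hypothetical lower-level objective, lifted
        out["F"] = [F, f]


res = minimize(LiftedBilevel(), NSGA2(pop_size=50), ("n_gen", 100), seed=1, verbose=False)
# Each row of res.F is an efficient (F, f) pair: a candidate for the
# post-optimization bargaining between the two decision-makers.
print(res.F)
```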

    Evolutionary computation for expensive optimization: a survey

    Expensive optimization problems (EOPs) arise widely in significant real-world applications. However, an EOP requires expensive or even unaffordable costs to evaluate candidate solutions, which makes it difficult for an algorithm to find a satisfactory solution. Moreover, due to fast-growing application demands in the economy and society, such as the emergence of smart cities, the Internet of Things, and the big data era, solving EOPs more efficiently has become increasingly essential in various fields, which poses great challenges to the problem-solving ability of optimization approaches. Among the various approaches, evolutionary computation (EC) is a promising global optimization tool that has been widely used to solve EOPs efficiently over the past decades. Given the fruitful advancements of EC for EOPs, it is essential to review them in order to synthesize previous research experience and provide references that aid the development of relevant research fields and real-world applications. Motivated by this, this paper provides a comprehensive survey of why and how EC can solve EOPs efficiently. To this end, the paper first analyzes the total optimization cost of EC in solving an EOP. Based on this analysis, three promising research directions are pointed out: problem approximation and substitution, algorithm design and enhancement, and parallel and distributed computation. To the best of our knowledge, this is the first paper to outline possible directions for efficiently solving EOPs by analyzing the total expensive cost. On this basis, existing works are reviewed comprehensively via a taxonomy with four parts: the three research directions above and real-world applications. Some future research directions are also discussed. We believe such a survey can attract attention, encourage discussion, and stimulate new EC research ideas for solving EOPs and related real-world applications more efficiently.
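    The "problem approximation and substitution" direction typically replaces most expensive evaluations with a cheap surrogate model. The sketch below is a minimal, hypothetical illustration of that idea, assuming scikit-learn's Gaussian-process regressor as the surrogate inside a simple mutation-based evolution loop; it is not an algorithm from the survey itself, and expensive_f is a stand-in objective.

```python
# Minimal sketch of surrogate-assisted evolution (assumes numpy + scikit-learn).
# A Gaussian process screens offspring; only the predicted-best candidate per
# generation pays the expensive true evaluation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_f(x):  # hypothetical expensive objective (stand-in)
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, pop_size = 5, 20
X = rng.uniform(-5, 5, size=(pop_size, dim))     # archive of evaluated points
y = np.array([expensive_f(x) for x in X])

for gen in range(30):
    surrogate = GaussianProcessRegressor().fit(X, y)
    # Generate offspring by mutating the current best individual.
    parent = X[np.argmin(y)]
    offspring = parent + rng.normal(0, 0.5, size=(pop_size, dim))
    # Surrogate pre-screening: only the predicted-best offspring is
    # evaluated with the expensive function.
    best = offspring[np.argmin(surrogate.predict(offspring))]
    X = np.vstack([X, best])
    y = np.append(y, expensive_f(best))

print("best found:", y.min(), "with", len(y), "expensive evaluations")
```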

    Computational Intelligence Sequential Monte Carlos for Recursive Bayesian Estimation

    Recursive Bayesian estimation using sequential Monte Carlo methods is a powerful numerical technique for understanding the latent dynamics of non-linear, non-Gaussian dynamical systems. Classical sequential Monte Carlo methods suffer from weight degeneracy, in which the number of distinct particles collapses. Traditionally this is addressed by resampling, which effectively replaces high-weight particles with many highly inter-correlated particles. Frequent resampling, however, leads to a lack of diversity in the particle set, a problem known as sample impoverishment. Traditional sequential Monte Carlo methods attempt to resolve these correlated problems but introduce further data processing issues, yielding minimal to comparable performance improvements over the sequential Monte Carlo particle filter. A new method, the adaptive path particle filter, is proposed for recursive Bayesian estimation of non-linear, non-Gaussian dynamical systems. Our method addresses weight degeneracy and sample impoverishment by embedding a computational intelligence step of adaptive path switching between generations, based on maximal likelihood as a fitness function. Preliminary tests are presented on a scalar estimation problem with non-linear, non-Gaussian dynamics and a non-stationary observation model, and on the traditional univariate stochastic volatility problem. Building on these preliminary results, we evaluate our adaptive path particle filter on the stochastic volatility estimation problem, calibrating the Heston stochastic volatility model on six securities with Markov chain Monte Carlo. Finally, we investigate the efficacy of sequential Monte Carlo methods for recursive Bayesian estimation of astrophysical time series. We posit latent dynamics for both regularized and irregular astrophysical time series, calibrating fifty-five quasar time series with the CAR(1) model. We find the adaptive path particle filter to statistically significantly outperform the standard sequential importance resampling particle filter, the Markov chain Monte Carlo particle filter and, on the Heston model estimation, the particle learning particle filter. In addition, from our quasar MCMC calibration we find the characteristic timescale τ to be first-order stable, in contradiction to the literature though indicative of a unified underlying structure. We offer detailed analysis throughout, and conclude with a discussion and suggestions for future work.
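    For context, the sketch below shows the kind of baseline the thesis compares against: a sequential importance resampling (SIR) particle filter on the classic univariate non-linear growth benchmark, with resampling triggered only when the effective sample size collapses. It is a minimal NumPy sketch of the standard algorithm, not the adaptive path particle filter itself.

```python
# Minimal SIR particle filter sketch (NumPy) on the classic scalar benchmark:
#   x_t = 0.5 x_{t-1} + 25 x_{t-1}/(1+x_{t-1}^2) + 8 cos(1.2 t) + v_t
#   y_t = x_t^2 / 20 + w_t
import numpy as np

rng = np.random.default_rng(1)
T, N, q, r = 50, 500, 10.0, 1.0

# Simulate a ground-truth trajectory and observations.
x_true, y_obs = np.zeros(T), np.zeros(T)
x = 0.1
for t in range(T):
    x = 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t) + rng.normal(0, np.sqrt(q))
    x_true[t] = x
    y_obs[t] = x**2 / 20 + rng.normal(0, np.sqrt(r))

particles = rng.normal(0, 2, N)
weights = np.full(N, 1.0 / N)
estimates = np.zeros(T)
for t in range(T):
    # Propagate through the transition model (the proposal here is the prior).
    particles = (0.5 * particles + 25 * particles / (1 + particles**2)
                 + 8 * np.cos(1.2 * t) + rng.normal(0, np.sqrt(q), N))
    # Reweight by the observation likelihood.
    weights *= np.exp(-0.5 * (y_obs[t] - particles**2 / 20) ** 2 / r)
    weights /= weights.sum()
    estimates[t] = np.dot(weights, particles)
    # Systematic resampling only when the effective sample size drops below N/2:
    # exactly the degeneracy/impoverishment trade-off discussed above.
    if 1.0 / np.sum(weights**2) < N / 2:
        positions = (rng.random() + np.arange(N)) / N
        particles = particles[np.searchsorted(np.cumsum(weights), positions)]
        weights = np.full(N, 1.0 / N)

print("RMSE:", np.sqrt(np.mean((estimates - x_true) ** 2)))
```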

    Design optimisation of complex space systems under epistemic uncertainty

    This thesis presents an innovative methodology for System Design Optimisation (SDO) within the framework of Model-Based System Engineering (MBSE) that bridges system modelling, Constrained Global Optimisation (CGO), Uncertainty Quantification (UQ), System Dynamics (SD) and other mathematical tools for the design of Complex Engineered and Engineering Systems (CEdgSs) under epistemic uncertainty. The problem under analysis has analogies with what is nowadays studied as Generative Design under Uncertainty. The method is finally applied to the design of space systems, which are Complex Engineered Systems (CEdSs) composed of multiple interconnected sub-systems. A critical aspect in the design of space systems is the uncertainty involved; much of it is epistemic and is modelled here with Dempster-Shafer Theory (DST). Designing space systems is a complex task that involves the coordination of different disciplines and problems. The thesis therefore proposes a set of building blocks, that is, a toolbox of methodologies for solving problems that are of interest also when considered independently, and then a holistic framework that couples these building blocks into an SDO procedure. With regard to the building blocks, the thesis includes a network-based modelling procedure for CEdSs and a generalisation to CEdgSs in which the system and the whole design process are both taken into account. It then presents a constrained min-max solver as an algorithmic procedure for the solution of the general Optimisation Under Uncertainty (OUU) problem. An extension of the method to Multi-Objective Problems (MOPs) is also proposed in the Appendix as a minor result. A side contribution on the optimisation side is the extension of the global optimiser Multi Population Adaptive Inflationary Differential Evolution Algorithm (MP-AIDEA) with constraint handling and multiple objective functions; this Constrained Multi-Objective Problem (CMOP) solver is, however, a preliminary result and is reported in the Appendix. Furthermore, the thesis proposes a decomposition methodology for reducing the computational cost of UQ with DST; as a partial contribution, a second approach based on a binary tree decomposition is also reported in the Appendix. With regard to the holistic approach, the thesis gives a new definition of, and proposes a framework for, system network robustness and system network resilience. It finally presents the framework for the optimisation of the whole design process through the use of a multi-layer network model.
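    To make the DST modelling concrete: an epistemically uncertain parameter is described by basic probability masses on focal elements (here, intervals), and the belief and plausibility of a proposition bound its probability from below and above. The sketch below is a minimal, self-contained illustration of those two standard DST measures; the intervals and masses are hypothetical, not taken from the thesis.

```python
# Minimal Dempster-Shafer sketch: belief and plausibility of the proposition
# "the parameter lies in [2, 6]" given masses on interval focal elements.
# The focal elements and masses below are hypothetical illustrations.

focal_elements = [((0.0, 3.0), 0.2),   # (interval, basic probability mass)
                  ((2.0, 5.0), 0.5),
                  ((4.0, 8.0), 0.3)]

def belief(lo, hi):
    # Sum of masses of focal elements entirely contained in [lo, hi].
    return sum(m for (a, b), m in focal_elements if lo <= a and b <= hi)

def plausibility(lo, hi):
    # Sum of masses of focal elements that intersect [lo, hi].
    return sum(m for (a, b), m in focal_elements if b >= lo and a <= hi)

# Bel <= P <= Pl brackets the unknown probability of the proposition.
print(belief(2.0, 6.0), plausibility(2.0, 6.0))   # 0.5, 1.0
```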

    Global solution of constrained min-max problems with inflationary differential evolution

    This paper proposes a method for the solution of constrained min-max problems. The method is tested on a benchmark of representative problems presenting different structures for the objective function and the constraints. The particular min-max problem addressed in this paper finds application in optimisation under uncertainty, where the constraints need to be satisfied for all possible realisations of the uncertain quantities. Hence, the algorithm proposed in this paper searches for solutions that minimise the worst possible outcome of the objective function due to the uncertainty, while satisfying the constraint functions in all possible scenarios. A constraint relaxation and a scalarisation procedure are also introduced to trade off objective optimality against constraint satisfaction when no feasible solution can be found.
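    The structure of such a problem can be illustrated with a nested loop: an outer minimisation over design variables of an inner maximisation over uncertain variables, with the worst-case constraint violation penalised. The sketch below uses SciPy's differential_evolution for both loops; it is a schematic stand-in with hypothetical toy functions f and g, not the inflationary differential evolution algorithm of the paper.

```python
# Schematic nested min-max: min over d of max over u of f(d, u),
# subject to g(d, u) <= 0 for every u (worst case penalised).
# Assumes SciPy; f and g are hypothetical toy functions.
import numpy as np
from scipy.optimize import differential_evolution

d_bounds = [(-5.0, 5.0)] * 2     # design variables
u_bounds = [(-1.0, 1.0)] * 2     # uncertain variables

def f(d, u):
    return np.sum((d - u) ** 2)         # toy objective

def g(d, u):
    return d[0] + u[0] - 4.0            # toy constraint, feasible iff <= 0

def worst_case(d):
    # Inner maximisations: worst objective value and worst constraint violation.
    worst_f = -differential_evolution(lambda u: -f(d, u), u_bounds, seed=0).fun
    worst_g = -differential_evolution(lambda u: -g(d, u), u_bounds, seed=0).fun
    # Penalise any design with a scenario that violates the constraint.
    return worst_f + 1e3 * max(0.0, worst_g)

res = differential_evolution(worst_case, d_bounds, seed=0, maxiter=30)
print("robust design:", res.x, "worst-case objective:", res.fun)
```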

    Applications of Agent-Based Methods in Multi-Energy Systems—A Systematic Literature Review

    The need for a greener and more sustainable energy system calls for more extensive energy system transition research. The penetration of distributed energy resources and Internet of Things technologies facilitates the transition towards the next generation of energy system concepts, such as the "integrated energy system", "multi-energy system", and "smart energy system". These concepts reveal that future energy systems can integrate multiple energy carriers with autonomous, intelligent decision making. There are noticeable trends in using agent-based methods in energy systems research, including multi-energy system transition simulation with agent-based modeling (ABM) and multi-energy system management with multi-agent system (MAS) modeling. The need for a comprehensive review of such applications motivates this article, which systematically reviews ABM and MAS applications in multi-energy systems across publications from 2007 to the end of 2021. The articles were sorted into MAS and ABM applications based on the details of the agent implementations. MAS application papers in building energy systems, district energy systems, and regional energy systems are reviewed with regard to energy carriers, agent control architecture, optimization algorithms, and agent development environments. ABM application papers in behavior simulation and policy-making are reviewed with regard to the agents' decision-making details and model objectives. In addition, potential future research directions in reinforcement learning implementation and agent control synchronization are highlighted. The review shows that agent-based methods have great potential to contribute to energy transition studies through their plug-and-play ability and distributed decision-making process.
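    To illustrate the MAS pattern the review focuses on, namely distributed local decisions coordinated across agents, the sketch below implements a deliberately simple price-responsive scheme: each consumer agent decides its own demand privately and an aggregator iterates a price until the shared capacity limit is respected. All class names and numbers are hypothetical and stand in for no specific reviewed system.

```python
# Toy multi-agent coordination sketch: each agent picks its demand locally
# from a posted price; the aggregator adjusts the price until the feeder
# limit is respected. All parameters are hypothetical.

class ConsumerAgent:
    def __init__(self, max_demand, flexibility):
        self.max_demand = max_demand      # kW the agent would use at price 0
        self.flexibility = flexibility    # how strongly it curtails with price

    def decide_demand(self, price):
        # Local, private decision rule: linear curtailment with price.
        return max(0.0, self.max_demand - self.flexibility * price)

agents = [ConsumerAgent(5.0, 1.0), ConsumerAgent(8.0, 2.5), ConsumerAgent(3.0, 0.5)]
capacity = 10.0   # kW available on the shared feeder
price = 0.0
for _ in range(100):                      # simple tatonnement loop
    total = sum(a.decide_demand(price) for a in agents)
    if abs(total - capacity) < 1e-3 or (total < capacity and price == 0.0):
        break
    # Raise the price when demand exceeds capacity, lower it otherwise.
    price = max(0.0, price + 0.01 * (total - capacity))

print(f"clearing price {price:.2f}, total demand {total:.2f} kW")
```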

    Enhanced clustering analysis pipeline for performance analysis of parallel applications

    Clustering analysis is widely used to place data in the same cluster when they are similar according to specific metrics. We can use cluster analysis to group the CPU bursts of a parallel application: the regions on each process between communication calls or calls to the parallel runtime. The resulting clusters are the different computational trends or phases that appear in the application. These clusters are useful for understanding the behavior of the computation part of the application and for focusing the analyses on those parts that present performance issues. Although density-based clustering algorithms are a powerful and efficient tool to summarize this type of information, their traditional user-guided clustering methodology has many shortcomings in dealing with the complexity of data, the diversity of data structures, the high dimensionality of data, and the dramatic increase in the amount of data. Consequently, the majority of DBSCAN-like algorithms struggle to handle high-dimensional and/or multi-density data, and they are sensitive to their hyper-parameter configuration. Furthermore, extracting insight from the obtained clusters remains a manual, intuition-driven task. To mitigate these weaknesses, we propose a new unified approach that replaces user-guided clustering with an automated clustering analysis pipeline, called the Enhanced Cluster Identification and Interpretation (ECII) pipeline. To build the pipeline, we propose novel techniques, including Robust Independent Feature Selection, Feature Space Curvature Map, Organization Component Analysis, and hyper-parameter tuning, addressing feature selection, density homogenization, cluster interpretation, and model selection, which are the main components of our machine learning pipeline. This thesis contributes four new techniques to the machine learning field, with a particular use case in the performance analytics field. The first contribution is a novel unsupervised approach for feature selection on noisy data, called Robust Independent Feature Selection (RIFS). Specifically, we choose a feature subset that contains most of the underlying information, using the same criteria as independent component analysis; simultaneously, the noise is separated as an independent component. The second contribution of the thesis is a parametric multilinear transformation method to homogenize cluster densities while preserving the topological structure of the dataset, called Feature Space Curvature Map (FSCM). We present a new Gravitational Self-Organizing Map to model the feature space curvature, plugging the concepts of gravity and the fabric of space into the Self-Organizing Map algorithm to mathematically describe the density structure of the data. To homogenize the cluster density, we introduce a novel mapping mechanism that projects the data from the non-Euclidean curved space to a new Euclidean flat space. The third contribution is a novel topology-based method to study potentially complex high-dimensional categorized data by quantifying their shapes and extracting fine-grained insights from them to interpret the clustering result. We introduce our Organization Component Analysis (OCA) method for the automatic study of arbitrary cluster shapes without any assumption about the data distribution. Finally, to tune the DBSCAN hyper-parameters, we propose a new tuning mechanism that combines techniques from the machine learning and optimization domains, and we embed it in the ECII pipeline.
    Using this cluster analysis pipeline with the CPU burst data of a parallel application, we provide the developer/analyst with a high-quality detection of the SPMD computation structure, with the added value of reflecting the fine grain of the computation regions.
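    As a pointer to what automated DBSCAN hyper-parameter tuning can look like, the sketch below applies a common k-distance heuristic to pick eps. It is a generic, widely used heuristic shown for illustration, assuming scikit-learn and synthetic stand-in data; it is not the ECII tuning mechanism itself.

```python
# Generic k-distance heuristic for choosing DBSCAN's eps (assumes scikit-learn).
# This is a standard illustration, not the ECII tuning mechanism.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors

X, _ = make_blobs(n_samples=500, centers=3, random_state=0)  # stand-in data
min_pts = 2 * X.shape[1]                 # common rule of thumb: 2 * dimensionality

# Sort each point's distance to its min_pts-th neighbour; the "knee" of this
# curve is a common choice for eps. Here a high percentile serves as a proxy.
knn = NearestNeighbors(n_neighbors=min_pts).fit(X)
kth_dist = np.sort(knn.kneighbors(X)[0][:, -1])
eps = float(np.percentile(kth_dist, 90))

labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"eps={eps:.3f}, clusters={n_clusters}, noise={np.sum(labels == -1)}")
```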