21 research outputs found

    Scalable and customizable benchmark problems for many-objective optimization

    Get PDF
    Solving many-objective problems (MaOPs) is still a significant challenge in the multi-objective optimization (MOO) field. One way to measure algorithm performance is through the use of benchmark functions (also called test functions or test suites), which are artificial problems with a well-defined mathematical formulation, known solutions, and a variety of features and difficulties. In this paper, we propose a parameterized generator of scalable and customizable benchmark problems for MaOPs. It is able to generate problems that reproduce features present in other benchmarks as well as problems with some new features. We propose here the concept of generative benchmarking, in which one can generate an infinite number of MOO problems by varying parameters that control specific features that the problem should have: scalability in the number of variables and objectives, bias, deceptiveness, multimodality, robust and non-robust solutions, shape of the Pareto front, and constraints. The proposed Generalized Position-Distance (GPD) tunable benchmark generator uses the position-distance paradigm, a basic approach to building test functions used in other benchmarks such as Deb, Thiele, Laumanns and Zitzler (DTLZ), Walking Fish Group (WFG), and others. It includes problems scalable in any number of variables and objectives, and it presents Pareto fronts with different characteristics. The resulting functions are easy to understand and visualize, easy to implement, fast to compute, and their Pareto-optimal solutions are known. This work has been supported by the Brazilian agencies (i) National Council for Scientific and Technological Development (CNPq), (ii) Coordination for the Improvement of Higher Education (CAPES), and (iii) Foundation for Research of the State of Minas Gerais (FAPEMIG)
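    The position-distance paradigm the generator builds on can be illustrated with the classic DTLZ2 construction — a sketch of the paradigm, not of the GPD generator itself (the function name and variable layout follow the standard DTLZ convention, but this implementation is illustrative):

```python
import math

def dtlz2(x, m):
    """DTLZ2-style position-distance test function.

    The first m-1 variables (position) set the location on the Pareto
    front; the remaining variables (distance) control convergence.
    Returns a list of m objective values to minimize.
    """
    # Distance function g: zero exactly on the Pareto-optimal set,
    # which here is x_i = 0.5 for all distance variables x[m-1:].
    g = sum((xi - 0.5) ** 2 for xi in x[m - 1:])
    f = []
    for i in range(m):
        value = 1.0 + g
        for j in range(m - 1 - i):
            value *= math.cos(0.5 * math.pi * x[j])
        if i > 0:
            value *= math.sin(0.5 * math.pi * x[m - 1 - i])
        f.append(value)
    return f
```

On the Pareto-optimal set (g = 0) the objective vector lies on the unit sphere, so the front shape and the convergence behaviour are controlled by separate groups of variables — the property the GPD generator parameterizes.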

    Incorporation of region of interest in a decomposition-based multi-objective evolutionary algorithm

    Get PDF
    Preference-based Multi-Objective Evolutionary Algorithms (MOEAs) restrict the search to a given region of the Pareto front preferred by the Decision Maker (DM), called the Region of Interest (ROI). In this paper, a new preference-guided MOEA is proposed. In this method, we define the ROI as a preference cone in the objective space. The preferential direction and the aperture of the cone are parameters that the DM has to provide to define the ROI. Given the preference cone, we employ a weight vector generation method that is based on a steady-state evolutionary algorithm. The main idea of our method is to evolve a population of weight vectors towards the characteristics that are desirable for a set of weight vectors in a decomposition-based MOEA framework. The main advantage is that the DM can define the number of weight vectors and thus can control the population size. Once the ROI is defined and the set of weight vectors is generated within the preference cone, we start a decomposition-based MOEA using the provided set of weights in its initialization, which forces the algorithm to converge to the ROI. The results show the benefit and adequacy of the preference cone MOEA/D for preference-guided many-objective optimization. This work was supported by the Brazilian funding agencies CAPES and CNPq
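    The idea of a preference cone containing the weight vectors can be sketched as follows, with simple rejection sampling standing in for the paper's steady-state evolutionary algorithm (the function name `weights_in_cone` and the degree-valued aperture are illustrative assumptions):

```python
import math
import random

def weights_in_cone(direction, aperture_deg, n_weights, seed=0):
    """Sample simplex weight vectors whose angle to the DM's
    preferential direction is within the cone aperture.
    Illustrative rejection sampler, not the paper's method."""
    rng = random.Random(seed)
    d_norm = math.sqrt(sum(d * d for d in direction))
    chosen = []
    while len(chosen) < n_weights:
        # Uniform point on the unit simplex via sorted uniform cuts.
        cuts = sorted(rng.random() for _ in range(len(direction) - 1))
        w = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
        w_norm = math.sqrt(sum(wi * wi for wi in w))
        cos_angle = sum(wi * di for wi, di in zip(w, direction)) / (w_norm * d_norm)
        if math.degrees(math.acos(max(-1.0, min(1.0, cos_angle)))) <= aperture_deg:
            chosen.append(w)
    return chosen
```

Because the DM fixes `n_weights`, the population size of the downstream decomposition-based MOEA is controlled directly, as the abstract notes.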

    Query join ordering optimization with evolutionary multi-agent systems

    No full text
    This work presents an evolutionary multi-agent system applied to the query optimization phase of Relational Database Management Systems (RDBMS) in a non-distributed environment. The query optimization phase deals with a well-known problem called query join ordering, which has a direct impact on the performance of such systems. The proposed optimizer was programmed in the optimization core of the H2 Database Engine. The experimental section was designed according to a factorial design of fixed effects, and the analysis was based on the Permutations Test for an Analysis of Variance Design. The evaluation methodology is based on synthetic benchmarks, and the tests are divided into three different experiments: calibration of the algorithm, validation against an exhaustive method, and a general comparison with different database systems, namely Apache Derby, HSQLDB, and PostgreSQL. The results show that the proposed evolutionary multi-agent system was able to generate solutions associated with lower-cost plans and faster execution times in the majority of cases
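    The cost structure that makes join ordering worth optimizing can be sketched as follows; `plan_cost` and `best_order` are illustrative names, and the exhaustive search here plays the role the evolutionary multi-agent system takes on for instances too large to enumerate:

```python
from itertools import permutations

def plan_cost(order, sizes, selectivity):
    """Cost of a left-deep join order: sum of intermediate result sizes.
    selectivity[(a, b)] is the join selectivity between tables a and b
    (1.0 when no predicate connects them, i.e. a cross product)."""
    cost = 0.0
    rows = sizes[order[0]]
    joined = {order[0]}
    for t in order[1:]:
        sel = min(selectivity.get(tuple(sorted((t, j))), 1.0) for j in joined)
        rows = rows * sizes[t] * sel
        joined.add(t)
        cost += rows
    return cost

def best_order(sizes, selectivity):
    """Exhaustive baseline: cheapest of all n! left-deep orders."""
    return min(permutations(sizes),
               key=lambda o: plan_cost(o, sizes, selectivity))
```

Since the search space grows factorially with the number of tables, the exhaustive baseline is only feasible for validation on small queries — hence the metaheuristic search in the paper.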

    Visualization Method for Decision-Making: A Case Study in Bibliometric Analysis

    No full text
    Data and information visualization have drawn an increasingly wide range of interest from several academic fields and industries. Concurrently, exploring a huge dataset to support feasible decisions requires an organized method of Multi-Criteria Decision Making (MCDM). The dramatic increase in data production during the past decade makes visualization necessary as a presentation layer on top of the MCDM process. This study proposes an integrated strategy to rank the alternatives in a dataset by combining data, MCDM methods, and visualization layers. A well-designed combination of information visualization and MCDM provides a more user-friendly approach than the traditional methods. We investigate a case study in bibliometric analysis, which has become an important dimension and tool for evaluating the impact and performance of researchers, departments, and universities. Finding the best and most reliable papers, authors, and publishers with respect to diverse criteria is one of the important challenges in the scientific world. This text therefore presents a new strategy applied to a bibliometric dataset as a case study and demonstrates that the strategy can be more meaningful to end users than the current tools. Finally, the presented simulations illustrate the performance and utility of this combination. In other words, the researchers of this study designed and implemented a tool that overcomes the biggest challenges of data analysis and ranking through a combination of MCDM and visualization methodologies, providing a tremendous amount of insight and information from a massive dataset in an efficient way
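    A minimal version of the ranking layer can be sketched as a weighted sum over min-max normalized criteria — one of the simplest MCDM methods, used here only for illustration; the study's actual method stack is richer:

```python
def rank_alternatives(scores, weights, maximize):
    """Rank alternatives (e.g. papers) by a weighted sum of
    min-max normalized criteria (e.g. citations, age).
    maximize[j] says whether criterion j is a benefit (True)
    or a cost (False) criterion."""
    n_criteria = len(weights)
    cols = list(zip(*scores.values()))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]

    def norm(v, j):
        if hi[j] == lo[j]:
            return 0.0
        x = (v - lo[j]) / (hi[j] - lo[j])
        return x if maximize[j] else 1.0 - x

    totals = {
        a: sum(weights[j] * norm(v[j], j) for j in range(n_criteria))
        for a, v in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)
```

The resulting ordered list is exactly the kind of output the visualization layer would then present, e.g. as a ranked bar chart or a parallel-coordinates view of the criteria.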

    Aggregation Trees for visualization and dimension reduction in many-objective optimization

    No full text
    This paper introduces the concept of Aggregation Trees for the visualization of the results of high-dimensional multi-objective optimization problems, or many-objective problems, and as a means of performing dimension reduction. The high dimensionality of many-objective optimization makes it difficult to represent the relationship between objectives and solutions in such problems, and most approaches in the literature are based on the representation of solutions in lower dimensions. The method of Aggregation Trees proposed here is based on an iterative aggregation of objectives that is represented in a tree. The location of conflict is also calculated and represented on the tree. Thus, the tree can represent which objectives and groups of objectives are the most harmonic, what sort of conflict is present between groups of objectives, and which aggregations would be helpful in order to reduce the problem dimension
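    The iterative aggregation of the most harmonic objectives can be sketched as follows, assuming conflict is measured via the correlation between objective vectors over a solution set; the paper's precise conflict measure and tree layout may differ:

```python
def conflict(a, b):
    """Conflict between two objectives as 1 - Pearson correlation,
    scaled to [0, 1]: harmonic objectives (corr = 1) give 0,
    fully conflicting ones (corr = -1) give 1."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return (1.0 - cov / (va * vb)) / 2.0

def aggregation_tree(objectives):
    """Greedy sketch: repeatedly merge the most harmonic pair of
    (groups of) objectives; the nesting of the returned tuple is
    the aggregation tree."""
    nodes = {name: vals for name, vals in objectives.items()}
    while len(nodes) > 1:
        names = list(nodes)
        a, b = min(
            ((p, q) for i, p in enumerate(names) for q in names[i + 1:]),
            key=lambda pair: conflict(nodes[pair[0]], nodes[pair[1]]),
        )
        merged = [x + y for x, y in zip(nodes.pop(a), nodes.pop(b))]
        nodes[(a, b)] = merged
    return next(iter(nodes))
```

Reading the nesting order of the result shows which objectives group harmonically first — the candidates for aggregation when reducing the problem dimension.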

    A refined multi-seasonality weighted fuzzy time series model for short term load forecasting

    No full text
    Seasonal Auto Regressive Fractionally Integrated Moving Average (SARFIMA) is a well-known model for forecasting seasonal time series that follow a long-memory process. However, to further boost forecast accuracy on such data for nonlinear problems, this study proposes a combination of Fuzzy Time Series (FTS) with SARFIMA. To build the proposed model, certain parameters need to be estimated, for which a reliable evolutionary algorithm, namely Particle Swarm Optimization (PSO), is employed. As a case study, a seasonal long-memory time series, i.e., historical short-term load consumption data, is selected. Short Term Load Forecasting (STLF) plays a key role in energy management systems (EMS) and in the decision-making process of every power supply organization. To evaluate the proposed method, experiments were designed using eight datasets of half-hourly load data from England and France for the year 2005 and four datasets of hourly load data from Malaysia for the year 2007. Although the focus of this research is STLF, six other seasonal long-memory time series from several interesting case studies are employed to better evaluate the performance of the proposed method. The results are compared with some novel FTS methods and new state-of-the-art forecasting methods. The analysis of the results indicates that the proposed method presents higher accuracy than its counterparts, representing an efficient hybrid method for load forecasting problems

    Maintenance of generation units coordinated with annual hydrothermal scheduling using a hybrid technique

    Get PDF
    This article presents a hybrid technique for solving the maintenance scheduling of generation units coordinated with medium-term hydrothermal dispatch. The solution is based on the Chu-Beasley genetic algorithm and on linear programming. It takes into account nonlinearities derived from the cost of the fuels used by the thermal plants. The output of the genetic algorithm is a proposed starting week for the maintenance plan of each generation unit that minimizes the cost of the hydrothermal dispatch. The two main contributions of this work are that it proposes a mathematical model coordinating two problems that have been solved separately in the literature, and that it applies a specialized genetic algorithm that had not yet been used to solve the coordinated problem. The test system used to validate the methodology consists of three hydroelectric plants and two thermal plants divided into 22 generation units, taking preventive maintenance into account, over a planning horizon of one year (52 weeks). This methodology combines an exact technique with a specialized genetic algorithm, which favors convergence
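    The coordination being solved can be sketched as follows, with a flat per-unit shortfall penalty standing in for the linear-programming hydrothermal dispatch and random sampling standing in for the Chu-Beasley genetic algorithm (all names and parameters here are illustrative):

```python
import random

def schedule_cost(starts, durations, capacities, demand, penalty=10.0):
    """Evaluate one maintenance schedule: unit u is offline for
    durations[u] weeks starting at starts[u]; demand not covered by
    the available capacity is priced at `penalty` per unit of energy
    (a stand-in for the inner dispatch subproblem)."""
    cost = 0.0
    for week, load in enumerate(demand):
        available = sum(
            cap for u, cap in enumerate(capacities)
            if not (starts[u] <= week < starts[u] + durations[u])
        )
        cost += penalty * max(0.0, load - available)
    return cost

def random_search(durations, capacities, demand, horizon, iters=200, seed=1):
    """Stand-in for the outer genetic algorithm: sample starting
    weeks and keep the cheapest schedule found."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        starts = [rng.randrange(horizon - d + 1) for d in durations]
        c = schedule_cost(starts, durations, capacities, demand)
        if best is None or c < best[0]:
            best = (c, starts)
    return best
```

The outer search proposes starting weeks while the inner evaluation prices the resulting dispatch — the same decomposition the article builds between the genetic algorithm and the linear program.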

    Short-term load forecasting by using a combined method of convolutional neural networks and fuzzy time series

    No full text
    We propose a combined method based on fuzzy time series (FTS) and convolutional neural networks (CNN) for short-term load forecasting (STLF). In the proposed method, multivariate time series data, which include hourly load data, an hourly temperature time series, and a fuzzified version of the load time series, are converted into multi-channel images to be fed to a deep learning CNN model with a suitable architecture. By using images created from the sequenced values of the multivariate time series, the proposed CNN model can determine and extract the relevant features implicitly and automatically, without any need for human interaction or expert knowledge. Following this strategy, the proposed method is easier to employ than some traditional STLF models, which is one of the main differences between the proposed method and some state-of-the-art STLF methodologies. Moreover, the use of fuzzy logic contributes greatly to controlling over-fitting by expressing one dimension of the time series as a fuzzy space, a spectrum, and a shadow, instead of presenting it with exact numbers. Various experiments on test datasets support the efficiency of the proposed method
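    The conversion of multivariate series into multi-channel images can be sketched as a sliding window reshaped per channel; the row-major window layout here is an illustrative assumption, not the paper's exact preprocessing:

```python
def series_to_images(channels, height, width):
    """Slice each input series (load, temperature, fuzzified load, ...)
    into height x width windows; stacking the per-series windows gives
    one multi-channel "image" per window position, i.e. the CNN input.
    Returns a list of images shaped [channel][row][col]."""
    n = height * width
    length = len(channels[0])
    images = []
    for start in range(length - n + 1):
        image = []
        for series in channels:
            window = series[start:start + n]
            image.append([window[r * width:(r + 1) * width]
                          for r in range(height)])
        images.append(image)
    return images
```

Each image would then be paired with the load value following its window as the training target, so the CNN learns the mapping from recent multivariate history to the next load.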