4 research outputs found

    A review of population-based metaheuristics for large-scale black-box global optimization: Part A

    Scalability of optimization algorithms is a major challenge in coping with the ever-growing size of optimization problems in a wide range of application areas, from high-dimensional machine learning to complex large-scale engineering problems. The field of large-scale global optimization is concerned with improving the scalability of global optimization algorithms, particularly population-based metaheuristics. Such metaheuristics have been successfully applied to continuous, discrete, and combinatorial problems ranging from several thousand dimensions to billions of decision variables. In this two-part survey, we review recent studies in the field of large-scale black-box global optimization to help researchers and practitioners gain a bird's-eye view of the field, learn about its major trends, and become familiar with the state-of-the-art algorithms. Part A of the series covers two major algorithmic approaches to large-scale global optimization: problem decomposition and memetic algorithms. Part B covers a range of other algorithmic approaches, describes a wide range of problem areas, touches upon the pitfalls and challenges of current research, and identifies several potential areas for future research.
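The problem-decomposition approach this survey covers is commonly realized as cooperative coevolution: the decision variables are split into groups, and each group is optimized against a shared context vector. The sketch below is a minimal, hypothetical illustration of that idea, with static even-sized grouping and a simple accept-if-better subcomponent search; it is not one of the surveyed algorithms, and all names (`cc_optimize`, `sphere`) are assumptions for the example.

```python
import random

def sphere(x):
    """Separable test objective: sum of squares."""
    return sum(v * v for v in x)

def cc_optimize(f, dim, group_size=5, cycles=30, trials=20, seed=0):
    """Minimal cooperative-coevolution sketch: partition the variables into
    static groups and improve each group in turn against a shared context
    vector, accepting only improving moves."""
    rng = random.Random(seed)
    context = [rng.uniform(-5, 5) for _ in range(dim)]
    groups = [list(range(i, min(i + group_size, dim)))
              for i in range(0, dim, group_size)]
    for _ in range(cycles):
        for g in groups:
            # perturb only this subcomponent; the rest of the context is fixed
            for _ in range(trials):
                trial = context[:]
                for i in g:
                    trial[i] = context[i] + rng.gauss(0, 0.5)
                if f(trial) < f(context):
                    context = trial
    return context, f(context)
```

Because only improving moves are accepted, the objective value of the context vector is monotonically non-increasing, which is what makes the decomposition safe on separable functions like the one above.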

    An estimation of distribution algorithm for solving multi-objective flowshop scheduling problems with blocking

    Production scheduling has a significant impact on the efficient use of resources, cost reduction, and the fulfilment of objectives such as customer service, timely deliveries, and demand satisfaction. In an increasingly competitive environment, organizations need tools, procedures, and strategies that allow them to stay at the forefront. In this sense, the use of metaheuristics for solving job scheduling and sequencing problems is increasing, since they have demonstrated their strength in finding fast, timely, efficient, good-quality solutions. In addition, organizations seek to meet several objectives or goals simultaneously, such as delivering on time at minimum cost. Thus, an estimation of distribution metaheuristic is proposed and developed for a flowshop scheduling environment with blocking constraints and multiple objectives. The experiments show that the algorithm performs well in terms of the solutions found, and that its performance is not affected by the number of jobs or machines considered in the problem.

    Towards a more efficient use of computational budget in large-scale black-box optimization

    Evolutionary algorithms are general-purpose optimizers that have been shown to be effective in solving a variety of challenging optimization problems. In contrast to mathematical programming models, evolutionary algorithms do not require derivative information and remain effective when the algebraic formula of the given problem is unavailable. Nevertheless, the rapid advances in science and technology have witnessed the emergence of more complex optimization problems than ever, which pose significant challenges to traditional optimization methods. The dimensionality of the search space, especially when the available computational budget is limited, is one of the main contributors to a problem's difficulty and complexity. This so-called curse of dimensionality can significantly affect the efficiency and effectiveness of optimization methods, including evolutionary algorithms. This research studies two topics related to a more efficient use of the computational budget in evolutionary algorithms when solving large-scale black-box optimization problems: the role of population initializers in saving computational resources, and computational budget allocation in cooperative coevolutionary algorithms. Consequently, this dissertation consists of two major parts, each of which relates to one of these research directions. In the first part, we review several population initialization techniques that have been used in evolutionary algorithms and categorize them from different perspectives. The contribution of each category to improving evolutionary algorithms in solving large-scale problems is measured. We also study the mutual effect of population size and initialization technique on the performance of evolutionary algorithms when dealing with large-scale problems.
Finally, assuming that uniformity of the initial population is a key contributor to saving a significant part of the computational budget, we investigate whether achieving a high level of uniformity in high-dimensional spaces is feasible given the practical restrictions on computational resources. In the second part of the thesis, we study large-scale imbalanced problems. In many real-world applications, a large problem may consist of subproblems with different degrees of difficulty and importance. In addition, the solution to each subproblem may contribute differently to the overall objective value of the final solution. When the computational budget is restricted, which is the case in many practical problems, investing the same portion of resources in optimizing each of these imbalanced subproblems is not the most efficient strategy. Therefore, we examine several ways to learn the contribution of each subproblem and then dynamically allocate the limited computational resources to each of them according to its contribution to the overall objective value of the final solution. To demonstrate the effectiveness of the proposed framework, we design a new set of 40 large-scale imbalanced problems and study the performance of several possible instances of the framework.
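A standard way to enforce uniformity in an initial population is stratified sampling. The Latin hypercube sketch below illustrates the general technique (it is a textbook method, not necessarily one of the specific initializers reviewed in the thesis): each dimension is split into `n` equal strata, and every stratum is used exactly once.

```python
import random

def latin_hypercube(n, dim, seed=0):
    """Latin hypercube sample of n points in [0, 1)^dim: per dimension,
    a random permutation assigns each point to a distinct stratum, then
    a uniform offset places the point inside its stratum."""
    rng = random.Random(seed)
    pts = [[0.0] * dim for _ in range(n)]
    for d in range(dim):
        strata = list(range(n))
        rng.shuffle(strata)
        for i in range(n):
            pts[i][d] = (strata[i] + rng.random()) / n
    return pts
```

Unlike plain uniform random sampling, this guarantees that every one-dimensional projection of the population covers all `n` strata, which is one concrete notion of the initial-population uniformity discussed above.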
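The contribution-based budget allocation idea for imbalanced subproblems can be sketched as follows. Everything below is a hypothetical simplification, not the framework proposed in the thesis: each subproblem receives a small probing budget, and the remaining evaluations are then split in proportion to the objective improvement each probe achieved.

```python
import random

def contribution_shares(subproblems, total_evals, probe=20, seed=0):
    """Hypothetical sketch of contribution-based budget allocation:
    spend `probe` evaluations on each (objective, start-point) pair with
    a simple accept-if-better search, record the improvement, and split
    the remaining budget proportionally to the observed gains."""
    rng = random.Random(seed)
    gains = []
    for f, x0 in subproblems:
        x, best = list(x0), f(x0)
        for _ in range(probe):
            cand = [v + rng.gauss(0, 0.3) for v in x]
            c = f(cand)
            if c < best:
                x, best = cand, c
        gains.append(max(f(x0) - best, 1e-12))  # floor avoids zero division
    remaining = max(total_evals - probe * len(subproblems), 0)
    total = sum(gains)
    return [int(remaining * g / total) for g in gains]
```

The intended behavior is that a subproblem whose probe yields large improvement (i.e., one that contributes heavily to the overall objective) receives a proportionally larger share of the remaining evaluations, rather than the uniform split criticized above.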