
    Performance analysis of GA and PBIL variants for real-world location-allocation problems.

    The Uncapacitated Location-Allocation Problem (ULAP) is a major optimisation problem concerning the determination of the optimal location of facilities and the allocation of demand to them. In this paper, we present two novel problem variants of non-linear ULAP motivated by a real-world problem from the telecommunication industry: the Uncapacitated Location-Allocation Resilience Problem (ULARP) and the Uncapacitated Location-Allocation Resilience Problem with Restrictions (ULARPR). Problem sizes ranging from 16 to 100 facilities and from 50 to 10000 demand points are considered. To solve the problems, we explore the components and configurations of four Genetic Algorithms [1], [2], [3] and [4] selected from the ULAP literature. We aim to understand the contribution each choice makes to GA performance, and so hope to design an optimal GA configuration for the novel problems. We also conduct comparative experiments with the Population-Based Incremental Learning (PBIL) algorithm on ULAP. We show the effectiveness of PBIL, and of a GA configured with random and heuristic initialisation, tournament and fine-grained tournament selection, uniform crossover, and bit-flip mutation, in solving the proposed problems.
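    As a rough illustration of the PBIL side of this comparison, the sketch below shows the standard PBIL update rule on a bit-string encoding such as a facility open/closed vector. The learning rate, population size, and the toy fitness function are illustrative assumptions, not the settings evaluated in the paper; a real ULAP/ULARP fitness would score allocation cost and resilience rather than simply counting open facilities.

```python
import random

def pbil(fitness, n_bits, pop_size=50, generations=200, lr=0.1):
    """Minimal PBIL sketch: evolve a probability vector towards the best sampled bit-string."""
    prob = [0.5] * n_bits  # start from an uninformative probability vector
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        # Sample a population of candidate facility-selection vectors
        pop = [[1 if random.random() < p else 0 for p in prob] for _ in range(pop_size)]
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > best_fit:
            best, best_fit = gen_best, fitness(gen_best)
        # Shift each probability towards the generation's best solution
        prob = [(1 - lr) * p + lr * b for p, b in zip(prob, gen_best)]
    return best, best_fit

# Toy usage: maximise the number of open facilities (a OneMax stand-in for a real ULAP fitness)
solution, value = pbil(fitness=sum, n_bits=16)
```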

    Generation and optimisation of real-world static and dynamic location-allocation problems with application to the telecommunications industry.

    The location-allocation (LA) problem concerns the location of facilities and the allocation of demand, so as to minimise or maximise a particular function such as cost, profit or a measure of distance. Many formulations of LA problems have been presented in the literature to capture and study the unique aspects of real-world problems. However, some real-world aspects, such as resilience, are still lacking in the literature. Resilience ensures uninterrupted supply of demand and enhances the quality of service. Due to population shifts and changes in market size and in the economic and labour markets - which often cause demand to be stochastic - a reasonable LA problem formulation should consider some aspect of future uncertainty. Almost all LA problem formulations in the literature that capture some aspect of future uncertainty fall in the domain of dynamic optimisation problems, where new facilities are located every time the environment changes. However, considering the substantial cost associated with locating a new facility, it becomes infeasible to locate facilities each time the environment changes. In this study, we propose and investigate variations of LA problem formulations. Firstly, we develop and study new LA formulations, which extend the location of facilities and the allocation of demand to add a layer of resilience. We apply the population-based incremental learning algorithm for the first time in the literature to solve the novel LA formulations. Secondly, we propose and study a new dynamic formulation of the LA problem where facilities are opened once at the start of a defined period and are expected to remain satisfactory in servicing customers' demands irrespective of changes in customer distribution. The problem is based on the idea that customers will change locations over a defined period and that these changes have to be taken into account when establishing facilities to service the changing customer distributions. Thirdly, we employ a simulation-based optimisation approach to tackle the new dynamic formulation. Owing to the high computational cost associated with simulation-based optimisation, we investigate the concept of Racing, an approach used in model selection, to reduce this cost by employing the minimum number of simulations needed for solution selection.
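    The racing idea mentioned above can be pictured as follows: each surviving candidate receives one more stochastic simulation per round, and candidates whose confidence interval is already clearly worse than the best are dropped early. The confidence-bound rule, the simulate(candidate) function and all parameters below are assumptions made for illustration; the thesis's actual statistical test may differ.

```python
import statistics

def race(candidates, simulate, max_rounds=100, min_rounds=5, z=2.0):
    """Schematic racing loop: keep simulating surviving candidates and drop those whose
    confidence interval is clearly worse than the current best (lower cost is better).
    Candidates must be hashable (e.g. tuples)."""
    results = {c: [] for c in candidates}
    survivors = list(candidates)
    for r in range(max_rounds):
        for c in survivors:
            results[c].append(simulate(c))  # one more stochastic evaluation per survivor
        if r + 1 < min_rounds or len(survivors) == 1:
            continue
        bounds = {}
        for c in survivors:
            xs = results[c]
            margin = z * statistics.stdev(xs) / len(xs) ** 0.5
            bounds[c] = (statistics.mean(xs) - margin, statistics.mean(xs) + margin)
        best_upper = min(ub for (_, ub) in bounds.values())
        # Eliminate candidates whose optimistic (lower) bound already exceeds the best upper bound
        survivors = [c for c in survivors if bounds[c][0] <= best_upper]
    return min(survivors, key=lambda c: statistics.mean(results[c]))
```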

    Benchmarking a wide spectrum of metaheuristic techniques for the radio network design problem

    The radio network design (RND) problem is an NP-hard optimization problem which consists of maximizing the coverage of a given area while minimizing the base station deployment. Solving RND problems efficiently is relevant to many fields of application and has a direct impact in the engineering, telecommunication, scientific, and industrial areas. Numerous works can be found in the literature dealing with the RND problem, although they all suffer from the same shortfall: a non-comparable efficiency. Therefore, the aim of this paper is twofold: first, to offer a reliable RND comparison base reference covering a wide algorithmic spectrum, and, second, to offer a comprehensible insight into accurate comparisons of the efficiency, reliability, and swiftness of the different techniques applied to solve the RND problem. In order to achieve the first aim we propose a canonical RND problem formulation driven by two main directives: technology independence and a normalized comparison criterion. Following this, we include an exhaustive behavior comparison between 14 different techniques. Finally, this paper indicates algorithmic trends and different patterns that can be observed through this analysis.
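    For context, the sketch below shows a fitness form widely used for RND in the literature: squared coverage rate divided by the number of deployed transmitters. It is an assumed example of the kind of normalized criterion discussed above, not necessarily the paper's exact formulation; the circular coverage model and radius parameter are likewise illustrative.

```python
def coverage(sites, demand_points, radius):
    """Demand points within `radius` of at least one deployed base station (circular cells assumed)."""
    return {p for p in demand_points
            if any((p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2 <= radius ** 2 for s in sites)}

def rnd_fitness(sites, demand_points, radius):
    """A common RND fitness from the literature: reward coverage quadratically while
    penalising the number of transmitters (an assumed example, not this paper's criterion)."""
    if not sites:
        return 0.0
    cover_rate = len(coverage(sites, demand_points, radius)) / len(demand_points)
    return 100.0 * cover_rate ** 2 / len(sites)

# Toy usage: two candidate sites covering a small grid of demand points
grid = [(x, y) for x in range(10) for y in range(10)]
print(rnd_fitness(sites=[(2, 2), (7, 7)], demand_points=grid, radius=4.0))
```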

    Reducing the Computational Effort Associated with Evolutionary Optimisation in Single Component Design

    The dissertation presents innovative Evolutionary Search (ES) methods for reducing the computational expense associated with the optimisation of high-dimensional design spaces. The objective is to develop a semi-automated system which successfully negotiates complex search spaces. Such a system would be highly desirable to a human designer by providing optimised design solutions in realistic time. The design domain represents a real-world industrial problem concerning the optimal material distribution on the underside of a flat roof tile with varying load and support conditions. The designs utilise a large number of design variables (circa 400). Because detailed evaluation relies on computationally expensive analyses such as finite element analysis, the number of calls to the evaluation model must be kept to a minimum in order to produce "good" design solutions within an acceptable period of time. The objective therefore is to minimise the number of calls required to the analysis tool whilst also achieving an optimal design solution. To minimise the number of model evaluations for detailed shape optimisation, several evolutionary algorithms are investigated. The better-performing algorithms are combined with multi-level search techniques which have been developed to further reduce the number of evaluations and improve the quality of design solutions. Multi-level techniques utilise a number of levels of design representation: the solutions of the coarse representations are injected into the more detailed designs for fine-grained refinement. The techniques developed include Dynamic Shape Refinement (DSR), the Modified Injection Island Genetic Algorithm (MiiGA) and the Dynamic Injection Island Genetic Algorithm (DiiGA). The multi-level techniques are able to handle large numbers of design variables (i.e. > 100). Based on the performance characteristics of the individual algorithms and multi-level search techniques, distributed search techniques are proposed. These techniques utilise different evolutionary strategies in a multi-level environment and were developed as a way of further reducing computational expense and improving design solutions. The results indicate considerable potential for a significant reduction in the number of evaluation calls during evolutionary search. In general this allows a more efficient integration with computationally intensive analytical techniques during detailed design and contributes significantly to those preliminary stages of the design process where a greater degree of analysis is required to validate results from more simplistic preliminary design models.
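    The injection step at the heart of the multi-level techniques above can be pictured with the following sketch, which upsamples a coarse binary material-distribution grid so it can seed a finer-grained population. The 2-D grid encoding and the simple block replication are assumptions made for illustration; the DSR, MiiGA and DiiGA operators in the dissertation are more elaborate.

```python
def inject_coarse_into_fine(coarse, factor):
    """Upsample a coarse binary material-distribution grid so it can seed a finer
    representation: each coarse cell becomes a factor x factor block of fine cells."""
    return [[coarse[i // factor][j // factor]
             for j in range(len(coarse[0]) * factor)]
            for i in range(len(coarse) * factor)]

# A 2x2 coarse layout expanded to 4x4 to seed fine-grained evolutionary refinement
fine_seed = inject_coarse_into_fine([[1, 0], [0, 1]], factor=2)
```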

    Innovative hybrid MOEA/AD variants for solving multi-objective combinatorial optimization problems

    Advisor: Aurora Trinidad Ramirez Pozo. Co-advisor: Roberto Santana. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defence: Curitiba, 16/12/2016. Includes references: f. 103-116. Abstract: Several real-world problems can be stated as combinatorial optimization problems. Very often, they are characterized by a large number of variables and the presence of multiple conflicting objectives to be optimized at the same time. Such problems are usually hard to solve optimally, and their solution has been considered a challenge for a long time. Metaheuristic algorithms aim at finding an acceptable approximation to the optimal solution in a reasonable computational time. Research on metaheuristics remains an attractive area and receives growing attention. One of the trends in this scenario is hybrid approaches, in which different methods and concepts are combined with the aim of proposing more efficient approaches. In this thesis, we propose hybrid metaheuristic algorithms for solving multi-objective combinatorial optimization problems. Our proposals are based on (i) the multi-objective evolutionary algorithm based on decomposition (the MOEA/D framework), (ii) the bio-inspired metaheuristic ant colony optimization, and (iii) the probabilistic models from estimation of distribution algorithms. Our algorithms are considered MOEA/D variants. In our MOEA/D variants, besides the traditional genetic operators, we can instantiate different models as the variation (reproduction) step. Moreover, we include some design modifications in the frameworks to control convergence and diversity during the search. We address some important problems from the literature, e.g., the multi-objective unconstrained binary quadratic programming problem, the multi-objective permutation flowshop scheduling problem, and problems characterized by deception. We show that the proposed frameworks are able to solve these problems efficiently, outperforming state-of-the-art approaches in most of the cases considered. We show that the MOEA/D guidelines, hybridized with other metaheuristic components and concepts, constitute a powerful strategy for solving multi-objective combinatorial optimization problems. Keywords: metaheuristics, multi-objective optimization, combinatorial problems, MOEA/D, ant colony optimization, estimation of distribution algorithms, unconstrained binary quadratic programming, permutation flowshop scheduling problem, hybrid approaches.
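    All of the proposed variants build on the decomposition step of the MOEA/D framework, which the sketch below illustrates using the standard weighted Tchebycheff scalarisation (minimisation assumed). The ACO- and EDA-based reproduction models proposed in the thesis plug in around this decomposition; the numeric values shown are purely illustrative.

```python
def tchebycheff(objectives, weights, ideal):
    """Weighted Tchebycheff scalarisation used by MOEA/D to turn a multi-objective
    problem into one scalar subproblem per weight vector (minimisation assumed)."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

# Each weight vector defines one subproblem; neighbouring subproblems share offspring
subproblem_value = tchebycheff(objectives=[3.2, 1.5], weights=[0.7, 0.3], ideal=[0.0, 0.0])
```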

    Optimal coalition structure generation on large-scale renewable energy smart grids

    Most renewable energy sources are dependent on unpredictable weather conditions, which have considerable variation over space and time. The intermittent nature of this production means that any renewable energy prosumer may sometimes produce an amount of energy in excess of its local consumption needs and sometimes in deficiency. This thesis is concerned with developing methods that can improve the effectiveness and widespread adoption of renewable energy usage. In order for renewable energy to be more economically viable, there needs to be a scheme for sharing energy among the prosumers so that those with excess energy can give their excess amounts to those in energy deficiency. That is the task addressed in this thesis. The way to deal with this problem is to set up an optimal arrangement of local coalitions of renewable energy prosumers such that energy is shared within the coalitions in an optimally efficient manner. As is formally explained early on in this work, finding such an optimal coalition arrangement is an example of a Coalition Structure Generation (CSG) problem. The most straightforward way to find an optimal solution for a given pool of prosumer agents in these circumstances is to examine every possible coalition partition (coalition structure) and evaluate its comparative utility. This is known as "exhaustive search" (ES) and can be computationally expensive. As has been shown earlier, the number of such evaluations in ES even for a pool of twenty agents can be in the tens of trillions. The problem for us in the renewable energy domain is that, because of the constantly changing weather conditions among the scattered prosumers, the CSG optimization calculation must be carried out every hour of the day. This means that the ES approach to the CSG optimization calculation for a reasonable number of prosumer agents is computationally intractable. So a more computationally feasible stochastic optimization method must be used, which searches through the coalition structure search space in order to find a reasonably good solution even if it is not the global optimum. To this end, a number of stochastic optimization search methods have been investigated in this thesis, including some of our own novel extensions to existing approaches. These search methods have been examined under two different connection arrangements with the outside world: (1) when the local prosumer networks have a connection to a public utility power grid and can therefore buy needed energy (at a high price) from the grid and sell excess energy (at a low price) to the grid, and (2) when the local prosumer networks are isolated from any public utility, which is referred to as "island mode". The overall goal of these investigations has been to find an optimization approach that arrives at a near-optimal solution (near the global optimum of the given search space) and is computationally efficient (i.e. it does not require a vast amount of computer memory or running time). Based on these empirical examinations, which have employed realistic parameters drawn from existing consumption and renewable energy data sets, the following conclusions concerning renewable energy can be drawn from this study:
    • It is feasible to employ ordinary computer resources to obtain, on an hourly basis, near-optimal energy-sharing coalition structures that will lead to the more effective and economical use of renewable energy.
    • This energy-sharing approach will contribute to more rapid adoption and proliferation of existing renewable energy equipment and infrastructure.
    The principal contributions towards these ends that this thesis work has made are as follows:
    • A modelling framework has been set up that can be used for extensive empirical determinations of near-optimal energy-sharing coalition structures.
    • A detailed empirical study has been carried out that has examined the relative capabilities in this context of various optimal coalition structure search methods, including genetic algorithms (GA), dynamic programming (DP), particle swarm optimization (PSO), population-based incremental learning (PBIL), and several variants of PBIL.
    • The novel extensions to basic PBIL optimization have included Top-k Merit Weighting PBIL (PBIL-MW), Set-ID Encoding Schemes, and Hierarchical PBIL-MW.
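    To make the intractability of exhaustive search concrete, the sketch below enumerates every coalition structure (set partition) of a pool of agents and picks the best under a hypothetical value() utility. The number of structures grows as the Bell numbers, which is why the thesis turns to stochastic methods such as PBIL variants for realistic prosumer counts.

```python
def partitions(agents):
    """Enumerate every coalition structure (set partition) of a list of agents."""
    if not agents:
        yield []
        return
    first, rest = agents[0], agents[1:]
    for smaller in partitions(rest):
        # Put `first` into each existing coalition in turn...
        for i, coalition in enumerate(smaller):
            yield smaller[:i] + [[first] + coalition] + smaller[i + 1:]
        # ...or open a new singleton coalition for it
        yield [[first]] + smaller

def exhaustive_search(agents, value):
    """ES baseline: evaluate every coalition structure with a hypothetical utility
    function `value` and keep the best (intractable beyond roughly 20 agents)."""
    return max(partitions(agents), key=value)

# The count follows the Bell numbers: 5 agents -> 52 structures, 20 agents -> ~5.2e13
print(sum(1 for _ in partitions(list(range(5)))))  # 52
```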

    TEDA: A Targeted Estimation of Distribution Algorithm

    This thesis discusses the development and performance of a novel evolutionary algorithm, the Targeted Estimation of Distribution Algorithm (TEDA). TEDA takes the concept of targeting, an idea that has previously been shown to be effective as part of a Genetic Algorithm (GA) called Fitness Directed Crossover (FDC), and introduces it into a novel hybrid algorithm that transitions from a GA to an Estimation of Distribution Algorithm (EDA). Targeting is a process for solving optimisation problems where there is a concept of control points, genes that can be said to be active, and where the total number of control points found within a solution is as important as where they are located. When generating a new solution, an algorithm that uses targeting must first choose the number of control points to set in the new solution and then choose which to set. The hybrid approach is designed to take advantage of the ability of EDAs to exploit patterns within the population to effectively locate the global optimum while avoiding the tendency of EDAs to converge prematurely. This is achieved by initially using a GA to effectively explore the search space before transitioning into an EDA as the population converges on the region of the global optimum. As targeting places an extra restriction on the solutions produced by specifying their size, combining it with the hybrid approach allows TEDA to produce solutions that are of an optimal size and of a higher quality than would be found using a GA alone, without risking a loss of diversity. TEDA is tested on three different problem domains: optimal control of cancer chemotherapy, network routing, and Feature Subset Selection (FSS). Of these problems, TEDA showed a consistent advantage over standard EAs on the routing problem and demonstrated that it is able to find good solutions faster than untargeted EAs and non-evolutionary approaches on the FSS problem. It did not demonstrate any advantage over other approaches when applied to chemotherapy. The FSS domain demonstrated that in large and noisy problems TEDA's targeting-derived ability to reduce the size of the search space significantly increased the speed with which good solutions could be found. The routing domain demonstrated that, where the ideal number of control points is deceptive, both targeting and the exploitative capabilities of an EDA are needed, making TEDA a more effective approach than both untargeted approaches and FDC. Additionally, in none of the problems was TEDA seen to perform significantly worse than any alternative approach.
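    The two-stage targeting idea described above (first fix how many control points the child will have, then decide which genes to activate) can be sketched generically as below. The marginal-probability guidance and ranking rule are illustrative assumptions: TEDA's actual operators blend parental information with EDA-style marginals, so this is a schematic sketch rather than the algorithm itself.

```python
import random

def targeted_sample(n_genes, target_size, marginals):
    """Generic targeting step: the number of control points is fixed first (target_size),
    then genes are activated, biased by per-gene activation probabilities (marginals)."""
    # Rank genes by estimated probability of being active, breaking ties randomly
    ranked = sorted(range(n_genes), key=lambda g: (marginals[g], random.random()), reverse=True)
    active = set(ranked[:target_size])  # exactly target_size control points
    return [1 if g in active else 0 for g in range(n_genes)]

# e.g. a child constrained to 3 active genes, guided by an EDA-style probability vector
child = targeted_sample(n_genes=8, target_size=3,
                        marginals=[0.9, 0.1, 0.8, 0.2, 0.7, 0.1, 0.6, 0.3])
```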

    Advances in Evolutionary Algorithms

    With the recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of the book is to present recent improvements, innovative ideas and concepts from a part of the huge field of evolutionary algorithms.