    A biased random-key genetic algorithm with forward-backward improvement for the resource constrained project scheduling problem

    This paper presents a biased random-key genetic algorithm for the resource-constrained project scheduling problem. The chromosome representation of the problem is based on random keys. Active schedules are constructed using a priority-rule heuristic in which the priorities of the activities are defined by the genetic algorithm. A forward-backward improvement procedure is applied to all solutions. The chromosomes supplied by the genetic algorithm are adjusted to reflect the solutions obtained by the improvement procedure. The heuristic is tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
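The forward-backward improvement step can be pictured as two re-scheduling passes over an existing schedule. The sketch below is a minimal single-resource illustration of that idea, not the authors' implementation: the data layout, function names, and the choice of keeping the better of the old and new schedules are assumptions.

```python
# A minimal, single-resource sketch of forward-backward improvement: re-schedule the
# project backward (as late as possible) and then forward (as early as possible),
# keeping the better of the old and new schedules. Activity durations are assumed > 0.

def serial_sgs(order, dur, preds, demand, capacity):
    """Serial schedule-generation scheme: place activities as early as possible,
    in an order that lists every predecessor before its successors."""
    horizon = sum(dur[i] for i in order) + 1        # safe upper bound on the makespan
    start, free = {}, [capacity] * horizon
    for i in order:
        t = max((start[p] + dur[p] for p in preds[i]), default=0)
        while any(free[u] < demand[i] for u in range(t, t + dur[i])):
            t += 1                                  # shift right until resources suffice
        for u in range(t, t + dur[i]):
            free[u] -= demand[i]
        start[i] = t
    return start

def makespan(start, dur):
    return max(start[i] + dur[i] for i in start)

def forward_backward(start, dur, preds, succs, demand, capacity):
    """One backward pass followed by one forward pass over an existing schedule."""
    # Backward pass: schedule the reversed project (successors act as predecessors),
    # taking activities in non-increasing order of their current finish times.
    order = sorted(start, key=lambda i: start[i] + dur[i], reverse=True)
    back = serial_sgs(order, dur, succs, demand, capacity)
    # Forward pass: re-schedule the original project, taking activities in the order
    # in which they would start after the backward pass.
    order = sorted(start, key=lambda i: back[i] + dur[i], reverse=True)
    fwd = serial_sgs(order, dur, preds, demand, capacity)
    return fwd if makespan(fwd, dur) <= makespan(start, dur) else start
```

In this sketch, the abstract's adjustment of chromosomes would correspond to re-encoding the improved start-time order back into random keys before the next generation.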

    A random key based genetic algorithm for the resource constrained project scheduling problem

    This paper presents a genetic algorithm for the Resource-Constrained Project Scheduling Problem (RCPSP). The chromosome representation of the problem is based on random keys. The schedule is constructed using a heuristic priority rule in which the priorities of the activities are defined by the genetic algorithm. The heuristic generates parameterized active schedules. The approach was tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
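As a minimal illustration of how random keys can drive such a priority rule (the names and data layout are assumptions, not the paper's code), a chromosome can be decoded into a precedence-feasible activity list by always picking, among the currently eligible activities, the one with the largest key; a schedule-generation scheme such as the serial one sketched above then turns this list into an active schedule.

```python
def keys_to_activity_list(keys, preds):
    """keys: {activity: random key in [0, 1]}; preds: {activity: list of predecessors}.
    Among the activities whose predecessors are already listed, always pick the one
    with the largest key, i.e. the highest priority assigned by the genetic algorithm."""
    listed, order = set(), []
    while len(order) < len(keys):
        eligible = [i for i in keys
                    if i not in listed and all(p in listed for p in preds[i])]
        nxt = max(eligible, key=keys.get)
        order.append(nxt)
        listed.add(nxt)
    return order

# Example: activity 3 depends on 1 and 2; the keys make activity 2 more urgent than 1.
print(keys_to_activity_list({1: 0.3, 2: 0.8, 3: 0.5}, {1: [], 2: [], 3: [1, 2]}))  # [2, 1, 3]
```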

    A biased random-key genetic algorithm for the capacitated minimum spanning tree problem

    This paper focuses on the capacitated minimum spanning tree (CMST) problem. Given a central processor and a set of remote terminals with specified demands for traffic that must flow between the central processor and the terminals, the goal is to design a minimum-cost network to carry this demand. Potential links exist between any pair of terminals and between the central processor and the terminals. Each potential link can be included in the design at a given cost. The CMST problem is to design a minimum-cost network connecting the terminals with the central processor so that the flow on any arc of the network is at most Q. A biased random-key genetic algorithm (BRKGA) is a metaheuristic for combinatorial optimization which evolves a population of random-key vectors that encode solutions to the combinatorial optimization problem. This paper explores several solution encodings as well as different strategies for some steps of the algorithm and finally proposes a BRKGA heuristic for the CMST problem. Computational experiments are presented showing the effectiveness of the approach: seven new best-known solutions are presented for the set of benchmark instances used in the experiments.
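As a rough sketch of the evolutionary step common to BRKGA heuristics (the parameter values and the decoder interface are illustrative assumptions, not the ones used in this paper), each generation copies the elite chromosomes, adds freshly generated mutants, and fills the rest of the population with offspring of biased crossovers between an elite and a non-elite parent:

```python
import random

def next_generation(population, fitness, n_elite, n_mutants, rho=0.7):
    """population: list of random-key vectors; fitness(vector) -> cost (lower is better);
    rho: probability of inheriting each key from the elite parent."""
    n_genes = len(population[0])
    ranked = sorted(population, key=fitness)
    elite, non_elite = ranked[:n_elite], ranked[n_elite:]
    offspring = []
    while len(offspring) < len(population) - n_elite - n_mutants:
        e, o = random.choice(elite), random.choice(non_elite)
        # Biased (parameterized uniform) crossover favouring the elite parent.
        offspring.append([e[g] if random.random() < rho else o[g] for g in range(n_genes)])
    # Mutants: brand-new random-key vectors that keep the search diversified.
    mutants = [[random.random() for _ in range(n_genes)] for _ in range(n_mutants)]
    return elite + offspring + mutants
```

The problem-specific part, which this paper explores for the CMST, lies entirely in the decoder that turns each random-key vector into a capacitated tree and in the cost that decoder reports as fitness.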

    Link weight optimization models for link failures in Internet Protocol networks

    As Internet traffic grows while generating little additional revenue for service providers, maintaining the same service level agreements (SLAs) with limited capital expenditure (CAPEX) is challenging. Backbone traffic at major Internet service providers grows exponentially while revenue grows only logarithmically. Under these circumstances, both CAPEX reduction and an improvement of infrastructure utilization efficiency are needed. Link failures are common in Internet Protocol (IP) backbone networks and are an impediment to meeting the required quality of service (QoS). After a failure occurs, the affected traffic is rerouted to adjacent links. The resulting increase in network congestion reduces the amount of additional traffic that can be admitted and sometimes increases the packet drop rate. In this thesis, network congestion refers to the highest link utilization over all links in the network. An increase in network congestion may disrupt services with critical SLAs as the allowable traffic becomes restricted and the packet drop rate increases. Therefore, from a network operator's point of view, keeping congestion manageable even under failure is desirable.

    A possible approach to dealing with increased congestion is to augment link capacity until the manageable congestion threshold is met. However, since CAPEX reduction is required, the additional capacity must be minimized. In IP networks, where OSPF is widely used as the routing protocol, traffic paths are determined by link weights that are configured in advance. Because the link weights decide the paths, they determine which links become congested and hence the network congestion. Link weights can therefore be optimized to minimize the additional capacity needed under the worst-case failure, i.e., the single-link failure that generates the highest congestion in the network. In the basic model of link weight optimization, a preventive start-time optimization (PSO) scheme was presented that determines a link weight set minimizing the worst congestion under any single-link failure. Unfortunately, when there is no link failure, that link weight set may lead to a congestion higher than the manageable level. This penalty is carried over and becomes a burden, especially in networks with few failures. The first part of this thesis proposes a penalty-aware (PA) model that determines a link weight set reducing this penalty while also reducing the worst congestion, by considering both failure and non-failure scenarios. The PA model includes two simple and effective schemes: preventive start-time optimization without penalty (PSO-NP) and strengthened preventive start-time optimization (S-PSO). PSO-NP suppresses the penalty in the no-failure case while reducing the worst congestion under failure; S-PSO minimizes the worst congestion under failure and then tries to minimize the penalty relative to PSO in the no-failure case. Simulation results show that, in several networks, PSO-NP and S-PSO achieve substantial penalty reductions while yielding a worst-case congestion close to that of PSO. Nevertheless, PSO-NP and S-PSO do not guarantee a simultaneous improvement of both the penalty and the worst congestion, because they rely on fixed optimization conditions that restrict the emergence of better solutions.

    Relaxing these fixed conditions may yield sub-optimal link weight sets that reduce the worst congestion under failure to nearly that of PSO while keeping a controlled penalty in the no-failure case. To determine such sets, the penalty-aware model is extended with a scheme in which the network operator sets a manageable penalty and the link weight set that most reduces the worst congestion while respecting that penalty is found. This enables network operators to choose link weight sets more flexibly according to their requirements under failure and non-failure scenarios. Since setting the penalty to zero gives the same results as PSO-NP, and setting no penalty condition gives S-PSO, this scheme covers both; it is therefore called general preventive start-time optimization (GPSO). Simulation results show that GPSO determines link weight sets whose worst-case congestion reduction is equivalent to that of PSO, with a reduced penalty in the no-failure case. GPSO is effective in finding a link weight set that reduces congestion in both failure and non-failure cases; however, because it trades congestion off against the penalty, it does not guarantee the manageable congestion.

    The second part of this thesis proposes a link-duplication (LD) model that aims to suppress link failures in the first place so that the manageable congestion is always met. To this end, it considers the duplication, or reinforcement, of links, which is widely used to make networks reliable. Link duplication provides fast recovery, since merely switching from the failed link to its backup hides the failure from the upper layers. Due to capital expenditure constraints, however, not every link can be duplicated, so priority must be given to selected links. As mentioned above, traffic routes are determined by link weights configured in advance; choosing an appropriate set of link weights may therefore reduce the number of links that actually need to be duplicated in order to keep a manageable congestion under any single-link failure scenario. PSO also identifies the link whose failure creates the worst congestion. Since duplicating that link allows it to be treated as failure-free, PSO can be used to find the smallest number of links to protect so as to guarantee a manageable congestion under any single-link failure. The LD model considers multiple protection scenarios before optimizing link weights to reduce the overall number of protected links, subject to the constraint of keeping the congestion below the manageable threshold. Simulation results show that the LD model delivers a link weight set that requires few link protections to keep the manageable congestion under any single-link failure scenario, at the cost of a computation time on the order of L times that of PSO, where L is the number of links in the network.

    Since the LD model uses additional resources, a fair comparison with the PA model requires considering additional capacity in the PA model as well. The third part of this thesis therefore incorporates additional capacity into the PA model through a mathematical formulation that determines the minimum additional capacity needed to maintain the manageable congestion under any single-link failure scenario. The LD model is then compared with this extended PA model. Evaluation results show that the performance difference between the two models, in terms of required additional capacity, depends on the network characteristics. Latency and continuity requirements of the traffic, as well as geographical restrictions on services, should be taken into consideration when deciding which model to use.
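As a minimal sketch of the quantity these schemes optimize (topology, demand representation, and function names are illustrative assumptions, and OSPF equal-cost multipath splitting is ignored), the congestion of a given weight set can be evaluated for the no-failure case and for every single-link failure; the maximum over failures is the worst-case congestion that PSO minimizes, and the no-failure value is the source of the penalty discussed above.

```python
import networkx as nx

def congestion(graph, weights, demands):
    """Highest link utilization when every demand follows its shortest path under the
    given link weights. graph: nx.DiGraph with a 'capacity' attribute on each edge;
    weights: {(u, v): weight}; demands: {(src, dst): traffic volume}."""
    load = {e: 0.0 for e in graph.edges}
    for (u, v), w in weights.items():
        if graph.has_edge(u, v):
            graph[u][v]["weight"] = w
    for (src, dst), volume in demands.items():
        path = nx.shortest_path(graph, src, dst, weight="weight")
        for e in zip(path, path[1:]):
            load[e] += volume
    return max(load[e] / graph.edges[e]["capacity"] for e in graph.edges)

def worst_case_congestion(graph, weights, demands):
    """Maximum congestion over every single-link failure scenario; the topology is
    assumed to stay connected after any single-link failure."""
    worst = 0.0
    for failed in list(graph.edges):
        reduced = graph.copy()
        reduced.remove_edge(*failed)
        worst = max(worst, congestion(reduced, weights, demands))
    return worst
```

A weight-optimization scheme in the spirit of PSO would search over weight sets to minimize worst_case_congestion, while the penalty-aware variants also track congestion on the intact graph.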

    Optimization of electric power distribution network configurations with distributed energy sources

    Advisor: Christiano Lyra Filho. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Abstract: An attractive way to reduce losses in electric power distribution networks is to address the network reconfiguration problem, which should give a topology for the primary distribution network that minimizes the total losses due to the electrical resistances in the lines and complementary equipment (technical losses). Distributed energy resources and additional innovations associated with "smart grids" enhance the benefits of finding better network topologies. On the other hand, the integration of renewable energy sources with variable, random outputs requires expanding the perspective used to model the network reconfiguration problem and shaping appropriate solution techniques. These issues are the object of this work. The main new features of the problem are first explored with a small model network designed to highlight the consequences of random generation sources. The work then proposes a formulation for the problem that explicitly considers random energy sources. A state-of-the-art genetic algorithm built on the biased random-key genetic algorithm (BRKGA) framework is developed to address this hard combinatorial optimization problem. Case studies with benchmark networks put the proposed methodology into perspective. The results show that random energy inputs should be explicitly modeled in contemporary approaches to the network reconfiguration problem. The work provides the grounds for addressing this new network reconfiguration problem and points to additional research paths in the area.
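As an illustration of how a random-key chromosome could encode a radial operating topology (this decoding and the names below are assumptions made for the sketch, not necessarily the encoding adopted in the dissertation), the keys can serve as artificial weights on the switchable lines, with a spanning tree kept closed:

```python
def decode_radial_configuration(keys, nodes, lines):
    """keys: {line: key in [0, 1]}; lines: iterable of (u, v) pairs.
    Kruskal's algorithm over the key values returns a spanning tree, i.e. a radial topology."""
    parent = {n: n for n in nodes}
    def find(n):                      # union-find with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    closed = []
    for (u, v) in sorted(lines, key=lambda line: keys[line]):
        ru, rv = find(u), find(v)
        if ru != rv:                  # closing this switch keeps the network radial
            parent[ru] = rv
            closed.append((u, v))
    return closed                     # lines operated closed; the remaining lines stay open
```

Under random injections, the fitness of a configuration would then be an expected technical loss, which could be estimated, for instance, by averaging a load-flow loss calculation (supplied by the user, assumed here) over sampled injection scenarios.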