54 research outputs found

    A tabu search heuristic based on k-diamonds for the weighted feedback vertex set problem

    No full text
    Given an undirected and vertex weighted graph G = (V,E,w), the Weighted Feedback Vertex Problem (WFVP) consists of finding a subset F ⊆ V of vertices of minimum weight such that each cycle in G contains at least one vertex in F. The WFVP on general graphs is known to be NP-hard and to be polynomially solvable on some special classes of graphs (e.g., interval graphs, co-comparability graphs, diamond graphs). In this paper we introduce an extension of diamond graphs, namely the k-diamond graphs, and give a dynamic programming algorithm to solve the WFVP in linear time on this class of graphs. Besides solving an open question, this algorithm allows an efficient exploration of a neighborhood structure that can be defined by using such a class of graphs. We used this neighborhood structure inside our Iterated Tabu Search heuristic. Our extensive experimental results show the effectiveness of this heuristic in improving the solutions provided by a 2-approximation algorithm for the WFVP on general graphs.
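
    The following is a minimal Python sketch (not the paper's algorithm) of the feasibility condition stated above: a set F is a feedback vertex set exactly when deleting F from G leaves no cycle. The small graph, the weights, and the candidate set F are illustrative assumptions.

```python
# Minimal sketch: union-find cycle check on the graph induced by V \ F.
def is_feedback_vertex_set(n, edges, F):
    F = set(F)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in F or v in F:
            continue                        # edge disappears together with F
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # this edge closes a cycle outside F
        parent[ru] = rv
    return True

# Illustrative instance: a 4-cycle with a chord; vertex 0 lies on every cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
w = [2.0, 1.0, 3.0, 1.5]                    # hypothetical vertex weights
F = [0]
print(is_feedback_vertex_set(4, edges, F), sum(w[v] for v in F))  # True 2.0
```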

    Optimising airline maintenance scheduling decisions

    Get PDF
    Airline maintenance scheduling (AMS) studies how plans or schedules are constructed to ensure that a fleet is efficiently maintained and that airline operational demands are met. Additionally, such schedules must take into consideration the different regulations airlines are subject to, while minimising maintenance costs. In this thesis, we study different formulations, solution methods, and modelling considerations for the AMS and related problems, and propose two main contributions. First, we present a new type of multi-objective mixed integer linear programming formulation which challenges traditional time discretisation. Employing the concept of time intervals, we efficiently model the airline maintenance scheduling problem with tail assignment considerations. With a focus on workshop resource allocation and individual aircraft flight operations, and the use of a custom iterative algorithm, we solve large, long-term, real-world instances (16,000 flights, 529 aircraft, 8 maintenance workshops) in reasonable computational time. Moreover, we provide evidence to suggest that our framework provides near-optimal solutions and that inter-airline cooperation is beneficial for workshops. Second, we propose a new hybrid solution procedure to solve the aircraft recovery problem. Here, we study how to re-schedule flights and re-assign aircraft to them in order to resume airline operations after an unforeseen disruption, while taking operational restrictions into account. Specifically, restrictions on aircraft, maintenance, crew duty, and passenger delay are accounted for. The flexibility of the approach allows further operational restrictions to be easily introduced. The hybrid solution procedure combines column generation with learning-based hyperheuristics, where the latter adaptively select exact or metaheuristic algorithms to generate columns. The five algorithms implemented, two of which we developed, were collected and released as a Python package (Torres Sanchez, 2020). Findings suggest that the framework produces fast and insightful recovery solutions.
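
    As an illustration of the column generation / hyperheuristic coupling described above, here is a minimal Python sketch; the stand-in pricing routines, the epsilon-greedy selection rule, and all parameter values are assumptions for illustration, not the thesis implementation.

```python
# Minimal sketch: a learning-based hyperheuristic picks which pricing algorithm
# generates the next column, rewarding choices by the reduced cost they find.
import random

def exact_pricing(duals):          # placeholder for an exact pricing algorithm
    return random.uniform(-2, 1)   # pretend best reduced cost found

def metaheuristic_pricing(duals):  # placeholder for a metaheuristic pricer
    return random.uniform(-1, 1)

algorithms = {"exact": exact_pricing, "meta": metaheuristic_pricing}
score = {name: 0.0 for name in algorithms}

def select(eps=0.2):
    if random.random() < eps:                 # explore
        return random.choice(list(algorithms))
    return max(score, key=score.get)          # exploit best running reward

for it in range(50):                          # column generation loop (sketch)
    duals = None                              # would come from the restricted master LP
    name = select()
    reduced_cost = algorithms[name](duals)
    reward = max(0.0, -reduced_cost)          # improving columns have negative reduced cost
    score[name] = 0.9 * score[name] + 0.1 * reward
    if reduced_cost >= -1e-6 and it > 10:
        break                                 # no improving column found: stop
print(score)
```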

    Parallelised and vectorised ant colony optimization

    Get PDF
    Ant Colony Optimisation (ACO) is a versatile population-based optimisation metaheuristic based on the foraging behaviour of certain species of ant, and is part of the Evolutionary Computation family of algorithms. While ACO generally provides good quality solutions to the problems it is applied to, two key limitations prevent it from being truly viable on large-scale problems: a high memory requirement that grows quadratically with instance size, and high execution time. This thesis presents a parallelised and vectorised implementation of ACO using OpenMP and AVX SIMD instructions; while this alone is enough to improve upon the execution time of the algorithm, the implementation also features an alternative memory structure and a novel candidate set approach, the use of which significantly reduces the memory requirement of ACO. This parallelism is enabled through the use of Max-Min Ant System, an ACO variant that only utilises local memory during the solution process and therefore risks no synchronisation issues, and an adaptation of vRoulette, a vector-compatible variant of the common roulette wheel selection method. Through the use of these techniques, ACO is also able to find good quality solutions for the very large Art TSPs, a problem set that has traditionally been infeasible to solve with ACO due to its high memory requirements and execution time. These techniques can also benefit ACO when solving other problems. Here, the Virtual Machine Placement problem, in which Virtual Machines have to be efficiently allocated to Physical Machines in a cloud environment, is used as a benchmark, with significant improvements to execution time.
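
    The roulette-wheel step that the thesis vectorises can be sketched as follows in Python/NumPy; the tiny random instance and the alpha/beta parameters are assumptions, and this is plain fitness-proportional selection rather than the vRoulette variant itself.

```python
# Minimal sketch: choose the next TSP city with probability proportional to
# pheromone^alpha * (1/distance)^beta over the unvisited cities.
import numpy as np

rng = np.random.default_rng(0)
n = 6
dist = rng.uniform(1, 10, (n, n))
np.fill_diagonal(dist, np.inf)                # staying put is never an option
pheromone = np.ones((n, n))
alpha, beta = 1.0, 2.0

def next_city(current, visited):
    weights = pheromone[current] ** alpha * (1.0 / dist[current]) ** beta
    weights[list(visited)] = 0.0              # mask already visited cities
    probs = weights / weights.sum()
    return int(rng.choice(n, p=probs))        # roulette-wheel draw

tour, visited = [0], {0}
while len(tour) < n:
    c = next_city(tour[-1], visited)
    tour.append(c)
    visited.add(c)
print(tour)
```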

    A Group Theoretic Tabu Search Methodology for Solving the Theater Distribution Vehicle Routing and Scheduling Problem

    Get PDF
    The application of Group Theory to Tabu Search is a new and exciting field of research. This dissertation applies and extends some of Colletti's (1999) seminal work in group theory and metaheuristics in order to solve the theater distribution vehicle routing and scheduling problem (TDVRSP). This research produced a robust, efficient, effective, and flexible generalized theater distribution model that prescribes the routing and scheduling of multi-modal theater transportation assets to provide economically efficient, time-definite delivery of cargo to customers. In doing so, it advances the field of group theoretic tabu search and its application to difficult combinatorial optimization problems, e.g., the multiple trip, multiple services vehicle routing and scheduling problem with hubs and other defining constraints.

    Traveling Salesman Problem

    Get PDF
    This book is a collection of current research on the application of evolutionary algorithms and other optimisation algorithms to the Traveling Salesman Problem (TSP). It brings together researchers with applications in Artificial Immune Systems, Genetic Algorithms, Neural Networks, and the Differential Evolution Algorithm. Hybrid systems, such as Fuzzy Maps, Chaotic Maps, and parallelized TSP, are also presented. Most importantly, this book presents both theoretical and practical applications of the TSP, making it a vital tool for researchers and graduate-entry students in the fields of applied Mathematics, Computing Science, and Engineering.

    Wireless Sensor Network Clustering with Machine Learning

    Get PDF
    Wireless sensor networks (WSNs) are useful in situations where a low-cost network needs to be set up quickly and no fixed network infrastructure exists. Typical applications are military exercises and emergency rescue operations. Due to the nature of a wireless network, there is no fixed routing or intrusion detection, and these tasks must be performed by the individual network nodes. The nodes of a WSN are mobile devices and rely on battery power to function. Due to the limited power resources available to the devices and the tasks each node must perform, methods to decrease the overall power consumption of WSN nodes are an active research area. This research investigated using genetic algorithms and graph algorithms to determine a clustering arrangement of wireless nodes that would reduce WSN power consumption and thereby prolong the lifetime of the network. The WSN nodes were partitioned into clusters, and a node was elected from each cluster to act as a cluster head. The cluster head managed routing tasks for the cluster, thereby reducing the overall WSN power usage. The clustering configuration was determined via genetic algorithm and graph algorithms, with the fitness function of the genetic algorithm based on the energy used by the nodes. It was found that the genetic algorithm was able to cluster the nodes in a near-optimal configuration for energy efficiency. Chromosome repair was also developed and implemented. Two different repair methods were found to be successful in producing near-optimal solutions and reducing the time to reach the solution versus a standard genetic algorithm. It was also found that the repair methods were able to incorporate gateway nodes and energy balancing to further reduce network energy consumption.
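
    A minimal Python sketch of the kind of energy-based GA fitness described above follows; the free-space d^2 energy model, the fixed per-head overhead, and the chromosome encoding are illustrative assumptions rather than the dissertation's exact formulation.

```python
# Minimal sketch: a chromosome assigns each sensor node to a cluster head, and
# fitness penalises the total transmission energy of that assignment.
import random

random.seed(1)
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
heads = random.sample(range(30), 4)          # hypothetical elected cluster heads

def fitness(chromosome):
    """chromosome[i] = index into `heads`, i.e. the cluster that node i joins."""
    energy = 0.0
    for i, gene in enumerate(chromosome):
        hx, hy = nodes[heads[gene]]
        x, y = nodes[i]
        energy += (x - hx) ** 2 + (y - hy) ** 2   # free-space loss ~ d^2
    energy += 50.0 * len(heads)                   # assumed per-head aggregation cost
    return 1.0 / (1.0 + energy)                   # higher fitness = less energy used

chromosome = [random.randrange(len(heads)) for _ in nodes]
print(fitness(chromosome))
```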

    High-Quality Hypergraph Partitioning

    Get PDF
    This dissertation focuses on computing high-quality solutions for the NP-hard balanced hypergraph partitioning problem: Given a hypergraph and an integer k, partition its vertex set into k disjoint blocks of bounded size, while minimizing an objective function over the hyperedges. Here, we consider the two most commonly used objectives: the cut-net metric and the connectivity metric. Since the problem is computationally intractable, heuristics are used in practice - the most prominent being the three-phase multi-level paradigm: During coarsening, the hypergraph is successively contracted to obtain a hierarchy of smaller instances. After applying an initial partitioning algorithm to the smallest hypergraph, contraction is undone and, at each level, refinement algorithms try to improve the current solution. With this work, we give a brief overview of the field and present several algorithmic improvements to the multi-level paradigm. Instead of using a logarithmic number of levels like traditional algorithms, we present two coarsening algorithms that create a hierarchy of (nearly) n levels, where n is the number of vertices. This makes consecutive levels as similar as possible and provides many opportunities for refinement algorithms to improve the partition. This approach is made feasible in practice by tailoring all algorithms and data structures to the n-level paradigm, and developing lazy-evaluation techniques, caching mechanisms and early stopping criteria to speed up the partitioning process. Furthermore, we propose a sparsification algorithm based on locality-sensitive hashing that improves the running time for hypergraphs with large hyperedges, and show that incorporating global information about the community structure into the coarsening process improves quality. Moreover, we present a portfolio-based initial partitioning approach, and propose three refinement algorithms. Two are based on the Fiduccia-Mattheyses (FM) heuristic, but perform a highly localized search at each level. While one is designed for two-way partitioning, the other is the first FM-style algorithm that can be efficiently employed in the multi-level setting to directly improve k-way partitions. The third algorithm uses max-flow computations on pairs of blocks to refine k-way partitions. Finally, we present the first memetic multi-level hypergraph partitioning algorithm for an extensive exploration of the global solution space. All contributions are made available through our open-source framework KaHyPar. In a comprehensive experimental study, we compare KaHyPar with hMETIS, PaToH, Mondriaan, Zoltan-AlgD, and HYPE on a wide range of hypergraphs from several application areas. Our results indicate that KaHyPar, already without the memetic component, computes better solutions than all competing algorithms for both the cut-net and the connectivity metric, while being faster than Zoltan-AlgD and equally fast as hMETIS. Moreover, KaHyPar compares favorably with the current best graph partitioning system KaFFPa - both in terms of solution quality and running time.
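
    For concreteness, here is a minimal Python sketch (not KaHyPar code) of the two objectives named above, evaluated for a small, hypothetical hypergraph and a 2-way partition.

```python
# Minimal sketch: cut-net counts hyperedges spanning more than one block;
# connectivity sums (lambda(e) - 1), where lambda(e) = #blocks the edge touches.
def cut_net(hyperedges, part):
    return sum(1 for e in hyperedges if len({part[v] for v in e}) > 1)

def connectivity(hyperedges, part):
    return sum(len({part[v] for v in e}) - 1 for e in hyperedges)

# Hypergraph with 6 vertices, 4 hyperedges, and a 2-way partition (k = 2).
hyperedges = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(cut_net(hyperedges, part), connectivity(hyperedges, part))  # 2 2
```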

    From metaheuristics to learnheuristics: Applications to logistics, finance, and computing

    Get PDF
    A large number of decision-making processes in strategic sectors such as transport and production involve NP-hard problems, which are frequently characterized by high levels of uncertainty and dynamism. Metaheuristics have become the predominant method for solving challenging optimization problems in reasonable computing times. However, they frequently assume that inputs, objective functions, and constraints are deterministic and known in advance. These strong assumptions lead to work on oversimplified problems, and the solutions may perform poorly when implemented. Simheuristics integrate simulation into metaheuristics as a way to naturally solve stochastic problems and, in a similar fashion, learnheuristics combine statistical learning and metaheuristics to tackle problems in dynamic environments, where inputs may depend on the structure of the solution. The main contributions of this thesis include (i) a design for learnheuristics; (ii) a classification of works that hybridize statistical and machine learning with metaheuristics; and (iii) several applications in the fields of transport, production, finance, and computing.
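
    A minimal Python sketch of the learnheuristic idea follows; the reward model, the per-customer decay parameters, and the greedy extension step are illustrative assumptions, meant only to show an input that depends on the structure of the partial solution.

```python
# Minimal sketch: the "input" (a customer's expected reward) is not fixed but
# predicted from the structure of the current partial solution (its visiting
# position). A greedy construction step uses these predictions.
import random

random.seed(0)
base_reward = {c: random.uniform(5, 10) for c in range(8)}    # hypothetical data
decay = {c: random.uniform(0.7, 0.99) for c in range(8)}      # per-customer sensitivity

def predicted_reward(customer, position):
    """Stand-in for a learned model: reward decays with visiting position."""
    return base_reward[customer] * decay[customer] ** position

route, remaining = [], set(base_reward)
while remaining:
    pos = len(route)                                          # solution structure
    best = max(remaining, key=lambda c: predicted_reward(c, pos))
    route.append(best)
    remaining.remove(best)

print(route)
```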