10 research outputs found

    An evolutionary algorithm for large scale set covering problems with application to airline crew scheduling

    No full text
    Closed access (publisher's version: 84512.pdf). Workshop on Real-World Applications of Evolutionary Computing, Edinburgh, Scotland, UK, 17 April 200

    Efficient Solution of Minimum Cost Flow Problems for Large-scale Transportation Networks

    Get PDF
    With the rapid advance of information technology in the transportation industry, of which intermodal transportation is one of the most important subfields, the scale of problem instances and datasets is growing significantly. This trend raises the need for research into improving the efficiency, profitability and competitiveness of intermodal transportation networks while exploiting the rich information in the big data related to these networks. This dissertation therefore investigates intermodal transportation network design problems, especially practical optimization problems, and develops more realistic and effective models and solution approaches to assist network operators and decision makers in intermodal transportation systems. The dissertation focuses on a novel strategy for solving the Minimum Cost Flow (MCF) problem in large-scale network design by adopting a divide-and-conquer policy during optimization. The main contribution is an agglomerative-clustering-based tiling strategy that significantly reduces the computation time and peak memory consumption of the MCF model on large-scale networks. The tiling strategy is supported by the region-division theorem and the ε-approximation region-division theorem, which are proposed and proved in this dissertation. The region-division theorem gives a sufficient condition that exactly guarantees consistency between the local MCF solution of each sub-network obtained by the tiling strategy and the global MCF solution of the whole network. Furthermore, the ε-approximation region-division theorem provides worst-case bounds, so that the practical approximate MCF solution is close to the optimal solution in objective value. A series of experiments evaluates the utility of the proposed approach on large-scale MCF problems. The results indicate that the proposed approach saves execution time and peak memory in large-scale MCF problems under a variety of circumstances.
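To make the region-division idea concrete, here is a minimal sketch (my own illustration, not the dissertation's code; the tiny successive-shortest-path solver and the two-tile example are assumptions): when tiles happen to be independent subnetworks, the local MCF costs sum exactly to the global MCF cost, which is the consistency that the region-division theorem guarantees in the general, coupled case.

```python
import math

def min_cost_flow(n, edges, supplies):
    """Tiny successive-shortest-path min-cost-flow solver (illustrative only).
    edges: (u, v, capacity, cost); supplies[i] > 0 is a source, < 0 a sink."""
    graph = [[] for _ in range(n + 2)]           # residual adjacency lists
    def add(u, v, cap, cost):
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    for u, v, cap, cost in edges:
        add(u, v, cap, cost)
    s, t = n, n + 1                               # super source / super sink
    for i, b in enumerate(supplies):
        if b > 0:
            add(s, i, b, 0)
        elif b < 0:
            add(i, t, -b, 0)
    total = 0
    while True:
        dist = [math.inf] * (n + 2); dist[s] = 0
        parent = [None] * (n + 2)
        for _ in range(n + 1):                    # Bellman-Ford on residual graph
            for u in range(n + 2):
                if dist[u] == math.inf:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v], parent[v] = dist[u] + cost, (u, i)
        if dist[t] == math.inf:
            break                                 # all supply has been routed
        push, v = math.inf, t
        while v != s:                             # bottleneck along the path
            u, i = parent[v]; push = min(push, graph[u][i][1]); v = u
        v = t
        while v != s:                             # apply the augmentation
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        total += push * dist[t]
    return total

# Two independent "tiles": solving each locally and summing the costs
# matches the global solution of the whole network.
tile_a = min_cost_flow(2, [(0, 1, 5, 2)], [3, -3])
tile_b = min_cost_flow(2, [(0, 1, 4, 3)], [2, -2])
whole = min_cost_flow(4, [(0, 1, 5, 2), (2, 3, 4, 3)], [3, -3, 2, -2])
assert tile_a + tile_b == whole
```

In the dissertation's setting the tiles produced by agglomerative clustering are generally coupled, which is exactly why the region-division theorem and its ε-approximation variant are needed.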

    Heuristic solution approaches for the covering tour problem

    No full text
    This thesis deals with the Covering Tour Problem (CTP) and different heuristic solution approaches to it. The CTP is a combinatorial optimization problem of the kind that logistics and distribution departments of globally operating companies must handle in order to reduce costs and maximize profit; with the globalization of the world economy, distribution costs are of ever-increasing importance. The CTP is defined on a complete undirected graph G = (V ∪ W, E) with vertex set V ∪ W, where V is the set of vertices that may be visited by the tour, W is the set of vertices that must be covered by the tour, and E is the set of edges. "Covered by the tour" means that a vertex of W must lie within a predefined distance of some vertex on the tour. V contains a subset T of vertices that must be visited. A solution to the CTP is a minimum-length tour that starts and ends at the depot v0, visits every vertex of T, and covers every vertex of W. To solve the problem, the CTP is decomposed, following an established approach, into two subproblems, the Traveling Salesman Problem (TSP) and the Set Covering Problem (SCP), which are both introduced. After a short description of Ant Colony Optimization, the algorithms GENI, GENIUS and GENI Ant Colony System for the TSP part, and PRIMAL1 as well as a Set Covering Ant Colony System for the SCP part, are described in detail. The combinations of these algorithms for solving the CTP are then explained. All algorithms were implemented and tested in C++. First, the algorithms were tested individually on instances from data libraries and compared with known solutions to verify their functionality and competitiveness. Since no comparable benchmarks exist for the CTP, stochastic instances were then generated and solved both with the H-1-CTP algorithm and with the Covering Tour Ant Colony System metaheuristic developed in this thesis (combining the GENI Ant Colony System and the Set Covering Ant Colony System), and the results were compared to evaluate the two solution approaches.
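As an illustration of the TSP + SCP decomposition described above, here is a minimal sketch (my own toy code, not the thesis's GENI/PRIMAL1 or ant-colony implementations): a greedy set-cover pass chooses which vertices of V to visit, seeded with the mandatory vertices T, and a nearest-neighbour walk then orders them into a tour from the depot v0.

```python
import math

def greedy_cover(cover, must_visit=()):
    """SCP part: start from the vertices that must be visited (T), then
    greedily add the vertex covering the most still-uncovered W vertices."""
    universe = set().union(*cover.values())
    chosen = list(must_visit)
    uncovered = universe.difference(*(cover[v] for v in chosen))
    while uncovered:
        best = max(cover, key=lambda v: len(cover[v] & uncovered))
        if not cover[best] & uncovered:
            raise ValueError("no vertex covers the remaining targets")
        chosen.append(best)
        uncovered -= cover[best]
    return chosen

def nearest_neighbour_tour(depot, stops, coords):
    """TSP part: order the chosen vertices with a nearest-neighbour walk
    that starts and ends at the depot (v0)."""
    tour, remaining = [depot], set(stops)
    while remaining:
        last = coords[tour[-1]]
        nxt = min(remaining, key=lambda p: math.dist(coords[p], last))
        tour.append(nxt)
        remaining.discard(nxt)
    return tour + [depot]

# Toy instance: 'c' is in T (must be visited); cover maps each visitable
# vertex to the W vertices within covering distance.
coords = {"v0": (0, 0), "a": (1, 0), "b": (2, 0), "c": (0, 2)}
cover = {"a": {"w1", "w2"}, "b": {"w2", "w3"}, "c": {"w4"}}
chosen = greedy_cover(cover, must_visit=("c",))
tour = nearest_neighbour_tour("v0", chosen, coords)
```

The thesis's H-1-CTP heuristic and the ant-colony variants refine both stages; this sketch only shows how the two subproblems interlock.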

    Hybrid Genetic Relational Search for Inductive Learning

    Get PDF
    An important characteristic of all natural systems is the ability to acquire knowledge through experience and to adapt to new situations. Learning is the single unifying theme of all natural systems. One of the basic ways of gaining knowledge is through examples of a concept. For instance, we may learn how to distinguish a dog from other creatures after we have seen a number of creatures and someone (a teacher, or supervisor) has told us which of them are dogs and which are not. This way of learning is called supervised learning. Inductive Concept Learning (ICL) is a central topic in machine learning. The problem can be formulated as follows: given a description language used to express possible hypotheses, background knowledge, a set of positive examples, and a set of negative examples, find a hypothesis that covers all positive examples and none of the negative ones. This is supervised learning, since a supervisor has already classified the examples of the concept as positive or negative. The learned concept can then be used to classify previously unseen examples. Deriving general conclusions from specific observations is called induction; in ICL, concepts are induced from the observation of a limited set of training examples. The process can be seen as a search: starting from an initial hypothesis, the space of possible hypotheses is searched for one that fits the given set of examples. A representation language has to be chosen in order to represent concepts, examples and the background knowledge. This is an important choice, because it may limit the kind of concepts we can learn. With a representation language of low expressive power we may be unable to represent some problem domains, because they are too complex for the language adopted. On the other hand, a very expressive language can represent all problem domains, but it may also give us too much freedom: concepts can be built in so many different ways that finding the right one becomes impossible. We are interested in learning concepts expressed in a fragment of first-order logic (FOL). This subject is known as Inductive Logic Programming (ILP), where the knowledge to be learned is expressed by Horn clauses, as used in logic programming languages like Prolog. Learning systems that use a representation based on first-order logic have been successfully applied to relevant real-life problems, e.g., learning a specific property related to carcinogenicity. Learning first-order hypotheses is a hard task, due to the huge search space one has to deal with. The approach used by the majority of ILP systems tries to overcome this problem by using specific search strategies, such as top-down search and the inverse resolution mechanism. However, the greedy selection strategies adopted to reduce the computational effort often render techniques based on this approach incapable of escaping from local optima. An alternative approach is offered by genetic algorithms (GAs). GAs have proved successful in solving comparatively hard optimization problems, as well as problems like ICL. GAs are a good approach when the problem is characterized by a high number of variables, interaction among variables, mixed types of variables (e.g., numerical and nominal), and a search space with many local optima. Moreover, it is easy to hybridize GAs with other techniques that are known to work well on particular classes of problems. Another appealing feature of GAs is their intrinsic parallelism and their use of exploration operators, which give them the ability to escape from local optima. However, this latter characteristic is also responsible for their rather poor performance on learning tasks that are easy to tackle with algorithms that use specific search strategies. These observations suggest that the two approaches described above, standard ILP strategies and GAs, are applicable to partly complementary classes of learning problems. More importantly, they indicate that a system incorporating features from both approaches could profit from their different benefits. This motivates the aim of this thesis: to develop a GA-based system for ILP that incorporates search strategies used in successful ILP systems. Our approach is inspired by memetic algorithms, a population-based search method for combinatorial optimization problems. In evolutionary computation, memetic algorithms are GAs in which individuals can be refined during their lifetime.
    Eiben, A.E. [Promotor]; Marchiori, E. [Copromotor]
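The memetic idea, GA individuals refined by local search during their lifetime, can be sketched on a toy ICL task (my own illustration with assumed parameters, not the thesis's system; hypotheses here are simple conjunctions over binary features rather than the Horn clauses the thesis targets):

```python
import random
from itertools import product

FEATURES = 4
# Toy supervised task: the hidden concept is "feature 0 AND feature 2".
examples = list(product((0, 1), repeat=FEATURES))
pos = [x for x in examples if x[0] and x[2]]
neg = [x for x in examples if not (x[0] and x[2])]

def classify(hyp, x):
    """A hypothesis is the set of features required to be 1 (a conjunction)."""
    return all(x[f] for f in hyp)

def fitness(hyp):
    """Number of correctly classified training examples."""
    return sum(classify(hyp, x) for x in pos) + sum(not classify(hyp, x) for x in neg)

def refine(hyp):
    """Memetic step: greedily toggle single features while fitness improves."""
    improved = True
    while improved:
        improved = False
        for f in range(FEATURES):
            cand = hyp ^ {f}
            if fitness(cand) > fitness(hyp):
                hyp, improved = cand, True
    return hyp

def memetic_search(pop_size=8, gens=10, seed=1):
    rng = random.Random(seed)
    pop = [set(rng.sample(range(FEATURES), rng.randint(0, FEATURES)))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop = [refine(h) for h in pop]                   # lifetime refinement
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                    # truncation selection
        children = []
        for p, q in zip(parents, parents[1:] + parents[:1]):
            child = {f for f in range(FEATURES)          # uniform crossover
                     if f in (p if rng.random() < 0.5 else q)}
            if rng.random() < 0.3:                       # mutation
                child ^= {rng.randrange(FEATURES)}
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = memetic_search()
```

On this toy instance the refinement step alone recovers the target conjunction {0, 2}; in the first-order setting the search space is vastly larger, which is precisely where combining GA exploration with ILP-style refinement is meant to pay off.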

    Towards a more efficient use of computational budget in large-scale black-box optimization

    Get PDF
    Evolutionary algorithms are general-purpose optimizers that have been shown to be effective in solving a variety of challenging optimization problems. In contrast to mathematical programming models, evolutionary algorithms do not require derivative information and remain effective when the algebraic formula of the given problem is unavailable. Nevertheless, rapid advances in science and technology have given rise to increasingly complex optimization problems, which pose significant challenges to traditional optimization methods. When the available computational budget is limited, the dimensionality of the search space is one of the main contributors to a problem's difficulty and complexity. This so-called curse of dimensionality can significantly affect the efficiency and effectiveness of optimization methods, including evolutionary algorithms. This research studies two topics related to a more efficient use of the computational budget in evolutionary algorithms when solving large-scale black-box optimization problems: the role of population initializers in saving computational resources, and computational budget allocation in cooperative coevolutionary algorithms. Consequently, this dissertation consists of two major parts, each relating to one of these research directions. In the first part, we review several population initialization techniques that have been used in evolutionary algorithms and categorize them from different perspectives. The contribution of each category to improving evolutionary algorithms on large-scale problems is measured. We also study the mutual effect of population size and initialization technique on the performance of evolutionary techniques on large-scale problems. Finally, assuming uniformity of the initial population is a key contributor to saving a significant part of the computational budget, we investigate whether achieving a high level of uniformity in high-dimensional spaces is feasible given practical restrictions on computational resources. In the second part of the thesis, we study large-scale imbalanced problems. In many real-world applications, a large problem may consist of subproblems with different degrees of difficulty and importance, and the solution to each subproblem may contribute differently to the overall objective value of the final solution. When the computational budget is restricted, as in many practical problems, investing the same portion of resources in optimizing each of these imbalanced subproblems is not the most efficient strategy. We therefore examine several ways to learn the contribution of each subproblem and then dynamically allocate the limited computational resources to each according to its contribution to the overall objective value. To demonstrate the effectiveness of the proposed framework, we design a new set of 40 large-scale imbalanced problems and study the performance of some possible instances of the framework.
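The contribution-based allocation idea can be sketched as follows (my own toy illustration, not the thesis's framework or its 40-problem benchmark set; the weights, step size and hill-climber are arbitrary assumptions): each subproblem's last observed improvement is treated as its contribution, and the next budget unit goes to the subproblem with the largest recent gain, instead of cycling through them uniformly.

```python
import random
from itertools import cycle

WEIGHTS = [1000.0, 10.0, 1.0]      # imbalanced subproblem contributions
DIM = 5                            # variables per subproblem

def objective(comps):
    """Separable, imbalanced objective: sum_i w_i * sphere(x_i)."""
    return sum(w * sum(v * v for v in x) for w, x in zip(WEIGHTS, comps))

def improve(comp, rng, step=0.2):
    """One budget unit: a single hill-climbing trial on one subproblem."""
    cand = [v + rng.uniform(-step, step) for v in comp]
    return cand if sum(v * v for v in cand) < sum(v * v for v in comp) else comp

def run(budget, pick, seed=0):
    rng = random.Random(seed)
    comps = [[rng.uniform(-5, 5) for _ in range(DIM)] for _ in WEIGHTS]
    start = objective(comps)
    gains = [float("inf")] * len(WEIGHTS)     # optimistic: try everyone once
    for _ in range(budget):
        i = pick(gains, rng)
        before = objective(comps)
        comps[i] = improve(comps[i], rng)
        gains[i] = before - objective(comps)  # observed contribution
    return start, objective(comps)

def contribution_pick(gains, rng):
    """Spend the next unit on the subproblem with the largest recent gain."""
    return max(range(len(gains)), key=gains.__getitem__)

def make_round_robin(n):
    """Baseline: uniform allocation, cycling through the subproblems."""
    it = cycle(range(n))
    return lambda gains, rng: next(it)

start, cb = run(2000, contribution_pick)
_, rr = run(2000, make_round_robin(len(WEIGHTS)))
```

With heavily imbalanced weights, the contribution-based policy concentrates the budget on the subproblem whose improvement moves the overall objective most, which is the intuition behind the framework studied in the second part.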

    The Partial Set Covering Problem and Extensions: Modelling and Solution Methods

    Get PDF
    In this thesis, we study the Partial Set Covering Problem (PSCP) as well as some new extensions of it. We present a new extension called the Multiple Coverage Partial Set Covering Problem (MCPSCP), which combines the aspect of multiple coverage with the PSCP. Heuristic and approximation algorithms are proposed, with the focus on the PSCP and the MCPSCP, for which several local search and Lagrangean-based algorithms are presented. The heuristics are tested on a wide variety of benchmark problems. Furthermore, we report on an application of the PSCP and the MCPSCP in railway networks, where the models are used to find optimal positions for vehicle testing stations.
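A common greedy heuristic for the PSCP, shown here as a hedged sketch (my own illustration under assumed names, not one of the thesis's algorithms), repeatedly picks the set with the lowest cost per useful newly covered element until the coverage target k is reached:

```python
def greedy_partial_cover(sets, costs, k):
    """Greedy heuristic for partial set cover: cover at least k elements
    at low cost. 'Useful' gain is capped at what is still needed, so a
    huge set is not over-credited near the end."""
    covered, chosen, total = set(), [], 0.0
    while len(covered) < k:
        best, best_ratio = None, float("inf")
        for name, elems in sets.items():
            if name in chosen:
                continue
            useful = min(len(elems - covered), k - len(covered))
            if useful == 0:
                continue
            ratio = costs[name] / useful     # cost per useful new element
            if ratio < best_ratio:
                best, best_ratio = name, ratio
        if best is None:
            raise ValueError("fewer than k elements are coverable")
        chosen.append(best)
        covered |= sets[best]
        total += costs[best]
    return chosen, total

# Toy instance: cover at least 4 of the 5 elements.
sets = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {5}}
costs = {"s1": 3.0, "s2": 1.0, "s3": 2.0}
chosen, cost = greedy_partial_cover(sets, costs, k=4)
```

In the multiple-coverage variant (MCPSCP) each element would additionally carry a required coverage count; the same greedy skeleton applies, with the gain counted per still-missing coverage rather than per new element.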