
    On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling

    Evolutionary algorithms have been frequently used for dynamic optimization problems. With this paper, we contribute to the theoretical understanding of this research area. We present the first computational complexity analysis of evolutionary algorithms for a dynamic variant of a classical combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very effective in dynamically tracking changes made to the problem instance. Comment: A conference version appears at IJCAI 2015.
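    As background only, here is a minimal Python sketch of how the two heuristics named in the abstract (randomized local search and a (1+1) EA) can track a two-machine makespan instance while an adversary changes one job at regular intervals; the job sizes, change interval, and re-optimization loop are illustrative assumptions, not the paper's exact model.

```python
import random

def makespan(jobs, assign):
    """Makespan on two machines: the larger of the two machine loads."""
    load0 = sum(p for p, a in zip(jobs, assign) if a == 0)
    load1 = sum(p for p, a in zip(jobs, assign) if a == 1)
    return max(load0, load1)

def rls_step(jobs, assign):
    """Randomized Local Search: move one random job to the other machine, accept if not worse."""
    child = assign[:]
    child[random.randrange(len(jobs))] ^= 1
    return child if makespan(jobs, child) <= makespan(jobs, assign) else assign

def one_plus_one_ea_step(jobs, assign):
    """(1+1) EA: flip each job's machine independently with probability 1/n, accept if not worse."""
    n = len(jobs)
    child = [a ^ (random.random() < 1.0 / n) for a in assign]
    return child if makespan(jobs, child) <= makespan(jobs, assign) else assign

# Illustrative dynamic run: an "adversary" changes one job size every `interval` steps,
# and the heuristic keeps optimizing from its current assignment (no restart).
jobs = [random.randint(1, 50) for _ in range(20)]
assign = [random.randint(0, 1) for _ in jobs]
interval = 100
for t in range(1, 2001):
    assign = rls_step(jobs, assign)          # or: one_plus_one_ea_step(jobs, assign)
    if t % interval == 0:
        jobs[random.randrange(len(jobs))] = random.randint(1, 50)
print("final makespan:", makespan(jobs, assign), "half of total load:", sum(jobs) / 2)
```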

    Parameterized Complexity Analysis of Randomized Search Heuristics

    This chapter compiles a number of results that apply the theory of parameterized algorithmics to the running-time analysis of randomized search heuristics such as evolutionary algorithms. The parameterized approach articulates the running time of algorithms solving combinatorial problems in finer detail than traditional approaches from classical complexity theory. We outline the main results and proof techniques for a collection of randomized search heuristics tasked to solve NP-hard combinatorial optimization problems such as finding a minimum vertex cover in a graph, finding a maximum leaf spanning tree in a graph, and the traveling salesperson problem. Comment: This is a preliminary version of a chapter in the book "Theory of Evolutionary Computation: Recent Developments in Discrete Optimization", edited by Benjamin Doerr and Frank Neumann, published by Springer.

    Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem

    Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to their theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed time budget. We follow this approach and present a fixed budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed time budget. In particular, we analyze Manhattan and Euclidean TSP instances, study Randomized Local Search (RLS), the (1 + 1) EA, and the (1 + λ) EA for the TSP in a smoothed complexity setting, and derive lower bounds on the expected fitness gain for a specified number of generations.
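    To make the fixed-budget view concrete, here is a hedged Python sketch of RLS on a Euclidean instance using 2-opt (segment reversal) moves, reporting the fitness gain achieved within a given iteration budget; the instance, budgets, and move details are assumptions, and the paper's smoothed-complexity setting is not modeled.

```python
import math, random

def tour_length(points, tour):
    """Total Euclidean length of a closed tour."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def rls_2opt_fixed_budget(points, budget, seed=0):
    """RLS with 2-opt (segment reversal) moves, run for a fixed budget of iterations.
    Returns the fitness gain (tour-length reduction) achieved within the budget."""
    rng = random.Random(seed)
    n = len(points)
    tour = list(range(n))
    rng.shuffle(tour)
    start_len = cur_len = tour_length(points, tour)
    for _ in range(budget):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse one segment
        cand_len = tour_length(points, cand)
        if cand_len <= cur_len:                                # accept if not worse
            tour, cur_len = cand, cand_len
    return start_len - cur_len

points = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(50)]
for budget in (100, 1000, 10000):
    print(budget, round(rls_2opt_fixed_budget(points, budget), 3))
```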

    Online Disjoint Set Cover Without Prior Knowledge

    The disjoint set cover (DSC) problem is a fundamental combinatorial optimization problem concerned with partitioning the (hyper)edges of a hypergraph into (pairwise disjoint) clusters so that the number of clusters that cover all nodes is maximized. In its online version, the edges arrive one-by-one and should be assigned to clusters in an irrevocable fashion without knowing the future edges. This paper investigates the competitiveness of online DSC algorithms. Specifically, we develop the first (randomized) online DSC algorithm that guarantees a poly-logarithmic (O(log^2 n)) competitive ratio without prior knowledge of the hypergraph's minimum degree. On the negative side, we prove that the competitive ratio of any randomized online DSC algorithm must be at least Omega((log n)/(log log n)) (even if the online algorithm does know the minimum degree in advance), thus establishing the first lower bound on the competitive ratio of randomized online DSC algorithms.
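    The sketch below is not the paper's poly-log-competitive algorithm; it is only a naive greedy baseline that makes the online model concrete: hyperedges arrive one by one, each is irrevocably assigned to a cluster, and at the end we count the clusters that cover every node. The number of clusters and the assignment rule are illustrative assumptions.

```python
def online_dsc_greedy(nodes, edge_stream, num_clusters):
    """Naive online baseline: assign each arriving hyperedge, irrevocably, to the
    cluster where it adds the most not-yet-covered nodes (ties -> lowest index)."""
    covered = [set() for _ in range(num_clusters)]   # nodes covered by each cluster
    for edge in edge_stream:                         # edges arrive one by one
        gains = [len(set(edge) - c) for c in covered]
        k = max(range(num_clusters), key=lambda i: gains[i])
        covered[k].update(edge)                      # irrevocable assignment
    # objective: number of clusters that cover all nodes
    return sum(1 for c in covered if c >= set(nodes))

nodes = range(6)
stream = [(0, 1, 2), (3, 4, 5), (0, 3), (1, 4), (2, 5), (0, 1, 2, 3, 4, 5)]
print(online_dsc_greedy(nodes, stream, num_clusters=2))   # -> 2 on this toy stream
```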

    Combinatorial optimization and the analysis of randomized search heuristics

    Randomized search heuristics have widely been applied to complex engineering problems as well as to problems from combinatorial optimization. We investigate the runtime behavior of randomized search heuristics and present runtime bounds for these heuristics on some well-known combinatorial optimization problems. Such analyses can help to better understand the working principles of these algorithms on combinatorial optimization problems as well as help to design better algorithms for a newly given problem. Our analyses mainly consider evolutionary algorithms that have achieved good results on a wide class of NP-hard combinatorial optimization problems. We start by analyzing some easy single-objective optimization problems, such as the minimum spanning tree problem or the problem of computing an Eulerian cycle of a given Eulerian graph, and prove bounds on the runtime of simple evolutionary algorithms. For the minimum spanning tree problem we also investigate a multi-objective model and show that randomized search heuristics find minimum spanning trees more easily in this model than in a single-objective one. Many polynomially solvable problems become NP-hard when a second objective has to be optimized at the same time. We show that evolutionary algorithms are able to compute good approximations for such problems by examining the NP-hard multi-objective minimum spanning tree problem. Another kind of randomized search heuristic is ant colony optimization. Up to now, no runtime bounds have been achieved for this kind of heuristic. We investigate a simple ant colony optimization algorithm and present a first runtime analysis. At the end we turn to classical approximation algorithms. Motivated by our investigations of randomized search heuristics for the minimum spanning tree problem, we present a multi-objective model for NP-hard spanning tree problems and show that the model can help to speed up approximation algorithms for this kind of problem.
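    For readers unfamiliar with the single-objective setup mentioned above, here is a hedged Python sketch of a (1+1) EA for the minimum spanning tree problem: edge subsets are bit strings and the fitness first penalizes disconnectedness and extra edges, then total weight. The penalty encoding, step budget, and instance are illustrative assumptions (positive edge weights are assumed so the penalty terms dominate), not the exact model analyzed in the work.

```python
import random

def components(n, chosen_edges):
    """Number of connected components under the chosen edges (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _ in chosen_edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(x) for x in range(n)})

def fitness(n, edges, bits):
    """Penalty fitness: connectivity first, then surplus edge count, then total weight."""
    chosen = [e for e, b in zip(edges, bits) if b]
    weight = sum(w for _, _, w in chosen)
    big = 1 + sum(w for _, _, w in edges)            # assumes positive integer weights
    return ((components(n, chosen) - 1) * big * big
            + max(0, len(chosen) - (n - 1)) * big
            + weight)

def one_plus_one_ea_mst(n, edges, steps=20000, seed=0):
    rng = random.Random(seed)
    m = len(edges)
    bits = [1] * m                                   # start with all edges selected
    best = fitness(n, edges, bits)
    for _ in range(steps):
        child = [b ^ (rng.random() < 1.0 / m) for b in bits]
        f = fitness(n, edges, child)
        if f <= best:
            bits, best = child, f
    return [e for e, b in zip(edges, bits) if b]

edges = [(0, 1, 4), (1, 2, 1), (2, 3, 3), (3, 0, 2), (0, 2, 5)]
print(one_plus_one_ea_mst(4, edges))                 # a minimum spanning tree of the toy graph
```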

    Deterministic parallel algorithms for bilinear objective functions

    Many randomized algorithms can be derandomized efficiently using either the method of conditional expectations or probability spaces with low independence. A series of papers, beginning with work by Luby (1988), showed that in many cases these techniques can be combined to give deterministic parallel (NC) algorithms for a variety of combinatorial optimization problems, with low time- and processor-complexity. We extend and generalize a technique of Luby for efficiently handling bilinear objective functions. One noteworthy application is an NC algorithm for maximal independent set. On a graph G with m edges and n vertices, this takes Õ(log^2 n) time and (m + n) n^{o(1)} processors, nearly matching the best randomized parallel algorithms. Other applications include reduced processor counts for algorithms of Berger (1997) for maximum acyclic subgraph and Gale-Berlekamp switching games. This bilinear factorization also gives better algorithms for problems involving discrepancy. An important application of this is to automata-fooling probability spaces, which are the basis of a notable derandomization technique of Sivakumar (2002). Our method leads to a large reduction in processor complexity for a number of derandomization algorithms based on automata-fooling, including set discrepancy and the Johnson-Lindenstrauss Lemma.
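    As background on the first ingredient named in the abstract, the following Python sketch shows the textbook sequential use of the method of conditional expectations on MAX-CUT (not the paper's parallel NC algorithm and not its bilinear factorization): vertices are fixed one at a time so that the conditional expected cut size never drops below |E|/2.

```python
def derandomized_max_cut(n, edges):
    """Method of conditional expectations for MAX-CUT: assign vertices one at a time,
    each time choosing the side that maximizes the expected cut size when the
    remaining vertices are still split uniformly at random. Yields a cut of size >= |E|/2."""
    side = {}
    def expected_cut(partial):
        exp = 0.0
        for u, v in edges:
            if u in partial and v in partial:
                exp += 1.0 if partial[u] != partial[v] else 0.0
            else:
                exp += 0.5            # at least one endpoint is still random
        return exp
    for v in range(n):
        side[v] = 0
        e0 = expected_cut(side)
        side[v] = 1
        e1 = expected_cut(side)
        side[v] = 0 if e0 >= e1 else 1   # keep the conditional expectation from decreasing
    cut_size = sum(1 for u, w in edges if side[u] != side[w])
    return side, cut_size

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(derandomized_max_cut(4, edges))
```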

    Playing Stackelberg Opinion Optimization with Randomized Algorithms for Combinatorial Strategies

    From the perspective of designing or engineering opinion formation games in social networks, the "opinion maximization (or minimization)" problem has mainly been studied for designing subset-selection algorithms. We further define a two-player zero-sum Stackelberg game of competitive opinion optimization in which the player under study, moving first, minimizes the sum of expressed opinions via so-called "internal opinion design", knowing that the adversarial follower will maximize the same objective by conducting her own internal opinion design. We propose that the min player play the "follow-the-perturbed-leader" algorithm in this Stackelberg game, incurring losses that depend on the adversarial player's play. Since our subset-selection strategies are combinatorial in nature, a distribution over all strategies has far too many probabilities to enumerate one by one. We therefore design a randomized algorithm to produce a (randomized) pure strategy. We show that the strategy output by the randomized algorithm for the min player is essentially an approximate equilibrium strategy against the other adversarial player.
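    The sketch below is not the paper's opinion-design construction; it is only a generic follow-the-perturbed-leader loop over a small finite strategy set, included to illustrate the online-learning component: at each round, fresh random perturbations are added to the cumulative losses and the strategy minimizing the perturbed total is played. The perturbation distribution, its scale, and the loss stream are illustrative assumptions.

```python
import random

def follow_the_perturbed_leader(strategies, loss_stream, epsilon=0.5, seed=0):
    """Generic FTPL: each round, pick the strategy minimizing
    (cumulative loss so far - fresh exponential perturbation), then observe losses."""
    rng = random.Random(seed)
    cum_loss = {s: 0.0 for s in strategies}
    total = 0.0
    for round_losses in loss_stream:          # round_losses: dict strategy -> loss this round
        perturbed = {s: cum_loss[s] - rng.expovariate(epsilon) for s in strategies}
        choice = min(strategies, key=lambda s: perturbed[s])
        total += round_losses[choice]         # incur the chosen strategy's loss
        for s in strategies:                  # then update cumulative losses
            cum_loss[s] += round_losses[s]
    return total, min(cum_loss.values())      # algorithm's loss vs. best fixed strategy

strategies = ["A", "B", "C"]
stream = [{"A": random.random(), "B": 0.3, "C": 0.7} for _ in range(200)]
print(follow_the_perturbed_leader(strategies, stream))
```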

    Multi-rendezvous Spacecraft Trajectory Optimization with Beam P-ACO

    The design of spacecraft trajectories for missions visiting multiple celestial bodies is here framed as a multi-objective bilevel optimization problem. A comparative study is performed to assess the performance of different Beam Search algorithms at tackling the combinatorial problem of finding the ideal sequence of bodies. Special focus is placed on the development of a new hybridization between Beam Search and the Population-based Ant Colony Optimization algorithm. An experimental evaluation shows all algorithms achieving exceptional performance on a hard benchmark problem. It is found that a properly tuned deterministic Beam Search always outperforms the remaining variants. Beam P-ACO, however, demonstrates lower parameter sensitivity, while offering superior worst-case performance. Being an anytime algorithm, it is then found to be the preferable choice for certain practical applications. Comment: Code available at https://github.com/lfsimoes/beam_paco__gtoc
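    The sketch below is not the paper's Beam P-ACO hybrid or its trajectory model; it is only a generic deterministic beam search over visit sequences, included to show the combinatorial outer level of the bilevel structure: keep the `beam_width` cheapest partial sequences at each depth under a problem-specific cost. The placeholder cost function stands in for the inner-level trajectory optimization and is purely illustrative.

```python
def beam_search(bodies, seq_length, beam_width, cost):
    """Deterministic beam search over visit sequences: at each depth, extend every
    kept partial sequence with each unvisited body and retain the `beam_width`
    cheapest extensions according to `cost(sequence)`."""
    beam = [()]                                      # start from the empty sequence
    for _ in range(seq_length):
        candidates = [seq + (b,) for seq in beam for b in bodies if b not in seq]
        candidates.sort(key=cost)
        beam = candidates[:beam_width]
    return beam[0], cost(beam[0])

# Placeholder cost: in the real problem this would come from the inner-level
# trajectory optimization (transfer times / delta-v); here it is a toy distance sum.
positions = {"A": 0.0, "B": 2.0, "C": 3.5, "D": 7.0, "E": 1.0}
toy_cost = lambda seq: sum(abs(positions[a] - positions[b]) for a, b in zip(seq, seq[1:]))
best_seq, best_cost = beam_search(list(positions), seq_length=4, beam_width=3, cost=toy_cost)
print(best_seq, best_cost)
```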