
    Simple Max-Min Ant Systems and the Optimization of Linear Pseudo-Boolean Functions

    With this paper, we contribute to the understanding of ant colony optimization (ACO) algorithms by formally analyzing their runtime behavior. We study simple MAX-MIN ant systems on the class of linear pseudo-Boolean functions defined on binary strings of length n. Our investigations point out how the progress according to function values is stored in the pheromones. We provide a general upper bound of O((n^3 log n)/ρ) for two ACO variants on all linear functions, where ρ determines the pheromone update strength. Furthermore, we show improved bounds for two well-known linear pseudo-Boolean functions called OneMax and BinVal and give additional insights using an experimental study.
    Comment: 19 pages, 2 figures
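    The two ACO variants analyzed in the paper are not spelled out in the abstract. As a rough illustration only (not the authors' exact algorithms), a simple MAX-MIN-style ant system on OneMax with best-so-far pheromone reinforcement, pheromone bounds 1/n and 1 - 1/n, and update strength ρ could be sketched as follows:

```python
import random

def one_max(x):
    """Linear pseudo-Boolean test function: the number of ones in the bit string."""
    return sum(x)

def simple_mmas(f, n, rho=0.1, max_iters=100_000):
    """Minimal MAX-MIN-style ant system sketch: one ant per iteration,
    pheromones bounded in [1/n, 1 - 1/n], best-so-far reinforcement."""
    tau = [0.5] * n                      # one pheromone value per bit
    lo, hi = 1.0 / n, 1.0 - 1.0 / n      # pheromone bounds
    best_x, best_f = None, float("-inf")
    for _ in range(max_iters):
        # the ant constructs a solution by sampling each bit from its pheromone
        x = [1 if random.random() < tau[i] else 0 for i in range(n)]
        fx = f(x)
        if fx > best_f:
            best_x, best_f = x, fx
        # reinforce the best-so-far solution with update strength rho
        for i in range(n):
            target = 1.0 if best_x[i] == 1 else 0.0
            tau[i] = min(hi, max(lo, (1 - rho) * tau[i] + rho * target))
        if best_f == n:                  # optimum of OneMax reached
            break
    return best_x, best_f

print(simple_mmas(one_max, n=20)[1])     # typically prints 20 (the optimum)
```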

    Analysis of Evolutionary Algorithms in Dynamic and Stochastic Environments

    Many real-world optimization problems occur in environments that change dynamically or involve stochastic components. Evolutionary algorithms and other bio-inspired algorithms have been widely applied to dynamic and stochastic problems. This survey gives an overview of major theoretical developments in the area of runtime analysis for these problems. We review recent theoretical studies of evolutionary algorithms and ant colony optimization for problems where the objective functions or the constraints change over time. Furthermore, we consider stochastic problems under various noise models and point out some directions for future research.
    Comment: This book chapter is to appear in the book "Theory of Randomized Search Heuristics in Discrete Search Spaces", which is edited by Benjamin Doerr and Frank Neumann and is scheduled to be published by Springer in 201

    Maximum Persistency in Energy Minimization

    We consider the discrete pairwise energy minimization problem (weighted constraint satisfaction, max-sum labeling) and methods that identify a globally optimal partial assignment of variables. When finding a complete optimal assignment is intractable, determining optimal values for a subset of the variables is an interesting possibility. Existing methods are based on different sufficient conditions. We propose a new sufficient condition for partial optimality which is: (1) verifiable in polynomial time, (2) invariant to reparametrization of the problem and permutation of labels, and (3) includes many existing sufficient conditions as special cases. We pose the problem of finding the maximum optimal partial assignment identifiable by the new sufficient condition. A polynomial method is proposed which is guaranteed to assign the same or a larger set of variables than several existing approaches. The core of the method is a specially constructed linear program that identifies persistent assignments in an arbitrary multi-label setting.
    Comment: Extended technical report for the CVPR 2014 paper. Update: correction to the proof of the characterization theorem
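    The LP-based sufficient condition itself is not reproduced in the abstract. To illustrate what a globally optimal partial assignment (persistency) means, the following brute-force sketch (a toy example with hypothetical data, not the paper's method) marks the variables that take the same label in every minimizer of a small pairwise energy:

```python
import itertools

def energy(labeling, unary, pairwise, edges):
    """Pairwise energy: sum of unary costs plus pairwise costs over the edges."""
    e = sum(unary[v][labeling[v]] for v in range(len(labeling)))
    e += sum(pairwise[(u, v)][labeling[u]][labeling[v]] for (u, v) in edges)
    return e

def persistent_assignments(num_vars, num_labels, unary, pairwise, edges):
    """Brute force: a variable that takes the same label in every minimizer is
    'persistent', i.e. it belongs to a globally optimal partial assignment."""
    best, minimizers = float("inf"), []
    for labeling in itertools.product(range(num_labels), repeat=num_vars):
        e = energy(labeling, unary, pairwise, edges)
        if e < best:
            best, minimizers = e, [labeling]
        elif e == best:
            minimizers.append(labeling)
    return {v: minimizers[0][v] for v in range(num_vars)
            if all(m[v] == minimizers[0][v] for m in minimizers)}

# Toy chain of 3 binary variables with disagreement costs on both edges.
unary = [[0, 2], [1, 1], [2, 0]]
pairwise = {(0, 1): [[0, 1], [1, 0]], (1, 2): [[0, 1], [1, 0]]}
print(persistent_assignments(3, 2, unary, pairwise, edges=[(0, 1), (1, 2)]))
# {0: 0, 2: 1} -- variables 0 and 2 are optimally assigned, variable 1 is left open
```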

    Multi-Quality Auto-Tuning by Contract Negotiation

    A characteristic challenge of software development is the management of omnipresent change. Classically, this constant change is driven by customers changing their requirements. The wish to optimally leverage available resources opens another source of change: the software system's environment. Software is tailored to specific platforms (e.g., hardware architectures), resulting in many variants of the same software optimized for different environments. If the environment changes, a different variant is to be used, i.e., the system has to reconfigure to the variant optimized for the new situation. The automation of such adjustments is the subject of research on self-adaptive systems. The basic principle is a control loop, as known from control theory: the system (and its environment) is continuously monitored, the collected data is analyzed, and decisions for or against a reconfiguration are computed and realized. Central problems in this field, which are addressed in this thesis, are the management of interdependencies between non-functional properties of the system, the handling of multiple criteria in decision making, and scalability.

    In this thesis, a novel approach to self-adaptive software, Multi-Quality Auto-Tuning (MQuAT), is presented, which provides design and operation principles for software systems that automatically provide the best possible utility to the user while producing the least possible cost. For this purpose, a component model has been developed, enabling the software developer to design and implement self-optimizing software systems in a model-driven way. This component model allows for the specification of the structure as well as the behavior of the system. The notion of quality contracts is utilized to cover the non-functional behavior and, especially, the dependencies between non-functional properties of the system. At runtime, the component model covers the runtime state of the system. This runtime model is used in combination with the contracts to generate optimization problems in different formalisms: Integer Linear Programming (ILP), Pseudo-Boolean Optimization (PBO), Ant Colony Optimization (ACO), and Multi-Objective Integer Linear Programming (MOILP). Standard solvers are applied to derive solutions to these problems, which represent reconfiguration decisions if the identified configuration differs from the current one.

    Each approach is empirically evaluated in terms of its scalability, showing the feasibility of all approaches except ACO, the superiority of ILP over PBO, and the limits of all approaches: 100 component types for ILP, 30 for PBO, 10 for ACO, and 30 for 2-objective MOILP. In the presence of more than two objective functions, the MOILP approach is shown to be infeasible.
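    The contract-to-optimization-problem transformation of MQuAT is not reproduced here. As a rough sketch of the kind of integer program such variant selection reduces to, the following example uses the PuLP library (an assumption for illustration, not part of the thesis) to pick the cheapest component variant that still satisfies a hypothetical quality contract:

```python
# pip install pulp  (PuLP is used here only for illustration)
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Hypothetical variants of one component: (name, cost, provided utility).
variants = [("fast_gpu", 10, 9), ("balanced", 5, 6), ("low_power", 2, 3)]
min_utility = 6                          # hypothetical quality-contract requirement

prob = LpProblem("reconfiguration", LpMinimize)
x = {name: LpVariable(f"use_{name}", cat=LpBinary) for name, _, _ in variants}

prob += lpSum(cost * x[name] for name, cost, _ in variants)                 # minimize cost
prob += lpSum(x[name] for name, _, _ in variants) == 1                      # exactly one variant runs
prob += lpSum(util * x[name] for name, _, util in variants) >= min_utility  # satisfy the contract

prob.solve()
print([name for name, _, _ in variants if x[name].value() == 1])            # ['balanced']
```

    In MQuAT, problems of this shape are generated at runtime from the runtime model and the quality contracts, and the solver's solution is interpreted as a reconfiguration decision.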

    Using swarm intelligence for distributed job scheduling on the grid

    With the rapid growth of data and computational needs, distributed systems and computational Grids are gaining more and more attention. Grids play an important and growing role in today's networks. The huge amount of computation a Grid can perform in a given time cannot be matched by the best supercomputers. However, Grid performance can still be improved by making sure all the resources available in the Grid are utilized through a good load balancing algorithm. The purpose of such algorithms is to make sure all nodes are equally involved in Grid computations. This research proposes two new distributed, swarm-intelligence-inspired load balancing algorithms. One is based on ant colony optimization and is called AntZ; the other is based on particle swarm optimization and is called ParticleZ. Distributed load balancing avoids a single point of failure in the system. In the AntZ algorithm, an ant is invoked in response to submitting a job to the Grid, and this ant surfs the network to find the best resource to deliver the job to. In the ParticleZ algorithm, each node plays the role of a particle and moves toward other particles by sharing its workload with them. We simulate our proposed approaches using GridSim, a toolkit dedicated to Grid simulations. The performance of the algorithms is evaluated using several performance criteria (e.g., makespan and load balancing level). A comparison of our proposed approaches with a classical approach called the State Broadcast Algorithm and two random approaches is also provided. Experimental results show that the proposed algorithms (AntZ and ParticleZ) can perform very well in a Grid environment. In particular, the use of particle swarm optimization, which has not previously been addressed in the literature, can yield better performance than the ant colony approach in many scenarios.
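    The full AntZ and ParticleZ algorithms are not described in the abstract. A toy sketch of the ant-style idea (hypothetical data and names, not the paper's implementation), in which an ant hops through the overlay and delivers the job to the least loaded node it has seen, could look like this:

```python
import random

def ant_dispatch(job_load, nodes, neighbors, start, steps=10):
    """Toy ant-style dispatch (illustrative only, not the AntZ algorithm):
    an ant hops through the overlay for a few steps, remembers the least
    loaded node it has visited, and delivers the job there."""
    current, best = start, start
    for _ in range(steps):
        current = random.choice(neighbors[current])
        if nodes[current] < nodes[best]:
            best = current
    nodes[best] += job_load              # the chosen node takes on the job's load
    return best

# Hypothetical 4-node grid: current load per node and overlay links.
nodes = {"a": 3.0, "b": 1.0, "c": 5.0, "d": 0.5}
neighbors = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
print(ant_dispatch(2.0, nodes, neighbors, start="a"))
```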

    Running Time Analysis of the (1+1)-EA for Robust Linear Optimization

    Evolutionary algorithms (EAs) have found many successful real-world applications, where the optimization problems are often subject to a wide range of uncertainties. To understand the practical behaviors of EAs theoretically, a series of efforts has been devoted to analyzing the running time of EAs for optimization under uncertainties. Existing studies mainly focus on noisy and dynamic optimization, while another common type of uncertain optimization, i.e., robust optimization, has rarely been touched. In this paper, we analyze the expected running time of the (1+1)-EA solving robust linear optimization problems (i.e., linear problems under robust scenarios) with a cardinality constraint k. Two common robust scenarios, i.e., deletion-robust and worst-case, are considered. In particular, we derive tight ranges of the robust parameter d or budget k allowing the (1+1)-EA to find an optimal solution in polynomial running time, which disclose the potential of EAs for robust optimization.
    Comment: 17 pages, 1 table
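    The paper's exact fitness formulations are not given in the abstract. The sketch below shows the standard (1+1)-EA with per-bit mutation probability 1/n together with one plausible deletion-robust objective under a cardinality constraint k (an assumption for illustration only):

```python
import random

def one_plus_one_ea(fitness, n, max_evals=100_000):
    """Standard (1+1)-EA: flip each bit independently with probability 1/n
    and keep the offspring if it is not worse than the parent."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_evals):
        y = [bit ^ 1 if random.random() < 1.0 / n else bit for bit in x]
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx

def deletion_robust_fitness(weights, k, d):
    """Hypothetical deletion-robust objective under a cardinality constraint k:
    the value of a feasible selection is the sum of its weights after an
    adversary deletes the d largest selected items (a guess, not the paper's
    exact formulation); infeasible selections are penalized."""
    def f(x):
        if sum(x) > k:
            return -sum(x)                   # penalty pushes back towards feasibility
        chosen = sorted((w for w, b in zip(weights, x) if b), reverse=True)
        return sum(chosen[d:])
    return f

weights = [5.0, 4.0, 3.0, 2.0, 1.0]
print(one_plus_one_ea(deletion_robust_fitness(weights, k=3, d=1), n=5))
```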

    Unbiased Black-Box Complexities of Jump Functions

    We analyze the unbiased black-box complexity of jump functions with small, medium, and large sizes of the fitness plateau surrounding the optimal solution. Among other results, we show that when the jump size is (1/2 - ε)n, that is, when only a small constant fraction of the fitness values is visible, the unbiased black-box complexities for arities 3 and higher are of the same order as those for the simple OneMax function. Even for the extreme jump function, in which all but the two fitness values n/2 and n are blanked out, polynomial-time mutation-based (i.e., unary unbiased) black-box optimization algorithms exist. This is quite surprising given that for the extreme jump function almost the whole search space (all but a Θ(n^{-1/2}) fraction) is a plateau of constant fitness. To prove these results, we introduce new tools for the analysis of unbiased black-box complexities, for example, selecting the new parent individual not only by comparing the fitnesses of the competing search points, but also by taking into account the (empirical) expected fitnesses of their offspring.
    Comment: This paper is based on results presented in the conference versions [GECCO 2011] and [GECCO 2014]
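    For readers unfamiliar with jump functions, here is one common way to write them down (conventions differ slightly between papers, so this is only an approximation of the functions analyzed here): the landscape looks like OneMax except that the fitness values near the extremes are blanked out, and in the extreme case only the values n/2 and n remain visible.

```python
def jump(x, ell):
    """A jump-type function with plateau width ell (one common convention;
    the paper's exact definition may differ slightly): fitness values close
    to the extremes are blanked out, except at the optimum itself."""
    n, ones = len(x), sum(x)
    if ones == n:
        return n                     # unique optimum: the all-ones string
    if ell < ones < n - ell:
        return ones                  # OneMax-like slope in the middle
    return 0                         # blanked-out plateau

def extreme_jump(x):
    """Extreme jump: only the fitness values n/2 and n remain visible."""
    n, ones = len(x), sum(x)
    if ones == n:
        return n
    if ones == n // 2:
        return n // 2
    return 0
```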