    Evolutionary algorithms for the chance-constrained knapsack problem

    Evolutionary algorithms have been widely used for a range of stochastic optimization problems. In most studies, the goal is to optimize the expected quality of the solution. Motivated by real-world problems where constraint violations have extremely disruptive effects, we consider a variant of the knapsack problem in which the profit is maximized under the constraint that the knapsack capacity bound is violated with a small probability of at most α. This problem is known as the chance-constrained knapsack problem, and chance-constrained optimization problems have so far gained little attention in the evolutionary computation literature. We show how to use popular deviation inequalities such as Chebyshev's inequality and Chernoff bounds as part of the solution evaluation when tackling these problems with evolutionary algorithms, and we compare the effectiveness of our algorithms on a wide range of chance-constrained knapsack instances.
    Yue Xie, Oscar Harper, Hirad Assimi, Aneta Neumann, Frank Neumann
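
    The paper's own code is not reproduced here, but the evaluation step described above is easy to sketch. The following Python fragment is a minimal sketch, assuming independent item weights with known means and variances (and, for the Chernoff-Hoeffding variant, weights within ±δ of their means); all function names, parameters, and toy numbers are illustrative assumptions, not taken from the paper.

```python
import math

def chebyshev_feasible(expected, variance, capacity, alpha):
    """One-sided Chebyshev (Cantelli): Pr[W >= C] <= V / (V + (C - E)^2).
    Hence Pr[W > C] <= alpha holds whenever C - E >= sqrt(V * (1 - alpha) / alpha)."""
    return capacity - expected >= math.sqrt(variance * (1 - alpha) / alpha)

def hoeffding_feasible(expected, k, delta, capacity, alpha):
    """Chernoff-Hoeffding bound for k independent weights, each within +/- delta
    of its mean: Pr[W - E >= t] <= exp(-t^2 / (2 * k * delta^2)),
    so the constraint holds when t >= delta * sqrt(2 * k * ln(1 / alpha))."""
    return capacity - expected >= delta * math.sqrt(2 * k * math.log(1 / alpha))

# Toy usage: three selected items with uniform weights of variance 1
# (a uniform weight on [mu - delta, mu + delta] has variance delta^2 / 3).
mu, var = [4.0, 5.0, 6.0], [1.0, 1.0, 1.0]
E, V = sum(mu), sum(var)
print(chebyshev_feasible(E, V, capacity=22.0, alpha=0.1))                          # True
print(hoeffding_feasible(E, k=3, delta=math.sqrt(3.0), capacity=22.0, alpha=0.1))  # True
```

    An evolutionary algorithm can then treat any solution failing such a check as infeasible (or penalize it), which is the role the deviation inequalities play in the solution evaluation described above.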

    Evolutionary bi-objective optimization for the dynamic chance-constrained knapsack problem based on tail bound objectives

    Real-world combinatorial optimization problems are often stochastic and dynamic. It is therefore essential to make optimal and reliable decisions with a holistic approach. In this paper, we consider the dynamic chance-constrained knapsack problem, where the weight of each item is stochastic, the capacity constraint changes dynamically over time, and the objective is to maximize the total profit subject to the constraint that the probability of the total weight exceeding the capacity stays below a small threshold. We make use of prominent tail inequalities such as Chebyshev's inequality and the Chernoff bound to approximate the probabilistic constraint. Our key contribution is to introduce an additional objective that estimates the minimal capacity bound for a given stochastic solution that still meets the chance constraint. This objective helps to cater for dynamic changes to the stochastic problem. We apply single- and multi-objective evolutionary algorithms to the problem and show how bi-objective optimization can help to deal with dynamic chance-constrained problems.
    Hirad Assimi, Oscar Harper, Yue Xie, Aneta Neumann and Frank Neumann
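
    A minimal sketch of that additional objective, assuming Cantelli's one-sided Chebyshev inequality as the tail bound (the paper also uses a Chernoff bound; all names and toy data here are illustrative): solving V / (V + (C - E)²) ≤ α for C yields the smallest capacity the solution can tolerate.

```python
import math

def minimal_capacity(selected, mu, var, alpha):
    """Smallest capacity C with Pr[total weight > C] <= alpha under
    Cantelli's inequality: C = E + sqrt(V * (1 - alpha) / alpha).
    Lower values mean the solution survives tighter capacity bounds."""
    E = sum(mu[i] for i in selected)
    V = sum(var[i] for i in selected)
    return E + math.sqrt(V * (1 - alpha) / alpha)

# Toy usage: items 0 and 2 selected out of three stochastic items.
mu, var = [4.0, 5.0, 6.0], [0.5, 1.0, 2.0]
print(minimal_capacity([0, 2], mu, var, alpha=0.1))  # 10 + sqrt(22.5) ~= 14.74
```

    Minimizing this value alongside the profit keeps a population of solutions that stay feasible across a range of capacities, which is what makes the bi-objective formulation useful when the capacity bound changes over time.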

    Optimizing Chance-Constrained Submodular Problems with Variable Uncertainties

    Chance constraints are frequently used to limit the probability of constraint violations in real-world optimization problems where the constraints involve stochastic components. We study chance-constrained submodular optimization problems, which capture a wide range of optimization problems with stochastic constraints. Previous studies considered submodular problems with stochastic knapsack constraints in the case where the uncertainties are the same for each item that can be selected. However, uncertainty levels are usually variable with respect to the different stochastic components in real-world scenarios, and a rigorous analysis for this setting is missing in the context of submodular optimization. This paper provides the first such analysis for this case, where the weights of items have the same expectation but different dispersion. We present greedy algorithms that can obtain a high-quality solution, i.e., a constant approximation ratio to the given optimal solution from the deterministic setting. In the experiments, we demonstrate that the algorithms perform effectively on several chance-constrained instances of the maximum coverage problem and the influence maximization problem.
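
    As a concrete illustration (a sketch under assumptions, not the paper's implementation), the following greedy maximizes a monotone submodular function while checking each candidate set against the chance constraint via Cantelli's one-sided Chebyshev bound; the objective f, the item statistics, and all names are hypothetical.

```python
import math

def greedy_chance_constrained(n, f, mu, var, capacity, alpha):
    """Generic greedy for a monotone submodular f under the chance
    constraint Pr[total weight > capacity] <= alpha (Cantelli bound)."""
    def feasible(S):
        E = sum(mu[i] for i in S)
        V = sum(var[i] for i in S)
        return capacity - E >= math.sqrt(V * (1 - alpha) / alpha)

    S = set()
    while True:
        # Pick the feasible element with the largest marginal gain f(S + e) - f(S).
        best, best_gain = None, 0.0
        for e in range(n):
            if e in S or not feasible(S | {e}):
                continue
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            return S
        S.add(best)

# Toy usage: a coverage-style objective over small ground sets.
sets = [{1, 2}, {2, 3}, {3, 4, 5}, {5}]
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
mu, var = [1.0] * 4, [0.2, 0.2, 0.6, 0.1]  # same means, different dispersion
print(greedy_chance_constrained(4, f, mu, var, capacity=4.0, alpha=0.1))
```

    The toy data mirrors the setting analysed in the paper: identical expected weights with item-specific dispersion.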

    Analysis of the (1+1) EA on LeadingOnes with Constraints

    Understanding how evolutionary algorithms perform on constrained problems has gained increasing attention in recent years. In this paper, we study how evolutionary algorithms optimize constrained versions of the classical LeadingOnes problem. We first provide a runtime analysis of the classical (1+1) EA on the LeadingOnes problem with a deterministic cardinality constraint, giving Θ(n(n−B)log(B) + n²) as the tight bound. Our results show that the behaviour of the algorithm is highly dependent on the bound B of the uniform constraint. Afterwards, we consider the problem in the context of stochastic constraints and provide insights, using experimental studies, on how the (μ+1) EA is able to deal with these constraints in a sampling-based setting.
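
    A minimal sketch of the setting analysed above: a (1+1) EA on LeadingOnes with the cardinality constraint that at most B bits are set. Rejecting infeasible offspring is just one common constraint-handling choice and is an assumption here, as is every name in the fragment.

```python
import random

def leading_ones(x):
    """Number of consecutive ones from the left."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def one_plus_one_ea(n, B, max_evals=100_000):
    """(1+1) EA with standard 1/n bit-flip mutation on LeadingOnes,
    subject to the cardinality constraint sum(x) <= B.
    Infeasible offspring are rejected (one common handling strategy)."""
    x = [0] * n                      # the all-zeros string is always feasible
    fx = leading_ones(x)
    for _ in range(max_evals):
        y = [b ^ (random.random() < 1.0 / n) for b in x]  # flip each bit w.p. 1/n
        if sum(y) <= B and leading_ones(y) >= fx:
            x, fx = y, leading_ones(y)
        if fx == min(n, B):          # optimum under the constraint
            break
    return x, fx

print(one_plus_one_ea(n=20, B=10))
```

    The constrained optimum has fitness min(n, B), which is why the runtime bound above depends so strongly on the constraint bound B.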