A Relaxed FPTAS for Chance-Constrained Knapsack
The stochastic knapsack problem is a stochastic version of the well-known deterministic knapsack problem, in which some of the input values are random variables. There are several variants of the stochastic problem. In this paper we concentrate on the chance-constrained variant, where item values are deterministic and item sizes are stochastic. The goal is to find a maximum-value allocation subject to the constraint that the overflow probability is at most a given value. Previous work showed a PTAS for the problem for various distributions (Poisson, Exponential, Bernoulli, and Normal). Some of these algorithms strictly respect the constraint, while others relax it by a factor of (1+epsilon). All of them use Omega(n^{1/epsilon}) time. A very recent work showed an "almost FPTAS" algorithm for Bernoulli distributions with O(poly(n) * quasipoly(1/epsilon)) time.
In this paper we present an FPTAS for normal distributions with a solution that satisfies the chance constraint in a relaxed sense. The normal distribution is particularly important because, by the Berry-Esseen theorem, an algorithm that handles the normal distribution also handles, under mild conditions, arbitrary independent distributions. To the best of our knowledge, this is the first (relaxed or non-relaxed) FPTAS for the problem. In fact, our algorithm runs in poly(n/epsilon) time. We achieve the FPTAS by a delicate combination of previous techniques plus a new alternative solution for the non-heavy elements that is based on a non-convex program with a simple structure and an O(n^2 log(n/epsilon)) running time. We believe this part is also interesting in its own right.
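Under normal distributions the chance constraint itself is easy to evaluate, which is part of what makes this variant tractable: the sum of independent normal sizes is again normal, so the probabilistic constraint collapses to a deterministic one. A minimal sketch of that reduction (function names are illustrative, not from the paper):

```python
import math
from statistics import NormalDist

def normal_chance_feasible(mus, sigmas, capacity, p_overflow):
    """Check whether P(total size > capacity) <= p_overflow when item sizes
    are independent normals N(mu_i, sigma_i^2).

    The sum of independent normals is N(sum mu_i, sum sigma_i^2), so the
    chance constraint is equivalent to the deterministic condition
        sum mu_i + z * sqrt(sum sigma_i^2) <= capacity,
    where z is the (1 - p_overflow)-quantile of the standard normal.
    """
    mu = sum(mus)
    sigma = math.sqrt(sum(s * s for s in sigmas))
    z = NormalDist().inv_cdf(1.0 - p_overflow)
    return mu + z * sigma <= capacity

# Two items with mean size 3 and standard deviation 1, capacity 10,
# overflow probability at most 5%: 6 + 1.645 * sqrt(2) ~= 8.33 <= 10.
print(normal_chance_feasible([3, 3], [1, 1], 10.0, 0.05))  # True
```

This is why the algorithmic difficulty lies in the combinatorial search over subsets, not in evaluating the constraint for a fixed subset.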
Stochastic Combinatorial Optimization via Poisson Approximation
We study several stochastic combinatorial problems, including the expected
utility maximization problem, the stochastic knapsack problem and the
stochastic bin packing problem. A common technical challenge in these problems
is to optimize some function of the sum of a set of random variables. The
difficulty is mainly due to the fact that the probability distribution of the
sum is the convolution of a set of distributions, which is not an easy
objective function to work with. To tackle this difficulty, we introduce the
Poisson approximation technique. The technique is based on the Poisson
approximation theorem discovered by Le Cam, which enables us to approximate the
distribution of the sum of a set of random variables using a compound Poisson
distribution.
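For Bernoulli summands the idea can be checked directly: Le Cam's theorem bounds the total variation distance between a sum of independent Bernoulli(p_i) variables and a Poisson variable with the same mean by the sum of the p_i^2, so the Poisson proxy is accurate whenever all p_i are small. A small self-contained check (the paper's technique uses compound Poisson distributions for general discrete sizes; this sketch covers only the plain Bernoulli case):

```python
import math

def bernoulli_sum_pmf(ps):
    """Exact PMF of a sum of independent Bernoulli(p_i), by convolution."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)
            new[k + 1] += q * p
        pmf = new
    return pmf

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def tv_distance(ps):
    """Total variation distance between the Bernoulli sum and Poisson(sum p_i)."""
    lam = sum(ps)
    exact = bernoulli_sum_pmf(ps)
    # TV distance is half the L1 difference; add the Poisson tail mass that
    # lies beyond the (finite) support of the Bernoulli sum.
    l1 = sum(abs(exact[k] - poisson_pmf(lam, k)) for k in range(len(exact)))
    tail = 1.0 - sum(poisson_pmf(lam, k) for k in range(len(exact)))
    return 0.5 * (l1 + tail)

ps = [0.05] * 40  # forty small success probabilities, lam = 2.0
print(tv_distance(ps) <= sum(p * p for p in ps))  # True: Le Cam's bound holds
```

The convolution above is exactly the "difficult objective" the abstract refers to; the Poisson approximation replaces it by a single closed-form distribution at a quantifiable cost in accuracy.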
We first study the expected utility maximization problem introduced recently [Li and Deshpande, FOCS11]. For monotone and Lipschitz utility functions, we obtain an additive PTAS if there is a multidimensional PTAS for the multi-objective version of the problem, strictly generalizing the previous result.
For the stochastic bin packing problem (introduced in [Kleinberg, Rabani and
Tardos, STOC97]), we show there is a polynomial time algorithm which uses at
most the optimal number of bins, if we relax the size of each bin and the
overflow probability by eps.
For stochastic knapsack, we show a (1+eps)-approximation using eps extra capacity, even when the size and reward of each item may be correlated and cancellations of items are allowed. This generalizes the previous work [Bhalgat, Goel and Khanna, SODA11] for the case without correlation and cancellation. Our algorithm is also simpler. We also present a factor (2+eps) approximation algorithm for stochastic knapsack with cancellations, improving the currently known approximation factor of 8 [Gupta, Krishnaswamy, Molinaro and Ravi, FOCS11].
Comment: 42 pages, 1 figure. A preliminary version appears in the Proceedings of the 45th ACM Symposium on the Theory of Computing (STOC13).
Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems
We study the stochastic versions of a broad class of combinatorial problems
where the weights of the elements in the input dataset are uncertain. The class
of problems that we study includes shortest paths, minimum weight spanning trees, minimum weight matchings, and other combinatorial problems such as knapsack. We observe that the expected value is inadequate in capturing
different types of {\em risk-averse} or {\em risk-prone} behaviors, and instead
we consider a more general objective which is to maximize the {\em expected
utility} of the solution for some given utility function, rather than the
expected weight (expected weight becomes a special case). Under the assumption that there is a pseudopolynomial time algorithm for the {\em exact} version of the problem (this is true for the problems mentioned above), we can obtain the following approximation results for several important classes of utility functions: (1) If the utility function \uti is continuous, upper-bounded by a constant and \lim_{x\rightarrow+\infty}\uti(x)=0, we show that we can obtain a polynomial time approximation algorithm with an {\em additive error} \epsilon for any constant \epsilon>0. (2) If the utility function \uti is a concave increasing function, we can obtain a polynomial time approximation scheme (PTAS). (3) If the utility function \uti is increasing and has a bounded derivative, we can obtain a polynomial time approximation scheme. Our
results recover or generalize several prior results on stochastic shortest
path, stochastic spanning tree, and stochastic knapsack. Our algorithm for
utility maximization makes use of the separability of exponential utility and a
technique to decompose a general utility function into exponential utility
functions, which may be useful in other stochastic optimization problems.
Comment: 31 pages. A preliminary version appears in the Proceedings of the 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS 2011). This version contains several new results (results (2) and (3) in the abstract).
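The separability the authors exploit is the elementary identity E[exp(-lam * sum X_i)] = prod E[exp(-lam * X_i)] for independent X_i, which is what makes exponential utility compatible with pseudopolynomial exact algorithms. A brute-force check of the identity on small discrete distributions (illustrative code, not the paper's algorithm):

```python
import itertools
import math

def expected_exp_utility(dists, lam):
    """E[exp(-lam * sum X_i)] by enumerating the joint support of independent
    discrete variables, each given as a {value: probability} dict."""
    total = 0.0
    for combo in itertools.product(*[d.items() for d in dists]):
        value = sum(v for v, _ in combo)
        prob = math.prod(p for _, p in combo)
        total += prob * math.exp(-lam * value)
    return total

def product_of_marginals(dists, lam):
    """The same expectation via separability: a product of per-variable terms."""
    result = 1.0
    for d in dists:
        result *= sum(p * math.exp(-lam * v) for v, p in d.items())
    return result

dists = [{0: 0.5, 1: 0.5}, {1: 0.3, 2: 0.7}, {0: 0.9, 5: 0.1}]
a = expected_exp_utility(dists, 0.4)
b = product_of_marginals(dists, 0.4)
print(abs(a - b) < 1e-12)  # True: exponential utility factorizes
```

The left-hand computation is exponential in the number of variables, while the right-hand one is linear; decomposing a general utility into exponential pieces transfers this advantage to a much broader class of objectives.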
Evolutionary algorithms for the chance-constrained knapsack problem
Evolutionary algorithms have been widely used for a range of stochastic optimization problems. In most studies, the goal is to optimize the expected quality of the solution. Motivated by real-world problems where constraint violations have extremely disruptive effects, we consider a variant of the knapsack problem where the profit is maximized under the constraint that the knapsack capacity bound is violated with a small probability of at most alpha. This problem is known as the chance-constrained knapsack problem, and chance-constrained optimization problems have so far gained little attention in the evolutionary computation literature. We show how to use popular deviation inequalities such as Chebyshev's inequality and Chernoff bounds as part of the solution evaluation when tackling these problems by evolutionary algorithms, and compare the effectiveness of our algorithms on a wide range of chance-constrained knapsack instances.
Yue Xie, Oscar Harper, Hirad Assimi, Aneta Neumann, Frank Neumann
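A one-sided Chebyshev (Cantelli) bound, for instance, turns the chance constraint into a cheap deterministic test on the mean and variance of the selected items, which is exactly the kind of surrogate an evolutionary algorithm can evaluate per candidate. A hedged sketch (function names are illustrative; the paper also uses Chernoff bounds):

```python
def cantelli_overflow_bound(mean_sum, var_sum, capacity):
    """One-sided Chebyshev (Cantelli) upper bound on P(total weight >= capacity):
    var / (var + slack^2) for slack = capacity - mean > 0; the trivial bound
    1.0 is returned when the mean load already meets or exceeds capacity."""
    slack = capacity - mean_sum
    if slack <= 0:
        return 1.0
    return var_sum / (var_sum + slack * slack)

def is_chance_feasible(selection, means, variances, capacity, alpha):
    """Accept a candidate knapsack selection only when the Cantelli bound
    certifies the capacity is exceeded with probability at most alpha."""
    mean_sum = sum(m for m, s in zip(means, selection) if s)
    var_sum = sum(v for v, s in zip(variances, selection) if s)
    return cantelli_overflow_bound(mean_sum, var_sum, capacity) <= alpha

# Three items; select the first two. Mean load 8, variance 2, capacity 13:
# the bound is 2 / (2 + 25) ~= 0.074 <= alpha = 0.1, so the candidate passes.
means, variances = [4, 4, 6], [1, 1, 4]
print(is_chance_feasible([1, 1, 0], means, variances, 13, 0.1))  # True
```

Because the bound is conservative, a selection it accepts is guaranteed feasible, while some truly feasible selections may be rejected; this is the usual price of replacing the exact overflow probability with a deviation inequality.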
Attention and Sensor Planning in Autonomous Robotic Visual Search
This thesis is concerned with the incorporation of saliency in visual search and the development of sensor planning strategies for visual search. The saliency model is a mixture of two schemes that extract visual clues regarding the structure of the environment and object-specific features. The sensor planning methods, namely Greedy Search with Constraint (GSC), Extended Greedy Search (EGS), and Dynamic Look Ahead Search (DLAS), are approximations to the optimal solution for the problem of object search, as extensions to the work of Yiming Ye.
Experiments were conducted to evaluate the proposed methods. They show that by using saliency in search, a performance improvement of up to 75% is attainable in terms of the number of actions taken to complete the search. As for the planning strategies, the GSC algorithm achieved the highest detection rate and the best efficiency in terms of the cost it incurs to explore each percentage of the environment.
Approximation Algorithms for the Probability Maximizing Combinatorial Optimization Problem
Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Industrial Engineering, August 2019.
In this thesis, we consider a variant of the deterministic combinatorial optimization problem (DCO) in which the data are uncertain: the probability maximizing combinatorial optimization problem (PCO). PCO is the problem of maximizing the probability of satisfying the capacity constraint while guaranteeing that the total profit of the selected subset is at least a given value. PCO is closely related to the chance-constrained combinatorial optimization problem (CCO), in which the objective function and the constraint function of PCO are switched: CCO searches for a subset that maximizes the total profit while guaranteeing that the probability of satisfying the capacity constraint is at least a given threshold. We discuss the relation between the two problems and analyze their complexity in special cases. In addition, we give pseudopolynomial-time exact algorithms for PCO and CCO that use an exact algorithm for a deterministic constrained combinatorial optimization problem. Further, we propose an approximation scheme for PCO that is a fully polynomial time approximation scheme (FPTAS) in some special cases that are NP-hard. An approximation scheme for CCO, derived in the process of constructing the approximation scheme for PCO, is also presented.
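The relation between the two problems can be made algorithmic: given an oracle for CCO, PCO can be approximated by bisecting on the probability threshold, in the spirit of the bisection procedure over rho discussed in the thesis. A schematic sketch (the oracle interface and the toy oracle below are hypothetical; a real CCO oracle's optimal profit is non-increasing in the required probability, which is what the bisection relies on):

```python
def solve_pco_via_cco(cco_oracle, profit_target, tol=1e-6):
    """Bisection sketch: reduce PCO (maximize the success probability subject
    to total profit >= profit_target) to repeated calls of a CCO oracle.

    cco_oracle(rho) is assumed to return the maximum total profit achievable
    subject to the success probability being at least rho.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cco_oracle(mid) >= profit_target:
            lo = mid  # probability mid is attainable with enough profit
        else:
            hi = mid
    return lo

# Toy oracle: maximum profit decays linearly as the required probability grows,
# so a profit target of 40 is attainable exactly up to probability 0.6.
print(round(solve_pco_via_cco(lambda rho: 100 * (1 - rho), 40), 3))  # 0.6
```

Each bisection step costs one oracle call, so an FPTAS for CCO yields an efficient scheme for PCO up to the bisection tolerance, which mirrors how the thesis derives its approximation scheme for CCO en route to the one for PCO.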
Chapter 1 Introduction
1.1 Problem Description
1.2 Literature Review
1.3 Research Motivation and Contribution
1.4 Organization of the Thesis
Chapter 2 Computational Complexity of Probability Maximizing Combinatorial Optimization Problem
2.1 Complexity of General Case of PCO and CCO
2.2 Complexity of CCO in Special Cases
2.3 Complexity of PCO in Special Cases
Chapter 3 Exact Algorithms
3.1 Exact Algorithm of PCO
3.2 Exact Algorithm of CCO
Chapter 4 Approximation Scheme for Probability Maximizing Combinatorial Optimization Problem
4.1 Bisection Procedure of rho
4.2 Approximation Scheme of CCO
4.3 Variation of the Bisection Procedure of rho
4.4 Comparison to the Approximation Scheme of Nikolova
Chapter 5 Conclusion
5.1 Concluding Remarks
5.2 Future Works
Bibliography
Abstract in Korean
- …