General Bounds for Incremental Maximization
We propose a theoretical framework to capture incremental solutions to
cardinality constrained maximization problems. The defining characteristic of
our framework is that the cardinality/support of the solution is bounded by a
value that grows over time, and we allow the solution to be
extended one element at a time. We investigate the best-possible competitive
ratio of such an incremental solution, i.e., the worst ratio, over all
cardinalities k, between the incremental solution after k steps and an optimum
solution of cardinality k. We define a large class of problems that contains many
important cardinality constrained maximization problems like maximum matching,
knapsack, and packing/covering problems. We provide a general
2.618-competitive incremental algorithm for this class of problems, and show
that no algorithm can have competitive ratio below 2.18 in general.
In the second part of the paper, we focus on the inherently incremental
greedy algorithm that increases the objective value as much as possible in each
step. This algorithm is known to be 1.58-competitive for submodular objective
functions, but it has unbounded competitive ratio for the class of incremental
problems mentioned above. We define a relaxed submodularity condition for the
objective function, capturing problems like maximum (weighted) (b-)matching
and a variant of the maximum flow problem. We show that the greedy algorithm
has competitive ratio exactly 2.313 for the class of problems that satisfy
this relaxed submodularity condition.
Note that our upper bounds on the competitive ratios translate to
approximation ratios for the underlying cardinality constrained problems.
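The competitive-ratio notion above has a direct executable reading: build one growing solution, then compare each of its prefixes against the best solution of the same cardinality. The sketch below does this for a toy coverage objective (monotone submodular; all item names and point sets are hypothetical), computing the empirical competitive ratio by brute force.

```python
from itertools import combinations

# Toy coverage objective (monotone submodular); item names and point sets
# are hypothetical. f(S) = number of points covered by the items in S.
items = {"x": {1, 2, 3, 4, 5, 6},
         "a": {1, 2, 3, 7, 8},
         "b": {4, 5, 6, 9, 10}}

def f(sol):
    pts = set()
    for i in sol:
        pts |= items[i]
    return len(pts)

def greedy_incremental():
    """One growing solution: add the item with the largest marginal gain
    in each step, never revoking earlier choices."""
    sol = []
    while len(sol) < len(items):
        rest = [i for i in items if i not in sol]
        sol.append(max(rest, key=lambda i: f(sol + [i])))
    return sol

def opt(k):
    """Best objective over all solutions of cardinality k (brute force)."""
    return max(f(c) for c in combinations(items, k))

sol = greedy_incremental()
# Worst ratio, over all k, between OPT(k) and the value of the
# incremental solution truncated to its first k elements.
ratio = max(opt(k) / f(sol[:k]) for k in range(1, len(items) + 1))
print(sol, ratio)
```

On this instance the incremental solution starts with the single best item "x", but its size-2 prefix then misses the best pair {"a", "b"}, so the ratio is 1.25 rather than 1.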
Robust and MaxMin Optimization under Matroid and Knapsack Uncertainty Sets
Consider the following problem: given a set system (U,I) and an edge-weighted
graph G = (U, E) on the same universe U, find the set A in I such that the
Steiner tree cost with terminals A is as large as possible: "which set in I is
the most difficult to connect up?" This is an example of a max-min problem:
find the set A in I such that the value of some minimization (covering) problem
is as large as possible.
In this paper, we show that for certain covering problems which admit good
deterministic online algorithms, we can give good algorithms for max-min
optimization when the set system I is given by a p-system or q-knapsacks or
both. This result is similar to results for constrained maximization of
submodular functions. Although many natural covering problems are not even
approximately submodular, we show that one can use properties of the online
algorithm as a surrogate for submodularity.
Moreover, we give stronger connections between max-min optimization and
two-stage robust optimization, and hence give improved algorithms for robust
versions of various covering problems, for cases where the uncertainty sets are
given by p-systems and q-knapsacks.
Comment: 17 pages. Preliminary version combining this paper and
http://arxiv.org/abs/0912.1045 appeared in ICALP 2010.
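As a concrete max-min instance of the kind described (all names and weights hypothetical), the sketch below brute-forces the outer maximization over a uniform matroid, the simplest example of a p-system, with weighted set cover as the inner covering problem.

```python
from itertools import combinations

# Toy max-min instance; names and weights are hypothetical. The inner
# minimization is weighted set cover over the chosen terminals; the outer
# set system I is a uniform matroid (all terminal sets of size at most p).
universe = [1, 2, 3, 4]
cover_sets = {"s1": ({1, 2}, 1.0), "s2": ({3}, 2.0),
              "s3": ({2, 3, 4}, 3.0), "s4": ({4}, 1.0)}

def min_cover_cost(terminals):
    """Cheapest collection of sets covering `terminals` (brute force)."""
    best, names = float("inf"), list(cover_sets)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            covered = set().union(set(), *(cover_sets[n][0] for n in combo))
            if set(terminals) <= covered:
                best = min(best, sum(cover_sets[n][1] for n in combo))
    return best

def max_min(p):
    """Terminal set of size at most p that is hardest to cover."""
    candidates = [set(A) for r in range(p + 1)
                  for A in combinations(universe, r)]
    best = max(candidates, key=min_cover_cost)
    return best, min_cover_cost(best)

print(max_min(2))  # the most difficult pair of terminals to cover
```

The double brute force is exponential, of course; the point of the paper is to replace the inner enumeration with a deterministic online covering algorithm whose behaviour substitutes for submodularity.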
Solving Assembly Line Balancing Problems by Combining IP and CP
Assembly line balancing problems consist in partitioning the work necessary
to assemble a number of products among different stations of an assembly line.
We present a hybrid approach for solving such problems, which combines
constraint programming and integer programming.
Comment: 10 pages, Sixth Annual Workshop of the ERCIM Working Group on
Constraints, Prague, June 2001.
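A minimal sketch of the underlying balancing problem itself (not of the paper's hybrid IP/CP method): given task times, precedence constraints, and a cycle time, a station is filled until no remaining eligible task fits. The instance and the longest-task-first rule below are illustrative assumptions, and every task is assumed to fit the cycle time on its own.

```python
# Hypothetical instance: task durations, precedence lists, and a cycle time.
task_time = {"t1": 4, "t2": 3, "t3": 5, "t4": 2, "t5": 4}
preds = {"t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"], "t5": ["t4"]}
cycle_time = 8

def greedy_stations():
    """Station-oriented greedy: open stations one at a time, packing each
    with eligible tasks (predecessors done, load still fits)."""
    done, stations = set(), []
    while len(done) < len(task_time):
        load, station = 0, []
        while True:
            fits = [t for t in task_time
                    if t not in done and set(preds[t]) <= done
                    and load + task_time[t] <= cycle_time]
            if not fits:
                break
            t = max(fits, key=task_time.get)  # longest-task-first rule
            station.append(t)
            done.add(t)
            load += task_time[t]
        stations.append(station)
    return stations

print(greedy_stations())
```

Here total work is 18 and the cycle time is 8, so three stations is a lower bound; the greedy happens to reach it on this instance, but in general such heuristics only bound the exact IP/CP approaches from above.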
Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints
We investigate two new optimization problems -- minimizing a submodular
function subject to a submodular lower bound constraint (submodular cover) and
maximizing a submodular function subject to a submodular upper bound constraint
(submodular knapsack). We are motivated by a number of real-world applications
in machine learning including sensor placement and data subset selection, which
require maximizing a certain submodular function (like coverage or diversity)
while simultaneously minimizing another (like cooperative cost). These problems
are often posed as minimizing the difference between submodular functions [14,
35] which is in the worst case inapproximable. We show, however, that by
phrasing these problems as constrained optimization, which is more natural for
many applications, we achieve a number of bounded approximation guarantees. We
also show that both these problems are closely related and an approximation
algorithm solving one can be used to obtain an approximation guarantee for the
other. We provide hardness results for both problems thus showing that our
approximation factors are tight up to log-factors. Finally, we empirically
demonstrate the performance and good scalability properties of our algorithms.
Comment: 23 pages. A short version of this appeared in Advances of NIPS-2013.
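For the submodular-cover direction, a standard benefit-per-cost greedy gives a feel for the problem. The instance below (coverage function, modular costs, all names hypothetical) is a simplification, since the setting in the paper allows the lower-bounding constraint and the cost to both be submodular.

```python
# Toy submodular cover: pick elements of small total cost until the
# coverage function f reaches a target. Names and numbers are hypothetical.
coverage = {"a": {1, 2, 3}, "b": {3, 4, 5}, "c": {5, 6}, "d": {1, 6}}
cost = {"a": 3.0, "b": 2.0, "c": 1.0, "d": 1.0}

def f(X):
    pts = set()
    for x in X:
        pts |= coverage[x]
    return len(pts)

def greedy_cover(target):
    """Benefit-per-cost greedy, the standard rule for (sub)modular cover."""
    X = []
    while f(X) < target:
        gains = {x: (f(X + [x]) - f(X)) / cost[x]
                 for x in coverage if x not in X and f(X + [x]) > f(X)}
        if not gains:
            raise ValueError("target not reachable")
        X.append(max(gains, key=gains.get))
    return X

sol = greedy_cover(6)  # require all 6 points covered
print(sol, sum(cost[x] for x in sol))
```

The greedy first takes the cheap high-coverage element "c", then fills in the rest; on this instance it reaches the target at total cost 6.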
Algorithms for the continuous nonlinear resource allocation problem---new implementations and numerical studies
Patriksson (2008) provided a then up-to-date survey on the
continuous, separable, differentiable and convex resource allocation problem
with a single resource constraint. Since the publication of that paper the
interest in the problem has grown: several new applications have arisen where
the problem at hand constitutes a subproblem, and several new algorithms have
been developed for its efficient solution. This paper therefore serves three
purposes. First, it provides an up-to-date extension of the survey of the
literature of the field, complementing the survey in Patriksson (2008) with
more than 20 books and articles. Second, it contributes improvements to some of
these algorithms, in particular an improved pegging (that is, variable
fixing) process in the relaxation algorithm, and an improved means to
evaluate subsolutions. Third, it numerically evaluates several relaxation
(primal) and breakpoint (dual) algorithms, incorporating a variety of pegging
strategies, as well as a quasi-Newton method. Our conclusion is that our
modification of the relaxation algorithm performs the best. At least for
problem sizes up to 30 million variables the practical time complexity for the
breakpoint and relaxation algorithms is linear.
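For one classic member of this problem family, minimizing ½·Σ(x_i − a_i)² subject to Σ x_i = b and x ≥ 0, a breakpoint (dual) method can be sketched in a few lines: the Lagrangian solution is x_i(λ) = max(a_i − λ, 0), and sorting the breakpoints λ = a_i locates the multiplier at which Σ x_i(λ) = b. This is an illustrative simplification, not one of the implementations evaluated in the paper.

```python
def quadratic_knapsack(a, b):
    """Solve min ½·Σ(x_i − a_i)² s.t. Σ x_i = b, x ≥ 0 by scanning the
    sorted breakpoints λ = a_i of the piecewise-linear dual function."""
    u = sorted(a, reverse=True)
    csum = 0.0
    for k, ak in enumerate(u, start=1):
        csum += ak
        lam = (csum - b) / k  # candidate multiplier if k variables are active
        if k == len(u) or u[k] - lam <= 0:  # next breakpoint would be inactive
            return [max(ai - lam, 0.0) for ai in a]

x = quadratic_knapsack([4.0, 1.0, 3.0], 5.0)
print(x)  # sums to 5
```

The sort dominates, giving O(n log n); the linear practical behaviour reported above comes from breakpoint methods with median-finding and from the relaxation (pegging) algorithms, which avoid the full sort.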
Algorithm Engineering in Robust Optimization
Robust optimization is a young and emerging field of research having received
a considerable increase of interest over the last decade. In this paper, we
argue that the algorithm engineering methodology fits the field of robust
optimization very well and yields a rewarding new perspective on both the
current state of research and open research directions.
To this end we go through the algorithm engineering cycle of design and
analysis of concepts, development and implementation of algorithms, and
theoretical and experimental evaluation. We show that many ideas of algorithm
engineering have already been applied in publications on robust optimization.
Most work on robust optimization is devoted to analysis of the concepts and the
development of algorithms, some papers deal with the evaluation of a particular
concept in case studies, and work on comparing concepts is just beginning. A
drawback that remains in many papers on robustness is the missing feedback
loop from experimental results back into the design.
The Knapsack Problem with Neighbour Constraints
We study a constrained version of the knapsack problem in which dependencies
between items are given by the adjacencies of a graph. In the 1-neighbour
knapsack problem, an item can be selected only if at least one of its
neighbours is also selected. In the all-neighbours knapsack problem, an item
can be selected only if all its neighbours are also selected. We give
approximation algorithms and hardness results for both uniform and arbitrary
weight and profit functions, and when the dependency graph is directed or
undirected.
Comment: Full version of IWOCA 2011 paper.
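The 1-neighbour constraint can be checked directly. The brute-force sketch below (toy undirected instance; item names, weights, and profits are hypothetical) enumerates all item sets and keeps those that fit the budget and in which every selected item has at least one selected neighbour.

```python
from itertools import combinations

# Toy 1-neighbour knapsack on a path a - b - c - d; numbers hypothetical.
profit = {"a": 6, "b": 4, "c": 5, "d": 3}
weight = {"a": 3, "b": 2, "c": 4, "d": 1}
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
budget = 6

def feasible(S):
    """Fits the budget, and every selected item has a selected neighbour."""
    return (sum(weight[i] for i in S) <= budget
            and all(adj[i] & S for i in S))

best = max(
    (S for r in range(len(profit) + 1)
     for S in map(set, combinations(profit, r)) if feasible(S)),
    key=lambda S: sum(profit[i] for i in S),
)
print(best, sum(profit[i] for i in best))
```

Note that singletons are infeasible under the 1-neighbour rule, so the problem couples the knapsack choice to the graph structure; the exponential scan here is exactly what the approximation algorithms in the paper avoid.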