Randomized Strategies for Robust Combinatorial Optimization
In this paper, we study the following robust optimization problem. Given an
independence system and candidate objective functions, we choose an independent
set, and then an adversary chooses one objective function, knowing our choice.
Our goal is to find a randomized strategy (i.e., a probability distribution
over the independent sets) that maximizes the expected objective value. To
solve the problem, we propose two types of schemes for designing approximation
algorithms. One scheme is for the case when objective functions are linear. It
first finds an approximately optimal aggregated strategy and then retrieves a
desired solution with little loss of the objective value. The approximation
ratio depends on a relaxation of an independence system polytope. As
applications, we provide approximation algorithms under a knapsack constraint
or a matroid intersection constraint by developing appropriate relaxations and retrievals.
The other scheme is based on the multiplicative weights update method. A key
technique is to introduce a new notion of parameterized reductions between
objective functions. We show that our scheme outputs a nearly α-approximate
solution whenever there exists an α-approximation algorithm for a subproblem
defined by these reductions; this improves the approximation ratios of
previous results. Using our result, we provide approximation algorithms when
the objective functions are submodular or correspond to the cardinality
robustness for the knapsack problem.
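The multiplicative weights update method behind the second scheme can be illustrated generically. The following is a minimal sketch, not the paper's algorithm: the function name, the weight-update rule, and the `best_response` oracle are all illustrative assumptions, with objective values assumed to lie in [0, 1].

```python
def mwu_robust_strategy(objectives, best_response, rounds=100, eta=0.1):
    """Generic multiplicative-weights sketch for a max-min problem:
    maintain a weight per adversary objective, repeatedly pick a solution
    that is good against the current weighted mixture, and keep weight on
    the objectives where our solutions score poorly.  Objective values
    are assumed to lie in [0, 1].  The uniform distribution over the
    collected solutions is the randomized strategy."""
    weights = [1.0] * len(objectives)
    support = []
    for _ in range(rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        sol = best_response(probs)  # oracle for the weighted subproblem
        support.append(sol)
        # objectives we already satisfy well lose weight, so the next
        # round focuses on the objectives where we are still weak
        for i, f in enumerate(objectives):
            weights[i] *= (1.0 - eta) ** f(sol)
    return support
```

On a toy instance with two linear objectives, each rewarding a different single item, the scheme alternates between the two solutions, so the resulting randomized strategy hedges against whichever objective the adversary picks.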
Robust randomized matchings
The following game is played on a weighted graph: Alice selects a matching M
and Bob selects a number k. Alice's payoff is the ratio of the weight of
the k heaviest edges of M to the maximum weight of a matching of size at
most k. If M guarantees a payoff of at least α then it is called
α-robust. In 2002, Hassin and Rubinstein gave an algorithm that returns
a 1/√2-robust matching, which is best possible.
We show that Alice can improve her payoff to 1/ln(4) by playing a
randomized strategy. This result extends to a very general class of
independence systems that includes matroid intersection, b-matchings, and
strong 2-exchange systems. It also implies an improved approximation factor for
a stochastic optimization variant known as the maximum priority matching
problem and translates to an asymptotic robustness guarantee for deterministic
matchings, in which Bob can only select numbers larger than a given constant.
Moreover, we give a new LP-based proof of Hassin and Rubinstein's bound.
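On small instances, the payoff of this game can be computed directly. The following brute-force sketch (hypothetical helper names, exponential time, toy graphs only) evaluates the robustness of a fixed matching by taking the minimum over all k of Alice's payoff.

```python
from itertools import combinations

def max_weight_matching_at_most_k(edges, k):
    """Brute-force optimum: the maximum weight of a matching with at
    most k edges.  edges is a list of (u, v, w); tiny instances only."""
    best = 0.0
    for r in range(1, k + 1):
        for combo in combinations(edges, r):
            nodes = [x for (u, v, _) in combo for x in (u, v)]
            if len(nodes) == len(set(nodes)):  # edges are vertex-disjoint
                best = max(best, sum(w for (_, _, w) in combo))
    return best

def robustness(matching, edges):
    """Alice's guaranteed payoff for a fixed matching: the minimum over
    k of (weight of her k heaviest edges) / (optimum of size at most k)."""
    weights = sorted((w for (_, _, w) in matching), reverse=True)
    alpha = 1.0
    for k in range(1, len(edges) + 1):
        opt = max_weight_matching_at_most_k(edges, k)
        if opt > 0:
            alpha = min(alpha, sum(weights[:k]) / opt)
    return alpha
```

For example, on the path a-b-c-d with weights 2, 3, 2, the maximum matching {ab, cd} is only 2/3-robust (Bob picks k = 1), while the single heavy edge {bc} is 3/4-robust (Bob picks k = 2), illustrating the tension the randomized strategy resolves.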
General Bounds for Incremental Maximization
We propose a theoretical framework to capture incremental solutions to
cardinality constrained maximization problems. The defining characteristic of
our framework is that the cardinality/support of the solution is bounded by a
value k in N that grows over time, and we allow the solution to be
extended one element at a time. We investigate the best-possible competitive
ratio of such an incremental solution, i.e., the worst ratio over all k
between the incremental solution after k steps and an optimum solution of
cardinality k. We define a large class of problems that contains many
important cardinality constrained maximization problems like maximum matching,
knapsack, and packing/covering problems. We provide a general
2.618-competitive incremental algorithm for this class of problems, and show
that no algorithm can have competitive ratio below 2.18 in general.
In the second part of the paper, we focus on the inherently incremental
greedy algorithm that increases the objective value as much as possible in each
step. This algorithm is known to be 1.58-competitive for submodular objective
functions, but it has unbounded competitive ratio for the class of incremental
problems mentioned above. We define a relaxed submodularity condition for the
objective function, capturing problems like maximum (weighted) (b-)matching
and a variant of the maximum flow problem. We show that the greedy algorithm
has competitive ratio (exactly) 2.313 for the class of problems that satisfy
this relaxed submodularity condition.
Note that our upper bounds on the competitive ratios translate to
approximation ratios for the underlying cardinality constrained problems.
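The inherently incremental greedy algorithm and the competitive ratio it is measured by can be sketched as follows. Both helper names are illustrative, and the optimum is found by brute force, so this is a toy-scale sketch rather than the paper's construction.

```python
from itertools import combinations

def incremental_greedy(ground_set, value, steps):
    """Sketch of the inherently incremental greedy: in each step, add
    the element that increases the objective the most, producing a chain
    of nested solutions of growing cardinality."""
    chosen, chain = [], []
    for _ in range(steps):
        rest = [e for e in ground_set if e not in chosen]
        if not rest:
            break
        chosen.append(max(rest, key=lambda e: value(chosen + [e])))
        chain.append(list(chosen))
    return chain

def worst_ratio(ground_set, value, chain):
    """Competitive ratio of the chain: the worst over k of
    OPT_k / value(S_k), with OPT_k found by brute force (toy sizes)."""
    ratio = 1.0
    for k, prefix in enumerate(chain, start=1):
        opt = max(value(list(c))
                  for r in range(1, k + 1)
                  for c in combinations(ground_set, r))
        if value(prefix) > 0:
            ratio = max(ratio, opt / value(prefix))
    return ratio
```

For an additive objective the greedy chain matches the optimum at every cardinality (ratio 1); the interesting cases in the paper are non-submodular objectives, where the ratio can degrade up to the stated bounds.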
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it keeps growing because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e., they are not known to be solvable in polynomial time. In practice, this means that an exact solution often cannot be guaranteed, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find "quickly" (in reasonable run-times), with "high" probability, provably "good" solutions (with low error relative to the true optimum). In the last 20 years, a new kind of algorithm, commonly called metaheuristics, has emerged in this class; metaheuristics combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two very significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
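As one concrete illustration of the intensification/diversification trade-off the report discusses, here is a minimal simulated-annealing sketch (one classic metaheuristic); the function name and all parameter values are illustrative, not drawn from the report.

```python
import math
import random

def simulated_annealing(neighbors, cost, start, temp=10.0, cooling=0.95,
                        iters=500, seed=0):
    """Minimal simulated-annealing sketch.  High temperature favors
    diversification: uphill moves are accepted with probability
    exp(-delta / temp).  As the temperature cools, the search
    intensifies around good solutions.  Illustrative defaults only."""
    rng = random.Random(seed)
    current = best = start
    for _ in range(iters):
        candidate = rng.choice(neighbors(current))
        delta = cost(candidate) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = candidate  # accept the move
        if cost(current) < cost(best):
            best = current
        temp *= cooling  # geometric cooling schedule
    return best
```

Minimizing a toy cost such as (x - 3)^2 over the integers with neighbors x ± 1 shows the pattern: early high-temperature steps wander, late low-temperature steps only move downhill.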
Informational Substitutes
We propose definitions of substitutes and complements for pieces of
information ("signals") in the context of a decision or optimization problem,
with game-theoretic and algorithmic applications. In a game-theoretic context,
substitutes capture diminishing marginal value of information to a rational
decision maker. We use the definitions to address the question of how and when
information is aggregated in prediction markets. Substitutes characterize
"best-possible" equilibria with immediate information aggregation, while
complements characterize "worst-possible", delayed aggregation. Game-theoretic
applications also include settings such as crowdsourcing contests and Q&A
forums. In an algorithmic context, where substitutes capture diminishing
marginal improvement of information to an optimization problem, substitutes
imply efficient approximation algorithms for a very general class of (adaptive)
information acquisition problems.
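The algorithmic reading of substitutes, diminishing marginal improvement of information, can be illustrated with a toy submodular value-of-information function. Both helper names and the coverage-style value function below are illustrative assumptions, not the paper's construction.

```python
def marginal(value, acquired, signal):
    """Marginal improvement from acquiring `signal` on top of `acquired`."""
    return value(acquired | {signal}) - value(acquired)

def greedy_acquire(signals, value, budget):
    """Toy sketch: when the value of information is submodular over
    signal sets (informational substitutes), greedily acquiring the
    signal with the largest marginal improvement is the natural
    heuristic for budgeted information acquisition."""
    acquired = set()
    for _ in range(budget):
        rest = [s for s in signals if s not in acquired]
        if not rest:
            break
        acquired.add(max(rest, key=lambda s: marginal(value, acquired, s)))
    return acquired
```

With a coverage-style value (each signal distinguishes a set of states, and the value of a signal set is how many states it distinguishes in total), the marginal value of a signal shrinks as more signals are acquired, which is exactly the substitutes condition.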
In tandem with these broad applications, we examine the structure and design
of informational substitutes and complements. They have equivalent, intuitive
definitions from disparate perspectives: submodularity, geometry, and
information theory. We also consider the design of scoring rules or
optimization problems so as to encourage substitutability or complementarity,
with positive and negative results. Taken as a whole, the results give some
evidence that, in parallel with substitutable items, informational substitutes
play a natural conceptual and formal role in game theory and algorithms.
Comment: Full version of FOCS 2016 paper. Single-column, 61 pages (48 pages main text, 13 pages references and appendix).