22 research outputs found
Beyond Chance-Constrained Convex Mixed-Integer Optimization: A Generalized Calafiore-Campi Algorithm and the notion of S-optimization
The scenario approach developed by Calafiore and Campi to attack
chance-constrained convex programs uses random sampling of the uncertainty
parameter to replace the original problem with a representative convex
optimization problem with convex constraints, which is a relaxation of the
original. Calafiore and Campi provided an explicit estimate on the size N of
the sampling relaxation needed to yield high-likelihood feasible solutions of
the chance-constrained problem. They measured the probability that the original
constraints are violated by the random optimal solution of the relaxation
of size N.
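The sampling idea can be illustrated on a toy one-dimensional chance-constrained problem (a minimal sketch with an invented instance; the names and the uniform distribution are illustrative assumptions, not taken from the paper):

```python
import random

def scenario_solution(samples):
    # Toy 1-D chance-constrained problem: minimize x subject to
    # P(xi <= x) >= 1 - eps, with uncertain parameter xi.
    # Scenario approach: draw N samples of xi and enforce only the sampled
    # constraints x >= xi_k; the optimum of this relaxation is max_k xi_k.
    return max(samples)

random.seed(0)
N = 200  # sample size; the Calafiore-Campi estimate says how large N must be
xs = [random.uniform(0.0, 1.0) for _ in range(N)]
x_star = scenario_solution(xs)
# For xi ~ Uniform[0,1], the violation probability of x_star is 1 - x_star,
# which is small with high probability when N is large.
```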
This paper has two main contributions. First, we present a generalization of
the Calafiore-Campi results to both integer and mixed-integer variables. In
fact, we demonstrate that their sampling estimates work naturally for variables
restricted to some subset S of ℝ^d. The key elements are
generalizations of Helly's theorem in which the convex sets are required to
intersect S. The sample sizes in both algorithms are
directly determined by the associated S-Helly numbers.
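The classical one-dimensional case gives the flavor of such Helly-type statements (a minimal sketch of textbook Helly on the line, where the Helly number is 2; not code from the paper — restricting witness points to a subset S such as the integers changes the Helly number, which is what drives the sample-size bounds):

```python
def common_point_exists(intervals):
    # Helly's theorem on the real line: closed intervals have a common
    # point iff every PAIR of them intersects; the witness region is
    # [max of left endpoints, min of right endpoints].
    lo = max(a for a, _ in intervals)
    hi = min(b for _, b in intervals)
    return lo <= hi

pairwise = [(0, 5), (2, 7), (4, 9)]   # every pair meets -> common points [4, 5]
assert common_point_exists(pairwise)
assert not common_point_exists([(0, 1), (2, 3), (0, 3)])  # (0,1), (2,3) disjoint
```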
Motivated by the first half of the paper, for any subset S of ℝ^d, we introduce
the notion of an S-optimization problem, where the
variables take values in S. It generalizes continuous, integer, and
mixed-integer optimization. We illustrate with examples the expressive power of
S-optimization to capture sophisticated combinatorial optimization problems
with difficult modular constraints. We reinforce the evidence that
S-optimization is "the right concept" by showing that the well-known
randomized sampling algorithm of K. Clarkson for low-dimensional convex
optimization problems can be extended to work with variables taking values in S.
Comment: 16 pages, 0 figures. This paper has been revised and split into two
parts. This version is the second part of the original paper. The first part
of the original paper is arXiv:1508.02380 (the original article contained 24
pages, 3 figures).
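As a toy illustration of the concept (a hypothetical instance of ours, not one of the paper's examples), a one-dimensional S-optimization problem with a modular constraint can be solved by restricting a convex objective to S:

```python
def s_argmin(f, S):
    # Brute-force S-optimization over a finite subset S of the reals:
    # minimize the (convex) objective f with the variable ranging over S.
    return min(S, key=f)

# S = integers in [0, 30] congruent to 1 mod 3: a "difficult modular
# constraint" that plain continuous or integer models do not express directly.
S = [x for x in range(31) if x % 3 == 1]
best = s_argmin(lambda x: (x - 11) ** 2, S)
# The continuous optimum 11 is not in S (11 mod 3 == 2); the S-optimum is 10.
```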
Violator Spaces: Structure and Algorithms
Sharir and Welzl introduced an abstract framework for optimization problems,
called LP-type problems or also generalized linear programming problems, which
proved useful in algorithm design. We define a new and, as we believe, simpler
and more natural framework: violator spaces, which constitute a proper
generalization of LP-type problems. We show that Clarkson's randomized
algorithms for low-dimensional linear programming work in the context of
violator spaces. For example, in this way we obtain the fastest known algorithm
for the P-matrix generalized linear complementarity problem with a constant
number of blocks. We also give two new characterizations of LP-type problems:
they are equivalent to acyclic violator spaces, as well as to concrete LP-type
problems (informally, the constraints in a concrete LP-type problem are subsets
of a linearly ordered ground set, and the value of a set of constraints is the
minimum of its intersection).Comment: 28 pages, 5 figures, extended abstract was presented at ESA 2006;
author spelling fixe
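A minimal example of the framework (our own toy instance, assuming the definitions sketched in the abstract): for the problem "find the maximum of a set of numbers", the violators of a subset G are exactly the elements larger than max(G):

```python
def violators(G, H):
    # Violator space for the 1-D problem "find the maximum":
    # the "solution" of G is max(G), and h in H violates G iff adding h
    # would change that solution, i.e. h > max(G).
    m = max(G) if G else float("-inf")
    return {h for h in H if h > m}

H = {3, 1, 4, 5, 9, 2, 6}
assert violators({9}, H) == set()         # {9} is a basis of H: no violators
assert violators({3, 4}, H) == {5, 6, 9}  # everything larger violates {3, 4}
```

Clarkson-style randomized algorithms work in exactly this setting: repeatedly solve a small random subset, then re-sample with the violators of its solution until no violators remain.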
Discrete Geometry
The workshop on Discrete Geometry was attended by 53 participants, many of them young researchers. In 13 survey talks an overview of recent developments in Discrete Geometry was given. Mathematics Subject Classification (2000): 52Cxx. The survey talks included:
• Abstract regular polytopes: recent developments (Peter McMullen)
• Counting crossing-free configurations in the plane (Micha Sharir)
• Geometry in additive combinatorics (József Solymosi)
• Rigid components: geometric problems, combinatorial solutions (Ileana Streinu)
• Forbidden patterns (János Pach)
• Projected polytopes, Gale diagrams, and polyhedral surfaces (Günter M. Ziegler)
• What is known about unit cubes? (Chuanming Zong)
These talks were supplemented by 16 shorter talks in the afternoon, an open problem session chaired by Jesús De Loera, and two special sessions: one on geometric transversal theory (organized by Eli Goodman) and one on a new release of the geometric software Cinderella (Jürgen Richter-Gebert). On the one hand, the contributions witnessed the progress the field has made in recent years; on the other hand, they also showed how many basic (and seemingly simple) questions are still far from being resolved. The program left enough time to use the stimulating atmosphere of the Oberwolfach facilities for fruitful interaction between the participants.
Proceedings of the 8th Cologne-Twente Workshop on Graphs and Combinatorial Optimization
The Cologne-Twente Workshop (CTW) on Graphs and Combinatorial Optimization started off as a series of workshops organized bi-annually by either Köln University or Twente University. As its importance grew over time, it re-centered its geographical focus by including northern Italy (CTW04 in Menaggio, on Lake Como, and CTW08 in Gargnano, on Lake Garda). This year, CTW (in its eighth edition) will be staged in France for the first time: more precisely in the heart of Paris, at the Conservatoire National d'Arts et Métiers (CNAM), between 2nd and 4th June 2009, by a mixed organizing committee with members from LIX, Ecole Polytechnique and CEDRIC, CNAM.
LIPIcs, Volume 258, SoCG 2023, Complete Volume
Exponential Lower Bounds for Solving Infinitary Payoff Games and Linear Programs
Parity games form an intriguing family of infinitary payoff games whose solution
is equivalent to the solution of important problems in automatic verification and
automata theory. They also form a very natural subclass of mean and discounted
payoff games, which in turn are very natural subclasses of turn-based stochastic
payoff games. From a theoretical point of view, solving these games is one of the few
problems that belong to the complexity class NP ∩ coNP; even more interestingly,
their solution has been shown to belong to UP ∩ coUP, and also to PLS. It is a major open
problem whether these game families can be solved in deterministic polynomial
time.
Policy iteration is one of the most important algorithmic schemes for solving
infinitary payoff games. It is parameterized by an improvement rule that determines
how to proceed in the iteration from one policy to the next. It is a major open problem
whether there is an improvement rule that results in a polynomial time algorithm for
solving one of the considered game classes.
Linear programming is one of the most important computational problems studied
by researchers in computer science, mathematics and operations research. Perhaps
more articles and books have been written about linear programming than about all other
computational problems combined.
The simplex and the dual-simplex algorithms are among the most widely used
algorithms for solving linear programs in practice. Simplex algorithms for solving
linear programs are closely related to policy iteration algorithms. Like policy iteration,
the simplex algorithm is parameterized by a pivoting rule that describes how
to proceed from one basic feasible solution in the linear program to the next. It is
a major open problem whether there is a pivoting rule that results in a (strongly)
polynomial time algorithm for solving linear programs.
We contribute to both policy iteration and the simplex algorithm by proving
exponential lower bounds for several improvement and pivoting rules, respectively. For every
considered improvement rule, we start by building 2-player parity games on which
the respective policy iteration algorithm performs an exponential number of iterations.
We then transform these 2-player games into 1-player Markov decision processes
which correspond almost immediately to concrete linear programs on which the
respective simplex algorithm requires the same number of iterations. Additionally,
we show how to transfer the lower bound results to more expressive game classes
like payoff and turn-based stochastic games.
Particularly, we prove exponential lower bounds for the deterministic switch
all and switch best improvement rules for solving games, for which no non-trivial
lower bounds have been known since the introduction of Howard's policy iteration
algorithm in 1960. Moreover, we prove exponential lower bounds for the two most
natural and most studied randomized pivoting rules suggested to date, namely the random
facet and random edge rules for solving games and linear programs, for which
no non-trivial lower bounds have been known for several decades. Furthermore, we
prove an exponential lower bound for the switch half randomized improvement rule
for solving games, which is considered to be the most important multi-switching
randomized rule. Finally, we prove an exponential lower bound for the most natural
and famous history-based pivoting rule due to Zadeh for solving games and linear
programs, which has been an open problem for thirty years.
Last but not least, we prove exponential lower bounds for two other classes of
algorithms that solve parity games, namely for the model checking algorithm due to
Stevens and Stirling and for the recursive algorithm by Zielonka.
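Policy iteration itself is easy to state. A hedged sketch on a deterministic shortest-path instance (a toy example of ours with the "switch all" improvement rule, not one of the thesis's lower-bound constructions):

```python
def policy_iteration(succ, target):
    # Howard-style policy iteration on a deterministic 1-player instance:
    # succ maps a node to its outgoing (neighbor, cost) edges; we seek
    # least-cost paths to `target`.
    policy = {v: succ[v][0] for v in succ if v != target}
    while True:
        # Evaluation: follow the policy, pricing cycles at +infinity.
        val = {target: 0.0}
        for v in policy:
            cost, cur, seen = 0.0, v, set()
            while cur != target and cur not in seen:
                seen.add(cur)
                nxt, c = policy[cur]
                cost, cur = cost + c, nxt
            val[v] = cost if cur == target else float("inf")
        # Improvement ("switch all"): every node switches to its best edge.
        new = {v: min(succ[v], key=lambda e: e[1] + val[e[0]]) for v in policy}
        if new == policy:
            return val
        policy = new

graph = {"a": [("b", 1), ("t", 10)], "b": [("a", 1), ("t", 1)], "t": []}
values = policy_iteration(graph, "t")  # least costs: a -> 2, b -> 1
```

On this instance the rule converges after three improvement steps; the thesis builds families of games on which such rules need exponentially many iterations.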
Harnessing tractability in constraint satisfaction problems
The Constraint Satisfaction Problem (CSP) is a fundamental NP-complete problem with many applications in artificial intelligence. This problem has enjoyed considerable scientific attention in the past decades due to its practical usefulness and the deep theoretical questions it relates to. However, there is a wide gap between practitioners, who develop solving techniques that are efficient for industrial instances but exponential in the worst case, and theorists, who design sophisticated polynomial-time algorithms for restrictions of CSP defined by certain algebraic properties. In this thesis we attempt to bridge this gap by providing polynomial-time algorithms to test for membership in a selection of major tractable classes. Even if an instance does not belong to one of these classes, we investigate the possibility of efficiently decomposing it into tractable subproblems through the lens of parameterized complexity. Finally, we propose a general framework to adapt the concept of kernelization, central to parameterized complexity but hitherto rarely used in practice, to the context of constraint reasoning. Preliminary experiments on this last contribution show promising results.
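Kernelization, the concept the final contribution adapts to constraint reasoning, is easiest to see on a classical example outside CSP (the textbook Buss kernel for k-Vertex-Cover; a sketch for illustration, not the thesis's algorithm):

```python
def buss_kernel(edges, k):
    # Buss kernelization for k-Vertex-Cover.
    # Rule: a vertex of degree > k must belong to every cover of size <= k.
    # After the rule is exhausted, a yes-instance keeps at most k^2 edges.
    edges, forced = set(edges), set()
    while k >= 0:
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        high = [v for v, d in deg.items() if d > k]
        if not high:
            break
        v = high[0]
        forced.add(v)                          # v is in every small cover
        edges = {e for e in edges if v not in e}
        k -= 1
    feasible = k >= 0 and len(edges) <= k * k  # size bound on the kernel
    return edges, k, forced, feasible

star = [(0, i) for i in range(1, 6)]           # center 0 with 5 leaves
kernel, k_left, forced, ok = buss_kernel(star, 2)
```

Here the instance shrinks to an empty kernel with vertex 0 forced into the cover; the thesis pursues the same "provably small equivalent instance" idea for constraint problems.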