The Simplex Algorithm is NP-mighty
We propose to classify the power of algorithms by the complexity of the
problems that they can be used to solve. Instead of restricting to the problem
a particular algorithm was designed to solve explicitly, however, we include
problems that, with polynomial overhead, can be solved 'implicitly' during the
algorithm's execution. For example, we allow solving a decision problem by
suitably transforming the input, executing the algorithm, and observing whether
a specific bit in its internal configuration ever switches during the
execution. We show that the Simplex Method, the Network Simplex Method (both
with Dantzig's original pivot rule), and the Successive Shortest Path Algorithm
are NP-mighty, that is, each of these algorithms can be used to solve any
problem in NP. This result casts a more favorable light on these algorithms'
exponential worst-case running times. Furthermore, as a consequence of our
approach, we obtain several novel hardness results. For example, for a given
input to the Simplex Algorithm, deciding whether a given variable ever enters
the basis during the algorithm's execution and determining the number of
iterations needed are both NP-hard problems. Finally, we close a long-standing
open problem in the area of network flows over time by showing that earliest
arrival flows are NP-hard to obtain.
The Complexity of the Simplex Method
The simplex method is a well-studied and widely-used pivoting method for
solving linear programs. When Dantzig originally formulated the simplex method,
he gave a natural pivot rule that pivots into the basis a variable with the
most violated reduced cost. In their seminal work, Klee and Minty showed that
this pivot rule takes exponential time in the worst case. We prove two main
results on the simplex method. Firstly, we show that it is PSPACE-complete to
find the solution that is computed by the simplex method using Dantzig's pivot
rule. Secondly, we prove that deciding whether Dantzig's rule ever chooses a
specific variable to enter the basis is PSPACE-complete. We use the known
connection between Markov decision processes (MDPs) and linear programming, and
an equivalence between Dantzig's pivot rule and a natural variant of policy
iteration for average-reward MDPs. We construct MDPs and show
PSPACE-completeness results for single-switch policy iteration, which in turn
imply our main results for the simplex method.
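For reference, Dantzig's pivot rule is easy to state in code: in each iteration, the nonbasic variable with the most negative reduced cost enters the basis, and the ratio test picks the leaving variable. The following is a minimal illustrative tableau implementation (not the construction from the paper), assuming an LP of the form max c^T x subject to Ax <= b, x >= 0 with b >= 0; the function name and interface are our own:

```python
import numpy as np

def simplex_dantzig(A, b, c, max_iter=1000):
    """Solve max c^T x s.t. Ax <= b, x >= 0 (with b >= 0) by the tableau
    simplex method, choosing the entering variable with Dantzig's rule:
    the nonbasic variable with the most negative reduced cost."""
    m, n = A.shape
    # Build the tableau with slack variables; the initial basis is the slacks.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c                      # reduced-cost row (maximization)
    basis = list(range(n, n + m))
    for _ in range(max_iter):
        # Dantzig's rule: the most negative reduced cost enters.
        j = int(np.argmin(T[-1, :-1]))
        if T[-1, j] >= -1e-9:
            break                       # optimal
        # Ratio test selects the leaving row.
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))
        if ratios[i] == np.inf:
            raise ValueError("LP is unbounded")
        # Pivot on entry (i, j).
        T[i] /= T[i, j]
        for r in range(m + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    for i, bj in enumerate(basis):
        x[bj] = T[i, -1]
    return x[:n], T[-1, -1]
```

On Klee-Minty-style inputs, this entering-variable choice is exactly what forces exponentially many pivots.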
The Complexity of the k-means Method
The k-means method is a widely used technique for clustering points in Euclidean space. While it is extremely fast in practice, its worst-case running time is exponential in the number of data points. We prove that the k-means method can implicitly solve PSPACE-complete problems, providing a complexity-theoretic explanation for its worst-case running time. Our result parallels recent work on the complexity of the simplex method for linear programming.
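The k-means method referred to here (Lloyd's method) alternates two steps, assigning each point to its nearest center and moving each center to the mean of its cluster, until the assignment stops changing. A minimal sketch, not the paper's construction, with our own function name and interface:

```python
import numpy as np

def kmeans(points, centers, max_iter=100):
    """One run of the k-means (Lloyd's) method: alternate the assignment
    and update steps until the assignment stabilizes (a local optimum)."""
    assign = None
    for _ in range(max_iter):
        # Assignment step: nearest center by squared Euclidean distance.
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        new_assign = d.argmin(axis=1)
        if assign is not None and np.array_equal(new_assign, assign):
            break                      # converged
        assign = new_assign
        # Update step: each center moves to the mean of its cluster.
        for k in range(len(centers)):
            members = points[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers, assign
```

The number of such assignment/update rounds is the quantity that is exponential in the worst case.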
The Niceness of Unique Sink Orientations
Random Edge is the most natural randomized pivot rule for the simplex
algorithm. Considerable progress has been made recently towards fully
understanding its behavior. Back in 2001, Welzl introduced the concepts of
\emph{reachmaps} and \emph{niceness} of Unique Sink Orientations (USO), in an
effort to better understand the behavior of Random Edge. In this paper, we
initiate the systematic study of these concepts. We settle the questions that
were asked by Welzl about the niceness of (acyclic) USO. Niceness implies
natural upper bounds for Random Edge and we provide evidence that these are
tight or almost tight in many interesting cases. Moreover, we show that Random
Edge is polynomial on at least n^{Omega(2^n)} many (possibly cyclic) USO. As
a bonus, we describe a derandomization of Random Edge which achieves the same
asymptotic upper bounds with respect to niceness and discuss some algorithmic
properties of the reachmap.
Comment: An extended abstract appears in the proceedings of Approx/Random 201
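The Random Edge rule itself is simple to state: from the current vertex, follow a uniformly random outgoing edge until the sink is reached. A minimal sketch, with cube vertices encoded as bitmasks and the orientation given by an outmap function; the uniform orientation below is just one trivially acyclic USO used as a stand-in, and the names are our own:

```python
import random

def random_edge(outmap, v, rng=None):
    """Run the Random Edge pivot rule on a unique sink orientation.
    outmap(v) returns the set of coordinates whose edges are directed
    away from vertex v (an integer bitmask).  Repeatedly flip a uniformly
    random outgoing coordinate until the sink (empty outmap) is reached."""
    rng = rng or random.Random(0)
    steps = 0
    while True:
        out = outmap(v)
        if not out:
            return v, steps            # sink found
        v ^= 1 << rng.choice(sorted(out))
        steps += 1

# The uniform orientation of the cube: every edge points toward the
# endpoint with that coordinate equal to 0, so the sink is all-zeros.
def uniform_outmap(v):
    return {i for i in range(v.bit_length()) if (v >> i) & 1}
```

On the uniform orientation every step clears one set bit, so the walk takes exactly popcount(v) steps; the interesting question, which niceness addresses, is how long such walks can be on general USO.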
A unified worst case for classical simplex and policy iteration pivot rules
We construct a family of Markov decision processes for which the policy
iteration algorithm needs an exponential number of improving switches with
Dantzig's rule, with Bland's rule, and with the Largest Increase pivot rule.
This immediately translates to a family of linear programs for which the
simplex algorithm needs an exponential number of pivot steps with the same
three pivot rules. Our results yield a unified construction that simultaneously
reproduces well-known lower bounds for these classical pivot rules, and we are
able to infer that any (deterministic or randomized) combination of them cannot
avoid an exponential worst-case behavior. Regarding the policy iteration
algorithm, switching rules typically switch multiple edges simultaneously,
and our lower bounds for Dantzig's rule and the Largest Increase rule, which
perform only single switches, appear to be novel. Regarding the simplex
individual lower bounds were previously obtained separately via deformed
hypercube constructions. In contrast to previous bounds for the simplex
algorithm via Markov decision processes, our rigorous analysis is reasonably
concise.
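Single-switch policy iteration, the object of these lower bounds, can be sketched with a pluggable switching rule. The following illustrative implementation for a discounted MDP is our own (the paper treats other criteria as well): `bland` switches the lowest-index improving state-action pair, and `dantzig` switches the pair with the largest appeal; the Largest Increase rule would instead pick the switch yielding the biggest jump in value, which is not shown here.

```python
import numpy as np

def policy_iteration(P, r, gamma, rule):
    """Single-switch policy iteration for a discounted MDP.
    P[a] is the transition matrix and r[a] the reward vector of action a;
    `rule` picks one improving (state, action, appeal) switch per round."""
    n = P[0].shape[0]
    pi = np.zeros(n, dtype=int)            # start with action 0 everywhere
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi.
        P_pi = np.array([P[pi[s]][s] for s in range(n)])
        r_pi = np.array([r[pi[s]][s] for s in range(n)])
        v = np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)
        # Appeal (advantage) of each alternative action in each state.
        improving = []
        for s in range(n):
            for a in range(len(P)):
                appeal = r[a][s] + gamma * P[a][s] @ v - v[s]
                if appeal > 1e-9:
                    improving.append((s, a, appeal))
        if not improving:
            return pi, v                   # no improving switch: optimal
        s, a, _ = rule(improving)          # the switching rule picks one
        pi[s] = a

bland = lambda imp: min(imp, key=lambda t: (t[0], t[1]))   # lowest index
dantzig = lambda imp: max(imp, key=lambda t: t[2])         # largest appeal
```

The lower-bound construction in the abstract exhibits MDPs on which every one of these choices of `rule` needs exponentially many switches.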
The Complexity of All-switches Strategy Improvement
Strategy improvement is a widely-used and well-studied class of algorithms
for solving graph-based infinite games. These algorithms are parameterized by a
switching rule, and one of the most natural rules is "all switches" which
switches as many edges as possible in each iteration. Continuing a recent line
of work, we study all-switches strategy improvement from the perspective of
computational complexity. We consider two natural decision problems, both of
which have as input a game G, a starting strategy s, and an edge e. The
problems are: 1.) The edge switch problem, namely, is the edge e ever
switched by all-switches strategy improvement when it is started from s on
game G? 2.) The optimal strategy problem, namely, is the edge e used in the
final strategy that is found by strategy improvement when it is started from
s on game G? We show PSPACE-completeness of the edge switch
problem and the optimal strategy problem for the following settings: parity
games with the discrete strategy improvement algorithm of Vöge and
Jurdziński; mean-payoff games with the gain-bias algorithm [14,37]; and
discounted-payoff games and simple stochastic games with their standard
strategy improvement algorithms. We also show PSPACE-completeness of an
analogous problem to edge switch for the bottom-antipodal algorithm for
finding the sink of an acyclic Unique Sink Orientation on a cube.
The possessive relation in Sanskrit bahuvrīhi compounds: Ellipsis or movement?
Many Sanskrit bahuvrihis involve a possessive relation whereby one of the bahuvrihi-members is the possessum and an expression not mentioned within the bahuvrihi is the corresponding possessor: e.g., ugra-putra- (RV 8.67.11), not 'mighty son(s)' but 'Aditi having mighty sons' or 'Aditi whose sons are mighty'. This study addresses the following research question: how is this possessive relation established in Sanskrit bahuvrihis? We consider two possible strategies. According to the first strategy, a linguistic unit which conveys the meaning 'having' and undergoes ellipsis combines with the bahuvrihi stem: e.g., the combination of this elided unit with ugra-putra-, which per se would convey the meaning 'mighty son(s)', yields the meaning 'having mighty sons'. According to the second strategy, the possessor starts out within the phrase projected by one of the bahuvrihi-members: e.g., áditi- (i.e., the Sanskrit term for 'Aditi') starts out as the specifier of the phrase projected by putrá- in the above example; in this configuration áditi- is read as the possessor of putrá-; only subsequently will áditi- exit the bahuvrihi. We argue that the second strategy is superior because only it captures certain restrictions on the internal order of bahuvrihis.
Computing all Wardrop Equilibria parametrized by the Flow Demand
We develop an algorithm that computes for a given undirected or directed
network with flow-dependent piece-wise linear edge cost functions all Wardrop
equilibria as a function of the flow demand. Our algorithm is based on
Katzenelson's homotopy method for electrical networks. The algorithm uses a
bijection between vertex potentials and flow excess vectors that is piecewise
linear in the potential space and where each linear segment can be interpreted
as an augmenting flow in a residual network. The algorithm iteratively
increases the excess of one or more vertex pairs until the bijection reaches a
point of non-differentiability. Then, the next linear region is chosen in a
Simplex-like pivot step and the algorithm proceeds. We first show that this
algorithm correctly computes all Wardrop equilibria in undirected
single-commodity networks along the chosen path of excess vectors. We then
adapt our algorithm to also work for discontinuous cost functions, which allows us
to model directed edges and/or edge capacities. Our algorithm is
output-polynomial in non-degenerate instances where the solution curve never
hits a point where the cost function of more than one edge becomes
non-differentiable. For degenerate instances we still obtain an
output-polynomial algorithm computing the linear segments of the bijection by a
convex program. The latter technique also allows us to handle multiple
commodities.