The Simplex Algorithm is NP-mighty
We propose to classify the power of algorithms by the complexity of the
problems that they can be used to solve. Instead of restricting to the problem
a particular algorithm was designed to solve explicitly, however, we include
problems that, with polynomial overhead, can be solved 'implicitly' during the
algorithm's execution. For example, we allow a decision problem to be solved by
suitably transforming the input, executing the algorithm, and observing whether
a specific bit in its internal configuration ever switches during the
execution. We show that the Simplex Method, the Network Simplex Method (both
with Dantzig's original pivot rule), and the Successive Shortest Path Algorithm
are NP-mighty, that is, each of these algorithms can be used to solve any
problem in NP. This result casts a more favorable light on these algorithms'
exponential worst-case running times. Furthermore, as a consequence of our
approach, we obtain several novel hardness results. For example, for a given
input to the Simplex Algorithm, deciding whether a given variable ever enters
the basis during the algorithm's execution and determining the number of
iterations needed are both NP-hard problems. Finally, we close a long-standing
open problem in the area of network flows over time by showing that earliest
arrival flows are NP-hard to obtain.
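To make Dantzig's pivot rule concrete, here is a minimal dense-tableau sketch of the simplex method for LPs in the form max c·x, Ax ≤ b, x ≥ 0 with b ≥ 0 (so the all-slack basis is feasible). This is purely illustrative and is not the paper's hardness construction; function names and tolerances are assumptions.

```python
def simplex_dantzig(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0
    so that the all-slack starting basis is feasible."""
    m, n = len(A), len(c)
    # Tableau: constraint rows with slack columns, then the objective
    # row holding the negated reduced costs.
    T = [list(A[i]) + [1.0 if j == i else 0.0 for j in range(m)] + [float(b[i])]
         for i in range(m)]
    T.append([-float(ci) for ci in c] + [0.0] * (m + 1))
    basis = [n + i for i in range(m)]
    while True:
        # Dantzig's rule: enter the variable with the most negative reduced cost.
        e = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][e] >= -1e-12:
            break  # all reduced costs nonnegative: optimal
        rows = [i for i in range(m) if T[i][e] > 1e-12]
        if not rows:
            raise ValueError("LP is unbounded")
        # Ratio test selects the leaving row.
        r = min(rows, key=lambda i: T[i][-1] / T[i][e])
        piv = T[r][e]
        T[r] = [v / piv for v in T[r]]
        for i in range(m + 1):
            if i != r and T[i][e] != 0.0:
                f = T[i][e]
                T[i] = [v - f * w for v, w in zip(T[i], T[r])]
        basis[r] = e
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]
```

The NP-mightiness result concerns exactly this kind of execution trace: which variables enter the basis, and how many pivots occur, encode computation.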
The Niceness of Unique Sink Orientations
Random Edge is the most natural randomized pivot rule for the simplex
algorithm. Considerable progress has been made recently towards fully
understanding its behavior. Back in 2001, Welzl introduced the concepts of
\emph{reachmaps} and \emph{niceness} of Unique Sink Orientations (USO), in an
effort to better understand the behavior of Random Edge. In this paper, we
initiate the systematic study of these concepts. We settle the questions that
were asked by Welzl about the niceness of (acyclic) USO. Niceness implies
natural upper bounds for Random Edge and we provide evidence that these are
tight or almost tight in many interesting cases. Moreover, we show that Random
Edge is polynomial on at least $n^{\Omega(2^n)}$ many (possibly cyclic) USO. As
a bonus, we describe a derandomization of Random Edge which achieves the same
asymptotic upper bounds with respect to niceness and discuss some algorithmic
properties of the reachmap. Comment: An extended abstract appears in the proceedings of Approx/Random 201
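As a toy illustration of the Random Edge pivot rule, the sketch below runs it on one particularly simple acyclic USO of the n-cube: every edge is oriented toward the endpoint with fewer ones, so the unique sink is the all-zeros vertex. Real USO can be far more intricate, which is what reachmaps and niceness bounds are designed to capture; this instance and the function names are assumptions for illustration.

```python
import random

def linear_uso_outgoing(v, n):
    """Outgoing (improving) directions at vertex v of the n-cube for a
    simple acyclic USO: flipping any set bit moves toward fewer ones,
    so the unique sink is the all-zeros vertex."""
    return [i for i in range(n) if v & (1 << i)]

def random_edge(v, n, outgoing=linear_uso_outgoing):
    """Follow a uniformly random outgoing edge until the sink is reached."""
    steps = 0
    while True:
        out = outgoing(v, n)
        if not out:
            return v, steps  # no outgoing edge: we are at the sink
        v ^= 1 << random.choice(out)
        steps += 1
```

On this instance every step clears exactly one bit, so Random Edge reaches the sink in popcount(v) steps regardless of its random choices; the interesting behavior arises on less benign orientations.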
The Complexity of the k-means Method
The k-means method is a widely used technique for clustering points in Euclidean space. While it is extremely fast in practice, its worst-case running time is exponential in the number of data points. We prove that the k-means method can implicitly solve PSPACE-complete problems, providing a complexity-theoretic explanation for its worst-case running time. Our result parallels recent work on the complexity of the simplex method for linear programming.
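The k-means method referred to here is Lloyd's heuristic, which alternates an assignment step and a mean-update step until no center moves. A minimal sketch (function name and convergence criterion are assumptions):

```python
import random

def kmeans(points, k, iters=100):
    """Lloyd's heuristic: alternate assignment and mean steps until stable."""
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: each center moves to the mean of its cluster
        # (an empty cluster keeps its old center in this sketch).
        new = [tuple(xs / 1.0 for xs in (sum(col) / len(cl) for col in zip(*cl)))
               if cl else centers[j]
               for j, cl in enumerate(clusters)]
        if new == centers:
            break  # converged to a local optimum
        centers = new
    return centers
```

The hardness result concerns the number of these assign/update rounds in the worst case, which can be exponential in the number of points.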
A unified worst case for classical simplex and policy iteration pivot rules
We construct a family of Markov decision processes for which the policy
iteration algorithm needs an exponential number of improving switches with
Dantzig's rule, with Bland's rule, and with the Largest Increase pivot rule.
This immediately translates to a family of linear programs for which the
simplex algorithm needs an exponential number of pivot steps with the same
three pivot rules. Our results yield a unified construction that simultaneously
reproduces well-known lower bounds for these classical pivot rules, and we are
able to infer that any (deterministic or randomized) combination of them cannot
avoid an exponential worst-case behavior. Regarding the policy iteration
algorithm, pivot rules typically switch multiple edges simultaneously, and our
lower bounds for Dantzig's rule and the Largest Increase rule, which perform
only single switches, appear to be novel. Regarding the simplex algorithm, the
individual lower bounds were previously obtained separately via deformed
hypercube constructions. In contrast to previous bounds for the simplex
algorithm via Markov decision processes, our rigorous analysis is reasonably
concise.
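For readers unfamiliar with policy iteration, the sketch below shows the basic evaluate-then-improve loop on a deterministic MDP. Note this is the greedy all-switches variant, not the single-switch Dantzig or Largest Increase rules analyzed in the paper, and the data layout is an assumption for illustration.

```python
def policy_iteration(n_states, actions, nxt, reward, gamma=0.9):
    """Policy iteration for a deterministic MDP.
    nxt[(s, a)] -> next state, reward[(s, a)] -> immediate reward."""
    policy = {s: actions[0] for s in range(n_states)}
    while True:
        # Policy evaluation (iterative here; an exact solve would use
        # the linear system V = r + gamma * P V).
        V = [0.0] * n_states
        for _ in range(2000):
            V = [reward[(s, policy[s])] + gamma * V[nxt[(s, policy[s])]]
                 for s in range(n_states)]
        # Policy improvement: switch each state to its best one-step action.
        improved = {s: max(actions,
                           key=lambda a: reward[(s, a)] + gamma * V[nxt[(s, a)]])
                    for s in range(n_states)}
        if improved == policy:
            return policy, V  # no improving switch remains: optimal
        policy = improved
```

The lower bounds in the paper construct MDP families on which the number of improving switches, under the restricted pivot rules, grows exponentially.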
Algorithms for flows over time with scheduling costs
Flows over time have received substantial attention from both an optimization and (more recently) a game-theoretic perspective. In this model, each arc has an associated delay for traversing the arc, and a bound on the rate of flow entering the arc; flows are time-varying. We consider a setting which is very standard within the transportation economics literature, but has received little attention from an algorithmic perspective. The flow consists of users who are able to choose their route but also their departure time, and who desire to arrive at their destination at a particular time, incurring a scheduling cost if they arrive earlier or later. The total cost of a user is then a combination of the time they spend commuting and the scheduling cost they incur. We present a combinatorial algorithm for the natural optimization problem, that of minimizing the average total cost of all users (i.e., maximizing the social welfare). Based on this, we also show how to set tolls so that this optimal flow is induced as an equilibrium of the underlying game.
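A standard way to formalize the per-user cost described above is the piecewise-linear alpha-beta-gamma model from transportation economics: commute time weighted by alpha, plus a penalty of beta per unit of earliness or gamma per unit of lateness. The sketch below assumes that form and default weights; the paper's exact cost function may differ.

```python
def total_cost(departure, arrival, target, alpha=1.0, beta=0.5, gamma=2.0):
    """Commute time plus a piecewise-linear scheduling penalty
    (the classic alpha-beta-gamma model; typically beta < alpha < gamma,
    i.e. arriving early is cheaper than arriving late)."""
    commute = arrival - departure
    earliness = max(0.0, target - arrival)
    lateness = max(0.0, arrival - target)
    return alpha * commute + beta * earliness + gamma * lateness
```

The optimization problem is then to route users, and choose their departure times, so that the average of these costs is minimized.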
Energy Minimization
The energetic state of a protein is one of the most important representative parameters of its stability. The energy of a protein can be defined as a function of its atomic coordinates. This energy function consists of several components: 1. Bond energy and angle energy, representing the covalent bonds and bond angles. 2. Dihedral energy, due to the dihedral angles. 3. A van der Waals term (also called the Lennard-Jones potential) to ensure that atoms do not have steric clashes. 4. Electrostatic energy accounting for Coulomb's law in the protein structure, i.e. the long-range forces between charged and partially charged atoms. All these quantitative terms have been parameterized and are collectively referred to as the 'force field', e.g. CHARMM, AMBER, AMBER/OPLS and GROMOS. The goal of energy minimization is to find a set of coordinates representing the minimum-energy conformation of the given structure. Various algorithms have been formulated, varying in their use of derivatives. Three common algorithms used for this optimization are steepest descent, conjugate gradient and Newton–Raphson. Although energy minimization only reaches the nearest local minimum, it is an indispensable tool for correcting structural anomalies, viz. bad stereochemistry and short contacts. An efficient optimization protocol can be devised from these methods in conjunction with a larger space-exploration algorithm, e.g. molecular dynamics.
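As a toy illustration of steepest descent on one force-field term, the sketch below minimizes a single 12-6 Lennard-Jones pair energy in one dimension (the separation of two atoms). A real force field sums many such bonded and non-bonded terms over all atoms; the step size and tolerance here are assumptions.

```python
def lj_energy(r, eps=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential, minimized at r = 2**(1/6) * sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_gradient(r, eps=1.0, sigma=1.0):
    """Analytic derivative dE/dr of the pair potential."""
    return 4.0 * eps * (-12.0 * sigma ** 12 / r ** 13 + 6.0 * sigma ** 6 / r ** 7)

def steepest_descent(r, step=1e-3, iters=50000, tol=1e-10):
    """Move downhill along the negative gradient with a fixed step size
    until the gradient (the force, up to sign) is negligible."""
    for _ in range(iters):
        g = lj_gradient(r)
        if abs(g) < tol:
            break  # converged to the nearest local minimum
        r -= step * g
    return r
```

Conjugate gradient and Newton–Raphson follow the same pattern but reuse gradient history or second-derivative information, respectively, to converge in fewer steps.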